Dec 06 06:57:16 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 06 06:57:16 crc restorecon[4830]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:57:16 crc restorecon[4830]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 06:57:16 crc restorecon[4830]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 06:57:16 crc restorecon[4830]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 06:57:16 crc restorecon[4830]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:16 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:57:17 crc restorecon[4830]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 06 06:57:17 crc restorecon[4830]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 06 06:57:17 crc restorecon[4830]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Dec 06 06:57:17 crc kubenswrapper[4895]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 06 06:57:17 crc kubenswrapper[4895]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 06 06:57:17 crc kubenswrapper[4895]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 06 06:57:17 crc kubenswrapper[4895]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 06 06:57:17 crc kubenswrapper[4895]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 06 06:57:17 crc kubenswrapper[4895]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.831108 4895 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835348 4895 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835378 4895 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835387 4895 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835394 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835400 4895 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835406 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835414 4895 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835420 4895 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835427 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835434 4895 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835441 4895 feature_gate.go:330] unrecognized feature gate: Example
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835448 4895 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835454 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835461 4895 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835467 4895 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835506 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835512 4895 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835518 4895 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835523 4895 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835528 4895 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835533 4895 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835539 4895 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835544 4895 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835549 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835554 4895 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835559 4895 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835564 4895 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835569 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835574 4895 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835579 4895 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835583 4895 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835588 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835593 4895 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835598 4895 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835603 4895 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835608 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835613 4895 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835617 4895 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835622 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835627 4895 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835635 4895 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835641 4895 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835647 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835652 4895 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835658 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835663 4895 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835668 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835672 4895 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835677 4895 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835682 4895 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835696 4895 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835701 4895 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835706 4895 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835712 4895 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835718 4895 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835723 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835728 4895 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835733 4895 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835738 4895 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835742 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835747 4895 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835752 4895 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835757 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835762 4895 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835771 4895 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
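
Editor's note: the long run of "unrecognized feature gate" warnings (a few more follow below, and the whole list is re-evaluated several times during startup) is expected on OpenShift: the cluster-level gate list includes OpenShift-only gates (GatewayAPI, NewOLM, PinnedImages, ...) that the upstream kubelet never registered, so its registry flags them while still honoring the gates it does know — the "Setting GA/deprecated feature gate" lines. A minimal sketch of that registry behavior, using k8s.io/component-base/featuregate; note that whether an unknown gate is rejected with an error or merely logged as a warning varies by component-base version, and the journal above shows the warning path:

```go
package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

func main() {
	// Register only the gates this binary knows about, as the kubelet does
	// for upstream Kubernetes gates.
	fg := featuregate.NewFeatureGate()
	if err := fg.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		"KMSv1":                     {Default: false, PreRelease: featuregate.Deprecated},
		"ValidatingAdmissionPolicy": {Default: true, PreRelease: featuregate.GA},
	}); err != nil {
		panic(err)
	}

	// Apply a cluster-level override map that also carries a gate unknown to
	// this registry ("GatewayAPI" stands in for the OpenShift-only ones).
	if err := fg.SetFromMap(map[string]bool{
		"KMSv1":                     true,
		"ValidatingAdmissionPolicy": true,
		"GatewayAPI":                true, // unknown here: warning or error by version
	}); err != nil {
		fmt.Println("override rejected:", err)
	}

	fmt.Println("KMSv1 enabled:", fg.Enabled("KMSv1"))
	fmt.Println("ValidatingAdmissionPolicy enabled:", fg.Enabled("ValidatingAdmissionPolicy"))
}
```

The "feature gates: {map[...]}" record that follows later is the effective result of this merge: only the gates the kubelet recognizes, with defaults plus the explicit overrides.
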
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835777 4895 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835782 4895 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835787 4895 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835792 4895 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835797 4895 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.835801 4895 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.835903 4895 flags.go:64] FLAG: --address="0.0.0.0" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.835914 4895 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.835925 4895 flags.go:64] FLAG: --anonymous-auth="true" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.835932 4895 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.835941 4895 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.835948 4895 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.835956 4895 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.835965 4895 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.835972 4895 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.835978 4895 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.835984 4895 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.835990 4895 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.835996 4895 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836002 4895 flags.go:64] FLAG: --cgroup-root="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836008 4895 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836013 4895 flags.go:64] FLAG: --client-ca-file="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836019 4895 flags.go:64] FLAG: --cloud-config="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836025 4895 flags.go:64] FLAG: --cloud-provider="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836030 4895 flags.go:64] FLAG: --cluster-dns="[]" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836037 4895 flags.go:64] FLAG: --cluster-domain="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836042 4895 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836048 4895 flags.go:64] FLAG: --config-dir="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836054 4895 flags.go:64] FLAG: 
--container-hints="/etc/cadvisor/container_hints.json" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836060 4895 flags.go:64] FLAG: --container-log-max-files="5" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836068 4895 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836074 4895 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836080 4895 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836085 4895 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836091 4895 flags.go:64] FLAG: --contention-profiling="false" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836096 4895 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836102 4895 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836108 4895 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836114 4895 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836123 4895 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836128 4895 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836134 4895 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836140 4895 flags.go:64] FLAG: --enable-load-reader="false" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836146 4895 flags.go:64] FLAG: --enable-server="true" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836152 4895 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836159 4895 flags.go:64] FLAG: --event-burst="100" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836165 4895 flags.go:64] FLAG: --event-qps="50" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836171 4895 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836176 4895 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836182 4895 flags.go:64] FLAG: --eviction-hard="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836189 4895 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836195 4895 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836200 4895 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836206 4895 flags.go:64] FLAG: --eviction-soft="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836212 4895 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836217 4895 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836223 4895 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836229 4895 flags.go:64] FLAG: --experimental-mounter-path="" Dec 06 06:57:17 crc 
kubenswrapper[4895]: I1206 06:57:17.836234 4895 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836240 4895 flags.go:64] FLAG: --fail-swap-on="true" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836245 4895 flags.go:64] FLAG: --feature-gates="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836252 4895 flags.go:64] FLAG: --file-check-frequency="20s" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836258 4895 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836264 4895 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836270 4895 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836276 4895 flags.go:64] FLAG: --healthz-port="10248" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836281 4895 flags.go:64] FLAG: --help="false" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836287 4895 flags.go:64] FLAG: --hostname-override="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836292 4895 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836298 4895 flags.go:64] FLAG: --http-check-frequency="20s" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836304 4895 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836310 4895 flags.go:64] FLAG: --image-credential-provider-config="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836315 4895 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836321 4895 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836328 4895 flags.go:64] FLAG: --image-service-endpoint="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836334 4895 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836339 4895 flags.go:64] FLAG: --kube-api-burst="100" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836345 4895 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836351 4895 flags.go:64] FLAG: --kube-api-qps="50" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836356 4895 flags.go:64] FLAG: --kube-reserved="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836362 4895 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836368 4895 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836373 4895 flags.go:64] FLAG: --kubelet-cgroups="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836379 4895 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836384 4895 flags.go:64] FLAG: --lock-file="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836390 4895 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836396 4895 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836401 4895 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836409 4895 flags.go:64] 
FLAG: --log-json-split-stream="false" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836415 4895 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836421 4895 flags.go:64] FLAG: --log-text-split-stream="false" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836426 4895 flags.go:64] FLAG: --logging-format="text" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836432 4895 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836438 4895 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836444 4895 flags.go:64] FLAG: --manifest-url="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836449 4895 flags.go:64] FLAG: --manifest-url-header="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836457 4895 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836462 4895 flags.go:64] FLAG: --max-open-files="1000000" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836492 4895 flags.go:64] FLAG: --max-pods="110" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836498 4895 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836503 4895 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836509 4895 flags.go:64] FLAG: --memory-manager-policy="None" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836516 4895 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836522 4895 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836528 4895 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836533 4895 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836546 4895 flags.go:64] FLAG: --node-status-max-images="50" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836552 4895 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836557 4895 flags.go:64] FLAG: --oom-score-adj="-999" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836564 4895 flags.go:64] FLAG: --pod-cidr="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836570 4895 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836579 4895 flags.go:64] FLAG: --pod-manifest-path="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836585 4895 flags.go:64] FLAG: --pod-max-pids="-1" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836591 4895 flags.go:64] FLAG: --pods-per-core="0" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836597 4895 flags.go:64] FLAG: --port="10250" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836603 4895 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836610 4895 flags.go:64] FLAG: --provider-id="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836615 4895 
flags.go:64] FLAG: --qos-reserved="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836621 4895 flags.go:64] FLAG: --read-only-port="10255" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836626 4895 flags.go:64] FLAG: --register-node="true" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836632 4895 flags.go:64] FLAG: --register-schedulable="true" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836639 4895 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836649 4895 flags.go:64] FLAG: --registry-burst="10" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836654 4895 flags.go:64] FLAG: --registry-qps="5" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836660 4895 flags.go:64] FLAG: --reserved-cpus="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836666 4895 flags.go:64] FLAG: --reserved-memory="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836674 4895 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836679 4895 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836685 4895 flags.go:64] FLAG: --rotate-certificates="false" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836691 4895 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836696 4895 flags.go:64] FLAG: --runonce="false" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836702 4895 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836708 4895 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836714 4895 flags.go:64] FLAG: --seccomp-default="false" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836720 4895 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836725 4895 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836731 4895 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836737 4895 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836743 4895 flags.go:64] FLAG: --storage-driver-password="root" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836748 4895 flags.go:64] FLAG: --storage-driver-secure="false" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836754 4895 flags.go:64] FLAG: --storage-driver-table="stats" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836760 4895 flags.go:64] FLAG: --storage-driver-user="root" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836765 4895 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836771 4895 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836777 4895 flags.go:64] FLAG: --system-cgroups="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836782 4895 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836791 4895 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836796 
4895 flags.go:64] FLAG: --tls-cert-file="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836802 4895 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836808 4895 flags.go:64] FLAG: --tls-min-version="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836814 4895 flags.go:64] FLAG: --tls-private-key-file="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836820 4895 flags.go:64] FLAG: --topology-manager-policy="none" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836825 4895 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836831 4895 flags.go:64] FLAG: --topology-manager-scope="container" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836837 4895 flags.go:64] FLAG: --v="2" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836844 4895 flags.go:64] FLAG: --version="false" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836851 4895 flags.go:64] FLAG: --vmodule="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836859 4895 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.836865 4895 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837005 4895 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837012 4895 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837018 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837024 4895 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837029 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837034 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837039 4895 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837044 4895 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837049 4895 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837054 4895 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837059 4895 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837063 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837068 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837073 4895 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837080 4895 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
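
Editor's note: the FLAG: --name="value" dump above is the standard component-base startup trace: after parsing, the kubelet walks every registered flag and logs it at verbosity 1 and up (this node runs with --v=2), which makes these lines a reliable record of the effective flag values, defaults included. A sketch of the pattern with pflag and klog — printFlags here mirrors what k8s.io/component-base/cli/flag.PrintFlags does:

```go
package main

import (
	goflag "flag"

	"github.com/spf13/pflag"
	"k8s.io/klog/v2"
)

// printFlags walks every registered flag and logs it in the same
// `FLAG: --name="value"` form seen in the journal above.
func printFlags(fs *pflag.FlagSet) {
	fs.VisitAll(func(f *pflag.Flag) {
		klog.V(1).Infof("FLAG: --%s=%q", f.Name, f.Value)
	})
}

func main() {
	// Raise klog verbosity so the V(1) lines are emitted, matching --v=2.
	klog.InitFlags(nil)
	_ = goflag.Set("v", "2")
	defer klog.Flush()

	fs := pflag.NewFlagSet("demo", pflag.ExitOnError)
	fs.String("node-ip", "192.168.126.11", "node IP to report")
	fs.Int("max-pods", 110, "maximum pod count")
	_ = fs.Parse([]string{"--max-pods=110"})

	printFlags(fs) // logs defaults and explicit overrides alike
}
```
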
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837086 4895 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837092 4895 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837097 4895 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837102 4895 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837108 4895 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837114 4895 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837119 4895 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837124 4895 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837129 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837135 4895 feature_gate.go:330] unrecognized feature gate: Example Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837140 4895 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837146 4895 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837153 4895 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837158 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837166 4895 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837172 4895 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837178 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837184 4895 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837190 4895 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837197 4895 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837204 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837209 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837214 4895 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837219 4895 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837224 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 
06:57:17.837230 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837235 4895 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837239 4895 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837244 4895 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837249 4895 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837254 4895 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837259 4895 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837264 4895 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837268 4895 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837273 4895 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837278 4895 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837284 4895 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837289 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837293 4895 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837298 4895 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837303 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837308 4895 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837316 4895 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837321 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837326 4895 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837331 4895 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837342 4895 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837347 4895 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837351 4895 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837356 4895 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837361 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 06 06:57:17 crc 
kubenswrapper[4895]: W1206 06:57:17.837366 4895 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837372 4895 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837378 4895 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837383 4895 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.837389 4895 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.837581 4895 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.847731 4895 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.847793 4895 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.847907 4895 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.847921 4895 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.847932 4895 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.847937 4895 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.847943 4895 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.847948 4895 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.847953 4895 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.847957 4895 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.847961 4895 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.847966 4895 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.847980 4895 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.847984 4895 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.847989 4895 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.847994 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.847998 4895 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848002 4895 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848006 4895 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848010 4895 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848014 4895 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848018 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848023 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848027 4895 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848031 4895 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848036 4895 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848040 4895 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848044 4895 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848049 4895 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848054 4895 feature_gate.go:330] unrecognized 
feature gate: MinimumKubeletVersion Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848059 4895 feature_gate.go:330] unrecognized feature gate: Example Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848063 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848067 4895 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848072 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848078 4895 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848084 4895 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848089 4895 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848094 4895 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848100 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848104 4895 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848108 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848113 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848118 4895 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848124 4895 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848130 4895 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848136 4895 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848142 4895 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848146 4895 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848163 4895 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848168 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848172 4895 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848176 4895 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848180 4895 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848183 4895 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848188 4895 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848193 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848198 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848202 4895 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848207 4895 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848212 4895 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848216 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848223 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848229 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848234 4895 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848238 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848242 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848247 4895 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848252 4895 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848256 4895 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848260 4895 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848265 4895 feature_gate.go:330] unrecognized feature 
gate: OnClusterBuild Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848269 4895 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848274 4895 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.848283 4895 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848501 4895 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848515 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848521 4895 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848525 4895 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848530 4895 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848536 4895 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848540 4895 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848544 4895 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848549 4895 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848554 4895 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848568 4895 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848572 4895 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848580 4895 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848586 4895 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848591 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848596 4895 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848601 4895 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848605 4895 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848611 4895 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848615 4895 feature_gate.go:330] 
unrecognized feature gate: BareMetalLoadBalancer Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848619 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848624 4895 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848631 4895 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848636 4895 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848642 4895 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848646 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848650 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848653 4895 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848657 4895 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848662 4895 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848666 4895 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848675 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848682 4895 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848687 4895 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848692 4895 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848697 4895 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848702 4895 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848707 4895 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848712 4895 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848716 4895 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848721 4895 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848725 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848729 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848734 4895 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848740 4895 feature_gate.go:330] unrecognized 
feature gate: OnClusterBuild Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848745 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848759 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848765 4895 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848770 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848774 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848780 4895 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848785 4895 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848790 4895 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848795 4895 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848800 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848805 4895 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848811 4895 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848818 4895 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848824 4895 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848830 4895 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848835 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848841 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848846 4895 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848852 4895 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848856 4895 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848861 4895 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848866 4895 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848871 4895 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848876 4895 feature_gate.go:330] unrecognized feature gate: Example Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848881 4895 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.848886 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.848895 4895 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.849168 4895 server.go:940] "Client rotation is on, will bootstrap in background" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.858551 4895 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.858775 4895 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
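
Editor's note: the records around here show client certificate rotation end to end: rotation is on, the existing kubeconfig is still valid so no bootstrap is needed, and the cert/key pair is loaded from /var/lib/kubelet/pki/kubelet-client-current.pem. Just below, the manager computes a rotation deadline inside the certificate's validity window; since that deadline (2025-12-04) has already passed at this boot (Dec 06), it immediately attempts a CSR, which fails with connection refused because the API server is not up yet and will be retried. To inspect the same validity window by hand, a small sketch with the Go standard library (run on the node, or point the path at a copy):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the journal above.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		panic(err)
	}
	// The file concatenates the client certificate and its private key;
	// decode each PEM block and report the validity window of certificates.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Printf("subject=%v notBefore=%v notAfter=%v\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}
```

The notAfter printed here should match the "Certificate expiration is 2026-02-24 05:52:08 +0000 UTC" record below; the rotation deadline is a jittered point inside that window, not the expiry itself.
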
Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.859776 4895 server.go:997] "Starting client certificate rotation" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.859832 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.860060 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-04 11:11:14.386742556 +0000 UTC Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.860196 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.877570 4895 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 06 06:57:17 crc kubenswrapper[4895]: E1206 06:57:17.879980 4895 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.132:6443: connect: connection refused" logger="UnhandledError" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.882204 4895 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.895128 4895 log.go:25] "Validated CRI v1 runtime API" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.915277 4895 log.go:25] "Validated CRI v1 image API" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.917967 4895 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.921515 4895 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-06-06-47-46-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.921609 4895 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.956197 4895 manager.go:217] Machine: {Timestamp:2025-12-06 06:57:17.95408775 +0000 UTC m=+0.355476650 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:01a4a5d1-647a-48ea-98ed-826c2f6d4911 BootID:d7d2d861-d143-4cb9-9f6f-a839095839a4 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 
Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:b1:55:a7 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:b1:55:a7 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:1e:1a:76 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:f1:d9:93 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:54:1e:b5 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:16:1b:7e Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:ff:4e:01 Speed:-1 Mtu:1496} {Name:ens7.44 MacAddress:52:54:00:ff:03:c8 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:f2:bf:81:e6:2a:33 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:b6:5a:b2:fd:13:31 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified 
Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.956804 4895 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.956990 4895 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.957520 4895 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.957710 4895 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.957742 4895 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.957977 4895 topology_manager.go:138] 
"Creating topology manager with none policy" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.957989 4895 container_manager_linux.go:303] "Creating device plugin manager" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.958157 4895 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.958184 4895 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.958425 4895 state_mem.go:36] "Initialized new in-memory state store" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.958529 4895 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.959173 4895 kubelet.go:418] "Attempting to sync node with API server" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.959189 4895 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.959228 4895 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.959241 4895 kubelet.go:324] "Adding apiserver pod source" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.959253 4895 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.961445 4895 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.961840 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.132:6443: connect: connection refused Dec 06 06:57:17 crc kubenswrapper[4895]: E1206 06:57:17.961974 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.132:6443: connect: connection refused" logger="UnhandledError" Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.961854 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.132:6443: connect: connection refused Dec 06 06:57:17 crc kubenswrapper[4895]: E1206 06:57:17.962064 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.132:6443: connect: connection refused" logger="UnhandledError" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.962599 4895 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.963712 4895 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.964410 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.964445 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.964456 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.964467 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.964505 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.964518 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.964530 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.964548 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.964562 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.964573 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.964594 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.964603 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.964849 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.965425 4895 server.go:1280] "Started kubelet" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.965839 4895 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.132:6443: connect: connection refused Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.965936 4895 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.966085 4895 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.966600 4895 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 06 06:57:17 crc systemd[1]: Started Kubernetes Kubelet. 
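[Annotation] With the volume plugins registered, the kubelet declares "Started kubelet", begins serving on 0.0.0.0:10250, and systemd marks the unit active, even though every call to the API server (CSINode lookup, node lease, informers) still fails with "connection refused". That is the intended ordering: the kubelet must be up before it can start the static kube-apiserver pod, and all of its clients just retry until the endpoint answers. A minimal sketch of that retry-until-reachable pattern, using a fixed interval where the real client-go reflectors use jittered backoff:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPI dials the same TCP endpoint the kubelet logs above and loops
// until the connection succeeds, printing the familiar "connection refused"
// error in the meantime.
func waitForAPI(addr string, interval time.Duration) {
	for {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("API server reachable at", addr)
			return
		}
		fmt.Println("still waiting:", err)
		time.Sleep(interval)
	}
}

func main() {
	waitForAPI("api-int.crc.testing:6443", 2*time.Second)
}
```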
Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.968858 4895 server.go:460] "Adding debug handlers to kubelet server" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.969817 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.970074 4895 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.970244 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 17:09:34.899077403 +0000 UTC Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.970544 4895 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.970582 4895 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.970734 4895 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 06 06:57:17 crc kubenswrapper[4895]: E1206 06:57:17.970761 4895 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 06 06:57:17 crc kubenswrapper[4895]: W1206 06:57:17.972268 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.132:6443: connect: connection refused Dec 06 06:57:17 crc kubenswrapper[4895]: E1206 06:57:17.972635 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.132:6443: connect: connection refused" logger="UnhandledError" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.975904 4895 factory.go:55] Registering systemd factory Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.975947 4895 factory.go:221] Registration of the systemd container factory successfully Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.976304 4895 factory.go:153] Registering CRI-O factory Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.976322 4895 factory.go:221] Registration of the crio container factory successfully Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.976456 4895 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.976505 4895 factory.go:103] Registering Raw factory Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.976545 4895 manager.go:1196] Started watching for new ooms in manager Dec 06 06:57:17 crc kubenswrapper[4895]: E1206 06:57:17.976325 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" interval="200ms" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.977519 4895 manager.go:319] Starting recovery of all containers Dec 06 06:57:17 crc kubenswrapper[4895]: E1206 06:57:17.983003 4895 event.go:368] "Unable to write 
event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.132:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187e8e00c6fc5ccd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-06 06:57:17.965384909 +0000 UTC m=+0.366773789,LastTimestamp:2025-12-06 06:57:17.965384909 +0000 UTC m=+0.366773789,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.992628 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.992753 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.992819 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.992841 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.992891 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.992938 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.992984 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993005 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993030 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993068 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993118 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993137 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993154 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993175 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993195 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993212 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993227 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993245 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993263 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993279 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993295 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993315 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993337 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993384 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993403 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993427 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993448 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993488 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993508 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993527 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993545 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993563 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993581 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993598 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993617 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993636 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993719 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993767 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993788 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993833 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993855 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993873 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993890 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.993959 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994003 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994025 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994066 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994106 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994125 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994165 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994185 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994206 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994233 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994279 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994324 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994342 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994362 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994444 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994488 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994530 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994551 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994622 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994661 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994680 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994716 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994733 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994752 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994770 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994788 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994806 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994826 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994899 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.994923 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.997117 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.997142 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.998846 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.998868 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.998882 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.998897 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.998910 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.998919 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.998932 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.998945 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.998956 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 06 06:57:17 crc kubenswrapper[4895]: I1206 06:57:17.998968 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:17.999834 4895 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:17.999868 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:17.999881 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:17.999899 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:17.999910 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:17.999923 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:17.999935 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:17.999956 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:17.999969 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:17.999984 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:17.999997 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000016 4895 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000051 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000069 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000084 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000116 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000129 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000142 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000156 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000167 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000204 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000219 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000231 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000246 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000258 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000272 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000284 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000298 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000311 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000323 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000335 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000362 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000373 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000396 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000408 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000439 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000451 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000465 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000496 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000527 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.000539 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001671 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001684 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001697 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001747 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001761 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001773 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001790 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001803 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001820 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001833 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001847 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001859 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001872 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001885 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001899 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001912 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001924 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001945 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001958 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001970 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001982 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.001994 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002006 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002019 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002036 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002049 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002064 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002077 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002091 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002104 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002120 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002151 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002164 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002181 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002194 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002206 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002218 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002231 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002248 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002269 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002293 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002308 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002321 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002333 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002347 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002361 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002381 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002428 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002443 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002460 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002525 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002542 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002557 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002572 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002587 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002610 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002630 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002646 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002662 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002677 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002689 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002702 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002722 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002737 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002750 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002762 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002773 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002785 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002797 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002809 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002825 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002841 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002854 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002868 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002881 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002895 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002908 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.002918 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.003011 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.003032 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.003048 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.003060 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.003072 4895 reconstruct.go:97] "Volume reconstruction finished" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.003082 4895 reconciler.go:26] "Reconciler: start to sync state" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.015183 4895 manager.go:324] Recovery completed Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.032854 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.034788 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.034824 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.034833 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.035598 4895 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.035614 4895 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.035631 4895 state_mem.go:36] "Initialized new in-memory state store" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.044702 4895 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.046829 4895 policy_none.go:49] "None policy: Start" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.048622 4895 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.048651 4895 state_mem.go:35] "Initializing new in-memory state store" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.049121 4895 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.049209 4895 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.049249 4895 kubelet.go:2335] "Starting kubelet main sync loop" Dec 06 06:57:18 crc kubenswrapper[4895]: E1206 06:57:18.049352 4895 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 06 06:57:18 crc kubenswrapper[4895]: W1206 06:57:18.049903 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.132:6443: connect: connection refused Dec 06 06:57:18 crc kubenswrapper[4895]: E1206 06:57:18.049950 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.132:6443: connect: connection refused" logger="UnhandledError" Dec 06 06:57:18 crc kubenswrapper[4895]: E1206 06:57:18.071603 4895 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.105255 4895 manager.go:334] "Starting Device Plugin manager" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.105366 4895 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.105388 4895 server.go:79] "Starting device plugin registration server" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.106033 4895 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.106065 4895 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.106445 4895 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.106628 4895 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.106651 4895 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 06 06:57:18 crc kubenswrapper[4895]: E1206 06:57:18.119614 4895 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.149677 4895 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.149822 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.151405 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.151552 4895 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.151651 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.151975 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.152217 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.152265 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.153235 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.153281 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.153294 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.153754 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.153866 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.153954 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.154133 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.154271 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
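Each util.go:30 "No sandbox for pod can be found. Need to start a new one" entry means the sync loop found no live sandbox for a freshly added static pod, so the kubelet will ask the container runtime over CRI to create one (the crio- prefixed cgroup names further down show CRI-O serving that role here). The sketch below is not the kubelet's own code path, which goes through its runtime manager; it is a minimal direct illustration of the CRI RunPodSandbox call, assuming CRI-O's default socket path and borrowing the scheduler pod's UID from the mount records later in this log.

    // sandbox_sketch.go: a minimal, hypothetical illustration of the CRI call that
    // "Need to start a new one" ultimately triggers. Not the kubelet's code path.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // CRI-O's default socket path is an assumption here.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // Attempt 0 corresponds to the brand-new sandbox case logged above;
        // a real request also carries log paths, DNS config, and Linux options.
        resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "openshift-kube-scheduler-crc",
                    Namespace: "openshift-kube-scheduler",
                    Uid:       "3dcd261975c3d6b9a6ad6367fd4facd3", // UID taken from the mounts below
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("sandbox ID:", resp.PodSandboxId)
    }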
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.154309 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.155225 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.155249 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.155280 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.155689 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.155804 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.155882 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.156054 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.156247 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.156285 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.156774 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.156807 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.156821 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.157070 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.157362 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.157374 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.157735 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.158258 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.158292 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.159001 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.159088 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.159129 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.159299 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.159332 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.159344 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.159556 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.159591 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.160491 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.160584 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.160597 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:18 crc kubenswrapper[4895]: E1206 06:57:18.177770 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" interval="400ms" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.206037 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.206096 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.206127 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.206159 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.206187 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.206211 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.206228 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.206234 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.206335 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.206357 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.206379 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.206402 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.206423 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.206445 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.206515 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.206541 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.207792 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.207834 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.207845 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.207872 4895 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 06:57:18 crc kubenswrapper[4895]: E1206 06:57:18.208538 4895 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.132:6443: connect: connection refused" node="crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.308397 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.308462 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.308522 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.308545 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.308623 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.308685 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.308706 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.308703 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.308788 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.308777 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.308850 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.308762 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.308783 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.308871 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.308759 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.308728 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.309020 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.309041 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.309066 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.309120 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.309173 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.309185 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.309219 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.309249 4895 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.309245 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.309275 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.309293 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.309300 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.309377 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.309444 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.409065 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.410601 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.410647 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.410660 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.410684 4895 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 06:57:18 crc kubenswrapper[4895]: E1206 06:57:18.411220 4895 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.132:6443: connect: connection refused" node="crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.501412 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.517209 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: W1206 06:57:18.524467 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-17beed8aa75f6f5079f9fce2ef7865ca8ec92c0386bfa81fd63182af11d57a4f WatchSource:0}: Error finding container 17beed8aa75f6f5079f9fce2ef7865ca8ec92c0386bfa81fd63182af11d57a4f: Status 404 returned error can't find the container with id 17beed8aa75f6f5079f9fce2ef7865ca8ec92c0386bfa81fd63182af11d57a4f Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.527743 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: W1206 06:57:18.535556 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-67970ff73ff75f19f4983a3a6fd757bb44d4cf3c0833ff24996da06fba58550f WatchSource:0}: Error finding container 67970ff73ff75f19f4983a3a6fd757bb44d4cf3c0833ff24996da06fba58550f: Status 404 returned error can't find the container with id 67970ff73ff75f19f4983a3a6fd757bb44d4cf3c0833ff24996da06fba58550f Dec 06 06:57:18 crc kubenswrapper[4895]: W1206 06:57:18.540266 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-46d09c880cff91c7f73a06e688415664a1404f9cba5f539d874662ac0e9edaa9 WatchSource:0}: Error finding container 46d09c880cff91c7f73a06e688415664a1404f9cba5f539d874662ac0e9edaa9: Status 404 returned error can't find the container with id 46d09c880cff91c7f73a06e688415664a1404f9cba5f539d874662ac0e9edaa9 Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.546193 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.550449 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:57:18 crc kubenswrapper[4895]: W1206 06:57:18.566907 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-40fa0b0880b6bcc1714db48dc7411c93d781ae72bed18353207677f4408acf6b WatchSource:0}: Error finding container 40fa0b0880b6bcc1714db48dc7411c93d781ae72bed18353207677f4408acf6b: Status 404 returned error can't find the container with id 40fa0b0880b6bcc1714db48dc7411c93d781ae72bed18353207677f4408acf6b Dec 06 06:57:18 crc kubenswrapper[4895]: W1206 06:57:18.572342 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-f99a5fcee9b5cdbbfa0d227163f5eefac928e0060109616356cc9166cb119df4 WatchSource:0}: Error finding container f99a5fcee9b5cdbbfa0d227163f5eefac928e0060109616356cc9166cb119df4: Status 404 returned error can't find the container with id f99a5fcee9b5cdbbfa0d227163f5eefac928e0060109616356cc9166cb119df4 Dec 06 06:57:18 crc kubenswrapper[4895]: E1206 06:57:18.578893 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" interval="800ms" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.811437 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.815765 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.815823 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.815836 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.815867 4895 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 06:57:18 crc kubenswrapper[4895]: E1206 06:57:18.816227 4895 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.132:6443: connect: connection refused" node="crc" Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.966995 4895 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.132:6443: connect: connection refused Dec 06 06:57:18 crc kubenswrapper[4895]: I1206 06:57:18.971033 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:29:34.46377601 +0000 UTC Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.055850 4895 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="094405f9ea74cbd78cdd8c0c9fbd3297ab1f79a02436543f47114cf9cea5b639" exitCode=0 Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.055942 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"094405f9ea74cbd78cdd8c0c9fbd3297ab1f79a02436543f47114cf9cea5b639"} Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.056218 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"67970ff73ff75f19f4983a3a6fd757bb44d4cf3c0833ff24996da06fba58550f"} Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.056312 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.057826 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.057894 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.057906 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.059089 4895 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867" exitCode=0 Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.059158 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867"} Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.059207 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"17beed8aa75f6f5079f9fce2ef7865ca8ec92c0386bfa81fd63182af11d57a4f"} Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.059649 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.060648 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.060673 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.060680 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79"} Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.060702 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f99a5fcee9b5cdbbfa0d227163f5eefac928e0060109616356cc9166cb119df4"} Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.060686 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.062646 4895 
generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2" exitCode=0 Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.062713 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2"} Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.062735 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"40fa0b0880b6bcc1714db48dc7411c93d781ae72bed18353207677f4408acf6b"} Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.062841 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.064151 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.064188 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.064200 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.065701 4895 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="325d9a4da8dfed1c9196761de852b033e5e30e08b9364f8e91230dacd9e38bd6" exitCode=0 Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.065755 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"325d9a4da8dfed1c9196761de852b033e5e30e08b9364f8e91230dacd9e38bd6"} Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.065797 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"46d09c880cff91c7f73a06e688415664a1404f9cba5f539d874662ac0e9edaa9"} Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.065924 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.066954 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.067001 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.067012 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.068372 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.069365 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.069385 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:19 crc 
kubenswrapper[4895]: I1206 06:57:19.069395 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:19 crc kubenswrapper[4895]: W1206 06:57:19.078630 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.132:6443: connect: connection refused Dec 06 06:57:19 crc kubenswrapper[4895]: E1206 06:57:19.078912 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.132:6443: connect: connection refused" logger="UnhandledError" Dec 06 06:57:19 crc kubenswrapper[4895]: W1206 06:57:19.298649 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.132:6443: connect: connection refused Dec 06 06:57:19 crc kubenswrapper[4895]: E1206 06:57:19.298758 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.132:6443: connect: connection refused" logger="UnhandledError" Dec 06 06:57:19 crc kubenswrapper[4895]: E1206 06:57:19.380001 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" interval="1.6s" Dec 06 06:57:19 crc kubenswrapper[4895]: W1206 06:57:19.409177 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.132:6443: connect: connection refused Dec 06 06:57:19 crc kubenswrapper[4895]: E1206 06:57:19.409342 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.132:6443: connect: connection refused" logger="UnhandledError" Dec 06 06:57:19 crc kubenswrapper[4895]: W1206 06:57:19.538799 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.132:6443: connect: connection refused Dec 06 06:57:19 crc kubenswrapper[4895]: E1206 06:57:19.538899 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.132:6443: connect: connection refused" logger="UnhandledError" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.616390 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.619051 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.619100 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.619113 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.619147 4895 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 06:57:19 crc kubenswrapper[4895]: E1206 06:57:19.619698 4895 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.132:6443: connect: connection refused" node="crc" Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.967251 4895 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.132:6443: connect: connection refused Dec 06 06:57:19 crc kubenswrapper[4895]: I1206 06:57:19.971340 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 13:49:39.120538632 +0000 UTC Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.006161 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 06 06:57:20 crc kubenswrapper[4895]: E1206 06:57:20.007435 4895 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.132:6443: connect: connection refused" logger="UnhandledError" Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.112192 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e162f2424f5aace98562e01fdfaf5814324165467190d076a3bc9d4edcbdbfbc"} Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.112464 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.114131 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.114186 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.114204 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.117387 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"39aac663c8c8f0cac451e9bbbca0f7fff810268e4e7981c70b23fbdd96f7ebb6"} Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.117451 4895 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"efbbcf0eea4b447617e23045452a9c0a6181844c165be87b788880690806dd02"} Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.117500 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"61a8968ba8ec65c4a7bedb447f5292df7a5ab45942b85fa1822e4e65ec52ec8e"} Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.117634 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.120700 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.120742 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.120754 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.126890 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f"} Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.126945 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92"} Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.126956 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167"} Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.127076 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.127867 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.127900 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.127910 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.133738 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954"} Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.135017 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4"} Dec 06 
06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.135037 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f"} Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.135069 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999"} Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.138912 4895 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="29aab4bcd9edb89eaa1f8948b9eefe9dda058f8df8a390dfd810483ec238df88" exitCode=0 Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.138989 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"29aab4bcd9edb89eaa1f8948b9eefe9dda058f8df8a390dfd810483ec238df88"} Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.139154 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.140094 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.140134 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.140143 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.292659 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.971596 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 01:55:38.704276138 +0000 UTC Dec 06 06:57:20 crc kubenswrapper[4895]: I1206 06:57:20.971707 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 426h58m17.732572213s for next certificate rotation Dec 06 06:57:21 crc kubenswrapper[4895]: I1206 06:57:21.147405 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479"} Dec 06 06:57:21 crc kubenswrapper[4895]: I1206 06:57:21.147632 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:21 crc kubenswrapper[4895]: I1206 06:57:21.148913 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:21 crc kubenswrapper[4895]: I1206 06:57:21.148958 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:21 crc kubenswrapper[4895]: I1206 06:57:21.148972 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:21 crc kubenswrapper[4895]: I1206 06:57:21.150892 4895 
generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c25e995dd05f11f6532ccefe3caf73171fe986daf46dfa022f96b645f71e5037" exitCode=0 Dec 06 06:57:21 crc kubenswrapper[4895]: I1206 06:57:21.151006 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:21 crc kubenswrapper[4895]: I1206 06:57:21.150992 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c25e995dd05f11f6532ccefe3caf73171fe986daf46dfa022f96b645f71e5037"} Dec 06 06:57:21 crc kubenswrapper[4895]: I1206 06:57:21.151221 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:21 crc kubenswrapper[4895]: I1206 06:57:21.151902 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:21 crc kubenswrapper[4895]: I1206 06:57:21.151934 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:21 crc kubenswrapper[4895]: I1206 06:57:21.151944 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:21 crc kubenswrapper[4895]: I1206 06:57:21.152094 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:21 crc kubenswrapper[4895]: I1206 06:57:21.152124 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:21 crc kubenswrapper[4895]: I1206 06:57:21.152135 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:21 crc kubenswrapper[4895]: I1206 06:57:21.220383 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:21 crc kubenswrapper[4895]: I1206 06:57:21.222155 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:21 crc kubenswrapper[4895]: I1206 06:57:21.222201 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:21 crc kubenswrapper[4895]: I1206 06:57:21.222213 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:21 crc kubenswrapper[4895]: I1206 06:57:21.222247 4895 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 06:57:22 crc kubenswrapper[4895]: I1206 06:57:22.156910 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4c1ed547ff9f85c0e6a122f893e6336b9063767c3505abbf6c71908abb004882"} Dec 06 06:57:22 crc kubenswrapper[4895]: I1206 06:57:22.156983 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e216bfcdfb9b399d962c38823a40d4241971f091acb202760cc988884b0f9b80"} Dec 06 06:57:22 crc kubenswrapper[4895]: I1206 06:57:22.157003 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0d232e0a4d8cb2ec0eb59e404552737943a7db9341f21e9189673eab3ab98847"} Dec 06 
06:57:22 crc kubenswrapper[4895]: I1206 06:57:22.157017 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"872c7ea63fb864424ff0ec1d2e2094117808dfebbfdbab862bb16f595b14446d"} Dec 06 06:57:22 crc kubenswrapper[4895]: I1206 06:57:22.157061 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:22 crc kubenswrapper[4895]: I1206 06:57:22.157084 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:57:22 crc kubenswrapper[4895]: I1206 06:57:22.157159 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:22 crc kubenswrapper[4895]: I1206 06:57:22.158270 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:22 crc kubenswrapper[4895]: I1206 06:57:22.158308 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:22 crc kubenswrapper[4895]: I1206 06:57:22.158335 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:22 crc kubenswrapper[4895]: I1206 06:57:22.159110 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:22 crc kubenswrapper[4895]: I1206 06:57:22.159147 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:22 crc kubenswrapper[4895]: I1206 06:57:22.159158 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:23 crc kubenswrapper[4895]: I1206 06:57:23.164774 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:57:23 crc kubenswrapper[4895]: I1206 06:57:23.165190 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:23 crc kubenswrapper[4895]: I1206 06:57:23.165227 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:23 crc kubenswrapper[4895]: I1206 06:57:23.165659 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"10cd7dd956ad8c4e4335c1ff7b3bc563fcb1533d4068f33cd9e276820a900e89"} Dec 06 06:57:23 crc kubenswrapper[4895]: I1206 06:57:23.166432 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:23 crc kubenswrapper[4895]: I1206 06:57:23.166499 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:23 crc kubenswrapper[4895]: I1206 06:57:23.166511 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:23 crc kubenswrapper[4895]: I1206 06:57:23.166512 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:23 crc kubenswrapper[4895]: I1206 06:57:23.166547 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:23 crc kubenswrapper[4895]: I1206 
06:57:23.166563 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:23 crc kubenswrapper[4895]: I1206 06:57:23.220716 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:57:23 crc kubenswrapper[4895]: I1206 06:57:23.293521 4895 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 06:57:23 crc kubenswrapper[4895]: I1206 06:57:23.293615 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 06 06:57:24 crc kubenswrapper[4895]: I1206 06:57:24.168030 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:24 crc kubenswrapper[4895]: I1206 06:57:24.168119 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:24 crc kubenswrapper[4895]: I1206 06:57:24.169370 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:24 crc kubenswrapper[4895]: I1206 06:57:24.169418 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:24 crc kubenswrapper[4895]: I1206 06:57:24.169431 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:24 crc kubenswrapper[4895]: I1206 06:57:24.169965 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:24 crc kubenswrapper[4895]: I1206 06:57:24.170034 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:24 crc kubenswrapper[4895]: I1206 06:57:24.170053 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:24 crc kubenswrapper[4895]: I1206 06:57:24.396115 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 06 06:57:24 crc kubenswrapper[4895]: I1206 06:57:24.897049 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:57:24 crc kubenswrapper[4895]: I1206 06:57:24.897377 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:24 crc kubenswrapper[4895]: I1206 06:57:24.898794 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:24 crc kubenswrapper[4895]: I1206 06:57:24.898841 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:24 crc kubenswrapper[4895]: I1206 06:57:24.898852 4895 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 06 06:57:25 crc kubenswrapper[4895]: I1206 06:57:25.170030 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:25 crc kubenswrapper[4895]: I1206 06:57:25.171409 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:25 crc kubenswrapper[4895]: I1206 06:57:25.171466 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:25 crc kubenswrapper[4895]: I1206 06:57:25.171500 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:27 crc kubenswrapper[4895]: I1206 06:57:27.560880 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 06 06:57:27 crc kubenswrapper[4895]: I1206 06:57:27.561201 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:27 crc kubenswrapper[4895]: I1206 06:57:27.563325 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:27 crc kubenswrapper[4895]: I1206 06:57:27.563397 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:27 crc kubenswrapper[4895]: I1206 06:57:27.563410 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:28 crc kubenswrapper[4895]: E1206 06:57:28.120634 4895 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 06 06:57:28 crc kubenswrapper[4895]: I1206 06:57:28.153832 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:57:28 crc kubenswrapper[4895]: I1206 06:57:28.154055 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:28 crc kubenswrapper[4895]: I1206 06:57:28.155668 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:28 crc kubenswrapper[4895]: I1206 06:57:28.155733 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:28 crc kubenswrapper[4895]: I1206 06:57:28.155748 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:28 crc kubenswrapper[4895]: I1206 06:57:28.889363 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:57:28 crc kubenswrapper[4895]: I1206 06:57:28.889606 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:28 crc kubenswrapper[4895]: I1206 06:57:28.890946 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:28 crc kubenswrapper[4895]: I1206 06:57:28.891004 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:28 crc kubenswrapper[4895]: I1206 06:57:28.891017 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:28 crc 
kubenswrapper[4895]: I1206 06:57:28.895555 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:57:29 crc kubenswrapper[4895]: I1206 06:57:29.182187 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:29 crc kubenswrapper[4895]: I1206 06:57:29.183631 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:29 crc kubenswrapper[4895]: I1206 06:57:29.183674 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:29 crc kubenswrapper[4895]: I1206 06:57:29.183692 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:29 crc kubenswrapper[4895]: I1206 06:57:29.191280 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:57:29 crc kubenswrapper[4895]: I1206 06:57:29.727175 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 06:57:29 crc kubenswrapper[4895]: I1206 06:57:29.727406 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:29 crc kubenswrapper[4895]: I1206 06:57:29.729131 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:29 crc kubenswrapper[4895]: I1206 06:57:29.729261 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:29 crc kubenswrapper[4895]: I1206 06:57:29.729357 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:30 crc kubenswrapper[4895]: I1206 06:57:30.188544 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:30 crc kubenswrapper[4895]: I1206 06:57:30.189642 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:30 crc kubenswrapper[4895]: I1206 06:57:30.189679 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:30 crc kubenswrapper[4895]: I1206 06:57:30.189689 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:30 crc kubenswrapper[4895]: I1206 06:57:30.968436 4895 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 06 06:57:30 crc kubenswrapper[4895]: E1206 06:57:30.980890 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 06 06:57:30 crc kubenswrapper[4895]: I1206 06:57:30.991718 4895 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 06 06:57:30 crc kubenswrapper[4895]: I1206 06:57:30.991834 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 06 06:57:30 crc kubenswrapper[4895]: I1206 06:57:30.997592 4895 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 06 06:57:30 crc kubenswrapper[4895]: I1206 06:57:30.997692 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 06 06:57:33 crc kubenswrapper[4895]: I1206 06:57:33.042121 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 06 06:57:33 crc kubenswrapper[4895]: I1206 06:57:33.042312 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:33 crc kubenswrapper[4895]: I1206 06:57:33.043601 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:33 crc kubenswrapper[4895]: I1206 06:57:33.043640 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:33 crc kubenswrapper[4895]: I1206 06:57:33.043654 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:33 crc kubenswrapper[4895]: I1206 06:57:33.072569 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 06 06:57:33 crc kubenswrapper[4895]: I1206 06:57:33.195794 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:33 crc kubenswrapper[4895]: I1206 06:57:33.197147 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:33 crc kubenswrapper[4895]: I1206 06:57:33.197284 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:33 crc kubenswrapper[4895]: I1206 06:57:33.197446 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:33 crc kubenswrapper[4895]: I1206 06:57:33.207789 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 06 06:57:33 crc kubenswrapper[4895]: I1206 06:57:33.227868 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:57:33 crc kubenswrapper[4895]: I1206 06:57:33.228072 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:33 crc kubenswrapper[4895]: I1206 
06:57:33.229507 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:33 crc kubenswrapper[4895]: I1206 06:57:33.229558 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:33 crc kubenswrapper[4895]: I1206 06:57:33.229568 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:33 crc kubenswrapper[4895]: I1206 06:57:33.257311 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:57:33 crc kubenswrapper[4895]: I1206 06:57:33.293964 4895 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 06:57:33 crc kubenswrapper[4895]: I1206 06:57:33.294062 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 06 06:57:34 crc kubenswrapper[4895]: I1206 06:57:34.198197 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:34 crc kubenswrapper[4895]: I1206 06:57:34.198369 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:34 crc kubenswrapper[4895]: I1206 06:57:34.199211 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:34 crc kubenswrapper[4895]: I1206 06:57:34.199247 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:34 crc kubenswrapper[4895]: I1206 06:57:34.199259 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:34 crc kubenswrapper[4895]: I1206 06:57:34.199907 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:34 crc kubenswrapper[4895]: I1206 06:57:34.199967 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:34 crc kubenswrapper[4895]: I1206 06:57:34.199987 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:35 crc kubenswrapper[4895]: I1206 06:57:35.993105 4895 trace.go:236] Trace[280323459]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Dec-2025 06:57:21.864) (total time: 14128ms): Dec 06 06:57:35 crc kubenswrapper[4895]: Trace[280323459]: ---"Objects listed" error: 14128ms (06:57:35.993) Dec 06 06:57:35 crc kubenswrapper[4895]: Trace[280323459]: [14.128908079s] [14.128908079s] END Dec 06 06:57:35 crc kubenswrapper[4895]: I1206 06:57:35.993135 4895 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 06 06:57:35 crc kubenswrapper[4895]: I1206 06:57:35.994449 4895 
trace.go:236] Trace[1338835546]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Dec-2025 06:57:22.120) (total time: 13873ms): Dec 06 06:57:35 crc kubenswrapper[4895]: Trace[1338835546]: ---"Objects listed" error: 13873ms (06:57:35.994) Dec 06 06:57:35 crc kubenswrapper[4895]: Trace[1338835546]: [13.873940748s] [13.873940748s] END Dec 06 06:57:35 crc kubenswrapper[4895]: I1206 06:57:35.994511 4895 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 06 06:57:35 crc kubenswrapper[4895]: I1206 06:57:35.995063 4895 trace.go:236] Trace[680406133]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Dec-2025 06:57:22.037) (total time: 13957ms): Dec 06 06:57:35 crc kubenswrapper[4895]: Trace[680406133]: ---"Objects listed" error: 13957ms (06:57:35.994) Dec 06 06:57:35 crc kubenswrapper[4895]: Trace[680406133]: [13.957320275s] [13.957320275s] END Dec 06 06:57:35 crc kubenswrapper[4895]: I1206 06:57:35.995134 4895 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 06 06:57:35 crc kubenswrapper[4895]: I1206 06:57:35.995889 4895 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 06 06:57:35 crc kubenswrapper[4895]: E1206 06:57:35.996035 4895 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.004627 4895 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.022567 4895 trace.go:236] Trace[220001234]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Dec-2025 06:57:21.738) (total time: 14283ms): Dec 06 06:57:36 crc kubenswrapper[4895]: Trace[220001234]: ---"Objects listed" error: 14283ms (06:57:36.022) Dec 06 06:57:36 crc kubenswrapper[4895]: Trace[220001234]: [14.283984494s] [14.283984494s] END Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.022614 4895 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.443028 4895 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42488->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.443110 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42488->192.168.126.11:17697: read: connection reset by peer" Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.443563 4895 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.443646 4895 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.599998 4895 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.600075 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.972064 4895 apiserver.go:52] "Watching apiserver" Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.975181 4895 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.975462 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.975895 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.976029 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.976115 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:57:36 crc kubenswrapper[4895]: E1206 06:57:36.976148 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.976180 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 06:57:36 crc kubenswrapper[4895]: E1206 06:57:36.976394 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.976871 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.976901 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 06:57:36 crc kubenswrapper[4895]: E1206 06:57:36.976918 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.979151 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.979164 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.979815 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.979190 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.979194 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.979229 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.979273 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.979286 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 06 06:57:36 crc kubenswrapper[4895]: I1206 06:57:36.979572 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.007643 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.020128 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.032560 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.043994 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.055855 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.067521 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.071683 4895 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.080068 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102216 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102270 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102294 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102317 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102340 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102366 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102389 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102415 4895 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102453 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102499 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102522 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102557 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102586 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102612 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102629 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102646 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102664 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102687 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102714 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102739 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102657 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102784 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102807 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102686 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102820 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102759 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102875 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102998 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103026 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103085 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103188 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103602 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103603 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103653 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103688 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103704 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103763 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.102825 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103801 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103820 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103837 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103857 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103873 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 
06:57:37.103889 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103889 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103906 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103925 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103943 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103963 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103980 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103997 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104014 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104029 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104046 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104064 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104081 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104099 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104119 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104138 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104157 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104175 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104199 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104217 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" 
(UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104234 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104254 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104274 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104297 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104315 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104332 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104350 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104369 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104387 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104403 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104421 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104494 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104515 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104534 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104550 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104568 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104586 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104603 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104701 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104718 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104737 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104754 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104769 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104787 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104804 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104822 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104841 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104860 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104877 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104895 4895 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104911 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104926 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104942 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104958 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104975 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104990 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.103920 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104072 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104072 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104077 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104125 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104228 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104250 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104269 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104341 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104355 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104413 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104558 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104604 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104613 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105192 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104703 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104824 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104834 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104904 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.104996 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105128 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105264 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105382 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105505 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105520 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105008 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105559 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105577 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105595 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105611 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105626 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105645 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105662 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105685 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105701 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105718 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105735 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105751 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105765 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105805 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105824 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105843 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105859 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105878 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105894 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105910 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105942 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105958 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105973 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105989 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106005 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106024 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106039 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106060 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106074 4895 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106090 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106106 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106122 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106137 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106153 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106170 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106185 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106200 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106216 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106235 4895 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106255 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106322 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106344 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106362 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106380 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106395 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106411 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106427 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106443 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " 
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106458 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106662 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107298 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107494 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107525 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107580 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107599 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107679 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107700 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107716 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107738 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107764 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107783 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107802 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107821 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107840 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107859 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107875 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107890 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107906 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107925 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107943 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107960 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107978 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107999 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108017 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108036 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108053 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108070 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108089 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108107 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108124 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108142 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108159 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108176 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108193 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108212 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108233 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108252 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108275 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108293 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108312 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108330 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108349 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108367 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108391 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108409 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108457 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108499 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108520 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108536 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108554 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108570 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108589 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108606 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108624 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109006 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109026 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109043 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109061 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109079 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109097 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109116 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109166 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109189 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109217 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109237 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109254 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109279 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109300 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109319 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109337 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109356 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109377 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109399 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109417 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109435 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109518 4895 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109539 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109551 4895 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109563 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109573 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109582 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109592 4895 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109602 4895 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109611 4895 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109621 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109631 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109641 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109651 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109660 4895 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109670 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109680 4895 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109690 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109701 4895 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109711 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109721 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109731 4895 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109743 4895 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109756 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109768 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109780 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109793 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName:
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109807 4895 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109819 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109838 4895 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109850 4895 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109861 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109871 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109880 4895 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109889 4895 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109897 4895 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109909 4895 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109918 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109927 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109937 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109945 4895 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109955 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.113210 4895 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105556 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.105647 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106034 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106260 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106489 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106745 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.106918 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.107982 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108172 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108327 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108399 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.108644 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109321 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109452 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109525 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109608 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109682 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.109688 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.110122 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.110252 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.110435 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.110496 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.110526 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.110668 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.110751 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.110889 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.110545 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.111040 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.111193 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.111544 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.111561 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.111858 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.111982 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.112048 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.112057 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.112208 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.112269 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.112352 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.112674 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.112744 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.112865 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.112870 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.112987 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.113202 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.113226 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.113300 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.113517 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.113540 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.113560 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.113722 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.113735 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.116072 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.116555 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.117491 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.117581 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:57:37.617522882 +0000 UTC m=+20.018911752 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.117602 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.117599 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.117419 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.118052 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.118326 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.116674 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.118652 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.118878 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.118971 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.118992 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.119006 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.119092 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.119258 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.119650 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:57:37.619607019 +0000 UTC m=+20.020995889 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.119147 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.120382 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.121801 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.120610 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.120644 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.120808 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.121015 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.121025 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.121465 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.121550 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.121580 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.122040 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.122577 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.123343 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.131785 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.131910 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.132057 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.132763 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.133094 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.133136 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.133171 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.133254 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.133345 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.122580 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.134354 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.134705 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.134872 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.134913 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.135243 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.135442 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.136107 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.136585 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.136658 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.136757 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.137132 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.137686 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.137824 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.137963 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.138140 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.133319 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.138363 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.138635 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.116592 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.138790 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.138817 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.139140 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.139522 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.139600 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.142948 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.143801 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.144105 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.144529 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.144602 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.144657 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.144934 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.144620 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.145304 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.145350 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.145448 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:57:37.64542344 +0000 UTC m=+20.046812310 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.145668 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.145756 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.146245 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.146401 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.146852 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.147132 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.147146 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.147272 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.147337 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.147369 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.147505 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.147728 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.147783 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.147915 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.148134 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.148245 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.148431 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.148441 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.148600 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.149206 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.149450 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.149852 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.150066 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.150072 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.139405 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.150124 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.150160 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.151410 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.159309 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.159779 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.159828 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.159848 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.159938 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 06:57:37.659909683 +0000 UTC m=+20.061298633 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.160810 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.163122 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.177905 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.182695 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.182944 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.183025 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.183156 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 06:57:37.683127364 +0000 UTC m=+20.084516224 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.187882 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.191440 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.206809 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.209106 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210332 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210395 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210434 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210505 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210519 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210529 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210540 4895 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210549 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210559 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210568 4895 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210579 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210589 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210598 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210608 4895 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210617 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210625 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210635 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210646 4895 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210654 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210663 4895 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210673 4895 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210681 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210692 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210703 4895 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210713 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210722 4895 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210730 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210738 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210747 4895 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210755 4895 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210764 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210773 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210781 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210791 4895 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210800 4895 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210808 4895 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210816 4895 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210825 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210833 4895 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210841 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210849 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210857 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210866 4895 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210874 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210882 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210890 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210899 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210892 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210908 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210948 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210957 4895 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.210999 4895 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211014 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211022 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211030 4895 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211039 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211048 4895 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211056 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211065 4895 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211073 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211082 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211091 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211100 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211086 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479" exitCode=255 Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211109 4895 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211137 4895 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211151 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211144 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479"} Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211165 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211178 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211191 4895 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211203 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211215 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211227 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211239 4895 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211251 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211262 4895 reconciler_common.go:293] "Volume detached for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211274 4895 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211286 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211298 4895 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211310 4895 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211322 4895 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211332 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211344 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211355 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211367 4895 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211378 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211389 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211403 4895 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211414 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211426 4895 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211439 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211450 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211461 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211494 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211507 4895 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211519 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211531 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211542 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211553 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211566 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211578 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211589 4895 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211602 4895 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211615 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211626 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211638 4895 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211650 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211662 4895 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211676 4895 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211686 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211698 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211710 4895 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211721 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211731 4895 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211742 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211753 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211765 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211776 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211788 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211800 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211812 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211823 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211837 4895 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211849 4895 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211860 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211870 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211882 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211893 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211904 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211917 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211929 4895 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211942 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211955 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211970 4895 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211982 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.211994 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212007 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212019 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212030 4895 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212041 4895 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212052 4895 reconciler_common.go:293] "Volume detached for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212064 4895 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212074 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212085 4895 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212097 4895 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212109 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212121 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212133 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212144 4895 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212155 4895 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212165 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212195 4895 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212207 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212219 4895 reconciler_common.go:293] 
"Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212230 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212241 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.212252 4895 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.225293 4895 scope.go:117] "RemoveContainer" containerID="672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.229190 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.230754 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.238787 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.239760 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.256058 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.290610 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.294830 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.300654 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.308653 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.318384 4895 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.318417 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.337794 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.426870 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.450913 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.494684 4895 csr.go:261] certificate signing request csr-kb5j9 is approved, waiting to be issued Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.589534 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-vtdvq"] Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.589964 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vtdvq" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.602241 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.602955 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.603090 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.604536 4895 csr.go:257] certificate signing request csr-kb5j9 is issued Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.615936 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.624960 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.625010 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.625118 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.625165 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:57:38.625150763 +0000 UTC m=+21.026539633 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.625427 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.625497 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-06 06:57:38.625465271 +0000 UTC m=+21.026854141 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.636532 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.665313 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.705299 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-hdgqw"] Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.705715 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-hdgqw" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.709592 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.710223 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.710346 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.711198 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.715771 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.725599 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.725688 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.725748 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:57:38.725708194 +0000 UTC m=+21.127097064 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.725808 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.725858 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.725887 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.725901 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.725949 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.725962 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 06:57:38.725940361 +0000 UTC m=+21.127329401 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.725964 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.725977 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:37 crc kubenswrapper[4895]: E1206 06:57:37.726007 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 06:57:38.725996793 +0000 UTC m=+21.127385663 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.725866 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2d2795a3-ed84-4f0b-828b-251d2e503864-hosts-file\") pod \"node-resolver-vtdvq\" (UID: \"2d2795a3-ed84-4f0b-828b-251d2e503864\") " pod="openshift-dns/node-resolver-vtdvq" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.726041 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4gn9\" (UniqueName: \"kubernetes.io/projected/2d2795a3-ed84-4f0b-828b-251d2e503864-kube-api-access-m4gn9\") pod \"node-resolver-vtdvq\" (UID: \"2d2795a3-ed84-4f0b-828b-251d2e503864\") " pod="openshift-dns/node-resolver-vtdvq" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.749626 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.771389 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.795188 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.819321 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.827301 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2d2795a3-ed84-4f0b-828b-251d2e503864-hosts-file\") pod \"node-resolver-vtdvq\" (UID: \"2d2795a3-ed84-4f0b-828b-251d2e503864\") " pod="openshift-dns/node-resolver-vtdvq" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.827342 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4gn9\" (UniqueName: \"kubernetes.io/projected/2d2795a3-ed84-4f0b-828b-251d2e503864-kube-api-access-m4gn9\") pod \"node-resolver-vtdvq\" (UID: \"2d2795a3-ed84-4f0b-828b-251d2e503864\") " pod="openshift-dns/node-resolver-vtdvq" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.827387 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/138ef400-4714-4742-ae94-ea6b8afd73d1-host\") pod \"node-ca-hdgqw\" (UID: \"138ef400-4714-4742-ae94-ea6b8afd73d1\") " pod="openshift-image-registry/node-ca-hdgqw" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.827403 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82wz4\" (UniqueName: \"kubernetes.io/projected/138ef400-4714-4742-ae94-ea6b8afd73d1-kube-api-access-82wz4\") pod \"node-ca-hdgqw\" (UID: \"138ef400-4714-4742-ae94-ea6b8afd73d1\") " pod="openshift-image-registry/node-ca-hdgqw" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.827422 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/138ef400-4714-4742-ae94-ea6b8afd73d1-serviceca\") pod \"node-ca-hdgqw\" (UID: \"138ef400-4714-4742-ae94-ea6b8afd73d1\") " pod="openshift-image-registry/node-ca-hdgqw" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.827533 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2d2795a3-ed84-4f0b-828b-251d2e503864-hosts-file\") pod \"node-resolver-vtdvq\" (UID: \"2d2795a3-ed84-4f0b-828b-251d2e503864\") " pod="openshift-dns/node-resolver-vtdvq" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.846187 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4gn9\" (UniqueName: \"kubernetes.io/projected/2d2795a3-ed84-4f0b-828b-251d2e503864-kube-api-access-m4gn9\") pod \"node-resolver-vtdvq\" (UID: 
\"2d2795a3-ed84-4f0b-828b-251d2e503864\") " pod="openshift-dns/node-resolver-vtdvq" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.861861 4895 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 06 06:57:37 crc kubenswrapper[4895]: W1206 06:57:37.862859 4895 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Dec 06 06:57:37 crc kubenswrapper[4895]: W1206 06:57:37.862919 4895 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 06 06:57:37 crc kubenswrapper[4895]: W1206 06:57:37.862978 4895 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 06 06:57:37 crc kubenswrapper[4895]: W1206 06:57:37.863009 4895 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 06 06:57:37 crc kubenswrapper[4895]: W1206 06:57:37.863050 4895 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Dec 06 06:57:37 crc kubenswrapper[4895]: W1206 06:57:37.863100 4895 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.863319 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c/status\": read tcp 38.129.56.132:36000->38.129.56.132:6443: use of closed network connection" Dec 06 06:57:37 crc kubenswrapper[4895]: W1206 06:57:37.863558 4895 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 06 06:57:37 crc kubenswrapper[4895]: W1206 06:57:37.863593 4895 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Dec 06 06:57:37 crc kubenswrapper[4895]: W1206 06:57:37.863621 4895 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 06 06:57:37 crc kubenswrapper[4895]: W1206 06:57:37.863661 4895 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 06 06:57:37 crc kubenswrapper[4895]: W1206 06:57:37.864070 4895 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Dec 06 06:57:37 crc kubenswrapper[4895]: W1206 06:57:37.864100 4895 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 06 06:57:37 crc kubenswrapper[4895]: W1206 06:57:37.864121 4895 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: 
object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Dec 06 06:57:37 crc kubenswrapper[4895]: W1206 06:57:37.864147 4895 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Dec 06 06:57:37 crc kubenswrapper[4895]: W1206 06:57:37.864509 4895 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Dec 06 06:57:37 crc kubenswrapper[4895]: W1206 06:57:37.864543 4895 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.902131 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vtdvq" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.916256 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.928446 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/138ef400-4714-4742-ae94-ea6b8afd73d1-host\") pod \"node-ca-hdgqw\" (UID: \"138ef400-4714-4742-ae94-ea6b8afd73d1\") " pod="openshift-image-registry/node-ca-hdgqw" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.928528 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82wz4\" (UniqueName: \"kubernetes.io/projected/138ef400-4714-4742-ae94-ea6b8afd73d1-kube-api-access-82wz4\") pod \"node-ca-hdgqw\" (UID: \"138ef400-4714-4742-ae94-ea6b8afd73d1\") " pod="openshift-image-registry/node-ca-hdgqw" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.928553 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/138ef400-4714-4742-ae94-ea6b8afd73d1-serviceca\") pod \"node-ca-hdgqw\" (UID: \"138ef400-4714-4742-ae94-ea6b8afd73d1\") " pod="openshift-image-registry/node-ca-hdgqw" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.929583 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/138ef400-4714-4742-ae94-ea6b8afd73d1-serviceca\") pod \"node-ca-hdgqw\" (UID: \"138ef400-4714-4742-ae94-ea6b8afd73d1\") " pod="openshift-image-registry/node-ca-hdgqw" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.929648 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/138ef400-4714-4742-ae94-ea6b8afd73d1-host\") pod \"node-ca-hdgqw\" (UID: \"138ef400-4714-4742-ae94-ea6b8afd73d1\") " pod="openshift-image-registry/node-ca-hdgqw" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.957959 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.975661 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82wz4\" (UniqueName: \"kubernetes.io/projected/138ef400-4714-4742-ae94-ea6b8afd73d1-kube-api-access-82wz4\") pod \"node-ca-hdgqw\" (UID: \"138ef400-4714-4742-ae94-ea6b8afd73d1\") " pod="openshift-image-registry/node-ca-hdgqw" Dec 06 06:57:37 crc kubenswrapper[4895]: I1206 06:57:37.993833 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.026321 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-hdgqw" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.042717 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.052605 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:38 crc kubenswrapper[4895]: E1206 06:57:38.052762 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.055095 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.056403 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.057234 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.058381 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.059035 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.060001 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.060768 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.061431 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.062529 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.063109 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.064674 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.067041 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.067894 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.068452 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.069510 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.070018 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.071207 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.071679 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.072329 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.078145 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.079334 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.079890 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.080979 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.081426 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: W1206 06:57:38.081778 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod138ef400_4714_4742_ae94_ea6b8afd73d1.slice/crio-0cab13b039a86b4a6ef13b38a860b76790f5130cfe8dcffee19db2251ae193e6 WatchSource:0}: Error finding container 
0cab13b039a86b4a6ef13b38a860b76790f5130cfe8dcffee19db2251ae193e6: Status 404 returned error can't find the container with id 0cab13b039a86b4a6ef13b38a860b76790f5130cfe8dcffee19db2251ae193e6 Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.082641 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.083168 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.083837 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.087410 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.089406 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.090139 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.093542 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.094421 4895 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.094827 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.099341 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.100369 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.100882 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.103262 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.106977 
4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.107600 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.109901 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.109944 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.110991 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.111469 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.112124 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.112841 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.115047 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.115702 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.116884 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.117906 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.119040 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 06 06:57:38 crc 
kubenswrapper[4895]: I1206 06:57:38.119894 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.121713 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.124581 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.125978 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.127146 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.127956 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.129160 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.144126 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.163123 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.177853 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.197980 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.214252 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.237371 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.243313 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.262685 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf"} Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.263463 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.269013 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hdgqw" event={"ID":"138ef400-4714-4742-ae94-ea6b8afd73d1","Type":"ContainerStarted","Data":"0cab13b039a86b4a6ef13b38a860b76790f5130cfe8dcffee19db2251ae193e6"} Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.269060 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.271340 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"661756c4bb4bd8e5491f58bc9bd65aa75485f74c7f4c2725336304f996a14143"} Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.283918 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.284577 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-k86k4"] Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.285123 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-6k7r2"] Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.285370 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-lgpv5"] Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.286223 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.287425 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.288074 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b"} Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.288121 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"09ea9aab952006d503d175cd3ff90c82dcbfc5952a030295593d13b303785fb7"} Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.288549 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.291810 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.291836 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.292216 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.292365 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.292424 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.292485 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.292372 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.292751 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.292905 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.292936 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.292974 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.293101 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.295873 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vtdvq" event={"ID":"2d2795a3-ed84-4f0b-828b-251d2e503864","Type":"ContainerStarted","Data":"d876d8f34c4c4c5abd2ffb9fdde339d2455ebdf55b28c5388a98a887ce84b6bc"} Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.299528 4895 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.302512 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054"} Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.302562 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11"} Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.302572 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"690127a4e603edb04baf036b55c08db601cf337400d39b332a9560649c31ba34"} Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.312904 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.324594 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.332698 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-hostroot\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.332738 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-multus-conf-dir\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.332772 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f272\" (UniqueName: \"kubernetes.io/projected/9200f6d1-bc88-4065-9985-8c6a6387404f-kube-api-access-2f272\") pod \"machine-config-daemon-6k7r2\" (UID: \"9200f6d1-bc88-4065-9985-8c6a6387404f\") " pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 
06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.332816 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zjtw\" (UniqueName: \"kubernetes.io/projected/e1f42fc6-54ce-4f49-adbd-545e02a1f322-kube-api-access-5zjtw\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.332849 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bss5t\" (UniqueName: \"kubernetes.io/projected/9b85a3f9-a505-4331-a3e2-08a6211defcf-kube-api-access-bss5t\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.332867 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-cnibin\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.332884 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-host-run-k8s-cni-cncf-io\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.332899 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9b85a3f9-a505-4331-a3e2-08a6211defcf-system-cni-dir\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.332922 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-host-var-lib-kubelet\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.332938 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-etc-kubernetes\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.332974 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9200f6d1-bc88-4065-9985-8c6a6387404f-proxy-tls\") pod \"machine-config-daemon-6k7r2\" (UID: \"9200f6d1-bc88-4065-9985-8c6a6387404f\") " pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.333003 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-multus-socket-dir-parent\") pod \"multus-k86k4\" (UID: 
\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.333038 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-os-release\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.333059 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-host-var-lib-cni-bin\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.333077 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9200f6d1-bc88-4065-9985-8c6a6387404f-mcd-auth-proxy-config\") pod \"machine-config-daemon-6k7r2\" (UID: \"9200f6d1-bc88-4065-9985-8c6a6387404f\") " pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.333115 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e1f42fc6-54ce-4f49-adbd-545e02a1f322-multus-daemon-config\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.333140 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-host-run-multus-certs\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.333164 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9b85a3f9-a505-4331-a3e2-08a6211defcf-cni-binary-copy\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.333192 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9b85a3f9-a505-4331-a3e2-08a6211defcf-cnibin\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.333211 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-host-run-netns\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.333226 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-system-cni-dir\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.333243 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9200f6d1-bc88-4065-9985-8c6a6387404f-rootfs\") pod \"machine-config-daemon-6k7r2\" (UID: \"9200f6d1-bc88-4065-9985-8c6a6387404f\") " pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.333260 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9b85a3f9-a505-4331-a3e2-08a6211defcf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.333278 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9b85a3f9-a505-4331-a3e2-08a6211defcf-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.333295 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-host-var-lib-cni-multus\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.333310 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9b85a3f9-a505-4331-a3e2-08a6211defcf-os-release\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.333367 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-multus-cni-dir\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.333398 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e1f42fc6-54ce-4f49-adbd-545e02a1f322-cni-binary-copy\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.342143 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.360872 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.377229 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.391120 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.407425 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.428675 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.434037 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e1f42fc6-54ce-4f49-adbd-545e02a1f322-multus-daemon-config\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.434397 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-host-run-multus-certs\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.434610 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-host-run-multus-certs\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.434631 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9b85a3f9-a505-4331-a3e2-08a6211defcf-cni-binary-copy\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.434763 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9b85a3f9-a505-4331-a3e2-08a6211defcf-cnibin\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.434813 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-host-run-netns\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.434836 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9b85a3f9-a505-4331-a3e2-08a6211defcf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.434854 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9b85a3f9-a505-4331-a3e2-08a6211defcf-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.434876 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-system-cni-dir\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 
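Every one of the status_manager.go:875 failures above shares a single root cause, visible at the tail of each error: the kubelet's status patch must pass the pod.network-node-identity.openshift.io admission webhook, and the POST to https://127.0.0.1:9743/pod?timeout=10s is refused because the network-node-identity pod that serves that endpoint is itself only just starting (its webhook container reports startedAt 2025-12-06T06:57:38Z a few entries up). A minimal stdlib-Go reachability probe against that same URL, assuming nothing beyond what the log shows, reproduces the dial error:

```go
// webhook_probe.go — reachability check against the admission-webhook URL
// that every failed status patch above reports. Only the URL and the 10s
// timeout are taken from the log; the rest is a plain stdlib sketch, not
// part of OpenShift.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"strings"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 10 * time.Second, // mirrors the ?timeout=10s in the log
		Transport: &http.Transport{
			// The webhook serves a cluster-internal TLS cert; for a pure
			// "is anything listening?" probe we skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Post("https://127.0.0.1:9743/pod?timeout=10s",
		"application/json", strings.NewReader("{}"))
	if err != nil {
		// While the webhook container is still starting, this prints the
		// same failure the kubelet logs:
		//   dial tcp 127.0.0.1:9743: connect: connection refused
		fmt.Println("webhook unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("webhook answered:", resp.Status)
}
```

Once the webhook container has bound port 9743, the same POST gets an HTTP answer instead of a dial error, and the kubelet's periodic status sync should go through on retry.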
crc kubenswrapper[4895]: I1206 06:57:38.434895 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9200f6d1-bc88-4065-9985-8c6a6387404f-rootfs\") pod \"machine-config-daemon-6k7r2\" (UID: \"9200f6d1-bc88-4065-9985-8c6a6387404f\") " pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.434913 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9b85a3f9-a505-4331-a3e2-08a6211defcf-os-release\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.434975 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-host-var-lib-cni-multus\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.435002 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-multus-cni-dir\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.435020 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e1f42fc6-54ce-4f49-adbd-545e02a1f322-cni-binary-copy\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.435040 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-hostroot\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.435056 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-multus-conf-dir\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.435049 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e1f42fc6-54ce-4f49-adbd-545e02a1f322-multus-daemon-config\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.435076 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zjtw\" (UniqueName: \"kubernetes.io/projected/e1f42fc6-54ce-4f49-adbd-545e02a1f322-kube-api-access-5zjtw\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.435169 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f272\" (UniqueName: 
\"kubernetes.io/projected/9200f6d1-bc88-4065-9985-8c6a6387404f-kube-api-access-2f272\") pod \"machine-config-daemon-6k7r2\" (UID: \"9200f6d1-bc88-4065-9985-8c6a6387404f\") " pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.435221 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bss5t\" (UniqueName: \"kubernetes.io/projected/9b85a3f9-a505-4331-a3e2-08a6211defcf-kube-api-access-bss5t\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.435258 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-cnibin\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.435282 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-host-run-k8s-cni-cncf-io\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.435304 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9b85a3f9-a505-4331-a3e2-08a6211defcf-system-cni-dir\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.435344 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-host-var-lib-kubelet\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.435368 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-etc-kubernetes\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.435405 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9200f6d1-bc88-4065-9985-8c6a6387404f-proxy-tls\") pod \"machine-config-daemon-6k7r2\" (UID: \"9200f6d1-bc88-4065-9985-8c6a6387404f\") " pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.435433 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-multus-socket-dir-parent\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.435457 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-host-var-lib-cni-bin\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.435501 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9200f6d1-bc88-4065-9985-8c6a6387404f-mcd-auth-proxy-config\") pod \"machine-config-daemon-6k7r2\" (UID: \"9200f6d1-bc88-4065-9985-8c6a6387404f\") " pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.435540 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-os-release\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.435938 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-os-release\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.436004 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-host-run-netns\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.436012 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9b85a3f9-a505-4331-a3e2-08a6211defcf-cnibin\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.436243 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9b85a3f9-a505-4331-a3e2-08a6211defcf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.436285 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-host-var-lib-cni-multus\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.436408 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-multus-cni-dir\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.436908 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-cnibin\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 
crc kubenswrapper[4895]: I1206 06:57:38.436956 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-host-run-k8s-cni-cncf-io\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.436992 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9b85a3f9-a505-4331-a3e2-08a6211defcf-system-cni-dir\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.437005 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e1f42fc6-54ce-4f49-adbd-545e02a1f322-cni-binary-copy\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.437027 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-host-var-lib-kubelet\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.437047 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-hostroot\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.437065 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-etc-kubernetes\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.437075 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-multus-conf-dir\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.437112 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-host-var-lib-cni-bin\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.437159 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-multus-socket-dir-parent\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.437187 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9200f6d1-bc88-4065-9985-8c6a6387404f-rootfs\") pod 
\"machine-config-daemon-6k7r2\" (UID: \"9200f6d1-bc88-4065-9985-8c6a6387404f\") " pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.437219 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e1f42fc6-54ce-4f49-adbd-545e02a1f322-system-cni-dir\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.437381 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9b85a3f9-a505-4331-a3e2-08a6211defcf-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.437603 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9b85a3f9-a505-4331-a3e2-08a6211defcf-os-release\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.438222 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9200f6d1-bc88-4065-9985-8c6a6387404f-mcd-auth-proxy-config\") pod \"machine-config-daemon-6k7r2\" (UID: \"9200f6d1-bc88-4065-9985-8c6a6387404f\") " pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.438879 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9b85a3f9-a505-4331-a3e2-08a6211defcf-cni-binary-copy\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.445857 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9200f6d1-bc88-4065-9985-8c6a6387404f-proxy-tls\") pod \"machine-config-daemon-6k7r2\" (UID: \"9200f6d1-bc88-4065-9985-8c6a6387404f\") " pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.457206 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.463235 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f272\" (UniqueName: \"kubernetes.io/projected/9200f6d1-bc88-4065-9985-8c6a6387404f-kube-api-access-2f272\") pod \"machine-config-daemon-6k7r2\" (UID: \"9200f6d1-bc88-4065-9985-8c6a6387404f\") " pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.473863 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zjtw\" (UniqueName: \"kubernetes.io/projected/e1f42fc6-54ce-4f49-adbd-545e02a1f322-kube-api-access-5zjtw\") pod \"multus-k86k4\" (UID: \"e1f42fc6-54ce-4f49-adbd-545e02a1f322\") " pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.477119 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bss5t\" (UniqueName: \"kubernetes.io/projected/9b85a3f9-a505-4331-a3e2-08a6211defcf-kube-api-access-bss5t\") pod \"multus-additional-cni-plugins-lgpv5\" (UID: \"9b85a3f9-a505-4331-a3e2-08a6211defcf\") " pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.490649 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.529384 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.579797 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.605574 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-06 06:52:37 +0000 UTC, rotation deadline is 2026-09-13 16:36:05.063194632 +0000 UTC Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.605650 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6753h38m26.457546953s for next certificate rotation Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.605803 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-k86k4" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.622260 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.633306 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.639634 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.644075 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.644118 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:38 crc kubenswrapper[4895]: E1206 06:57:38.644229 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:57:38 crc kubenswrapper[4895]: E1206 06:57:38.644280 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:57:38 crc kubenswrapper[4895]: E1206 06:57:38.644325 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:57:40.64429979 +0000 UTC m=+23.045688840 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:57:38 crc kubenswrapper[4895]: E1206 06:57:38.644360 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:57:40.644348241 +0000 UTC m=+23.045737311 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.678815 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.693800 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.744614 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.744739 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:57:38 crc kubenswrapper[4895]: E1206 06:57:38.744827 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:57:40.744787619 +0000 UTC m=+23.146176489 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:57:38 crc kubenswrapper[4895]: E1206 06:57:38.744906 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:57:38 crc kubenswrapper[4895]: E1206 06:57:38.744925 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:57:38 crc kubenswrapper[4895]: E1206 06:57:38.744939 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.744933 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:57:38 crc kubenswrapper[4895]: E1206 06:57:38.745000 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 06:57:40.744981504 +0000 UTC m=+23.146370574 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:38 crc kubenswrapper[4895]: E1206 06:57:38.745106 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:57:38 crc kubenswrapper[4895]: E1206 06:57:38.745130 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:57:38 crc kubenswrapper[4895]: E1206 06:57:38.745144 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:38 crc kubenswrapper[4895]: E1206 06:57:38.745181 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 06:57:40.7451692 +0000 UTC m=+23.146558270 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.748248 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mhcxk"] Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.749397 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.763956 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.764187 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.764306 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.769203 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.769421 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.769565 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.769783 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.843557 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.845787 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-log-socket\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.845832 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-var-lib-openvswitch\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.845862 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkc5g\" (UniqueName: \"kubernetes.io/projected/c9690808-de36-4960-8286-7079c78c491b-kube-api-access-lkc5g\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.845887 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-run-systemd\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.845911 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-systemd-units\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.845932 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-run-netns\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.846130 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-cni-bin\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.846201 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-etc-openvswitch\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.846263 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-run-ovn\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.846290 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-cni-netd\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.846344 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-node-log\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.846362 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9690808-de36-4960-8286-7079c78c491b-ovn-node-metrics-cert\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.846406 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c9690808-de36-4960-8286-7079c78c491b-ovnkube-script-lib\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.846425 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-run-openvswitch\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.846452 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9690808-de36-4960-8286-7079c78c491b-env-overrides\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.846543 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-kubelet\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.846564 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-run-ovn-kubernetes\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.846614 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.846639 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9690808-de36-4960-8286-7079c78c491b-ovnkube-config\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.846680 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-slash\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.855572 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.873864 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.890121 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.902863 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.921022 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.925804 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.943243 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.947606 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-var-lib-openvswitch\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.947648 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkc5g\" (UniqueName: \"kubernetes.io/projected/c9690808-de36-4960-8286-7079c78c491b-kube-api-access-lkc5g\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.947667 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-run-systemd\") pod \"ovnkube-node-mhcxk\" (UID: 
\"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.947684 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-systemd-units\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.947701 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-run-netns\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.947724 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-cni-bin\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.947748 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-etc-openvswitch\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.947772 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-run-ovn\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.947788 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-cni-netd\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.947804 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-node-log\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.947823 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9690808-de36-4960-8286-7079c78c491b-ovn-node-metrics-cert\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.947845 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c9690808-de36-4960-8286-7079c78c491b-ovnkube-script-lib\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 
crc kubenswrapper[4895]: I1206 06:57:38.947871 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-run-openvswitch\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.947890 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9690808-de36-4960-8286-7079c78c491b-env-overrides\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.947935 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-kubelet\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.947933 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-run-netns\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.947953 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-run-ovn-kubernetes\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.947973 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.947994 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9690808-de36-4960-8286-7079c78c491b-ovnkube-config\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.947999 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-run-systemd\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.948010 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-slash\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.948028 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-systemd-units\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.948033 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-log-socket\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.948060 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-var-lib-openvswitch\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.948106 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-log-socket\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.948159 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-run-openvswitch\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.948571 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-run-ovn\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.948699 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-cni-bin\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.948747 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-etc-openvswitch\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.948823 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.948842 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-node-log\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.948873 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-kubelet\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.948858 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9690808-de36-4960-8286-7079c78c491b-env-overrides\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.948894 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-slash\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.948904 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-run-ovn-kubernetes\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.948842 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-cni-netd\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.948939 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c9690808-de36-4960-8286-7079c78c491b-ovnkube-script-lib\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.949697 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9690808-de36-4960-8286-7079c78c491b-ovnkube-config\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.957290 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9690808-de36-4960-8286-7079c78c491b-ovn-node-metrics-cert\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.969193 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.975994 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkc5g\" (UniqueName: \"kubernetes.io/projected/c9690808-de36-4960-8286-7079c78c491b-kube-api-access-lkc5g\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.990891 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.975994 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkc5g\" (UniqueName: \"kubernetes.io/projected/c9690808-de36-4960-8286-7079c78c491b-kube-api-access-lkc5g\") pod \"ovnkube-node-mhcxk\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk"
Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.990891 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:38Z is after 2025-08-24T17:21:41Z"
Dec 06 06:57:38 crc kubenswrapper[4895]: I1206 06:57:38.992418 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.015734 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.035084 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.049672 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.049695 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:57:39 crc kubenswrapper[4895]: E1206 06:57:39.049828 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:57:39 crc kubenswrapper[4895]: E1206 06:57:39.049951 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.052823 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.052823 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z"
Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.065560 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.069990 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk"
Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.079848 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: W1206 06:57:39.082285 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9690808_de36_4960_8286_7079c78c491b.slice/crio-c5a7c33f5aec2194e7988992c181ac9907aad630ffc997958c4ca923a372d11b WatchSource:0}: Error finding container c5a7c33f5aec2194e7988992c181ac9907aad630ffc997958c4ca923a372d11b: Status 404 returned error can't find the container with id c5a7c33f5aec2194e7988992c181ac9907aad630ffc997958c4ca923a372d11b Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.087042 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.099373 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.113036 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.127918 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.131502 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.194398 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.197916 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.199753 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.199806 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.199822 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.199987 4895 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.211713 4895 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.212142 4895 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.213525 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.213550 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.213558 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.213573 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.213583 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:39Z","lastTransitionTime":"2025-12-06T06:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:39 crc kubenswrapper[4895]: E1206 06:57:39.239817 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.243671 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.243705 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.243715 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.243735 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.243746 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:39Z","lastTransitionTime":"2025-12-06T06:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.246405 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.256643 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 06 06:57:39 crc kubenswrapper[4895]: E1206 06:57:39.256809 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.264666 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.264703 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.264714 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.264744 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.264755 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:39Z","lastTransitionTime":"2025-12-06T06:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:39 crc kubenswrapper[4895]: E1206 06:57:39.280963 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.288215 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.288257 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.288272 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.288294 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.288309 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:39Z","lastTransitionTime":"2025-12-06T06:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:39 crc kubenswrapper[4895]: E1206 06:57:39.304314 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.306757 4895 generic.go:334] "Generic (PLEG): container finished" podID="9b85a3f9-a505-4331-a3e2-08a6211defcf" containerID="6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3" exitCode=0 Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.306848 4895 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" event={"ID":"9b85a3f9-a505-4331-a3e2-08a6211defcf","Type":"ContainerDied","Data":"6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3"} Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.306897 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" event={"ID":"9b85a3f9-a505-4331-a3e2-08a6211defcf","Type":"ContainerStarted","Data":"9d4a1c26e9a5ed9fc280408d75081bf87cde0aed3b5f315c8a23b2182490976c"} Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.308974 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3"} Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.309048 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57"} Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.309064 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"e49428d9874e590a478aa7d636609c253649a780824326697e5b710286b409d1"} Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.309188 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.309236 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.309250 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.309272 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.309286 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:39Z","lastTransitionTime":"2025-12-06T06:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.310195 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k86k4" event={"ID":"e1f42fc6-54ce-4f49-adbd-545e02a1f322","Type":"ContainerStarted","Data":"9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4"} Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.310223 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k86k4" event={"ID":"e1f42fc6-54ce-4f49-adbd-545e02a1f322","Type":"ContainerStarted","Data":"2c1a77c22820af4285ddc565ef54a2d94e8fe1f1dc874e0822b689ba4430d967"} Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.314437 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hdgqw" event={"ID":"138ef400-4714-4742-ae94-ea6b8afd73d1","Type":"ContainerStarted","Data":"408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a"} Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.315238 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.320498 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vtdvq" event={"ID":"2d2795a3-ed84-4f0b-828b-251d2e503864","Type":"ContainerStarted","Data":"8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256"} Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.321951 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerStarted","Data":"c5a7c33f5aec2194e7988992c181ac9907aad630ffc997958c4ca923a372d11b"} Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.331383 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 06 06:57:39 crc kubenswrapper[4895]: E1206 06:57:39.343526 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: E1206 06:57:39.343662 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.343904 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.346270 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.346323 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.346337 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.346353 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.346363 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:39Z","lastTransitionTime":"2025-12-06T06:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.366429 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.392034 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.404146 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.431318 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\
"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.461824 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.461894 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.461909 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.461931 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.461952 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:39Z","lastTransitionTime":"2025-12-06T06:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.464879 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.512201 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.530458 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.547818 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.565292 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.565343 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.565354 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.565375 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.565389 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:39Z","lastTransitionTime":"2025-12-06T06:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.575076 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc 
kubenswrapper[4895]: I1206 06:57:39.588568 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.606395 4895 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.621805 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.639858 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.655009 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 
06:57:39.667539 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.669675 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.669706 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.669719 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.669739 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.669753 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:39Z","lastTransitionTime":"2025-12-06T06:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.685841 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.710652 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.725312 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.755865 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name
\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.785542 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.785609 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.785622 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.785645 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.785663 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:39Z","lastTransitionTime":"2025-12-06T06:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.792821 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.814751 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.832931 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc 
kubenswrapper[4895]: I1206 06:57:39.851256 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.880994 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.888419 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.888466 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.888494 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.888515 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.888530 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:39Z","lastTransitionTime":"2025-12-06T06:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.908446 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.942461 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.991308 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.991372 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.991383 4895 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.991402 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:39 crc kubenswrapper[4895]: I1206 06:57:39.991415 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:39Z","lastTransitionTime":"2025-12-06T06:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.050024 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:40 crc kubenswrapper[4895]: E1206 06:57:40.050207 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.094790 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.094862 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.094882 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.094905 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.094921 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:40Z","lastTransitionTime":"2025-12-06T06:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.198194 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.198254 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.198263 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.198283 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.198293 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:40Z","lastTransitionTime":"2025-12-06T06:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.296711 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.300836 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.300868 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.300881 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.300907 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.300920 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:40Z","lastTransitionTime":"2025-12-06T06:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.302464 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.308506 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.319294 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.327160 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" event={"ID":"9b85a3f9-a505-4331-a3e2-08a6211defcf","Type":"ContainerStarted","Data":"8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f"} Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.328502 4895 
generic.go:334] "Generic (PLEG): container finished" podID="c9690808-de36-4960-8286-7079c78c491b" containerID="9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56" exitCode=0 Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.328649 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerDied","Data":"9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56"} Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.343492 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: E1206 06:57:40.343775 4895 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.366083 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.385221 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.404716 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.404777 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.404794 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.404817 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.404833 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:40Z","lastTransitionTime":"2025-12-06T06:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.405288 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.429288 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.512006 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.512061 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.512074 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.512099 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.512113 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:40Z","lastTransitionTime":"2025-12-06T06:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.528851 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.584057 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc 
kubenswrapper[4895]: I1206 06:57:40.603918 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.614567 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.614611 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.614623 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.614647 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.614660 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:40Z","lastTransitionTime":"2025-12-06T06:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.637698 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.652169 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.670889 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.670939 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:40 crc kubenswrapper[4895]: E1206 06:57:40.671095 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:57:40 crc kubenswrapper[4895]: E1206 06:57:40.671173 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-06 06:57:44.671154291 +0000 UTC m=+27.072543161 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.671291 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: E1206 06:57:40.671558 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:57:40 crc kubenswrapper[4895]: E1206 06:57:40.671632 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-06 06:57:44.671610724 +0000 UTC m=+27.072999594 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.688438 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.704023 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.718172 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.718450 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.718528 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.718546 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.718567 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.718582 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:40Z","lastTransitionTime":"2025-12-06T06:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.746494 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.764510 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.772373 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.772570 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.772600 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:57:40 crc kubenswrapper[4895]: E1206 06:57:40.772639 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:57:44.772598338 +0000 UTC m=+27.173987218 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:57:40 crc kubenswrapper[4895]: E1206 06:57:40.772753 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:57:40 crc kubenswrapper[4895]: E1206 06:57:40.772775 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:57:40 crc kubenswrapper[4895]: E1206 06:57:40.772788 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:40 crc kubenswrapper[4895]: E1206 06:57:40.772822 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:57:40 crc kubenswrapper[4895]: E1206 06:57:40.772854 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:57:40 crc kubenswrapper[4895]: E1206 06:57:40.772881 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:40 crc kubenswrapper[4895]: E1206 06:57:40.772858 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 06:57:44.772837524 +0000 UTC m=+27.174226394 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:40 crc kubenswrapper[4895]: E1206 06:57:40.772952 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 06:57:44.772938877 +0000 UTC m=+27.174327927 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.785370 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"}
,{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.812193 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.821410 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.821455 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.821508 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.821531 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.821546 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:40Z","lastTransitionTime":"2025-12-06T06:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.830820 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"container
ID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.854421 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.878618 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.905742 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.924149 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.924197 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.924211 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.924230 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.924243 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:40Z","lastTransitionTime":"2025-12-06T06:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.940124 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:40 crc kubenswrapper[4895]: I1206 06:57:40.979796 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:40Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.025805 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.027855 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.027915 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.027926 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.027944 4895 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.027956 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:41Z","lastTransitionTime":"2025-12-06T06:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.049797 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.049844 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:57:41 crc kubenswrapper[4895]: E1206 06:57:41.049950 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:57:41 crc kubenswrapper[4895]: E1206 06:57:41.050158 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.076585 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.131039 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.131090 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.131106 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.131124 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.131136 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:41Z","lastTransitionTime":"2025-12-06T06:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.234153 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.234203 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.234216 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.234235 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.234248 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:41Z","lastTransitionTime":"2025-12-06T06:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.336387 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.336423 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.336433 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.336450 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.336460 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:41Z","lastTransitionTime":"2025-12-06T06:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.341562 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17"} Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.344190 4895 generic.go:334] "Generic (PLEG): container finished" podID="9b85a3f9-a505-4331-a3e2-08a6211defcf" containerID="8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f" exitCode=0 Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.344261 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" event={"ID":"9b85a3f9-a505-4331-a3e2-08a6211defcf","Type":"ContainerDied","Data":"8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f"} Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.350349 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerStarted","Data":"2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc"} Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.350405 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerStarted","Data":"1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368"} Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.350418 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerStarted","Data":"34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f"} Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.350429 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerStarted","Data":"abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975"} Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.350438 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerStarted","Data":"3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b"} Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.350449 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerStarted","Data":"4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050"} Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.362691 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.376547 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.389626 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.402670 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.422712 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.438215 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.441642 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.441695 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.441708 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.441729 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.441743 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:41Z","lastTransitionTime":"2025-12-06T06:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.453639 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.473638 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.486768 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.502109 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.518899 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.541369 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.545700 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.545755 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.545764 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.545785 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.545806 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:41Z","lastTransitionTime":"2025-12-06T06:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.580039 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.627558 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z 
is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.649933 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.649985 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.649995 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.650013 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.650024 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:41Z","lastTransitionTime":"2025-12-06T06:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.658118 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.705218 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.746066 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.753533 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.753923 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.754011 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.754092 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.754161 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:41Z","lastTransitionTime":"2025-12-06T06:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.784064 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.819811 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\"
:\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.856879 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.856926 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.856936 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.856954 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.856968 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:41Z","lastTransitionTime":"2025-12-06T06:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.861170 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.902387 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.944271 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.960416 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.960495 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.960517 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.960563 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.960577 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:41Z","lastTransitionTime":"2025-12-06T06:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:41 crc kubenswrapper[4895]: I1206 06:57:41.984039 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.020640 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:42Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.049728 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:42 crc kubenswrapper[4895]: E1206 06:57:42.049922 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.064065 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.064128 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.064137 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.064158 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.064170 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:42Z","lastTransitionTime":"2025-12-06T06:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.064393 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:42Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.099430 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:42Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.142935 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:42Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.167323 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.167375 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.167434 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.167453 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.167464 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:42Z","lastTransitionTime":"2025-12-06T06:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.180162 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:42Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.269550 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.269593 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.269600 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.269616 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.269626 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:42Z","lastTransitionTime":"2025-12-06T06:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.356896 4895 generic.go:334] "Generic (PLEG): container finished" podID="9b85a3f9-a505-4331-a3e2-08a6211defcf" containerID="daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416" exitCode=0 Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.356987 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" event={"ID":"9b85a3f9-a505-4331-a3e2-08a6211defcf","Type":"ContainerDied","Data":"daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416"} Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.372822 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.372873 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.372885 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.372903 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.372914 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:42Z","lastTransitionTime":"2025-12-06T06:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.382453 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:42Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.420339 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:42Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.435847 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:42Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.458719 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:42Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.475310 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:42Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.477134 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.477172 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.477182 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.477199 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.477216 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:42Z","lastTransitionTime":"2025-12-06T06:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.498501 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:42Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.515918 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:42Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.537547 4895 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:42Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.562050 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:42Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.581302 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.581355 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.581370 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.581391 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.581404 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:42Z","lastTransitionTime":"2025-12-06T06:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.596247 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:42Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.624638 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:42Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.662166 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:42Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.687323 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.687351 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.687361 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.687376 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.687388 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:42Z","lastTransitionTime":"2025-12-06T06:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.705284 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:42Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.765855 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:42Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.790093 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.790136 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.790151 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.790174 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.790186 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:42Z","lastTransitionTime":"2025-12-06T06:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.892918 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.892952 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.892964 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.892978 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.892987 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:42Z","lastTransitionTime":"2025-12-06T06:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.995610 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.995656 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.995665 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.995685 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:42 crc kubenswrapper[4895]: I1206 06:57:42.995694 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:42Z","lastTransitionTime":"2025-12-06T06:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.050591 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.050591 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:57:43 crc kubenswrapper[4895]: E1206 06:57:43.050777 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:57:43 crc kubenswrapper[4895]: E1206 06:57:43.050812 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.098583 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.098649 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.098664 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.098685 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.098698 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:43Z","lastTransitionTime":"2025-12-06T06:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.201622 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.201679 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.201692 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.201710 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.201722 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:43Z","lastTransitionTime":"2025-12-06T06:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.304070 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.304118 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.304130 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.304150 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.304162 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:43Z","lastTransitionTime":"2025-12-06T06:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.365596 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerStarted","Data":"c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83"} Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.368540 4895 generic.go:334] "Generic (PLEG): container finished" podID="9b85a3f9-a505-4331-a3e2-08a6211defcf" containerID="b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14" exitCode=0 Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.368585 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" event={"ID":"9b85a3f9-a505-4331-a3e2-08a6211defcf","Type":"ContainerDied","Data":"b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14"} Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.383777 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:43Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.403698 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:43Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.407741 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.407789 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.407800 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.407819 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.407841 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:43Z","lastTransitionTime":"2025-12-06T06:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.424111 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:43Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.444396 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:43Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.456502 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:43Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.471241 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:43Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.485241 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:43Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.499524 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:43Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.513052 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.513095 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.513107 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.513122 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.513133 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:43Z","lastTransitionTime":"2025-12-06T06:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.514773 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:43Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.533320 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:43Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.553058 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:43Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.565645 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:43Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.575781 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:43Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.596167 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:43Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.616425 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.616496 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.616512 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.616533 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.616548 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:43Z","lastTransitionTime":"2025-12-06T06:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.719232 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.719286 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.719298 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.719317 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.719330 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:43Z","lastTransitionTime":"2025-12-06T06:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.822569 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.822616 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.822627 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.822642 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.822651 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:43Z","lastTransitionTime":"2025-12-06T06:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.925829 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.925909 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.925923 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.925939 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:43 crc kubenswrapper[4895]: I1206 06:57:43.925950 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:43Z","lastTransitionTime":"2025-12-06T06:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.028724 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.028762 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.028773 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.028792 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.028807 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:44Z","lastTransitionTime":"2025-12-06T06:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.052357 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:44 crc kubenswrapper[4895]: E1206 06:57:44.052507 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.132025 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.132100 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.132111 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.132127 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.132139 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:44Z","lastTransitionTime":"2025-12-06T06:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.235996 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.236028 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.236038 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.236054 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.236066 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:44Z","lastTransitionTime":"2025-12-06T06:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.341823 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.341861 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.341870 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.341887 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.341896 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:44Z","lastTransitionTime":"2025-12-06T06:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.379220 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" event={"ID":"9b85a3f9-a505-4331-a3e2-08a6211defcf","Type":"ContainerStarted","Data":"6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6"} Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.399754 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:44Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.415113 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:44Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.430711 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:44Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.444002 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:44Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.444896 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.444923 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.444934 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.444949 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.444959 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:44Z","lastTransitionTime":"2025-12-06T06:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.455774 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:44Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.469459 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:44Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.489939 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:44Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.503593 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:44Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.517315 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:44Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.530631 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:44Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.545940 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:44Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.547499 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.547533 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:44 crc 
kubenswrapper[4895]: I1206 06:57:44.547544 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.547567 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.547580 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:44Z","lastTransitionTime":"2025-12-06T06:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.558859 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:44Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.570760 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:44Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.586352 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:44Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.650834 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.650889 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.650902 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.650924 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.650939 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:44Z","lastTransitionTime":"2025-12-06T06:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.718350 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.718437 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:44 crc kubenswrapper[4895]: E1206 06:57:44.718783 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:57:44 crc kubenswrapper[4895]: E1206 06:57:44.718890 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-06 06:57:52.718865273 +0000 UTC m=+35.120254183 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:57:44 crc kubenswrapper[4895]: E1206 06:57:44.719530 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:57:44 crc kubenswrapper[4895]: E1206 06:57:44.719588 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:57:52.719575873 +0000 UTC m=+35.120964743 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.754923 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.755325 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.755347 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.755368 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.755382 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:44Z","lastTransitionTime":"2025-12-06T06:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.819262 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.819413 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:57:44 crc kubenswrapper[4895]: E1206 06:57:44.819594 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:57:52.819550828 +0000 UTC m=+35.220939698 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:57:44 crc kubenswrapper[4895]: E1206 06:57:44.819644 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:57:44 crc kubenswrapper[4895]: E1206 06:57:44.819669 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:57:44 crc kubenswrapper[4895]: E1206 06:57:44.819685 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.819744 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:57:44 crc kubenswrapper[4895]: E1206 06:57:44.819765 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 06:57:52.819743604 +0000 UTC m=+35.221132674 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:44 crc kubenswrapper[4895]: E1206 06:57:44.820033 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:57:44 crc kubenswrapper[4895]: E1206 06:57:44.820060 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:57:44 crc kubenswrapper[4895]: E1206 06:57:44.820072 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:44 crc kubenswrapper[4895]: E1206 06:57:44.820159 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 06:57:52.820131834 +0000 UTC m=+35.221520884 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.858515 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.858610 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.858626 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.858647 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.858660 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:44Z","lastTransitionTime":"2025-12-06T06:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.961782 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.961842 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.961854 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.961874 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:44 crc kubenswrapper[4895]: I1206 06:57:44.961887 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:44Z","lastTransitionTime":"2025-12-06T06:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.051049 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.051160 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:57:45 crc kubenswrapper[4895]: E1206 06:57:45.051210 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:57:45 crc kubenswrapper[4895]: E1206 06:57:45.051407 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.066508 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.066563 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.066577 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.066601 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.066615 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:45Z","lastTransitionTime":"2025-12-06T06:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.169529 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.169582 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.169591 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.169612 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.169629 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:45Z","lastTransitionTime":"2025-12-06T06:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.272692 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.272743 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.272754 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.272772 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.272784 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:45Z","lastTransitionTime":"2025-12-06T06:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.376122 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.376184 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.376196 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.376217 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.376229 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:45Z","lastTransitionTime":"2025-12-06T06:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.478878 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.478927 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.478938 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.478955 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.478965 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:45Z","lastTransitionTime":"2025-12-06T06:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.581733 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.582184 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.582198 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.582243 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.582257 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:45Z","lastTransitionTime":"2025-12-06T06:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.684839 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.684912 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.684927 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.684949 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.684962 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:45Z","lastTransitionTime":"2025-12-06T06:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.788065 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.788118 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.788133 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.788156 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.788169 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:45Z","lastTransitionTime":"2025-12-06T06:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.891419 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.891554 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.891573 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.891594 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.891614 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:45Z","lastTransitionTime":"2025-12-06T06:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.994677 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.994725 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.994738 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.994757 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:45 crc kubenswrapper[4895]: I1206 06:57:45.994770 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:45Z","lastTransitionTime":"2025-12-06T06:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.050342 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:46 crc kubenswrapper[4895]: E1206 06:57:46.050585 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.097358 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.097387 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.097395 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.097412 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.097422 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:46Z","lastTransitionTime":"2025-12-06T06:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.200549 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.200607 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.200621 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.200639 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.200651 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:46Z","lastTransitionTime":"2025-12-06T06:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.304466 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.304534 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.304549 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.304573 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.304587 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:46Z","lastTransitionTime":"2025-12-06T06:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.413903 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.413945 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.413954 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.413969 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.413980 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:46Z","lastTransitionTime":"2025-12-06T06:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.427531 4895 generic.go:334] "Generic (PLEG): container finished" podID="9b85a3f9-a505-4331-a3e2-08a6211defcf" containerID="6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6" exitCode=0 Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.427603 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" event={"ID":"9b85a3f9-a505-4331-a3e2-08a6211defcf","Type":"ContainerDied","Data":"6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6"} Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.433353 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerStarted","Data":"89b39e94136cc9cd89675768bdab3f5d440dfd356f31e84f8a16982eb01f2191"} Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.433925 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.434123 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.465243 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z 
is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.477196 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.479427 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.488460 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.503327 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.516808 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.518182 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.518232 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.518243 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.518261 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.518271 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:46Z","lastTransitionTime":"2025-12-06T06:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.532705 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.559348 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.573456 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.587660 4895 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.603423 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.618144 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z"
Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.620878 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.620918 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.620931 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.620952 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.620966 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:46Z","lastTransitionTime":"2025-12-06T06:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
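Every status-patch rejection in this capture fails the same way: the kubelet's PATCH is refused because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-06, roughly 103 days later. This is the typical signature of resuming a CRC VM long after its internal certificates were minted. A minimal sketch for confirming the skew from a saved copy of this journal; the kubelet.log filename and the regex are illustrative assumptions, not part of the log:

```python
import re
from datetime import datetime

# Matches the x509 error string repeated on every failed webhook call
# above; the exact pattern is an assumption based on this capture.
X509 = re.compile(
    r"current time (\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)"
    r" is after (\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)"
)

def cert_skew(line):
    """Return (clock, not_after, skew) from one error line, or None."""
    m = X509.search(line)
    if not m:
        return None
    clock, not_after = (
        datetime.fromisoformat(s.replace("Z", "+00:00")) for s in m.groups()
    )
    return clock, not_after, clock - not_after

# Hypothetical usage against a saved excerpt of this journal:
with open("kubelet.log", encoding="utf-8") as fh:
    for line in fh:
        hit = cert_skew(line)
        if hit:
            clock, exp, skew = hit
            print(f"webhook cert expired {exp:%Y-%m-%d}, "
                  f"node clock {clock:%Y-%m-%d}, stale {skew.days} days")
            break
```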
Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.632672 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z"
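The $setElementOrder/conditions key in each rejected payload above is a strategic-merge-patch directive: the kubelet patches the conditions list by its merge key (type), sending only the changed fields plus the desired element order. A simplified sketch of the merge-by-key idea, assuming plain dicts and deliberately ignoring the ordering and deletion directives that the real apimachinery code handles:

```python
# Simplified illustration of strategic-merge-patch semantics for a list
# merged on "type", as in the "conditions" fragments above. This is not
# the real k8s.io/apimachinery implementation; it ignores the
# $setElementOrder and $patch directives and only shows merge-by-key.
def merge_conditions(current, patch, key="type"):
    merged = {c[key]: dict(c) for c in current}
    for change in patch:
        merged.setdefault(change[key], {}).update(change)
    return list(merged.values())

current = [
    {"type": "Ready", "status": "True",
     "lastTransitionTime": "2025-12-06T06:57:39Z"},
    {"type": "ContainersReady", "status": "True",
     "lastTransitionTime": "2025-12-06T06:57:39Z"},
]
# Shaped like the rejected patches above: only changed fields are sent.
patch = [{"type": "Ready", "status": "False", "reason": "ContainersNotReady"}]

for cond in merge_conditions(current, patch):
    print(cond)
```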
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.655604 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.666255 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.680290 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.699147 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.717384 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.723218 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.723380 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.723444 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.723533 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.723599 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:46Z","lastTransitionTime":"2025-12-06T06:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.731499 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.745581 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.761027 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.775941 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.790694 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.803960 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.815082 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.826501 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.826560 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.826573 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.826592 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.826608 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:46Z","lastTransitionTime":"2025-12-06T06:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.827044 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.840926 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.852087 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.869711 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89b39e94136cc9cd89675768bdab3f5d440dfd356f31e84f8a16982eb01f2191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:46Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.928656 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.928704 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.928718 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.928737 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:46 crc kubenswrapper[4895]: I1206 06:57:46.928749 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:46Z","lastTransitionTime":"2025-12-06T06:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.032012 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.032049 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.032057 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.032073 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.032083 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:47Z","lastTransitionTime":"2025-12-06T06:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.049929 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:57:47 crc kubenswrapper[4895]: E1206 06:57:47.050053 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.050329 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:57:47 crc kubenswrapper[4895]: E1206 06:57:47.050388 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.134736 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.134775 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.134784 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.134823 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.134834 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:47Z","lastTransitionTime":"2025-12-06T06:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.238318 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.238367 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.238391 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.238408 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.238420 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:47Z","lastTransitionTime":"2025-12-06T06:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.341932 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.341990 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.342015 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.342034 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.342046 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:47Z","lastTransitionTime":"2025-12-06T06:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.440436 4895 generic.go:334] "Generic (PLEG): container finished" podID="9b85a3f9-a505-4331-a3e2-08a6211defcf" containerID="72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59" exitCode=0 Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.440527 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" event={"ID":"9b85a3f9-a505-4331-a3e2-08a6211defcf","Type":"ContainerDied","Data":"72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59"} Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.441027 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.450376 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.450774 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.450908 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.451004 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.451106 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:47Z","lastTransitionTime":"2025-12-06T06:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.469203 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89b39e94136cc9cd89675768bdab3f5d440dfd356f31e84f8a16982eb01f2191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:47Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.488410 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:47Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.503763 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:47Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.522037 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:47Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.542622 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554
edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:47Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.553972 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.554019 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.554032 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.554053 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.554068 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:47Z","lastTransitionTime":"2025-12-06T06:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.556343 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:47Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.575388 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:47Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.593575 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:47Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.609629 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:47Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.629612 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:47Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.650101 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:47Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.656744 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.656779 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.656788 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.656803 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.656815 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:47Z","lastTransitionTime":"2025-12-06T06:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.663450 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-06T06:57:47Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.677430 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:47Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.703255 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:47Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.759306 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.759344 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.759353 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.759370 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.759380 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:47Z","lastTransitionTime":"2025-12-06T06:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.861855 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.861891 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.861901 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.861918 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.861931 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:47Z","lastTransitionTime":"2025-12-06T06:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.964214 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.964269 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.964282 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.964301 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:47 crc kubenswrapper[4895]: I1206 06:57:47.964314 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:47Z","lastTransitionTime":"2025-12-06T06:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.049990 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:48 crc kubenswrapper[4895]: E1206 06:57:48.050144 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.065752 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.066790 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.066825 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.066838 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.066858 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.066870 4895 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:48Z","lastTransitionTime":"2025-12-06T06:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.088911 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\
\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89b39e94136cc9cd89675768bdab3f5d440dfd356f31e84f8a16982eb01f2191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/ru
n/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc 
kubenswrapper[4895]: I1206 06:57:48.105787 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\
\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.122004 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 
06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.136868 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.155267 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.165635 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.172713 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.172759 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.172801 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.172822 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.172835 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:48Z","lastTransitionTime":"2025-12-06T06:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.176532 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.191189 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.208127 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.222414 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.235284 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.252072 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.268776 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.275699 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.275743 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.275755 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.275773 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.275785 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:48Z","lastTransitionTime":"2025-12-06T06:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.285306 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.378869 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.378913 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.378926 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.378946 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.378958 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:48Z","lastTransitionTime":"2025-12-06T06:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
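Note: interleaved with the patch failures, setters.go:603 records the node-level consequence: crc transitions to Ready=False with reason KubeletNotReady because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/. On an OVN-Kubernetes cluster that config is dropped by the ovnkube-node pod, and the ovnkube-node-mhcxk entry below shows its ovnkube-controller container still unready (its CNI config mount, host-cni-netd at /etc/cni/net.d, is visible in that entry), which is consistent. Two quick checks, assuming cluster and node shell access:

  # The same Ready-condition message via the API rather than the journal.
  $ oc get node crc -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'

  # On the node: the directory the kubelet is watching; it stays empty
  # until the CNI plugin writes its configuration.
  $ ls /etc/kubernetes/cni/net.d/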
Has your network provider started?"} Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.448579 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" event={"ID":"9b85a3f9-a505-4331-a3e2-08a6211defcf","Type":"ContainerStarted","Data":"460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a"} Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.465384 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.482567 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.482623 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.482635 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:48 
crc kubenswrapper[4895]: I1206 06:57:48.482657 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.482676 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:48Z","lastTransitionTime":"2025-12-06T06:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.487262 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89b39e94136cc9cd89675768bdab3f5d440dfd35
6f31e84f8a16982eb01f2191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.502327 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.517610 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.532327 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.549286 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.568967 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.582252 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3
a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.584937 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.585001 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.585017 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.585035 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.585046 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:48Z","lastTransitionTime":"2025-12-06T06:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.596315 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.615942 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.628799 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.645008 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.658940 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.673267 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.689378 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.689418 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.689430 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.689453 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.689465 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:48Z","lastTransitionTime":"2025-12-06T06:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.800672 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.800712 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.800722 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.800740 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.800749 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:48Z","lastTransitionTime":"2025-12-06T06:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.846447 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.864196 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.880195 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.897277 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.903549 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.903603 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.903617 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.903640 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.903654 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:48Z","lastTransitionTime":"2025-12-06T06:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.912885 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.931028 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.946261 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.959153 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3
a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.972278 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.987111 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6
e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:48 crc kubenswrapper[4895]: I1206 06:57:48.999799 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.006456 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.006639 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.006658 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.006681 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.006696 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:49Z","lastTransitionTime":"2025-12-06T06:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.013051 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.026434 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.037680 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.049789 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.049826 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:57:49 crc kubenswrapper[4895]: E1206 06:57:49.049942 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:57:49 crc kubenswrapper[4895]: E1206 06:57:49.050009 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.057698 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89b39e94136cc9cd89675768bdab3f5d440dfd35
6f31e84f8a16982eb01f2191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.109492 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.109547 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.109559 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.109586 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.109601 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:49Z","lastTransitionTime":"2025-12-06T06:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.211596 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.211658 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.211669 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.211690 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.211703 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:49Z","lastTransitionTime":"2025-12-06T06:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.314678 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.314753 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.314777 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.314805 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.314831 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:49Z","lastTransitionTime":"2025-12-06T06:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.417101 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.417172 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.417186 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.417207 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.417224 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:49Z","lastTransitionTime":"2025-12-06T06:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.520234 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.520321 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.520335 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.520356 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.520371 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:49Z","lastTransitionTime":"2025-12-06T06:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.609524 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.609618 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.609631 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.609654 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.609668 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:49Z","lastTransitionTime":"2025-12-06T06:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:49 crc kubenswrapper[4895]: E1206 06:57:49.631278 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.636299 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.636344 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.636355 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.636379 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.636397 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:49Z","lastTransitionTime":"2025-12-06T06:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:49 crc kubenswrapper[4895]: E1206 06:57:49.650434 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.655325 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.655371 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
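The "Node became not ready" condition repeated above is kubelet's standing complaint that the container runtime network is not ready: no network plugin has written a CNI configuration into /etc/kubernetes/cni/net.d/ yet, so NetworkReady stays false and the node stays NotReady. As a rough illustration only (an assumed diagnostic, not a kubelet or OpenShift tool; the directory path comes straight from the log message), a check for the configs the runtime's CNI driver would discover could look like this in Python:

import sys
from pathlib import Path

# Assumed diagnostic sketch: list CNI network configs in the directory
# named by the log. libcni, used by the container runtime, looks for
# .conf, .conflist and .json files here.
NET_D = Path("/etc/kubernetes/cni/net.d")

def cni_configs(net_d=NET_D):
    if not net_d.is_dir():
        return []
    return sorted(p for ext in ("*.conf", "*.conflist", "*.json") for p in net_d.glob(ext))

if __name__ == "__main__":
    found = cni_configs()
    if not found:
        # The state this log is stuck in until the network provider starts.
        sys.exit(f"no CNI configuration file in {NET_D}/")
    print("\n".join(str(p) for p in found))

An empty result matches the message kubelet keeps reporting; once the network provider starts and drops a config here, NetworkReady flips to true.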
event="NodeHasNoDiskPressure" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.655380 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.655397 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.655408 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:49Z","lastTransitionTime":"2025-12-06T06:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:49 crc kubenswrapper[4895]: E1206 06:57:49.668667 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.673809 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.673847 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.673857 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.673873 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.673884 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:49Z","lastTransitionTime":"2025-12-06T06:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:49 crc kubenswrapper[4895]: E1206 06:57:49.685650 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.689787 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.689836 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
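Every one of the status-patch failures above has the same root cause, visible in the error tail: before admitting the patch, the API server must call the validating webhook node.network-node-identity.openshift.io at https://127.0.0.1:9743/node, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node's clock reads 2025-12-06T06:57:49Z, so every TLS handshake fails with "x509: certificate has expired or is not yet valid". A minimal sketch of how one might confirm the certificate's validity window from the node (an assumption, not an OpenShift tool; it needs the third-party cryptography package, version 42 or later for the *_utc attributes):

import datetime
import socket
import ssl

from cryptography import x509  # third-party: pip install cryptography

# Fetch the webhook's serving certificate. Verification is disabled on
# purpose: the certificate is expired, and the goal is to read it, not
# to trust it. Host and port are taken from the Post URL in the log.
HOST, PORT = "127.0.0.1", 9743

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
now = datetime.datetime.now(datetime.timezone.utc)
print("notBefore:", cert.not_valid_before_utc)
print("notAfter: ", cert.not_valid_after_utc)
if now > cert.not_valid_after_utc:
    # The state this log shows: x509: certificate has expired.
    print("certificate has expired")

On CRC, certificates that expired while the VM was stopped are normally rotated during startup; the log here only records the symptom.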
event="NodeHasNoDiskPressure" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.689847 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.689864 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.689892 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:49Z","lastTransitionTime":"2025-12-06T06:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:49 crc kubenswrapper[4895]: E1206 06:57:49.703019 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:49 crc kubenswrapper[4895]: E1206 06:57:49.703138 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.705164 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.705225 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.705243 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.705264 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.705277 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:49Z","lastTransitionTime":"2025-12-06T06:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.808151 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.808212 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.808223 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.808244 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.808258 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:49Z","lastTransitionTime":"2025-12-06T06:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.911980 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.912076 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.912093 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.912117 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:49 crc kubenswrapper[4895]: I1206 06:57:49.912128 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:49Z","lastTransitionTime":"2025-12-06T06:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.015087 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.015137 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.015147 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.015168 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.015181 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:50Z","lastTransitionTime":"2025-12-06T06:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.049586 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:50 crc kubenswrapper[4895]: E1206 06:57:50.049741 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.118395 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.118442 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.118453 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.118490 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.118504 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:50Z","lastTransitionTime":"2025-12-06T06:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.221265 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.221314 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.221326 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.221344 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.221357 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:50Z","lastTransitionTime":"2025-12-06T06:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.323618 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.323690 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.323708 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.323734 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.323752 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:50Z","lastTransitionTime":"2025-12-06T06:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.426929 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.426984 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.427005 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.427032 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.427053 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:50Z","lastTransitionTime":"2025-12-06T06:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.457591 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovnkube-controller/0.log" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.460288 4895 generic.go:334] "Generic (PLEG): container finished" podID="c9690808-de36-4960-8286-7079c78c491b" containerID="89b39e94136cc9cd89675768bdab3f5d440dfd356f31e84f8a16982eb01f2191" exitCode=1 Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.460352 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerDied","Data":"89b39e94136cc9cd89675768bdab3f5d440dfd356f31e84f8a16982eb01f2191"} Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.461177 4895 scope.go:117] "RemoveContainer" containerID="89b39e94136cc9cd89675768bdab3f5d440dfd356f31e84f8a16982eb01f2191" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.479193 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.500261 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.515618 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.528238 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.529752 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.529797 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.529812 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.529836 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.529853 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:50Z","lastTransitionTime":"2025-12-06T06:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.546575 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.562300 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.574536 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12
-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.587809 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e
6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.601464 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.615299 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.629883 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.632891 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.632924 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.632936 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.632951 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.632962 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:50Z","lastTransitionTime":"2025-12-06T06:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.644132 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.657052 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.676937 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89b39e94136cc9cd89675768bdab3f5d440dfd356f31e84f8a16982eb01f2191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b39e94136cc9cd89675768bdab3f5d440dfd356f31e84f8a16982eb01f2191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kg/client/informers/externalversions/factory.go:141\\\\nI1206 06:57:48.876411 6130 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1206 06:57:48.876866 6130 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:57:48.877655 6130 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:57:48.878155 6130 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1206 06:57:48.878282 6130 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1206 06:57:48.878296 6130 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1206 06:57:48.878328 6130 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1206 06:57:48.878332 6130 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1206 06:57:48.878378 6130 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1206 06:57:48.878789 6130 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1206 06:57:48.878848 6130 factory.go:656] Stopping watch factory\\\\nI1206 06:57:48.878865 6130 ovnkube.go:599] Stopped ovnkube\\\\nI1206 
06:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.735087 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.735133 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.735142 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.735158 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.735168 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:50Z","lastTransitionTime":"2025-12-06T06:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.837393 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.837433 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.837442 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.837456 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.837466 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:50Z","lastTransitionTime":"2025-12-06T06:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.939942 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.939987 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.939998 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.940034 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:50 crc kubenswrapper[4895]: I1206 06:57:50.940047 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:50Z","lastTransitionTime":"2025-12-06T06:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.043362 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.043399 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.043407 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.043423 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.043432 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:51Z","lastTransitionTime":"2025-12-06T06:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.049825 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.050082 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:57:51 crc kubenswrapper[4895]: E1206 06:57:51.050250 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:57:51 crc kubenswrapper[4895]: E1206 06:57:51.050415 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.147548 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.147620 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.147638 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.147671 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.147695 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:51Z","lastTransitionTime":"2025-12-06T06:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.250382 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.250523 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.250547 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.250573 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.250597 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:51Z","lastTransitionTime":"2025-12-06T06:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.353644 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.353695 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.353710 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.353727 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.353742 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:51Z","lastTransitionTime":"2025-12-06T06:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.456580 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.456808 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.456817 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.456834 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.456852 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:51Z","lastTransitionTime":"2025-12-06T06:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.466007 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovnkube-controller/0.log" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.470029 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerStarted","Data":"84b768633e43f93f89f737a70ac7a6ed148437978da78841342b7c3b4acdd70d"} Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.470669 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.485435 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.503692 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b768633e43f93f89f737a70ac7a6ed148437978da78841342b7c3b4acdd70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b39e94136cc9cd89675768bdab3f5d440dfd356f31e84f8a16982eb01f2191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kg/client/informers/externalversions/factory.go:141\\\\nI1206 06:57:48.876411 6130 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1206 06:57:48.876866 6130 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:57:48.877655 6130 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:57:48.878155 6130 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1206 06:57:48.878282 6130 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1206 06:57:48.878296 6130 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1206 06:57:48.878328 6130 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1206 06:57:48.878332 6130 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1206 06:57:48.878378 6130 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1206 06:57:48.878789 6130 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1206 06:57:48.878848 6130 factory.go:656] Stopping watch factory\\\\nI1206 06:57:48.878865 6130 ovnkube.go:599] Stopped ovnkube\\\\nI1206 
06:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.515874 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.527952 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.544507 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.558166 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.560397 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.560432 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.560443 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.560464 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.560494 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:51Z","lastTransitionTime":"2025-12-06T06:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.577996 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.595270 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 
06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.617417 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.644590 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.663347 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.663411 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.663422 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.663445 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.663466 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:51Z","lastTransitionTime":"2025-12-06T06:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.676643 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.701392 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v"] Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.702004 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.704978 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.705051 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.705239 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.727988 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.747841 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.759643 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.766382 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.766432 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.766446 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.766465 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.766491 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:51Z","lastTransitionTime":"2025-12-06T06:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.775512 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.792521 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.809901 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5qq8v\" (UID: \"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.809958 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5qq8v\" (UID: \"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.809987 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmw74\" (UniqueName: \"kubernetes.io/projected/849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf-kube-api-access-gmw74\") pod \"ovnkube-control-plane-749d76644c-5qq8v\" (UID: \"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.810073 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5qq8v\" (UID: \"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.810546 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.825503 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.838168 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.851262 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qq8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.869507 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.869568 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.869579 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.869600 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.869613 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:51Z","lastTransitionTime":"2025-12-06T06:57:51Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.871789 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b768633e43f93f89f737a70ac7a6ed148437978da78841342b7c3b4acdd70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b39e94136cc9cd89675768bdab3f5d440dfd356f31e84f8a16982eb01f2191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kg/client/informers/externalversions/factory.go:141\\\\nI1206 06:57:48.876411 6130 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1206 06:57:48.876866 6130 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:57:48.877655 6130 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:57:48.878155 6130 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1206 06:57:48.878282 6130 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1206 06:57:48.878296 6130 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1206 06:57:48.878328 6130 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1206 06:57:48.878332 6130 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1206 06:57:48.878378 6130 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1206 06:57:48.878789 6130 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1206 06:57:48.878848 6130 factory.go:656] Stopping watch factory\\\\nI1206 06:57:48.878865 6130 ovnkube.go:599] Stopped ovnkube\\\\nI1206 
06:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.884395 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.900613 4895 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.910947 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5qq8v\" (UID: \"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.911016 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5qq8v\" (UID: \"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.911043 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmw74\" (UniqueName: \"kubernetes.io/projected/849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf-kube-api-access-gmw74\") pod \"ovnkube-control-plane-749d76644c-5qq8v\" (UID: \"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.911074 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5qq8v\" (UID: \"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.911818 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5qq8v\" (UID: \"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.912196 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5qq8v\" (UID: \"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.917921 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.927693 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5qq8v\" (UID: \"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.932307 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmw74\" (UniqueName: \"kubernetes.io/projected/849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf-kube-api-access-gmw74\") pod \"ovnkube-control-plane-749d76644c-5qq8v\" (UID: \"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.941814 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.956984 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.972563 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.972611 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.972620 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.972639 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.972653 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:51Z","lastTransitionTime":"2025-12-06T06:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.973602 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:51 crc kubenswrapper[4895]: I1206 06:57:51.988642 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.015119 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" Dec 06 06:57:52 crc kubenswrapper[4895]: W1206 06:57:52.030720 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod849b6dd3_d8b0_4dc5_bf61_37c6ce394cdf.slice/crio-70f73eda699e9a738e34b9de8e30b86958b0bcabc812af75f894249e106aa126 WatchSource:0}: Error finding container 70f73eda699e9a738e34b9de8e30b86958b0bcabc812af75f894249e106aa126: Status 404 returned error can't find the container with id 70f73eda699e9a738e34b9de8e30b86958b0bcabc812af75f894249e106aa126 Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.052426 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:52 crc kubenswrapper[4895]: E1206 06:57:52.052610 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.075618 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.075660 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.075670 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.075687 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.075698 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:52Z","lastTransitionTime":"2025-12-06T06:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.178624 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.179326 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.179338 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.179355 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.179369 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:52Z","lastTransitionTime":"2025-12-06T06:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.282563 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.282621 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.282631 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.282649 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.282679 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:52Z","lastTransitionTime":"2025-12-06T06:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.385722 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.385780 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.385791 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.385807 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.385817 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:52Z","lastTransitionTime":"2025-12-06T06:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.479363 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" event={"ID":"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf","Type":"ContainerStarted","Data":"70f73eda699e9a738e34b9de8e30b86958b0bcabc812af75f894249e106aa126"} Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.488752 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.488788 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.488799 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.488815 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.488826 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:52Z","lastTransitionTime":"2025-12-06T06:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.591317 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.591374 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.591387 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.591405 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.591416 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:52Z","lastTransitionTime":"2025-12-06T06:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.694457 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.694521 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.694530 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.694547 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.694558 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:52Z","lastTransitionTime":"2025-12-06T06:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.719745 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.719788 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:52 crc kubenswrapper[4895]: E1206 06:57:52.719905 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:57:52 crc kubenswrapper[4895]: E1206 06:57:52.719966 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:58:08.719947374 +0000 UTC m=+51.121336244 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:57:52 crc kubenswrapper[4895]: E1206 06:57:52.720449 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:57:52 crc kubenswrapper[4895]: E1206 06:57:52.720535 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:58:08.720521189 +0000 UTC m=+51.121910059 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.797451 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.797535 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.797550 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.797568 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.797580 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:52Z","lastTransitionTime":"2025-12-06T06:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.823303 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.823422 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.823450 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:57:52 crc kubenswrapper[4895]: E1206 06:57:52.823600 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:57:52 crc kubenswrapper[4895]: E1206 06:57:52.823626 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:57:52 crc kubenswrapper[4895]: E1206 06:57:52.823639 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:52 crc kubenswrapper[4895]: E1206 06:57:52.823696 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:58:08.823645141 +0000 UTC m=+51.225034051 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:57:52 crc kubenswrapper[4895]: E1206 06:57:52.823766 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 06:58:08.823744274 +0000 UTC m=+51.225133184 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:52 crc kubenswrapper[4895]: E1206 06:57:52.823981 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:57:52 crc kubenswrapper[4895]: E1206 06:57:52.824088 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:57:52 crc kubenswrapper[4895]: E1206 06:57:52.824210 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:52 crc kubenswrapper[4895]: E1206 06:57:52.824308 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 06:58:08.824283848 +0000 UTC m=+51.225672768 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.826025 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-dzrsj"] Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.826528 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:57:52 crc kubenswrapper[4895]: E1206 06:57:52.826595 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.848136 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:52Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.865559 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qq8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:52Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.891816 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b768633e43f93f89f737a70ac7a6ed148437978da78841342b7c3b4acdd70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b39e94136cc9cd89675768bdab3f5d440dfd356f31e84f8a16982eb01f2191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kg/client/informers/externalversions/factory.go:141\\\\nI1206 06:57:48.876411 6130 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1206 06:57:48.876866 6130 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:57:48.877655 6130 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:57:48.878155 6130 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1206 06:57:48.878282 6130 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1206 06:57:48.878296 6130 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1206 06:57:48.878328 6130 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1206 06:57:48.878332 6130 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1206 06:57:48.878378 6130 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1206 06:57:48.878789 6130 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1206 06:57:48.878848 6130 factory.go:656] Stopping watch factory\\\\nI1206 06:57:48.878865 6130 ovnkube.go:599] Stopped ovnkube\\\\nI1206 
06:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:52Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.900211 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.900256 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.900267 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.900282 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.900293 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:52Z","lastTransitionTime":"2025-12-06T06:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.910698 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:52Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.924124 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs\") pod \"network-metrics-daemon-dzrsj\" (UID: \"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\") " pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.924179 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6hf8\" (UniqueName: \"kubernetes.io/projected/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-kube-api-access-k6hf8\") pod \"network-metrics-daemon-dzrsj\" (UID: \"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\") " pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.926697 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\
":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:52Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.940936 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:52Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.955718 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:52Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.970197 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:52Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:52 crc kubenswrapper[4895]: I1206 06:57:52.984310 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:52Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.001088 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:52Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.003875 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.003907 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.003917 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.003936 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.003950 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:53Z","lastTransitionTime":"2025-12-06T06:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.017053 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.025461 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs\") pod \"network-metrics-daemon-dzrsj\" (UID: \"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\") " pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.025523 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6hf8\" (UniqueName: \"kubernetes.io/projected/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-kube-api-access-k6hf8\") pod \"network-metrics-daemon-dzrsj\" (UID: \"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\") " pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:57:53 crc kubenswrapper[4895]: E1206 06:57:53.025652 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:57:53 crc kubenswrapper[4895]: E1206 06:57:53.025731 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs 
podName:2c72bd78-81d3-48dc-96c3-50bc6bac88d6 nodeName:}" failed. No retries permitted until 2025-12-06 06:57:53.52571224 +0000 UTC m=+35.927101110 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs") pod "network-metrics-daemon-dzrsj" (UID: "2c72bd78-81d3-48dc-96c3-50bc6bac88d6") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.033554 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.043867 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6hf8\" (UniqueName: \"kubernetes.io/projected/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-kube-api-access-k6hf8\") pod \"network-metrics-daemon-dzrsj\" (UID: \"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\") " 
pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.049545 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:57:53 crc kubenswrapper[4895]: E1206 06:57:53.049683 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.049931 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:57:53 crc kubenswrapper[4895]: E1206 06:57:53.050145 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.053655 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dzrsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.069847 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.082256 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.097320 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.106693 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.106735 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.106744 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.106761 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.106772 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:53Z","lastTransitionTime":"2025-12-06T06:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.210673 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.210723 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.210734 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.210750 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.210760 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:53Z","lastTransitionTime":"2025-12-06T06:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.313558 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.313623 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.313631 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.313646 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.313656 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:53Z","lastTransitionTime":"2025-12-06T06:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.417828 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.417882 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.417893 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.417911 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.417922 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:53Z","lastTransitionTime":"2025-12-06T06:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.485240 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" event={"ID":"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf","Type":"ContainerStarted","Data":"d5b6a4ce8e19f3f4c7229a02fa2870fd55c2549af3834e848079146ba809520a"} Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.485306 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" event={"ID":"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf","Type":"ContainerStarted","Data":"677b3ec4e606af7de5469200325f6ead8d16bb69c4a6b80f72a992fbae90999f"} Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.488958 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovnkube-controller/1.log" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.489610 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovnkube-controller/0.log" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.493268 4895 generic.go:334] "Generic (PLEG): container finished" podID="c9690808-de36-4960-8286-7079c78c491b" containerID="84b768633e43f93f89f737a70ac7a6ed148437978da78841342b7c3b4acdd70d" exitCode=1 Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.493317 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerDied","Data":"84b768633e43f93f89f737a70ac7a6ed148437978da78841342b7c3b4acdd70d"} Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.493396 4895 scope.go:117] "RemoveContainer" containerID="89b39e94136cc9cd89675768bdab3f5d440dfd356f31e84f8a16982eb01f2191" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.494246 4895 scope.go:117] "RemoveContainer" containerID="84b768633e43f93f89f737a70ac7a6ed148437978da78841342b7c3b4acdd70d" Dec 06 06:57:53 crc kubenswrapper[4895]: E1206 06:57:53.494504 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-mhcxk_openshift-ovn-kubernetes(c9690808-de36-4960-8286-7079c78c491b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" podUID="c9690808-de36-4960-8286-7079c78c491b" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.518611 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.520645 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.520704 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.520719 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.520742 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.520754 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:53Z","lastTransitionTime":"2025-12-06T06:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.532112 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs\") pod \"network-metrics-daemon-dzrsj\" (UID: \"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\") " pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:57:53 crc kubenswrapper[4895]: E1206 06:57:53.532369 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:57:53 crc kubenswrapper[4895]: E1206 06:57:53.532527 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs podName:2c72bd78-81d3-48dc-96c3-50bc6bac88d6 nodeName:}" failed. No retries permitted until 2025-12-06 06:57:54.532488618 +0000 UTC m=+36.933877488 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs") pod "network-metrics-daemon-dzrsj" (UID: "2c72bd78-81d3-48dc-96c3-50bc6bac88d6") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.535794 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677b3ec4e606af7de5469200325f6ead8d16bb69c4a6b80f72a992fbae90999f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b6a4ce
8e19f3f4c7229a02fa2870fd55c2549af3834e848079146ba809520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qq8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.558616 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b768633e43f93f89f737a70ac7a6ed14843797
8da78841342b7c3b4acdd70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b39e94136cc9cd89675768bdab3f5d440dfd356f31e84f8a16982eb01f2191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kg/client/informers/externalversions/factory.go:141\\\\nI1206 06:57:48.876411 6130 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1206 06:57:48.876866 6130 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:57:48.877655 6130 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:57:48.878155 6130 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1206 06:57:48.878282 6130 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1206 06:57:48.878296 6130 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1206 06:57:48.878328 6130 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1206 06:57:48.878332 6130 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1206 06:57:48.878378 6130 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1206 06:57:48.878789 6130 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1206 06:57:48.878848 6130 factory.go:656] Stopping watch factory\\\\nI1206 06:57:48.878865 6130 ovnkube.go:599] Stopped ovnkube\\\\nI1206 
06:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.574258 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.591458 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.606215 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.620895 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.623030 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.623076 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.623087 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.623106 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.623116 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:53Z","lastTransitionTime":"2025-12-06T06:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.635752 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.652635 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.665177 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12
-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.680982 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e
6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.696572 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.711918 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.725464 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.725530 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.725542 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.725563 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.725578 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:53Z","lastTransitionTime":"2025-12-06T06:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.726395 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.739521 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.752091 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dzrsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.776708 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b768633e43f93f89f737a70ac7a6ed148437978da78841342b7c3b4acdd70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b39e94136cc9cd89675768bdab3f5d440dfd356f31e84f8a16982eb01f2191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kg/client/informers/externalversions/factory.go:141\\\\nI1206 06:57:48.876411 6130 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1206 06:57:48.876866 6130 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:57:48.877655 6130 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:57:48.878155 6130 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1206 06:57:48.878282 6130 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1206 06:57:48.878296 6130 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1206 06:57:48.878328 6130 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1206 06:57:48.878332 6130 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1206 06:57:48.878378 6130 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1206 06:57:48.878789 6130 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1206 06:57:48.878848 6130 factory.go:656] Stopping watch factory\\\\nI1206 06:57:48.878865 6130 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b768633e43f93f89f737a70ac7a6ed148437978da78841342b7c3b4acdd70d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"message\\\":\\\"perator/machine-config-daemon-6k7r2 openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v openshift-ovn-kubernetes/ovnkube-node-mhcxk 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1206 06:57:52.143914 6300 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1206 06:57:52.143927 6300 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:57:52.143942 6300 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1206 06:57:52.143956 6300 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1206 06:57:52.143958 6300 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 06:57:52.143985 6300 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1206 06:57:52.144028 6300 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nF1206 06:57:52.144040 6300 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b1
7b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.795211 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.809269 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.828136 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.828717 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.828751 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.828763 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.828780 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.828792 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:53Z","lastTransitionTime":"2025-12-06T06:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.851198 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:
57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.864943 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.878831 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.890941 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.905107 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.918063 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.931755 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.931758 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.931794 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.931804 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.931823 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.931837 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:53Z","lastTransitionTime":"2025-12-06T06:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.943438 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.954853 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.966548 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dzrsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.976948 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:53 crc kubenswrapper[4895]: I1206 06:57:53.989997 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677b3ec4e606af7de5469200325f6ead8d16bb69c4a6b80f72a992fbae90999f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b6a4ce8e19f3f4c7229a02fa2870fd55c2549af3834e848079146ba809520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qq8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:53Z is after 2025-08-24T17:21:41Z" Dec 06 
06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.034430 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.034502 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.034517 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.034537 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.034554 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:54Z","lastTransitionTime":"2025-12-06T06:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.050345 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:54 crc kubenswrapper[4895]: E1206 06:57:54.050620 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.136980 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.137025 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.137036 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.137055 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.137066 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:54Z","lastTransitionTime":"2025-12-06T06:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.239967 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.240008 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.240018 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.240035 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.240049 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:54Z","lastTransitionTime":"2025-12-06T06:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.343581 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.343633 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.343645 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.343668 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.343681 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:54Z","lastTransitionTime":"2025-12-06T06:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.447461 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.447536 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.447545 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.447566 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.447577 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:54Z","lastTransitionTime":"2025-12-06T06:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.499726 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovnkube-controller/1.log" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.542685 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs\") pod \"network-metrics-daemon-dzrsj\" (UID: \"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\") " pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:57:54 crc kubenswrapper[4895]: E1206 06:57:54.542943 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:57:54 crc kubenswrapper[4895]: E1206 06:57:54.543058 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs podName:2c72bd78-81d3-48dc-96c3-50bc6bac88d6 nodeName:}" failed. No retries permitted until 2025-12-06 06:57:56.543036811 +0000 UTC m=+38.944425681 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs") pod "network-metrics-daemon-dzrsj" (UID: "2c72bd78-81d3-48dc-96c3-50bc6bac88d6") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.549930 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.549981 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.549999 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.550017 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.550026 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:54Z","lastTransitionTime":"2025-12-06T06:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.652975 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.653023 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.653034 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.653052 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.653065 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:54Z","lastTransitionTime":"2025-12-06T06:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.755303 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.755347 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.755355 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.755369 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.755379 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:54Z","lastTransitionTime":"2025-12-06T06:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.858608 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.858678 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.858690 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.858709 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.858722 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:54Z","lastTransitionTime":"2025-12-06T06:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.961719 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.961762 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.961779 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.961799 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:54 crc kubenswrapper[4895]: I1206 06:57:54.961814 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:54Z","lastTransitionTime":"2025-12-06T06:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.049881 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.049937 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:57:55 crc kubenswrapper[4895]: E1206 06:57:55.050030 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.049961 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:57:55 crc kubenswrapper[4895]: E1206 06:57:55.050146 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:57:55 crc kubenswrapper[4895]: E1206 06:57:55.050255 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.064665 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.064693 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.064702 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.064717 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.064727 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:55Z","lastTransitionTime":"2025-12-06T06:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.167869 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.167917 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.167957 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.167979 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.167997 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:55Z","lastTransitionTime":"2025-12-06T06:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.271497 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.271538 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.271548 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.271565 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.271574 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:55Z","lastTransitionTime":"2025-12-06T06:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.374622 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.374688 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.374711 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.374741 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.374796 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:55Z","lastTransitionTime":"2025-12-06T06:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.478086 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.478130 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.478139 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.478155 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.478167 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:55Z","lastTransitionTime":"2025-12-06T06:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.584349 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.584409 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.584433 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.584455 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.584475 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:55Z","lastTransitionTime":"2025-12-06T06:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.688265 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.688311 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.688321 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.688341 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.688351 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:55Z","lastTransitionTime":"2025-12-06T06:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.799546 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.799610 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.799625 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.799648 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.799665 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:55Z","lastTransitionTime":"2025-12-06T06:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.902183 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.902237 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.902252 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.902272 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:55 crc kubenswrapper[4895]: I1206 06:57:55.902284 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:55Z","lastTransitionTime":"2025-12-06T06:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.005775 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.005833 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.005847 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.005869 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.005885 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:56Z","lastTransitionTime":"2025-12-06T06:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.050955 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:56 crc kubenswrapper[4895]: E1206 06:57:56.051154 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.109109 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.109154 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.109164 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.109183 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.109198 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:56Z","lastTransitionTime":"2025-12-06T06:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.212425 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.212568 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.212581 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.212600 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.212612 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:56Z","lastTransitionTime":"2025-12-06T06:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.316044 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.316121 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.316132 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.316154 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.316166 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:56Z","lastTransitionTime":"2025-12-06T06:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.420257 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.420314 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.420325 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.420346 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.420359 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:56Z","lastTransitionTime":"2025-12-06T06:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.522385 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.522431 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.522441 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.522462 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.522494 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:56Z","lastTransitionTime":"2025-12-06T06:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.565328 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs\") pod \"network-metrics-daemon-dzrsj\" (UID: \"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\") " pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:57:56 crc kubenswrapper[4895]: E1206 06:57:56.565674 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:57:56 crc kubenswrapper[4895]: E1206 06:57:56.565800 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs podName:2c72bd78-81d3-48dc-96c3-50bc6bac88d6 nodeName:}" failed. No retries permitted until 2025-12-06 06:58:00.56576745 +0000 UTC m=+42.967156500 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs") pod "network-metrics-daemon-dzrsj" (UID: "2c72bd78-81d3-48dc-96c3-50bc6bac88d6") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.625058 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.625106 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.625115 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.625130 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.625142 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:56Z","lastTransitionTime":"2025-12-06T06:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.728489 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.728553 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.728564 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.728584 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.728598 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:56Z","lastTransitionTime":"2025-12-06T06:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.832654 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.832727 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.832746 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.832777 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.832835 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:56Z","lastTransitionTime":"2025-12-06T06:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.936796 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.936898 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.936924 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.936965 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:56 crc kubenswrapper[4895]: I1206 06:57:56.936989 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:56Z","lastTransitionTime":"2025-12-06T06:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.041593 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.041645 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.041656 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.041676 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.041688 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:57Z","lastTransitionTime":"2025-12-06T06:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.050118 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:57:57 crc kubenswrapper[4895]: E1206 06:57:57.050303 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.050344 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.050410 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:57:57 crc kubenswrapper[4895]: E1206 06:57:57.050538 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:57:57 crc kubenswrapper[4895]: E1206 06:57:57.050676 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.143874 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.143921 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.143937 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.143957 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.143967 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:57Z","lastTransitionTime":"2025-12-06T06:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.247820 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.247873 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.247888 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.247911 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.247923 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:57Z","lastTransitionTime":"2025-12-06T06:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.351041 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.351109 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.351125 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.351148 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.351161 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:57Z","lastTransitionTime":"2025-12-06T06:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.454516 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.454562 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.454572 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.454589 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.454599 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:57Z","lastTransitionTime":"2025-12-06T06:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.558148 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.558221 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.558235 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.558255 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.558267 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:57Z","lastTransitionTime":"2025-12-06T06:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.660918 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.660961 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.660973 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.660990 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.661033 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:57Z","lastTransitionTime":"2025-12-06T06:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.765256 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.765329 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.765364 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.765392 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.765407 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:57Z","lastTransitionTime":"2025-12-06T06:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.870101 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.870163 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.870173 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.870191 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.870202 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:57Z","lastTransitionTime":"2025-12-06T06:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.973665 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.973718 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.973728 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.973745 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:57 crc kubenswrapper[4895]: I1206 06:57:57.973756 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:57Z","lastTransitionTime":"2025-12-06T06:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.049611 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:57:58 crc kubenswrapper[4895]: E1206 06:57:58.049772 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.071951 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b768633e43f93f89f737a70ac7a6ed148437978da78841342b7c3b4acdd70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b39e94136cc9cd89675768bdab3f5d440dfd356f31e84f8a16982eb01f2191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:57:49Z\\\",\\\"message\\\":\\\"kg/client/informers/externalversions/factory.go:141\\\\nI1206 06:57:48.876411 6130 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1206 06:57:48.876866 6130 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:57:48.877655 6130 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:57:48.878155 6130 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1206 06:57:48.878282 6130 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1206 06:57:48.878296 6130 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1206 06:57:48.878328 6130 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1206 06:57:48.878332 6130 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1206 06:57:48.878378 6130 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1206 06:57:48.878789 6130 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1206 06:57:48.878848 6130 factory.go:656] Stopping watch factory\\\\nI1206 06:57:48.878865 6130 ovnkube.go:599] Stopped ovnkube\\\\nI1206 
06:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b768633e43f93f89f737a70ac7a6ed148437978da78841342b7c3b4acdd70d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"message\\\":\\\"perator/machine-config-daemon-6k7r2 openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v openshift-ovn-kubernetes/ovnkube-node-mhcxk openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1206 06:57:52.143914 6300 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1206 06:57:52.143927 6300 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:57:52.143942 6300 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1206 06:57:52.143956 6300 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1206 06:57:52.143958 6300 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 06:57:52.143985 6300 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1206 06:57:52.144028 6300 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nF1206 06:57:52.144040 6300 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:58Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.076336 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.076385 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.076396 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.076417 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.076432 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:58Z","lastTransitionTime":"2025-12-06T06:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.087714 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:58Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.101206 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:58Z is after 2025-08-24T17:21:41Z" Dec 06 
06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.116169 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:58Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.132722 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:58Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.148514 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:58Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.165374 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:58Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.179596 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.179630 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.179640 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.179659 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.179669 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:58Z","lastTransitionTime":"2025-12-06T06:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.184772 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:58Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.199364 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:58Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.212445 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:58Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.225149 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dzrsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:58Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.241009 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:58Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.255403 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:58Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.268024 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:58Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.279098 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:58Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.283338 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.283391 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.283407 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.283433 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.283457 4895 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:58Z","lastTransitionTime":"2025-12-06T06:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.292190 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677b3ec4e606af7de5469200325f6ead8d16bb69c4a6b80f72a992fbae90999f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b6a4ce8e19f3f4c7229a02fa2870fd55c2549af3834e848079146ba809520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qq8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:58Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.387595 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.387681 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.387697 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.387716 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.387728 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:58Z","lastTransitionTime":"2025-12-06T06:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.491057 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.491121 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.491133 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.491156 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.491171 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:58Z","lastTransitionTime":"2025-12-06T06:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.594346 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.594438 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.594452 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.594491 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.594509 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:58Z","lastTransitionTime":"2025-12-06T06:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.697336 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.697414 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.697428 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.697446 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.697458 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:58Z","lastTransitionTime":"2025-12-06T06:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.800605 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.800659 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.800671 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.800691 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.800703 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:58Z","lastTransitionTime":"2025-12-06T06:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.903975 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.904032 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.904047 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.904065 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:58 crc kubenswrapper[4895]: I1206 06:57:58.904085 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:58Z","lastTransitionTime":"2025-12-06T06:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.007798 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.007843 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.007854 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.007873 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.007884 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:59Z","lastTransitionTime":"2025-12-06T06:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.050595 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.050690 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:57:59 crc kubenswrapper[4895]: E1206 06:57:59.050749 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:57:59 crc kubenswrapper[4895]: E1206 06:57:59.050866 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.050925 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:57:59 crc kubenswrapper[4895]: E1206 06:57:59.050977 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.112116 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.112977 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.113079 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.113199 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.113304 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:59Z","lastTransitionTime":"2025-12-06T06:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.216241 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.216299 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.216312 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.216331 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.216343 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:59Z","lastTransitionTime":"2025-12-06T06:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.319176 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.319289 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.319302 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.319322 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.319338 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:59Z","lastTransitionTime":"2025-12-06T06:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.422355 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.422760 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.422846 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.422936 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.423066 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:59Z","lastTransitionTime":"2025-12-06T06:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.525234 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.525269 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.525280 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.525298 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.525310 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:59Z","lastTransitionTime":"2025-12-06T06:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.628623 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.629280 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.629359 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.629496 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.629587 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:59Z","lastTransitionTime":"2025-12-06T06:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.733571 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.733628 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.733641 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.733661 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.733675 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:59Z","lastTransitionTime":"2025-12-06T06:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.773517 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.773570 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.773586 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.773607 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.773619 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:59Z","lastTransitionTime":"2025-12-06T06:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:59 crc kubenswrapper[4895]: E1206 06:57:59.787599 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:59Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.793037 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.793215 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.793284 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.793385 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.793510 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:59Z","lastTransitionTime":"2025-12-06T06:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:59 crc kubenswrapper[4895]: E1206 06:57:59.808436 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:59Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.813441 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.813532 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.813545 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.813567 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.813581 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:59Z","lastTransitionTime":"2025-12-06T06:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:59 crc kubenswrapper[4895]: E1206 06:57:59.881689 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:59Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.886252 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.886308 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.886323 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.886351 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.886368 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:59Z","lastTransitionTime":"2025-12-06T06:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:59 crc kubenswrapper[4895]: E1206 06:57:59.905633 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:59Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.909792 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.909829 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
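Note: the status-patch failures above all fail the same way: the kubelet reaches the network-node-identity webhook at https://127.0.0.1:9743/node, but TLS verification rejects the serving certificate because its notAfter (2025-08-24T17:21:41Z) is months behind the node clock (2025-12-06T06:57:59Z). A minimal, standalone Go sketch that confirms this from the node by dialing the endpoint named in the log and printing the presented certificate's validity window (a diagnostic aid, not kubelet code):

// certcheck.go - standalone diagnostic sketch: dial the webhook endpoint
// from the log and print the presented certificate's validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Endpoint taken from the log line. InsecureSkipVerify is used only so
	// the handshake completes and we can inspect the expired certificate.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now().UTC()
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	// The same check Go's x509 verification performs: the log's
	// "current time ... is after ..." message means now > NotAfter.
	if now.After(cert.NotAfter) {
		fmt.Println("certificate has expired")
	}
}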
event="NodeHasNoDiskPressure" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.909842 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.909861 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.909874 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:59Z","lastTransitionTime":"2025-12-06T06:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:57:59 crc kubenswrapper[4895]: E1206 06:57:59.922189 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:57:59Z is after 2025-08-24T17:21:41Z" Dec 06 06:57:59 crc kubenswrapper[4895]: E1206 06:57:59.922359 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.924570 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
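Note: the "Unable to update node status" entry at 06:57:59.922359 closes the cycle: the kubelet attempts the node status update a bounded number of times per sync (a constant of five attempts in upstream kubelet_node_status.go) before logging "update node status exceeds retry count" and waiting for the next tick. A simplified Go sketch of that loop shape, assuming the upstream constant (an illustration, not the kubelet source):

// Sketch of the retry pattern behind "update node status exceeds retry count".
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed to match the upstream kubelet constant

func updateNodeStatus(tryUpdate func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdate(); err != nil {
			// Matches the repeated E-level "Error updating node status,
			// will retry" entries above.
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	webhookErr := errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
	if err := updateNodeStatus(func() error { return webhookErr }); err != nil {
		fmt.Println(err) // update node status exceeds retry count
	}
}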
event="NodeHasSufficientMemory" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.924628 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.924638 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.924658 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:57:59 crc kubenswrapper[4895]: I1206 06:57:59.924675 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:57:59Z","lastTransitionTime":"2025-12-06T06:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.027537 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.027587 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.027597 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.027612 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.027623 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:00Z","lastTransitionTime":"2025-12-06T06:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.049994 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:00 crc kubenswrapper[4895]: E1206 06:58:00.050158 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.131098 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.131146 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.131155 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.131172 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.131183 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:00Z","lastTransitionTime":"2025-12-06T06:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.234053 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.234098 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.234113 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.234130 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.234139 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:00Z","lastTransitionTime":"2025-12-06T06:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.337145 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.337193 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.337203 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.337223 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.337236 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:00Z","lastTransitionTime":"2025-12-06T06:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.440278 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.440344 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.440358 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.440384 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.440403 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:00Z","lastTransitionTime":"2025-12-06T06:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.543438 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.543530 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.543545 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.543564 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.543577 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:00Z","lastTransitionTime":"2025-12-06T06:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.607597 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs\") pod \"network-metrics-daemon-dzrsj\" (UID: \"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\") " pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:00 crc kubenswrapper[4895]: E1206 06:58:00.607807 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:58:00 crc kubenswrapper[4895]: E1206 06:58:00.607930 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs podName:2c72bd78-81d3-48dc-96c3-50bc6bac88d6 nodeName:}" failed. No retries permitted until 2025-12-06 06:58:08.607898469 +0000 UTC m=+51.009287339 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs") pod "network-metrics-daemon-dzrsj" (UID: "2c72bd78-81d3-48dc-96c3-50bc6bac88d6") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.647455 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.647546 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.647562 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.647581 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.647594 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:00Z","lastTransitionTime":"2025-12-06T06:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
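Note: the metrics-certs mount failure above is retried on a different schedule than the node status update: nestedpendingoperations.go places the failed MountVolume operation under per-operation exponential backoff, and "(durationBeforeRetry 8s)" with a retry deadline eight seconds out is one step of a doubling schedule. A Go sketch of that schedule; the 500ms initial delay and 2x factor are assumptions consistent with the observed 8s, not quoted kubelet constants:

// Sketch of the exponential backoff behind "No retries permitted until ...
// (durationBeforeRetry 8s)".
package main

import (
	"fmt"
	"time"
)

const (
	initialDurationBeforeRetry = 500 * time.Millisecond // assumed initial delay
	maxDurationBeforeRetry     = 2 * time.Minute        // assumed cap
)

type backoff struct {
	lastError           time.Time
	durationBeforeRetry time.Duration
}

// update doubles the delay on every consecutive failure, up to a cap.
func (b *backoff) update(now time.Time) {
	if b.durationBeforeRetry == 0 {
		b.durationBeforeRetry = initialDurationBeforeRetry
	} else {
		b.durationBeforeRetry *= 2
		if b.durationBeforeRetry > maxDurationBeforeRetry {
			b.durationBeforeRetry = maxDurationBeforeRetry
		}
	}
	b.lastError = now
}

func main() {
	var b backoff
	now := time.Date(2025, 12, 6, 6, 58, 0, 0, time.UTC)
	for i := 1; i <= 5; i++ {
		b.update(now)
	}
	// After the fifth consecutive failure the delay reaches 8s, matching the
	// log's retry deadline of 06:58:08 for a failure at 06:58:00.
	fmt.Printf("durationBeforeRetry %s, no retries until %s\n",
		b.durationBeforeRetry, b.lastError.Add(b.durationBeforeRetry).Format(time.RFC3339))
}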
Has your network provider started?"} Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.750787 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.750840 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.750858 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.750878 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.750890 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:00Z","lastTransitionTime":"2025-12-06T06:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.853282 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.853328 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.853341 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.853358 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.853368 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:00Z","lastTransitionTime":"2025-12-06T06:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.956040 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.956092 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.956105 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.956126 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:00 crc kubenswrapper[4895]: I1206 06:58:00.956141 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:00Z","lastTransitionTime":"2025-12-06T06:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.050268 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.050333 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.050399 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:58:01 crc kubenswrapper[4895]: E1206 06:58:01.050558 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:58:01 crc kubenswrapper[4895]: E1206 06:58:01.050701 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:58:01 crc kubenswrapper[4895]: E1206 06:58:01.050765 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.058547 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.058591 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.058604 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.058627 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.058643 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:01Z","lastTransitionTime":"2025-12-06T06:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.162105 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.162167 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.162180 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.162201 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.162213 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:01Z","lastTransitionTime":"2025-12-06T06:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.265193 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.265253 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.265267 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.265286 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.265318 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:01Z","lastTransitionTime":"2025-12-06T06:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.367906 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.367959 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.367973 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.367991 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.368003 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:01Z","lastTransitionTime":"2025-12-06T06:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.471095 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.471198 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.471213 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.471239 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.471252 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:01Z","lastTransitionTime":"2025-12-06T06:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.577339 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.577395 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.577407 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.577424 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.577438 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:01Z","lastTransitionTime":"2025-12-06T06:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.680610 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.680665 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.680675 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.680693 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.680705 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:01Z","lastTransitionTime":"2025-12-06T06:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.783378 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.783459 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.783488 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.783507 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.783522 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:01Z","lastTransitionTime":"2025-12-06T06:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.886165 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.886206 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.886218 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.886239 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.886252 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:01Z","lastTransitionTime":"2025-12-06T06:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.989353 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.989387 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.989397 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.989412 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:01 crc kubenswrapper[4895]: I1206 06:58:01.989421 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:01Z","lastTransitionTime":"2025-12-06T06:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.050349 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:02 crc kubenswrapper[4895]: E1206 06:58:02.050747 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.092371 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.092425 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.092438 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.092456 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.092493 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:02Z","lastTransitionTime":"2025-12-06T06:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.196831 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.196901 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.196925 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.196958 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.196984 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:02Z","lastTransitionTime":"2025-12-06T06:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.301154 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.301219 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.301234 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.301255 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.301269 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:02Z","lastTransitionTime":"2025-12-06T06:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.403883 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.403943 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.403957 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.403984 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.404001 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:02Z","lastTransitionTime":"2025-12-06T06:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.507540 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.507593 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.507604 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.507626 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.507638 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:02Z","lastTransitionTime":"2025-12-06T06:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.611549 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.611649 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.611667 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.611696 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.611739 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:02Z","lastTransitionTime":"2025-12-06T06:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.714624 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.714686 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.714699 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.714718 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.714734 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:02Z","lastTransitionTime":"2025-12-06T06:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.817855 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.817897 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.817908 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.817925 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.817936 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:02Z","lastTransitionTime":"2025-12-06T06:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.920317 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.920397 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.920408 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.920424 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:02 crc kubenswrapper[4895]: I1206 06:58:02.920435 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:02Z","lastTransitionTime":"2025-12-06T06:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.023907 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.023955 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.023964 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.023983 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.023996 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:03Z","lastTransitionTime":"2025-12-06T06:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.050375 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.050430 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.050548 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:03 crc kubenswrapper[4895]: E1206 06:58:03.050607 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:58:03 crc kubenswrapper[4895]: E1206 06:58:03.050635 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:58:03 crc kubenswrapper[4895]: E1206 06:58:03.050744 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.127709 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.127774 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.127791 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.127814 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.127827 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:03Z","lastTransitionTime":"2025-12-06T06:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.231093 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.231163 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.231187 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.231217 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.231236 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:03Z","lastTransitionTime":"2025-12-06T06:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.334500 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.334560 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.334573 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.334595 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.334622 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:03Z","lastTransitionTime":"2025-12-06T06:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.438044 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.438114 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.438133 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.438163 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.438181 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:03Z","lastTransitionTime":"2025-12-06T06:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.541873 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.541942 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.541961 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.541987 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.542008 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:03Z","lastTransitionTime":"2025-12-06T06:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.644781 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.644818 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.644894 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.644915 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.644929 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:03Z","lastTransitionTime":"2025-12-06T06:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.747388 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.747432 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.747442 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.747457 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.747470 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:03Z","lastTransitionTime":"2025-12-06T06:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.851646 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.851716 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.851728 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.851746 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.851757 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:03Z","lastTransitionTime":"2025-12-06T06:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.954775 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.954831 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.954843 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.954862 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:03 crc kubenswrapper[4895]: I1206 06:58:03.955214 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:03Z","lastTransitionTime":"2025-12-06T06:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.049738 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:04 crc kubenswrapper[4895]: E1206 06:58:04.049943 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.058189 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.058229 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.058242 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.058261 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.058274 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:04Z","lastTransitionTime":"2025-12-06T06:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.161078 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.161133 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.161146 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.161165 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.161178 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:04Z","lastTransitionTime":"2025-12-06T06:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.264672 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.264742 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.264763 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.264812 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.264831 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:04Z","lastTransitionTime":"2025-12-06T06:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.368343 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.368437 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.368465 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.368571 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.368642 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:04Z","lastTransitionTime":"2025-12-06T06:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.476896 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.476946 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.476978 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.477000 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.477013 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:04Z","lastTransitionTime":"2025-12-06T06:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.581004 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.581084 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.581103 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.581129 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.581150 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:04Z","lastTransitionTime":"2025-12-06T06:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.684543 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.684597 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.684635 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.684654 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.684669 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:04Z","lastTransitionTime":"2025-12-06T06:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.787968 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.788015 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.788025 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.788043 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.788057 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:04Z","lastTransitionTime":"2025-12-06T06:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.891779 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.891844 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.891859 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.891882 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.891899 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:04Z","lastTransitionTime":"2025-12-06T06:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.997000 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.997050 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.997062 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.997088 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:04 crc kubenswrapper[4895]: I1206 06:58:04.997196 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:04Z","lastTransitionTime":"2025-12-06T06:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.049831 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.049901 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.049954 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:58:05 crc kubenswrapper[4895]: E1206 06:58:05.050031 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:58:05 crc kubenswrapper[4895]: E1206 06:58:05.050441 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:58:05 crc kubenswrapper[4895]: E1206 06:58:05.050528 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.050844 4895 scope.go:117] "RemoveContainer" containerID="84b768633e43f93f89f737a70ac7a6ed148437978da78841342b7c3b4acdd70d" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.063455 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.075404 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677b3ec4e606af7de5469200325f6ead8d16bb69c4a6b80f72a992fbae90999f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b6a4ce8e19f3f4c7229a02fa2870fd55c2549af3834e848079146ba809520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qq8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 
06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.097109 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b768633e43f93f89f737a70ac7a6ed148437978da78841342b7c3b4acdd70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b768633e43f93f89f737a70ac7a6ed148437978da78841342b7c3b4acdd70d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"message\\\":\\\"perator/machine-config-daemon-6k7r2 openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v openshift-ovn-kubernetes/ovnkube-node-mhcxk openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1206 06:57:52.143914 6300 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1206 06:57:52.143927 6300 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:57:52.143942 6300 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1206 06:57:52.143956 6300 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1206 06:57:52.143958 6300 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 06:57:52.143985 6300 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1206 06:57:52.144028 6300 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nF1206 06:57:52.144040 6300 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mhcxk_openshift-ovn-kubernetes(c9690808-de36-4960-8286-7079c78c491b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.101344 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.101387 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.101397 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.101421 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.101431 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:05Z","lastTransitionTime":"2025-12-06T06:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
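
Every status patch in this stream fails identically: the kubelet POSTs the patch to the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/pod, and TLS verification rejects the webhook's serving certificate, which expired 2025-08-24T17:21:41Z while the node clock reads 2025-12-06. A minimal Go sketch (a hypothetical diagnostic, not part of the cluster tooling) that reproduces the check by fetching the endpoint's certificate and comparing its NotAfter to the current time:

package main

// certcheck: connect to the webhook endpoint named in the kubelet
// errors above and report whether its serving certificate is expired,
// mirroring the x509 validity check that fails during the patch calls.
import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Skip chain verification so the handshake succeeds and the
	// certificate itself can be inspected.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter.Format(time.RFC3339))
	if now := time.Now(); now.After(cert.NotAfter) {
		// Same shape as the kubelet error in the records above.
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	}
}

Until that certificate is reissued (on CRC this normally happens automatically when the cluster is restarted with a correct clock), every patch attempt below keeps failing with the same error.
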
Has your network provider started?"} Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.113883 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.127525 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.142637 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.157577 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.171500 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.187764 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.202539 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.206642 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.206695 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.206708 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.206731 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.206747 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:05Z","lastTransitionTime":"2025-12-06T06:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.214827 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.226316 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dzrsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.242264 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.257671 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.273219 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.285430 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.309374 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.309408 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.309416 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.309431 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.309441 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:05Z","lastTransitionTime":"2025-12-06T06:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.411871 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.411911 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.411926 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.411943 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.411955 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:05Z","lastTransitionTime":"2025-12-06T06:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.514852 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.514884 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.514893 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.514909 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.514921 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:05Z","lastTransitionTime":"2025-12-06T06:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.546232 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovnkube-controller/1.log" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.549972 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerStarted","Data":"6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7"} Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.550945 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.618122 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.618204 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.618218 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.618242 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.618264 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:05Z","lastTransitionTime":"2025-12-06T06:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.626295 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b768633e43f93f89f737a70ac7a6ed148437978da78841342b7c3b4acdd70d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"message\\\":\\\"perator/machine-config-daemon-6k7r2 openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v openshift-ovn-kubernetes/ovnkube-node-mhcxk openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1206 06:57:52.143914 6300 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1206 06:57:52.143927 6300 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:57:52.143942 6300 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1206 06:57:52.143956 6300 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1206 06:57:52.143958 6300 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 06:57:52.143985 6300 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1206 06:57:52.144028 6300 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nF1206 06:57:52.144040 6300 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:58:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.643107 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.658013 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.671616 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.684784 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.699312 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.731878 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.731941 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.731952 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.731971 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.731983 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:05Z","lastTransitionTime":"2025-12-06T06:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.731963 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.745712 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 
06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.760721 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dzrsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.777462 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.791824 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.805285 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.818289 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.829335 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.840088 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.840171 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.840202 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.840233 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.840250 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:05Z","lastTransitionTime":"2025-12-06T06:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.847868 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.866463 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677b3ec4e606af7de5469200325f6ead8d16bb69c4a6b80f72a992fbae90999f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b6a4ce8e19f3f4c7229a02fa2870fd55c2549af3834e848079146ba809520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qq8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:05Z is after 2025-08-24T17:21:41Z" Dec 06 
06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.942688 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.942729 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.942739 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.942756 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:05 crc kubenswrapper[4895]: I1206 06:58:05.942776 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:05Z","lastTransitionTime":"2025-12-06T06:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.045830 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.045871 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.045879 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.045896 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.045906 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:06Z","lastTransitionTime":"2025-12-06T06:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.049784 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:06 crc kubenswrapper[4895]: E1206 06:58:06.049952 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.149569 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.149621 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.149633 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.149657 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.149669 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:06Z","lastTransitionTime":"2025-12-06T06:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.252436 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.252517 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.252532 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.252554 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.252566 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:06Z","lastTransitionTime":"2025-12-06T06:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.355175 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.355233 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.355248 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.355269 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.355286 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:06Z","lastTransitionTime":"2025-12-06T06:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.458301 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.458360 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.458372 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.458391 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.458405 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:06Z","lastTransitionTime":"2025-12-06T06:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.561631 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.561692 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.561705 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.561725 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.561741 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:06Z","lastTransitionTime":"2025-12-06T06:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.664783 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.664835 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.664850 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.664873 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.664885 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:06Z","lastTransitionTime":"2025-12-06T06:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.768145 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.768216 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.768231 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.768252 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.768267 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:06Z","lastTransitionTime":"2025-12-06T06:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.870958 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.871026 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.871045 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.871071 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.871089 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:06Z","lastTransitionTime":"2025-12-06T06:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.974081 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.974129 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.974141 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.974160 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:06 crc kubenswrapper[4895]: I1206 06:58:06.974173 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:06Z","lastTransitionTime":"2025-12-06T06:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.050336 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.050438 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:07 crc kubenswrapper[4895]: E1206 06:58:07.050537 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.050449 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:58:07 crc kubenswrapper[4895]: E1206 06:58:07.050681 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:58:07 crc kubenswrapper[4895]: E1206 06:58:07.050898 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.076844 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.076913 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.076929 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.076954 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.076971 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:07Z","lastTransitionTime":"2025-12-06T06:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.180167 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.180211 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.180224 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.180243 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.180255 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:07Z","lastTransitionTime":"2025-12-06T06:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.282757 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.282815 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.282828 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.282845 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.282856 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:07Z","lastTransitionTime":"2025-12-06T06:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.386137 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.386180 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.386190 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.386209 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.386221 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:07Z","lastTransitionTime":"2025-12-06T06:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.488795 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.488900 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.488927 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.488965 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.488990 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:07Z","lastTransitionTime":"2025-12-06T06:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.559575 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovnkube-controller/2.log" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.564787 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovnkube-controller/1.log" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.572850 4895 generic.go:334] "Generic (PLEG): container finished" podID="c9690808-de36-4960-8286-7079c78c491b" containerID="6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7" exitCode=1 Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.572912 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerDied","Data":"6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7"} Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.572976 4895 scope.go:117] "RemoveContainer" containerID="84b768633e43f93f89f737a70ac7a6ed148437978da78841342b7c3b4acdd70d" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.579306 4895 scope.go:117] "RemoveContainer" containerID="6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7" Dec 06 06:58:07 crc kubenswrapper[4895]: E1206 06:58:07.579694 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mhcxk_openshift-ovn-kubernetes(c9690808-de36-4960-8286-7079c78c491b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" podUID="c9690808-de36-4960-8286-7079c78c491b" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.591501 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.591554 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.591565 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.591584 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.591597 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:07Z","lastTransitionTime":"2025-12-06T06:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.599740 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b768633e43f93f89f737a70ac7a6ed148437978da78841342b7c3b4acdd70d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"message\\\":\\\"perator/machine-config-daemon-6k7r2 openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v openshift-ovn-kubernetes/ovnkube-node-mhcxk openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1206 06:57:52.143914 6300 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1206 06:57:52.143927 6300 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:57:52.143942 6300 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1206 06:57:52.143956 6300 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1206 06:57:52.143958 6300 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 06:57:52.143985 6300 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1206 06:57:52.144028 6300 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nF1206 06:57:52.144040 6300 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:07Z\\\",\\\"message\\\":\\\"206 06:58:06.464358 6494 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}\\\\nI1206 06:58:06.464380 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 06:58:06.464384 6494 services_controller.go:360] Finished syncing service redhat-operators on namespace openshift-marketplace for network=default : 2.499668ms\\\\nI1206 06:58:06.464415 6494 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for network=default\\\\nF1206 06:58:06.464442 6494 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certificate\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:58:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:07Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.613675 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:07Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.628353 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:07Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.643145 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:07Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.657680 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:07Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.670661 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:07Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.684154 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:07Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.696982 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3
a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:07Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.698209 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.698254 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.698264 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.698283 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.698295 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:07Z","lastTransitionTime":"2025-12-06T06:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.712401 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:07Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:07 crc kubenswrapper[4895]: I1206 06:58:07.727876 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:07Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:07.801069 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:07.801103 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:07.801113 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:07.801130 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:07.801142 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:07Z","lastTransitionTime":"2025-12-06T06:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:07.904581 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:07.904652 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:07.904675 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:07.904707 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:07.904730 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:07Z","lastTransitionTime":"2025-12-06T06:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.007533 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.007599 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.007639 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.007663 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.007675 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:08Z","lastTransitionTime":"2025-12-06T06:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.050005 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:08 crc kubenswrapper[4895]: E1206 06:58:08.050191 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.111195 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.111242 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.111251 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.111274 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.111287 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:08Z","lastTransitionTime":"2025-12-06T06:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.213581 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.213627 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.213639 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.213656 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.213667 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:08Z","lastTransitionTime":"2025-12-06T06:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.291278 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.305868 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.316826 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.317039 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.317077 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.317091 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.317110 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.317123 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:08Z","lastTransitionTime":"2025-12-06T06:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.330635 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dzrsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.344496 4895 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.360166 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677b3ec4e606af7de5469200325f6ead8d16bb69c4a6b80f72a992fbae90999f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b6a4ce8e19f3f4c7229a02fa2870fd55c2549af3834e848079146ba809520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qq8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 
06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.379054 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24aff
acf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b768633e43f93f89f737a70ac7a6ed148437978da78841342b7c3b4acdd70d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"message\\\":\\\"perator/machine-config-daemon-6k7r2 openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v openshift-ovn-kubernetes/ovnkube-node-mhcxk openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1206 06:57:52.143914 6300 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1206 06:57:52.143927 6300 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:57:52.143942 6300 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1206 06:57:52.143956 6300 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1206 06:57:52.143958 6300 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 06:57:52.143985 6300 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1206 06:57:52.144028 6300 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nF1206 06:57:52.144040 6300 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:07Z\\\",\\\"message\\\":\\\"206 06:58:06.464358 6494 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}\\\\nI1206 06:58:06.464380 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 06:58:06.464384 6494 services_controller.go:360] Finished syncing service redhat-operators on namespace openshift-marketplace for network=default : 2.499668ms\\\\nI1206 06:58:06.464415 6494 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for network=default\\\\nF1206 06:58:06.464442 6494 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certificate\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:58:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.394818 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/h
ost/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.412949 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.421797 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.421855 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:08 crc 
kubenswrapper[4895]: I1206 06:58:08.421868 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.421888 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.421900 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:08Z","lastTransitionTime":"2025-12-06T06:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.426862 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.441747 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.456114 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.469078 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.481517 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.497351 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.512590 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.525690 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.525752 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.525766 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.525791 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.525803 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:08Z","lastTransitionTime":"2025-12-06T06:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.530234 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.545544 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dzrsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.561694 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.576160 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.579079 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovnkube-controller/2.log" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.588111 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.600841 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677b3ec4e606af7de5469200325f6ead8d16bb69c4a6b80f72a992fbae90999f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b6a4ce8e19f3f4c7229a02fa2870fd55c2549af3834e848079146ba809520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qq8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:08Z is after 2025-08-24T17:21:41Z" Dec 06 
06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.629554 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.629590 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.629604 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.629622 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.629635 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:08Z","lastTransitionTime":"2025-12-06T06:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.642413 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs\") pod \"network-metrics-daemon-dzrsj\" (UID: \"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\") " pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:08 crc kubenswrapper[4895]: E1206 06:58:08.642611 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:58:08 crc kubenswrapper[4895]: E1206 06:58:08.642675 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs podName:2c72bd78-81d3-48dc-96c3-50bc6bac88d6 nodeName:}" failed. No retries permitted until 2025-12-06 06:58:24.642656854 +0000 UTC m=+67.044045734 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs") pod "network-metrics-daemon-dzrsj" (UID: "2c72bd78-81d3-48dc-96c3-50bc6bac88d6") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.732504 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.732550 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.732563 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.732581 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.732592 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:08Z","lastTransitionTime":"2025-12-06T06:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.743937 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.744163 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:08 crc kubenswrapper[4895]: E1206 06:58:08.744079 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:58:08 crc kubenswrapper[4895]: E1206 06:58:08.744429 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:58:40.744409948 +0000 UTC m=+83.145798818 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:58:08 crc kubenswrapper[4895]: E1206 06:58:08.744286 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:58:08 crc kubenswrapper[4895]: E1206 06:58:08.744555 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:58:40.744539021 +0000 UTC m=+83.145927891 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.836708 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.836791 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.836818 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.836852 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.836875 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:08Z","lastTransitionTime":"2025-12-06T06:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.845318 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:58:08 crc kubenswrapper[4895]: E1206 06:58:08.845520 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:58:40.845462624 +0000 UTC m=+83.246851534 (durationBeforeRetry 32s). 
Dec 06 06:58:08 crc kubenswrapper[4895]: E1206 06:58:08.845520 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:58:40.845462624 +0000 UTC m=+83.246851534 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.845659 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.845705 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:58:08 crc kubenswrapper[4895]: E1206 06:58:08.845884 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 06 06:58:08 crc kubenswrapper[4895]: E1206 06:58:08.845975 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 06 06:58:08 crc kubenswrapper[4895]: E1206 06:58:08.845999 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 06 06:58:08 crc kubenswrapper[4895]: E1206 06:58:08.846157 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 06:58:40.846096081 +0000 UTC m=+83.247484991 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 06 06:58:08 crc kubenswrapper[4895]: E1206 06:58:08.845890 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 06 06:58:08 crc kubenswrapper[4895]: E1206 06:58:08.846289 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 06 06:58:08 crc kubenswrapper[4895]: E1206 06:58:08.846327 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 06 06:58:08 crc kubenswrapper[4895]: E1206 06:58:08.846460 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 06:58:40.846420899 +0000 UTC m=+83.247809949 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
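Every mount failure in this stretch has the same shape: object "<namespace>"/"<name>" not registered, i.e. the kubelet's object cache has not (re)synced the ConfigMap or Secret the volume needs. A small, hypothetical triage helper (not part of any tooling referenced in this log) that kubelet journal output could be piped through to list the distinct unregistered objects:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches the failure form seen above, e.g.:
    //   object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
    var notRegistered = regexp.MustCompile(`object "([^"]+)"/"([^"]+)" not registered`)

    func main() {
        seen := map[string]bool{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            for _, m := range notRegistered.FindAllStringSubmatch(sc.Text(), -1) {
                key := m[1] + "/" + m[2]
                if !seen[key] {
                    seen[key] = true
                    fmt.Println(key)
                }
            }
        }
    }

On the records above it would report four objects: openshift-network-console/networking-console-plugin, openshift-network-console/networking-console-plugin-cert, openshift-network-diagnostics/kube-root-ca.crt and openshift-network-diagnostics/openshift-service-ca.crt.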
Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.940617 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.940701 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.940726 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.940756 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:58:08 crc kubenswrapper[4895]: I1206 06:58:08.940776 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:08Z","lastTransitionTime":"2025-12-06T06:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.045404 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.045469 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.045506 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.045525 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.045539 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:09Z","lastTransitionTime":"2025-12-06T06:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.050162 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.050190 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.050359 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:58:09 crc kubenswrapper[4895]: E1206 06:58:09.050533 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6"
Dec 06 06:58:09 crc kubenswrapper[4895]: E1206 06:58:09.050750 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 06:58:09 crc kubenswrapper[4895]: E1206 06:58:09.050924 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.148506 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.148555 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.148564 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.148585 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.148597 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:09Z","lastTransitionTime":"2025-12-06T06:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.251194 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.251239 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.251251 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.251272 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.251286 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:09Z","lastTransitionTime":"2025-12-06T06:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.354287 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.354343 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.354358 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.354377 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.354388 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:09Z","lastTransitionTime":"2025-12-06T06:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.457527 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.457601 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.457612 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.457631 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.457643 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:09Z","lastTransitionTime":"2025-12-06T06:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
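The node-status block repeating every ~100ms around this point carries the same condition payload each time; it is ordinary JSON and decodes cleanly, showing the node is NotReady for exactly one reason, the missing CNI config under /etc/kubernetes/cni/net.d/. A Go sketch using a trimmed stand-in type (the full type would be NodeCondition in k8s.io/api/core/v1):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // nodeCondition mirrors only the fields visible in the condition={...}
    // payloads in this log; it is a stand-in, not the real API type.
    type nodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:09Z","lastTransitionTime":"2025-12-06T06:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
        var c nodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }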
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.561756 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.561868 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.561893 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.561931 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.561965 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:09Z","lastTransitionTime":"2025-12-06T06:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.664757 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.664815 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.664827 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.664849 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.664864 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:09Z","lastTransitionTime":"2025-12-06T06:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.731648 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.742988 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.747750 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dzrsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.764848 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.767353 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.767404 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.767417 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.767440 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.767453 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:09Z","lastTransitionTime":"2025-12-06T06:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.780250 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.793175 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.805256 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.816947 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.827505 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.840570 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677b3ec4e606af7de5469200325f6ead8d16bb69c4a6b80f72a992fbae90999f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b6a4ce8e19f3f4c7229a02fa2870fd55c2549af3834e848079146ba809520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qq8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:09Z is after 2025-08-24T17:21:41Z" Dec 06 
06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.863719 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24aff
acf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b768633e43f93f89f737a70ac7a6ed148437978da78841342b7c3b4acdd70d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"message\\\":\\\"perator/machine-config-daemon-6k7r2 openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v openshift-ovn-kubernetes/ovnkube-node-mhcxk openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1206 06:57:52.143914 6300 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1206 06:57:52.143927 6300 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:57:52.143942 6300 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1206 06:57:52.143956 6300 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1206 06:57:52.143958 6300 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 06:57:52.143985 6300 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1206 06:57:52.144028 6300 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nF1206 06:57:52.144040 6300 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:07Z\\\",\\\"message\\\":\\\"206 06:58:06.464358 6494 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}\\\\nI1206 06:58:06.464380 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 06:58:06.464384 6494 services_controller.go:360] Finished syncing service redhat-operators on namespace openshift-marketplace for network=default : 2.499668ms\\\\nI1206 06:58:06.464415 6494 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for network=default\\\\nF1206 06:58:06.464442 6494 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certificate\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:58:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.870376 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.870429 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.870443 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.870464 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.870500 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:09Z","lastTransitionTime":"2025-12-06T06:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.882371 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.898750 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.916688 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.932827 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.946572 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.960644 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.974025 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3
a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.974454 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.974522 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.974535 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.974554 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:09 crc kubenswrapper[4895]: I1206 06:58:09.974567 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:09Z","lastTransitionTime":"2025-12-06T06:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.049911 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:10 crc kubenswrapper[4895]: E1206 06:58:10.050120 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.077498 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.077558 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.077581 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.077646 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.077675 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:10Z","lastTransitionTime":"2025-12-06T06:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.180034 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.180074 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.180082 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.180098 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.180108 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:10Z","lastTransitionTime":"2025-12-06T06:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.283856 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.283919 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.283942 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.283970 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.283987 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:10Z","lastTransitionTime":"2025-12-06T06:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.323544 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.323602 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.323619 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.323640 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.323655 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:10Z","lastTransitionTime":"2025-12-06T06:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:10 crc kubenswrapper[4895]: E1206 06:58:10.343172 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.348354 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.348397 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.348409 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.348428 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.348439 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:10Z","lastTransitionTime":"2025-12-06T06:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:10 crc kubenswrapper[4895]: E1206 06:58:10.365820 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:10Z is after 2025-08-24T17:21:41Z"
Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.375656 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.375928 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.376095 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.376281 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.376425 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:10Z","lastTransitionTime":"2025-12-06T06:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:10 crc kubenswrapper[4895]: E1206 06:58:10.395193 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.399996 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.400066 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.400081 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.400106 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.400121 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:10Z","lastTransitionTime":"2025-12-06T06:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:10 crc kubenswrapper[4895]: E1206 06:58:10.416378 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.420848 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.421144 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.421229 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.421365 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.421446 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:10Z","lastTransitionTime":"2025-12-06T06:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:10 crc kubenswrapper[4895]: E1206 06:58:10.436555 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:10 crc kubenswrapper[4895]: E1206 06:58:10.436788 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.439177 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.439212 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.439224 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.439244 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.439258 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:10Z","lastTransitionTime":"2025-12-06T06:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.542770 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.542835 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.542849 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.542870 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.542882 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:10Z","lastTransitionTime":"2025-12-06T06:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.645904 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.646045 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.646106 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.646138 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.646195 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:10Z","lastTransitionTime":"2025-12-06T06:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.749600 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.749633 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.749643 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.749660 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.749670 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:10Z","lastTransitionTime":"2025-12-06T06:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.852326 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.852847 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.853003 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.853138 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.853275 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:10Z","lastTransitionTime":"2025-12-06T06:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.955727 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.955763 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.955773 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.955787 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:10 crc kubenswrapper[4895]: I1206 06:58:10.955797 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:10Z","lastTransitionTime":"2025-12-06T06:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:11 crc kubenswrapper[4895]: I1206 06:58:11.050057 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:11 crc kubenswrapper[4895]: E1206 06:58:11.050306 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:58:11 crc kubenswrapper[4895]: I1206 06:58:11.050853 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:58:11 crc kubenswrapper[4895]: E1206 06:58:11.050964 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:58:11 crc kubenswrapper[4895]: I1206 06:58:11.051044 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:58:11 crc kubenswrapper[4895]: E1206 06:58:11.051127 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:58:11 crc kubenswrapper[4895]: I1206 06:58:11.058551 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:11 crc kubenswrapper[4895]: I1206 06:58:11.058621 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:11 crc kubenswrapper[4895]: I1206 06:58:11.058632 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:11 crc kubenswrapper[4895]: I1206 06:58:11.058652 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:11 crc kubenswrapper[4895]: I1206 06:58:11.058666 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:11Z","lastTransitionTime":"2025-12-06T06:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 06 06:58:12 crc kubenswrapper[4895]: I1206 06:58:12.049832 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:58:12 crc kubenswrapper[4895]: E1206 06:58:12.050049 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... node-status blocks recur every ~100 ms from 06:58:12.095 through 06:58:13.024; duplicates elided ...]
Dec 06 06:58:13 crc kubenswrapper[4895]: I1206 06:58:13.049584 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj"
Dec 06 06:58:13 crc kubenswrapper[4895]: I1206 06:58:13.049620 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:58:13 crc kubenswrapper[4895]: I1206 06:58:13.049746 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:58:13 crc kubenswrapper[4895]: E1206 06:58:13.049789 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6"
Dec 06 06:58:13 crc kubenswrapper[4895]: E1206 06:58:13.049916 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 06:58:13 crc kubenswrapper[4895]: E1206 06:58:13.050087 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... node-status blocks recur every ~100 ms from 06:58:13.127 through 06:58:13.952; duplicates elided ...]
Dec 06 06:58:14 crc kubenswrapper[4895]: I1206 06:58:14.050747 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:58:14 crc kubenswrapper[4895]: E1206 06:58:14.050928 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... node-status blocks recur every ~100 ms from 06:58:14.054 through 06:58:14.984; duplicates elided ...]
Dec 06 06:58:15 crc kubenswrapper[4895]: I1206 06:58:15.050657 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj"
Dec 06 06:58:15 crc kubenswrapper[4895]: I1206 06:58:15.050756 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:58:15 crc kubenswrapper[4895]: I1206 06:58:15.050790 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:58:15 crc kubenswrapper[4895]: E1206 06:58:15.050824 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6"
Dec 06 06:58:15 crc kubenswrapper[4895]: E1206 06:58:15.050923 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 06:58:15 crc kubenswrapper[4895]: E1206 06:58:15.051041 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... node-status blocks recur every ~100 ms from 06:58:15.087 through 06:58:16.022; duplicates elided ...]
Has your network provider started?"}
Dec 06 06:58:15 crc kubenswrapper[4895]: I1206 06:58:15.194619 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:58:15 crc kubenswrapper[4895]: I1206 06:58:15.194705 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:58:15 crc kubenswrapper[4895]: I1206 06:58:15.194716 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:58:15 crc kubenswrapper[4895]: I1206 06:58:15.194734 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:58:15 crc kubenswrapper[4895]: I1206 06:58:15.194747 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:15Z","lastTransitionTime":"2025-12-06T06:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The five status entries above repeat at roughly 100 ms intervals from 06:58:15.298 through 06:58:18.001 with only the timestamps advancing; the repeated cycles are elided here. The distinct pod-sync entries interleaved among them are kept below.]
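For readers decoding these entries: the condition={...} payload on the setters.go:603 line is plain JSON, so it can be pulled apart with standard tooling. A minimal sketch in Python, stdlib only, using the payload copied verbatim from the entry above; the variable names are illustrative:

    import json
    from datetime import datetime, timezone

    # Condition payload copied verbatim from the setters.go:603 entry above.
    condition_json = '''{"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:15Z","lastTransitionTime":"2025-12-06T06:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}'''

    cond = json.loads(condition_json)
    ready = cond["type"] == "Ready" and cond["status"] == "True"
    # fromisoformat() cannot parse a trailing "Z" before Python 3.11,
    # so normalize it to an explicit UTC offset first.
    heartbeat = datetime.fromisoformat(cond["lastHeartbeatTime"].replace("Z", "+00:00"))
    age = datetime.now(timezone.utc) - heartbeat
    print(f"Ready={ready} reason={cond['reason']} heartbeat_age={age}")
    print(cond["message"])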
Dec 06 06:58:16 crc kubenswrapper[4895]: I1206 06:58:16.050211 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:58:16 crc kubenswrapper[4895]: E1206 06:58:16.050370 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
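The recurring err above is the kubelet relaying the runtime's report that /etc/kubernetes/cni/net.d/ contains no network configuration. A quick check of that directory from the node; the extension set (.conf, .conflist, .json) is libcni's conventional default and an assumption here, not something the log itself states:

    from pathlib import Path

    # Directory named in the kubelet error above. libcni conventionally loads
    # *.conf, *.conflist and *.json from it -- that extension set is an
    # assumption, not confirmed by the log.
    net_d = Path("/etc/kubernetes/cni/net.d")

    configs = []
    if net_d.is_dir():
        configs = sorted(p for p in net_d.iterdir()
                         if p.suffix in {".conf", ".conflist", ".json"})

    if configs:
        for p in configs:
            print("found CNI config:", p)
    else:
        print(f"no CNI configuration file in {net_d} -- matches the kubelet error")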
Dec 06 06:58:17 crc kubenswrapper[4895]: I1206 06:58:17.050523 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:58:17 crc kubenswrapper[4895]: I1206 06:58:17.050585 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj"
Dec 06 06:58:17 crc kubenswrapper[4895]: I1206 06:58:17.050549 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:58:17 crc kubenswrapper[4895]: E1206 06:58:17.050740 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6"
Dec 06 06:58:17 crc kubenswrapper[4895]: E1206 06:58:17.050928 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 06:58:17 crc kubenswrapper[4895]: E1206 06:58:17.051180 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.049651 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:58:18 crc kubenswrapper[4895]: E1206 06:58:18.049881 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
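Every status_manager.go:875 failure that follows fails the same way: the kubelet's status patch is rejected because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate whose NotAfter (2025-08-24T17:21:41Z) is behind the node clock (2025-12-06T06:58:18Z). A hedged sketch for confirming that validity window from the node, assuming the cryptography package (version 42 or later, for the *_utc accessors) is installed and the endpoint is reachable:

    import ssl
    from datetime import datetime, timezone
    from cryptography import x509

    # Webhook endpoint taken from the "failed calling webhook" entries below.
    HOST, PORT = "127.0.0.1", 9743

    # get_server_certificate() does not verify the chain, so it still works
    # against an expired certificate.
    pem = ssl.get_server_certificate((HOST, PORT))
    cert = x509.load_pem_x509_certificate(pem.encode())

    now = datetime.now(timezone.utc)
    print("not_before:", cert.not_valid_before_utc)
    print("not_after: ", cert.not_valid_after_utc)
    if now > cert.not_valid_after_utc:
        print("certificate has expired -- matches the x509 error in the log")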
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.065645 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:18Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.079116 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dzrsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:18Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.091571 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f701b8-abef-4aa6-9bd6-5145dfdcb828\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61a8968ba8ec65c4a7bedb447f5292df7a5ab45942b85fa1822e4e65ec52ec8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efbbcf0eea4b447617e23045452a9c0a6181844c165be87b788880690806dd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39aac663c8c8f0cac451e9bbbca0f7fff810268e4e7981c70b23fbdd96f7ebb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:18Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.104345 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.104437 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.104549 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.104576 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.104587 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:18Z","lastTransitionTime":"2025-12-06T06:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.107228 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:18Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.121310 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:18Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.135842 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:18Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.149862 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:18Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.164025 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:18Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.176252 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677b3ec4e606af7de5469200325f6ead8d16bb69c4a6b80f72a992fbae90999f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b6a4ce8e19f3f4c7229a02fa2870fd55c2549af3834e848079146ba809520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qq8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:18Z is after 2025-08-24T17:21:41Z" Dec 06 
06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.194725 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24aff
acf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b768633e43f93f89f737a70ac7a6ed148437978da78841342b7c3b4acdd70d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"message\\\":\\\"perator/machine-config-daemon-6k7r2 openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v openshift-ovn-kubernetes/ovnkube-node-mhcxk openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1206 06:57:52.143914 6300 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1206 06:57:52.143927 6300 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:57:52.143942 6300 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1206 06:57:52.143956 6300 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1206 06:57:52.143958 6300 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 06:57:52.143985 6300 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1206 06:57:52.144028 6300 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nF1206 06:57:52.144040 6300 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:07Z\\\",\\\"message\\\":\\\"206 06:58:06.464358 6494 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}\\\\nI1206 06:58:06.464380 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 06:58:06.464384 6494 services_controller.go:360] Finished syncing service redhat-operators on namespace openshift-marketplace for network=default : 2.499668ms\\\\nI1206 06:58:06.464415 6494 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for network=default\\\\nF1206 06:58:06.464442 6494 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certificate\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:58:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:18Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.207302 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:18Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.208732 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.208778 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.208790 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.208810 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.208819 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:18Z","lastTransitionTime":"2025-12-06T06:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.220113 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:18Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.237918 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:18Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.249561 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:18Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.261092 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:18Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.275458 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:18Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.289083 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:18Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.311162 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.311201 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.311213 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.311232 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.311246 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:18Z","lastTransitionTime":"2025-12-06T06:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.414263 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.414357 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.414382 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.414416 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.414442 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:18Z","lastTransitionTime":"2025-12-06T06:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.519019 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.519087 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.519102 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.519129 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.519267 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:18Z","lastTransitionTime":"2025-12-06T06:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.621578 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.621630 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.621641 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.621660 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.621671 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:18Z","lastTransitionTime":"2025-12-06T06:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.725392 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.725959 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.725970 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.725990 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.726002 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:18Z","lastTransitionTime":"2025-12-06T06:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.828983 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.829050 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.829063 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.829083 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.829098 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:18Z","lastTransitionTime":"2025-12-06T06:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.932513 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.932563 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.932578 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.932600 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:18 crc kubenswrapper[4895]: I1206 06:58:18.932615 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:18Z","lastTransitionTime":"2025-12-06T06:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.035771 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.035826 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.035841 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.035863 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.035877 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:19Z","lastTransitionTime":"2025-12-06T06:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.050146 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.050177 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:58:19 crc kubenswrapper[4895]: E1206 06:58:19.050287 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:58:19 crc kubenswrapper[4895]: E1206 06:58:19.050507 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.050508 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:19 crc kubenswrapper[4895]: E1206 06:58:19.050818 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.138887 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.138950 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.138964 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.138986 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.139001 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:19Z","lastTransitionTime":"2025-12-06T06:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.242148 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.242193 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.242202 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.242218 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.242229 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:19Z","lastTransitionTime":"2025-12-06T06:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.345405 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.345445 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.345456 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.345485 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.345496 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:19Z","lastTransitionTime":"2025-12-06T06:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.448675 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.448737 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.448747 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.448763 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.448775 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:19Z","lastTransitionTime":"2025-12-06T06:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.552013 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.552066 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.552080 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.552103 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.552116 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:19Z","lastTransitionTime":"2025-12-06T06:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.654338 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.654407 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.654423 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.654445 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.654462 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:19Z","lastTransitionTime":"2025-12-06T06:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.757027 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.757069 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.757077 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.757095 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.757115 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:19Z","lastTransitionTime":"2025-12-06T06:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.860560 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.860608 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.860622 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.860643 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.860656 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:19Z","lastTransitionTime":"2025-12-06T06:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.963270 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.963358 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.963373 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.963397 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:19 crc kubenswrapper[4895]: I1206 06:58:19.963418 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:19Z","lastTransitionTime":"2025-12-06T06:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.050509 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:20 crc kubenswrapper[4895]: E1206 06:58:20.050659 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.051372 4895 scope.go:117] "RemoveContainer" containerID="6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7" Dec 06 06:58:20 crc kubenswrapper[4895]: E1206 06:58:20.051813 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mhcxk_openshift-ovn-kubernetes(c9690808-de36-4960-8286-7079c78c491b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" podUID="c9690808-de36-4960-8286-7079c78c491b" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.066161 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.066222 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.066234 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.066262 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.066279 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:20Z","lastTransitionTime":"2025-12-06T06:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.072177 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:07Z\\\",\\\"message\\\":\\\"206 06:58:06.464358 6494 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}\\\\nI1206 06:58:06.464380 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 06:58:06.464384 6494 services_controller.go:360] Finished syncing service redhat-operators on namespace openshift-marketplace for network=default : 2.499668ms\\\\nI1206 06:58:06.464415 6494 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for network=default\\\\nF1206 06:58:06.464442 6494 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:58:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting 
failed container=ovnkube-controller pod=ovnkube-node-mhcxk_openshift-ovn-kubernetes(c9690808-de36-4960-8286-7079c78c491b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.087305 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.102195 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.119764 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.135513 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.149415 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.167385 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.168962 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.169005 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.169019 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.169038 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.169051 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:20Z","lastTransitionTime":"2025-12-06T06:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.183658 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.197313 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f701b8-abef-4aa6-9bd6-5145dfdcb828\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61a8968ba8ec65c4a7bedb447f5292df7a5ab45942b85fa1822e4e65ec52ec8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efbbcf0eea4b447617e23045452a9c0a6181844c165be87b788880690806dd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39aac663c8c8f0cac451e9bbbca0f7fff810268e4e7981c70b23fbdd96f7ebb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.210665 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.225163 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.237512 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.247296 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.257314 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.266849 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dzrsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.270989 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.271027 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.271038 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.271053 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.271062 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:20Z","lastTransitionTime":"2025-12-06T06:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.276798 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.287017 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677b3ec4e606af7de5469200325f6ead8d16bb69c4a6b80f72a992fbae90999f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b6a4ce8e19f3f4c7229a02fa2870fd55c2549af3834e848079146ba809520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qq8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 
06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.373583 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.373633 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.373651 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.373675 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.373694 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:20Z","lastTransitionTime":"2025-12-06T06:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.477104 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.477177 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.477198 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.477227 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.477245 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:20Z","lastTransitionTime":"2025-12-06T06:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.549197 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.549250 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.549268 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.549294 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.549312 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:20Z","lastTransitionTime":"2025-12-06T06:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:20 crc kubenswrapper[4895]: E1206 06:58:20.571913 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.576396 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.576448 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.576465 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.576548 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.576568 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:20Z","lastTransitionTime":"2025-12-06T06:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:20 crc kubenswrapper[4895]: E1206 06:58:20.595670 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.601086 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.601140 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
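The repeated setters.go:603 entries carry one machine-readable payload: the node's Ready condition flipping to False because no CNI configuration exists yet under /etc/kubernetes/cni/net.d/. As a hedged illustration for post-processing a journal dump like this one, the sketch below decodes that condition JSON; the struct is hand-rolled for this payload (the kubelet itself uses k8s.io/api/core/v1.NodeCondition, which has the same field names).

```go
// decode_condition.go - hedged sketch: parse the condition={...} JSON that
// setters.go:603 logs when the node flips to NotReady.
package main

import (
	"encoding/json"
	"fmt"
)

// NodeCondition mirrors the fields visible in the log line above.
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied verbatim from the "Node became not ready" entry above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:20Z","lastTransitionTime":"2025-12-06T06:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s reason=%s\n", c.Type, c.Status, c.Reason)
	// Prints: Ready=False reason=KubeletNotReady
}
```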
event="NodeHasNoDiskPressure" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.601155 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.601178 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.601193 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:20Z","lastTransitionTime":"2025-12-06T06:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:20 crc kubenswrapper[4895]: E1206 06:58:20.615957 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.621090 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.621141 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.621153 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.621168 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.621178 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:20Z","lastTransitionTime":"2025-12-06T06:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:20 crc kubenswrapper[4895]: E1206 06:58:20.637354 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.642263 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.642315 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
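The alternating I/E records trace a bounded retry loop in kubelet_node_status.go: each failed status PATCH logs "Error updating node status, will retry" from line 585, and once the attempt budget is exhausted line 572 logs the give-up message that appears in the entries below. A rough Go reconstruction of that control flow follows; it is a sketch of the shape, not the kubelet source, and the budget of 5 is an assumption matching upstream kubelet's nodeStatusUpdateRetry constant and the five failed attempts visible in this journal.

```go
// retry_shape.go - hedged reconstruction of the "will retry" /
// "update node status exceeds retry count" pair seen in the log.
package main

import (
	"errors"
	"fmt"
)

// Assumption: the running kubelet uses the upstream budget of five attempts.
const nodeStatusUpdateRetry = 5

// tryUpdateNodeStatus stands in for the PATCH that the webhook rejects; in
// the journal every attempt fails with the same expired-certificate error.
func tryUpdateNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": x509: certificate has expired or is not yet valid`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		// Mirrors the E ... kubelet_node_status.go:572 record below.
		fmt.Println("Unable to update node status:", err)
	}
}
```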
event="NodeHasNoDiskPressure" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.642327 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.642348 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.642368 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:20Z","lastTransitionTime":"2025-12-06T06:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:20 crc kubenswrapper[4895]: E1206 06:58:20.657250 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:20 crc kubenswrapper[4895]: E1206 06:58:20.657427 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.659761 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.659834 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.659852 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.659876 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.659891 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:20Z","lastTransitionTime":"2025-12-06T06:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.762501 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.762553 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.762564 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.762581 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.762591 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:20Z","lastTransitionTime":"2025-12-06T06:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.866489 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.866543 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.866553 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.866571 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.866582 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:20Z","lastTransitionTime":"2025-12-06T06:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.970991 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.971038 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.971048 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.971066 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:20 crc kubenswrapper[4895]: I1206 06:58:20.971079 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:20Z","lastTransitionTime":"2025-12-06T06:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:21 crc kubenswrapper[4895]: I1206 06:58:21.049995 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:58:21 crc kubenswrapper[4895]: I1206 06:58:21.050067 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:58:21 crc kubenswrapper[4895]: I1206 06:58:21.050101 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:21 crc kubenswrapper[4895]: E1206 06:58:21.050199 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:58:21 crc kubenswrapper[4895]: E1206 06:58:21.050395 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:58:21 crc kubenswrapper[4895]: E1206 06:58:21.050573 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:58:21 crc kubenswrapper[4895]: I1206 06:58:21.074533 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:21 crc kubenswrapper[4895]: I1206 06:58:21.074575 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:21 crc kubenswrapper[4895]: I1206 06:58:21.074584 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:21 crc kubenswrapper[4895]: I1206 06:58:21.074600 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:21 crc kubenswrapper[4895]: I1206 06:58:21.074610 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:21Z","lastTransitionTime":"2025-12-06T06:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:21 crc kubenswrapper[4895]: I1206 06:58:21.177327 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:21 crc kubenswrapper[4895]: I1206 06:58:21.177447 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:21 crc kubenswrapper[4895]: I1206 06:58:21.177464 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:21 crc kubenswrapper[4895]: I1206 06:58:21.177527 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:21 crc kubenswrapper[4895]: I1206 06:58:21.177553 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:21Z","lastTransitionTime":"2025-12-06T06:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... status block repeats: 06:58:21.074, 06:58:21.177, 06:58:21.280, 06:58:21.384, 06:58:21.487, 06:58:21.591, 06:58:21.693, 06:58:21.796, 06:58:21.899, 06:58:22.002 ...]
Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.049893 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:22 crc kubenswrapper[4895]: E1206 06:58:22.050098 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
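Separately, the node-status patch at 06:58:20 (start of this excerpt) was rejected because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a certificate that expired 2025-08-24T17:21:41Z. A minimal sketch to reproduce that check from the node, assuming Python with the third-party cryptography package is available (this only inspects the validity window; the webhook endpoint may impose other requirements):

    import ssl
    import datetime
    from cryptography import x509  # third-party; "pip install cryptography" assumed

    # Fetch the presented certificate WITHOUT verifying it (verification is
    # exactly what is failing), then compare its validity dates to the clock.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())
    now = datetime.datetime.utcnow()  # cryptography >= 42 also offers not_valid_after_utc
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)
    print("expired:  ", now > cert.not_valid_after)

With the cert expired, every node-status update is refused by the webhook, which is why the kubelet logged "update node status exceeds retry count" above.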
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.105488 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.105542 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.105556 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.105576 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.105591 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:22Z","lastTransitionTime":"2025-12-06T06:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.208508 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.208568 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.208579 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.208595 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.208605 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:22Z","lastTransitionTime":"2025-12-06T06:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.311631 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.311680 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.311697 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.311724 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.311742 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:22Z","lastTransitionTime":"2025-12-06T06:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.418370 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.418840 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.419046 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.419245 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.419415 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:22Z","lastTransitionTime":"2025-12-06T06:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.522810 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.523232 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.523302 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.523366 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.523423 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:22Z","lastTransitionTime":"2025-12-06T06:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.626419 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.626446 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.626454 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.626482 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.626492 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:22Z","lastTransitionTime":"2025-12-06T06:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.729310 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.729371 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.729383 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.729407 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.729422 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:22Z","lastTransitionTime":"2025-12-06T06:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.832559 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.832613 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.832627 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.832649 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.832662 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:22Z","lastTransitionTime":"2025-12-06T06:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.935670 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.935737 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.935750 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.935774 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:22 crc kubenswrapper[4895]: I1206 06:58:22.935792 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:22Z","lastTransitionTime":"2025-12-06T06:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:23 crc kubenswrapper[4895]: I1206 06:58:23.038872 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:23 crc kubenswrapper[4895]: I1206 06:58:23.038933 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:23 crc kubenswrapper[4895]: I1206 06:58:23.038947 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:23 crc kubenswrapper[4895]: I1206 06:58:23.038994 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:23 crc kubenswrapper[4895]: I1206 06:58:23.039008 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:23Z","lastTransitionTime":"2025-12-06T06:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:23 crc kubenswrapper[4895]: I1206 06:58:23.050523 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:58:23 crc kubenswrapper[4895]: I1206 06:58:23.050541 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:58:23 crc kubenswrapper[4895]: I1206 06:58:23.050529 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:23 crc kubenswrapper[4895]: E1206 06:58:23.050678 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:58:23 crc kubenswrapper[4895]: E1206 06:58:23.050839 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:58:23 crc kubenswrapper[4895]: E1206 06:58:23.050912 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:58:23 crc kubenswrapper[4895]: I1206 06:58:23.141150 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:23 crc kubenswrapper[4895]: I1206 06:58:23.141196 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:23 crc kubenswrapper[4895]: I1206 06:58:23.141211 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:23 crc kubenswrapper[4895]: I1206 06:58:23.141228 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:23 crc kubenswrapper[4895]: I1206 06:58:23.141241 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:23Z","lastTransitionTime":"2025-12-06T06:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:23 crc kubenswrapper[4895]: I1206 06:58:23.244122 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:23 crc kubenswrapper[4895]: I1206 06:58:23.244177 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:23 crc kubenswrapper[4895]: I1206 06:58:23.244185 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:23 crc kubenswrapper[4895]: I1206 06:58:23.244203 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:23 crc kubenswrapper[4895]: I1206 06:58:23.244218 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:23Z","lastTransitionTime":"2025-12-06T06:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... status block repeats: 06:58:23.141, 06:58:23.244, 06:58:23.347, 06:58:23.450, 06:58:23.553, 06:58:23.657, 06:58:23.760, 06:58:23.863, 06:58:23.966 ...]
[... at 06:58:24.050 the "No sandbox for pod can be found" / "Error syncing pod, skipping" pair recurs for networking-console-plugin-85b44fc459-gdk6g ...]
[... status block repeats: 06:58:24.070, 06:58:24.172, 06:58:24.276, 06:58:24.379, 06:58:24.481, 06:58:24.585 ...]
Dec 06 06:58:24 crc kubenswrapper[4895]: I1206 06:58:24.648654 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs\") pod \"network-metrics-daemon-dzrsj\" (UID: \"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\") " pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:24 crc kubenswrapper[4895]: E1206 06:58:24.648901 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:58:24 crc kubenswrapper[4895]: E1206 06:58:24.649034 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs podName:2c72bd78-81d3-48dc-96c3-50bc6bac88d6 nodeName:}" failed. No retries permitted until 2025-12-06 06:58:56.64900114 +0000 UTC m=+99.050390020 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs") pod "network-metrics-daemon-dzrsj" (UID: "2c72bd78-81d3-48dc-96c3-50bc6bac88d6") : object "openshift-multus"/"metrics-daemon-secret" not registered
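"Not registered" here means the kubelet's local object cache has not yet seen the metrics-daemon-secret, so the mount is parked with a 32 s durationBeforeRetry: the failure at 06:58:24.649 schedules the next attempt for 06:58:56.649 (m=+99.05, i.e. ~99 s after kubelet start). That 32 s fits a doubling backoff; illustrative arithmetic only, with the initial delay assumed rather than read from this log:

    # Assumed 2 s initial backoff, doubling per failed mount attempt.
    delay, attempt = 2.0, 1
    while delay <= 32.0:
        print(f"attempt {attempt}: retry after {delay:g}s")
        delay *= 2
        attempt += 1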
[... status block repeats: 06:58:24.688, 06:58:24.792, 06:58:24.894, 06:58:24.997 ...]
[... at 06:58:25.049 the "No sandbox for pod can be found" / "Error syncing pod, skipping" entries recur for network-check-target-xd92c, network-check-source-55646444c4-trplf and network-metrics-daemon-dzrsj ...]
[... status block repeats: 06:58:25.100, 06:58:25.239, 06:58:25.342, 06:58:25.445, 06:58:25.548, 06:58:25.650, 06:58:25.754, 06:58:25.858 ...]
Has your network provider started?"} Dec 06 06:58:25 crc kubenswrapper[4895]: I1206 06:58:25.961455 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:25 crc kubenswrapper[4895]: I1206 06:58:25.961512 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:25 crc kubenswrapper[4895]: I1206 06:58:25.961524 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:25 crc kubenswrapper[4895]: I1206 06:58:25.961540 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:25 crc kubenswrapper[4895]: I1206 06:58:25.961552 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:25Z","lastTransitionTime":"2025-12-06T06:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.049818 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:26 crc kubenswrapper[4895]: E1206 06:58:26.050016 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.063815 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.063878 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.063890 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.063905 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.063933 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:26Z","lastTransitionTime":"2025-12-06T06:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.167620 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.167673 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.167684 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.167702 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.167712 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:26Z","lastTransitionTime":"2025-12-06T06:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.271635 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.271707 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.271733 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.271763 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.271783 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:26Z","lastTransitionTime":"2025-12-06T06:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.375204 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.375253 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.375265 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.375285 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.375298 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:26Z","lastTransitionTime":"2025-12-06T06:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.477559 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.477607 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.477620 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.477638 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.477649 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:26Z","lastTransitionTime":"2025-12-06T06:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.580793 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.580947 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.580964 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.580983 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.580996 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:26Z","lastTransitionTime":"2025-12-06T06:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
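The entries above repeat the same five-record cycle (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, then a setters.go:603 "Node became not ready" condition) roughly every 100 ms while no CNI configuration exists under /etc/kubernetes/cni/net.d/. A minimal sketch for summarizing such an excerpt offline, assuming the journal text has been saved to a local file; the file name kubelet-journal.txt and both regular expressions are illustrative assumptions, not part of the log:

import re
from collections import Counter

# Matches kubelet "Recording event message for node" records, e.g.
#   kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
EVENT_RE = re.compile(
    r'kubelet_node_status\.go:\d+\] "Recording event message for node" '
    r'node="(?P<node>[^"]+)" event="(?P<event>[^"]+)"'
)

# Matches the CNI error embedded in the NotReady condition; the trailing
# period belongs to the sentence, not the path, so it is stripped below.
CNI_RE = re.compile(r'no CNI configuration file in (?P<path>\S+)')

def summarize(path="kubelet-journal.txt"):  # assumed local copy of the excerpt
    events = Counter()
    cni_dirs = set()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if (m := EVENT_RE.search(line)):
                events[m.group("event")] += 1
            if (c := CNI_RE.search(line)):
                cni_dirs.add(c.group("path").rstrip("."))
    return events, cni_dirs

if __name__ == "__main__":
    events, cni_dirs = summarize()
    for event, count in events.most_common():
        print(f"{event}: {count}")
    for d in sorted(cni_dirs):
        print(f"missing CNI config dir: {d}")

Run against this excerpt, the counter makes the roughly ten-per-second NodeNotReady re-recordings and the single missing configuration directory visible at a glance, rather than requiring a scan of the raw records.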
Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.649221 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k86k4_e1f42fc6-54ce-4f49-adbd-545e02a1f322/kube-multus/0.log"
Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.649278 4895 generic.go:334] "Generic (PLEG): container finished" podID="e1f42fc6-54ce-4f49-adbd-545e02a1f322" containerID="9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4" exitCode=1
Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.649330 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k86k4" event={"ID":"e1f42fc6-54ce-4f49-adbd-545e02a1f322","Type":"ContainerDied","Data":"9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4"}
Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.649770 4895 scope.go:117] "RemoveContainer" containerID="9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4"
Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.670009 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/en
v\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.684292 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.684820 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.684832 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.684894 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.684911 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:26Z","lastTransitionTime":"2025-12-06T06:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.687695 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.705662 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:25Z\\\",\\\"message\\\":\\\"2025-12-06T06:57:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_415171ab-0551-4fac-9f8d-408dfd914aca\\\\n2025-12-06T06:57:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_415171ab-0551-4fac-9f8d-408dfd914aca to /host/opt/cni/bin/\\\\n2025-12-06T06:57:40Z [verbose] multus-daemon 
started\\\\n2025-12-06T06:57:40Z [verbose] Readiness Indicator file check\\\\n2025-12-06T06:58:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.726349 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.741363 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.755593 4895 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.767245 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.782867 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.787229 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 
06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.787267 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.787278 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.787295 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.787308 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:26Z","lastTransitionTime":"2025-12-06T06:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.798149 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.812090 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.825807 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.840381 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.854222 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dzrsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.872057 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f701b8-abef-4aa6-9bd6-5145dfdcb828\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61a8968ba8ec65c4a7bedb447f5292df7a5ab45942b85fa1822e4e65ec52ec8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efbbcf0eea4b447617e23045452a9c0a6181844c165be87b788880690806dd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39aac663c8c8f0cac451e9bbbca0f7fff810268e4e7981c70b23fbdd96f7ebb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.885187 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.889788 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.889821 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.889832 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.889849 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.889860 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:26Z","lastTransitionTime":"2025-12-06T06:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.898529 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677b3ec4e606af7de5469200325f6ead8d16bb69c4a6b80f72a992fbae90999f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b6a4ce8e19f3f4c7229a02fa2870fd55c2549af3834e848079146ba809520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qq8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.921693 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fe
c26c7b31f49989493926d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:07Z\\\",\\\"message\\\":\\\"206 06:58:06.464358 6494 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}\\\\nI1206 06:58:06.464380 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 06:58:06.464384 6494 services_controller.go:360] Finished syncing service redhat-operators on namespace openshift-marketplace for network=default : 2.499668ms\\\\nI1206 06:58:06.464415 6494 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for network=default\\\\nF1206 06:58:06.464442 6494 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:58:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mhcxk_openshift-ovn-kubernetes(c9690808-de36-4960-8286-7079c78c491b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.994492 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.994544 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.994558 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.994580 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:26 crc kubenswrapper[4895]: I1206 06:58:26.994974 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:26Z","lastTransitionTime":"2025-12-06T06:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.050105 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.050146 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.050222 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:27 crc kubenswrapper[4895]: E1206 06:58:27.050264 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:58:27 crc kubenswrapper[4895]: E1206 06:58:27.050390 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:58:27 crc kubenswrapper[4895]: E1206 06:58:27.050552 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.099415 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.099502 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.099523 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.099549 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.099566 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:27Z","lastTransitionTime":"2025-12-06T06:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.203205 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.203238 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.203246 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.203261 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.203273 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:27Z","lastTransitionTime":"2025-12-06T06:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.306243 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.306284 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.306294 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.306310 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.306321 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:27Z","lastTransitionTime":"2025-12-06T06:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.409886 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.409947 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.409962 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.409982 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.409997 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:27Z","lastTransitionTime":"2025-12-06T06:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.513030 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.513070 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.513079 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.513095 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.513105 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:27Z","lastTransitionTime":"2025-12-06T06:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.615780 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.615844 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.615853 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.615870 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.615879 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:27Z","lastTransitionTime":"2025-12-06T06:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.654837 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k86k4_e1f42fc6-54ce-4f49-adbd-545e02a1f322/kube-multus/0.log" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.654905 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k86k4" event={"ID":"e1f42fc6-54ce-4f49-adbd-545e02a1f322","Type":"ContainerStarted","Data":"b39bc82b9c81dd77f354fc01d26f23e263e0bc9145abd83e8b53550b2495c785"} Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.679588 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fe
c26c7b31f49989493926d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:07Z\\\",\\\"message\\\":\\\"206 06:58:06.464358 6494 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}\\\\nI1206 06:58:06.464380 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 06:58:06.464384 6494 services_controller.go:360] Finished syncing service redhat-operators on namespace openshift-marketplace for network=default : 2.499668ms\\\\nI1206 06:58:06.464415 6494 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for network=default\\\\nF1206 06:58:06.464442 6494 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:58:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mhcxk_openshift-ovn-kubernetes(c9690808-de36-4960-8286-7079c78c491b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:27Z is after 2025-08-24T17:21:41Z"
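
Every status patch in this stretch fails the same way: the kubelet cannot call the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 because its serving certificate expired on 2025-08-24T17:21:41Z, well before the node's current clock of 2025-12-06, which is consistent with a CRC VM resumed long after its certificate-rotation window. A short Go probe to confirm the validity window from the node, assuming the endpoint is reachable locally; verification is disabled only so the expired chain can be read at all, so treat this as a diagnostic sketch, not a client:

package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	// The webhook endpoint every failed status patch above is hitting.
	addr := "127.0.0.1:9743"
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		os.Exit(1)
	}
	defer conn.Close()

	// First peer certificate is the serving certificate presented to clients.
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:   ", cert.Subject)
	fmt.Println("not before:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("not after: ", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		// Matches the failures above: current time is after NotAfter.
		fmt.Println("certificate is expired")
	}
}

The ovnkube-controller crash recorded above fails on the same certificate via the /node path of this endpoint, which is why the CNI configuration the kubelet is waiting for never gets written.

Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.697889 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 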
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.711961 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.722595 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.722626 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.722636 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.722651 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.722663 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:27Z","lastTransitionTime":"2025-12-06T06:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.725604 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.741111 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.757598 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b39bc82b9c81dd77f354fc01d26f23e263e0bc9145abd83e8b53550b2495c785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:25Z\\\",\\\"message\\\":\\\"2025-12-06T06:57:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_415171ab-0551-4fac-9f8d-408dfd914aca\\\\n2025-12-06T06:57:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_415171ab-0551-4fac-9f8d-408dfd914aca to /host/opt/cni/bin/\\\\n2025-12-06T06:57:40Z [verbose] multus-daemon started\\\\n2025-12-06T06:57:40Z [verbose] Readiness Indicator file check\\\\n2025-12-06T06:58:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:58:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:27Z is after 2025-08-24T17:21:41Z"
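
The kube-multus restart captured in this entry is its readiness-indicator check giving up: after copying its CNI binaries, the multus daemon polls for the default network's config file before serving CNI requests, and here /host/run/multus/cni/net.d/10-ovn-kubernetes.conf never appeared because ovnkube-controller is crash-looping. A small Go sketch of that wait-for-file pattern (the real daemon uses a PollImmediate helper, per the "pollimmediate error" string above); the path comes from the log and the 45-second budget is inferred from its timestamps (check started 06:57:40, gave up 06:58:25), so treat both as illustrative:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or the timeout elapses, mirroring the
// "Readiness Indicator file check" loop described in the multus log above.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Readiness indicator written by ovn-kubernetes once its controller is up.
	indicator := "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	if err := waitForFile(indicator, time.Second, 45*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("default network ready:", indicator)
}

Once the certificate problem is resolved and ovnkube-controller stays up, this file appears and both the multus restarts and the kubelet's NotReady condition clear on their own.

Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.777096 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status 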
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.791766 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.803609 4895 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dzrsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.816492 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f701b8-abef-4aa6-9bd6-5145dfdcb828\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61a8968ba8ec65c4a7bedb447f5292df7a5ab45942b85fa1822e4e65ec52ec8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efbbcf0eea4b447617e23045452a9c0a6181844c165be87b788880690806dd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39aac663c8c8f0cac451e9bbbca0f7fff810268e4e7981c70b23fbdd96f7ebb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.825424 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.825466 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.825498 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.825515 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.825525 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:27Z","lastTransitionTime":"2025-12-06T06:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.835165 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.848963 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.862931 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.876821 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.889698 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.903530 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.918039 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677b3ec4e606af7de5469200325f6ead8d16bb69c4a6b80f72a992fbae90999f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b6a4ce8e19f3f4c7229a02fa2870fd55c2549af3834e848079146ba809520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qq8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:27Z is after 2025-08-24T17:21:41Z" Dec 06 
06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.928702 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.928755 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.928766 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.928785 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:27 crc kubenswrapper[4895]: I1206 06:58:27.928797 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:27Z","lastTransitionTime":"2025-12-06T06:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.032555 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.032613 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.032628 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.032646 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.032660 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:28Z","lastTransitionTime":"2025-12-06T06:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.049885 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:28 crc kubenswrapper[4895]: E1206 06:58:28.050186 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.070706 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.085327 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.101862 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.117899 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.132246 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.134333 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.134375 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.134389 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.135227 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.135251 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:28Z","lastTransitionTime":"2025-12-06T06:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.148857 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dzrsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.162459 4895 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f701b8-abef-4aa6-9bd6-5145dfdcb828\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61a8968ba8ec65c4a7bedb447f5292df7a5ab45942b85fa1822e4e65ec52ec8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efbbcf0eea4b447617e23045452a9c0a6181844c165be87b788880690806dd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39aac663c8c8f0cac451e9bbbca0f7fff810268e4e7981c70b23fbdd96f7ebb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.177011 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.189611 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677b3ec4e606af7de5469200325f6ead8d16bb69c4a6b80f72a992fbae90999f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b6a4ce8e19f3f4c7229a02fa2870fd55c2549af3834e848079146ba809520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qq8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.209128 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fe
c26c7b31f49989493926d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:07Z\\\",\\\"message\\\":\\\"206 06:58:06.464358 6494 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}\\\\nI1206 06:58:06.464380 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 06:58:06.464384 6494 services_controller.go:360] Finished syncing service redhat-operators on namespace openshift-marketplace for network=default : 2.499668ms\\\\nI1206 06:58:06.464415 6494 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for network=default\\\\nF1206 06:58:06.464442 6494 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:58:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mhcxk_openshift-ovn-kubernetes(c9690808-de36-4960-8286-7079c78c491b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.224367 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.237866 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.237904 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.237914 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.237932 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.237943 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:28Z","lastTransitionTime":"2025-12-06T06:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.240779 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.254256 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b39bc82b9c81dd77f354fc01d26f23e263e0bc9145abd83e8b53550b2495c785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:25Z\\\",\\\"message\\\":\\\"2025-12-06T06:57:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_415171ab-0551-4fac-9f8d-408dfd914aca\\\\n2025-12-06T06:57:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_415171ab-0551-4fac-9f8d-408dfd914aca to /host/opt/cni/bin/\\\\n2025-12-06T06:57:40Z [verbose] multus-daemon started\\\\n2025-12-06T06:57:40Z [verbose] Readiness Indicator file check\\\\n2025-12-06T06:58:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:58:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.271156 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.283054 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.294925 4895 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.307548 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.340755 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.340789 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.340799 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.340816 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.340828 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:28Z","lastTransitionTime":"2025-12-06T06:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} [... the identical five-entry status cycle ("Recording event message" for NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID and NodeNotReady, then setters.go:603 "Node became not ready" with the same KubeletNotReady/NetworkPluginNotReady message) repeats at 06:58:28.447, .550, .654, .756 and .860 ...] Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.962995 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.963050 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.963065 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.963089 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:28 crc kubenswrapper[4895]: I1206 06:58:28.963105 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:28Z","lastTransitionTime":"2025-12-06T06:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
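The status-patch failure above traces back to a single root cause: the API server cannot validate the serving certificate of the network-node-identity webhook, because the node's clock (2025-12-06) is past the certificate's NotAfter date (2025-08-24). As a minimal illustration (not kubelet or apiserver code), Go's crypto/x509 rejects any chain whose verification time falls outside the certificate's validity window, producing exactly the "certificate has expired or is not yet valid" text seen in these entries; the checkValidity helper below is hypothetical and only mirrors that window test with the dates taken from the log:

```go
// Minimal sketch: why the webhook call is rejected. x509 verification
// fails a certificate when the verification time falls outside
// [NotBefore, NotAfter], yielding the error text quoted in the log.
package main

import (
	"crypto/x509"
	"fmt"
	"time"
)

// checkValidity is a hypothetical helper reproducing the window test
// that certificate verification applies.
func checkValidity(cert *x509.Certificate, now time.Time) error {
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		return x509.CertificateInvalidError{Cert: cert, Reason: x509.Expired}
	}
	return nil
}

func main() {
	cert := &x509.Certificate{
		// Validity end copied from the log; start is an assumed value.
		NotBefore: time.Date(2024, 8, 24, 17, 21, 41, 0, time.UTC),
		NotAfter:  time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC),
	}
	now := time.Date(2025, 12, 6, 6, 58, 28, 0, time.UTC) // "current time" from the log
	fmt.Println(checkValidity(cert, now))
	// prints: x509: certificate has expired or is not yet valid: ...
}
```

Until that certificate is replaced or rotated, every admission-webhook call on this node will fail the same way, regardless of what the kubelet retries.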
Dec 06 06:58:29 crc kubenswrapper[4895]: I1206 06:58:29.050418 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:29 crc kubenswrapper[4895]: I1206 06:58:29.050493 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:58:29 crc kubenswrapper[4895]: I1206 06:58:29.050504 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:58:29 crc kubenswrapper[4895]: E1206 06:58:29.050638 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:58:29 crc kubenswrapper[4895]: E1206 06:58:29.050748 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:58:29 crc kubenswrapper[4895]: E1206 06:58:29.050884 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
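Every "Node became not ready" entry carries the same KubeletNotReady message: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ holds no CNI network configuration yet, and the pods that would bring networking up are themselves stuck in the "Error syncing pod" state shown above. A rough sketch of that readiness test, assuming a simple scan of the conf directory rather than the actual CRI-O/ocicni loader:

```go
// Illustration (not the real CRI-O/ocicni code) of the readiness test
// behind "no CNI configuration file in /etc/kubernetes/cni/net.d/":
// the runtime scans the conf directory for network configs and reports
// the network plugin as not ready while none are found.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cniConfigPresent reports whether dir contains at least one CNI config.
// The extension set mirrors what common CNI config loaders accept.
func cniConfigPresent(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	const dir = "/etc/kubernetes/cni/net.d" // path taken from the log
	ok, err := cniConfigPresent(dir)
	if err != nil || !ok {
		fmt.Printf("NetworkReady=false reason:NetworkPluginNotReady: no CNI configuration file in %s (err=%v)\n", dir, err)
		return
	}
	fmt.Println("NetworkReady=true")
}
```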
[... the status cycle repeats at 06:58:29.065, .168, .271, .375, .478, .580, .683, .787 and .890 ...] Dec 06 06:58:29 crc kubenswrapper[4895]: I1206 06:58:29.993169 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:29 crc kubenswrapper[4895]: I1206 06:58:29.993227 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:29 crc kubenswrapper[4895]: I1206 06:58:29.993243 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:29 crc kubenswrapper[4895]: I1206 06:58:29.993264 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:29 crc kubenswrapper[4895]: I1206 06:58:29.993279 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:29Z","lastTransitionTime":"2025-12-06T06:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.050225 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:30 crc kubenswrapper[4895]: E1206 06:58:30.050386 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... the status cycle repeats at 06:58:30.095, .199, .302, .405, .509, .611 and .715 ...] Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.818467 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.818538 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.818553 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.818573 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.818585 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:30Z","lastTransitionTime":"2025-12-06T06:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.835331 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.835378 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.835389 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.835408 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.835423 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:30Z","lastTransitionTime":"2025-12-06T06:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:30 crc kubenswrapper[4895]: E1206 06:58:30.855856 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.860818 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.860864 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.860875 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.860895 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.860906 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:30Z","lastTransitionTime":"2025-12-06T06:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:30 crc kubenswrapper[4895]: E1206 06:58:30.873978 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.878933 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.879024 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
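The payload in these "failed to patch status" entries is a strategic merge patch: node conditions merge by their "type" key, and the "$setElementOrder/conditions" directive lists those merge keys so the server can reconstruct the final list order while only the changed elements carry full content. A minimal sketch of that document shape using plain encoding/json (field names copied from the logged payload; this is not the apimachinery implementation the kubelet actually uses):

```go
// Sketch of the strategic-merge-patch shape seen in the failed
// "failed to patch status" entries. Conditions merge by "type";
// "$setElementOrder/conditions" pins the resulting list order.
package main

import (
	"encoding/json"
	"fmt"
)

type condition struct {
	Type    string `json:"type"`
	Status  string `json:"status,omitempty"`
	Reason  string `json:"reason,omitempty"`
	Message string `json:"message,omitempty"`
}

func main() {
	patch := map[string]any{
		"status": map[string]any{
			// Order directive: one {"type": ...} stub per merge key.
			"$setElementOrder/conditions": []condition{
				{Type: "MemoryPressure"}, {Type: "DiskPressure"},
				{Type: "PIDPressure"}, {Type: "Ready"},
			},
			// Only the elements being changed need full content.
			"conditions": []condition{{
				Type:   "Ready",
				Status: "False",
				Reason: "KubeletNotReady",
			}},
		},
	}
	b, _ := json.MarshalIndent(patch, "", "  ")
	fmt.Println(string(b)) // same shape as the payload the kubelet sent
}
```

Note that the patch itself is well formed; it is rejected only because the admission webhook guarding node updates cannot be called with an expired certificate.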
event="NodeHasNoDiskPressure" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.879041 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.879061 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.879078 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:30Z","lastTransitionTime":"2025-12-06T06:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:30 crc kubenswrapper[4895]: E1206 06:58:30.894047 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.899023 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.899087 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.899098 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.899120 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.899131 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:30Z","lastTransitionTime":"2025-12-06T06:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:30 crc kubenswrapper[4895]: E1206 06:58:30.912809 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.916950 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.917005 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.917016 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.917032 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.917042 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:30Z","lastTransitionTime":"2025-12-06T06:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:30 crc kubenswrapper[4895]: E1206 06:58:30.933098 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:30 crc kubenswrapper[4895]: E1206 06:58:30.933235 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.935624 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.935664 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.935676 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.935695 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:30 crc kubenswrapper[4895]: I1206 06:58:30.935711 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:30Z","lastTransitionTime":"2025-12-06T06:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.039120 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.039187 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.039203 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.039224 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.039237 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:31Z","lastTransitionTime":"2025-12-06T06:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.049565 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.049565 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:58:31 crc kubenswrapper[4895]: E1206 06:58:31.049716 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:58:31 crc kubenswrapper[4895]: E1206 06:58:31.049776 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.049590 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:31 crc kubenswrapper[4895]: E1206 06:58:31.049886 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.141588 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.141628 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.141640 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.141655 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.141667 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:31Z","lastTransitionTime":"2025-12-06T06:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.245152 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.245201 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.245210 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.245225 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.245236 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:31Z","lastTransitionTime":"2025-12-06T06:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.348583 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.348645 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.348657 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.348677 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.348694 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:31Z","lastTransitionTime":"2025-12-06T06:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.450580 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.450617 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.450628 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.450645 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.450656 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:31Z","lastTransitionTime":"2025-12-06T06:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.553304 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.553349 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.553362 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.553381 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.553394 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:31Z","lastTransitionTime":"2025-12-06T06:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.656347 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.656404 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.656415 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.656436 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.656450 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:31Z","lastTransitionTime":"2025-12-06T06:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.759611 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.759659 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.759670 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.759687 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.759698 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:31Z","lastTransitionTime":"2025-12-06T06:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.863175 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.863228 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.863238 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.863258 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.863270 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:31Z","lastTransitionTime":"2025-12-06T06:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.967000 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.967043 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.967054 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.967071 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:31 crc kubenswrapper[4895]: I1206 06:58:31.967082 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:31Z","lastTransitionTime":"2025-12-06T06:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.050385 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:32 crc kubenswrapper[4895]: E1206 06:58:32.050674 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.065751 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.070193 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.070254 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.070271 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.070292 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.070306 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:32Z","lastTransitionTime":"2025-12-06T06:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.173320 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.173362 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.173372 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.173390 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.173403 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:32Z","lastTransitionTime":"2025-12-06T06:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.276283 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.276365 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.276379 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.276401 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.276422 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:32Z","lastTransitionTime":"2025-12-06T06:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.379361 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.379398 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.379407 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.379421 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.379432 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:32Z","lastTransitionTime":"2025-12-06T06:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.482063 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.482112 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.482122 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.482138 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.482148 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:32Z","lastTransitionTime":"2025-12-06T06:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.584868 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.584908 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.584919 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.584934 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.584945 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:32Z","lastTransitionTime":"2025-12-06T06:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.687661 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.687742 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.687760 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.687788 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.687807 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:32Z","lastTransitionTime":"2025-12-06T06:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.792199 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.792291 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.792318 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.792355 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.792382 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:32Z","lastTransitionTime":"2025-12-06T06:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.895731 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.895805 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.895818 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.895840 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.895854 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:32Z","lastTransitionTime":"2025-12-06T06:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.998903 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.998993 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.999007 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.999031 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:32 crc kubenswrapper[4895]: I1206 06:58:32.999045 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:32Z","lastTransitionTime":"2025-12-06T06:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.049749 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.049790 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.049755 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj"
Dec 06 06:58:33 crc kubenswrapper[4895]: E1206 06:58:33.049909 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 06:58:33 crc kubenswrapper[4895]: E1206 06:58:33.049938 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 06:58:33 crc kubenswrapper[4895]: E1206 06:58:33.050277 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.050597 4895 scope.go:117] "RemoveContainer" containerID="6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.102057 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.102129 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.102142 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.102163 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.102178 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:33Z","lastTransitionTime":"2025-12-06T06:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
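The three pod_workers.go errors above share the single root cause the kubelet states verbatim: NetworkReady=false because no CNI configuration file exists under /etc/kubernetes/cni/net.d/. As a minimal sketch of that readiness test, assuming only the Go standard library and the conventional CNI config extensions (.conf, .conflist, .json) rather than the kubelet's actual implementation:

// cnicheck.go - illustrative approximation of the "no CNI configuration
// file" readiness check reported in the entries above; not kubelet code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d/"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("NetworkReady=false: cannot read %s: %v\n", dir, err)
		os.Exit(1)
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		// Extensions assumed from common CNI loader behavior.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Printf("NetworkReady=true: found CNI config %s\n", e.Name())
			return
		}
	}
	fmt.Println("NetworkReady=false: no CNI configuration file in " + dir)
	os.Exit(1)
}

Run against a node in this state, the scan finds the directory empty and reaches the same NetworkReady=false verdict these entries repeat every few hundred milliseconds.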
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.205506 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.205558 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.205583 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.205606 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.205622 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:33Z","lastTransitionTime":"2025-12-06T06:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.309498 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.309554 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.309564 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.309581 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.309592 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:33Z","lastTransitionTime":"2025-12-06T06:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.412462 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.412592 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.412618 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.412654 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.412678 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:33Z","lastTransitionTime":"2025-12-06T06:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.515546 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.515591 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.515600 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.515617 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.515632 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:33Z","lastTransitionTime":"2025-12-06T06:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.618208 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.618295 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.618319 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.618354 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.618375 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:33Z","lastTransitionTime":"2025-12-06T06:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.678036 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovnkube-controller/2.log" Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.722724 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.722768 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.722781 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.722797 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.722808 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:33Z","lastTransitionTime":"2025-12-06T06:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.825175 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.825207 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.825217 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.825232 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.825241 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:33Z","lastTransitionTime":"2025-12-06T06:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.928142 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.928191 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.928204 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.928222 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:58:33 crc kubenswrapper[4895]: I1206 06:58:33.928236 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:33Z","lastTransitionTime":"2025-12-06T06:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.030810 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.030864 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.030876 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.030892 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.030903 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:34Z","lastTransitionTime":"2025-12-06T06:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.050464 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:58:34 crc kubenswrapper[4895]: E1206 06:58:34.050670 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
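Each setters.go:603 entry embeds the node condition as inline JSON. A short sketch of decoding that payload follows; the struct simply mirrors the fields visible in the log line and is not the kubelet's own NodeCondition type:

// condition.go - decode the condition={...} payload from a
// "Node became not ready" entry; illustrative only.
package main

import (
	"encoding/json"
	"fmt"
)

type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied from one of the entries above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:34Z","lastTransitionTime":"2025-12-06T06:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("node Ready=%s since %s: %s\n", c.Status, c.LastTransitionTime, c.Reason)
}

Printing Status and LastTransitionTime this way makes the NotReady heartbeats in these entries easy to track from a captured log.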
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.134136 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.134189 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.134201 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.134221 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.134236 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:34Z","lastTransitionTime":"2025-12-06T06:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.240970 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.241011 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.241021 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.241041 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.241051 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:34Z","lastTransitionTime":"2025-12-06T06:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.343839 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.343873 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.343882 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.343896 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.343906 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:34Z","lastTransitionTime":"2025-12-06T06:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.446446 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.446491 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.446499 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.446514 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.446522 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:34Z","lastTransitionTime":"2025-12-06T06:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.549650 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.549719 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.549739 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.549772 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.549793 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:34Z","lastTransitionTime":"2025-12-06T06:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.652281 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.652361 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.652393 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.652540 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.652563 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:34Z","lastTransitionTime":"2025-12-06T06:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.688936 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovnkube-controller/2.log" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.692238 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerStarted","Data":"adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75"} Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.694214 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.705219 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f701b8-abef-4aa6-9bd6-5145dfdcb828\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61a8968ba8ec65c4a7bedb447f5292df7a5ab45942b85fa1822e4e65ec52ec8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efbbcf0eea4b447617e23045452a9c0a6181844c165be87b788880690806dd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39aac663c8c8f0cac451e9bbbca0f7fff810268e4e7981c70b23fbdd96f7ebb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.727390 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.738740 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.752173 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:34Z is after 2025-08-24T17:21:41Z"
Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.755065 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.755096 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.755111 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.755128 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.755139 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:34Z","lastTransitionTime":"2025-12-06T06:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.764011 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:34Z is after 2025-08-24T17:21:41Z"
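Every failed status patch above dies in the same TLS handshake: the webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, months before the node clock's 2025-12-06 reading. A minimal sketch of the validity check behind that x509 error, using only the Go standard library; the certificate path is a hypothetical placeholder, since these entries do not name the file on disk:

// certcheck.go - reproduce the validity-window test behind
// "x509: certificate has expired or is not yet valid"; illustrative only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Placeholder path; the logs above do not identify the file.
	data, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	if now.After(cert.NotAfter) || now.Before(cert.NotBefore) {
		// Same condition the TLS handshakes above keep reporting.
		fmt.Printf("certificate has expired or is not yet valid: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
		os.Exit(1)
	}
	fmt.Println("certificate is within its validity window")
}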
Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.788878 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:34Z is after 2025-08-24T17:21:41Z"
Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.859816 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.859862 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.859874 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.859892 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.859905 4895 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:34Z","lastTransitionTime":"2025-12-06T06:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.867840 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dzrsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.880653 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.891972 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677b3ec4e606af7de5469200325f6ead8d16bb69c4a6b80f72a992fbae90999f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b6a4ce8e19f3f4c7229a02fa2870fd55c2549af3834e848079146ba809520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qq8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:34Z is after 2025-08-24T17:21:41Z" Dec 06 
06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.904282 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0535dd9e-25e6-4df6-8daa-4170e90b13da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e162f2424f5aace98562e01fdfaf5814324165467190d076a3bc9d4edcbdbfbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://094405f9ea74cbd78cdd8c0c9fbd3297ab1f79a02436543f47114cf9cea5b639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://094405f9ea74cbd78cdd8c0c9fbd3297ab1f79a02436543f47114cf9cea5b639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.922259 4895 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17
4f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:07Z\\\",\\\"message\\\":\\\"206 06:58:06.464358 6494 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}\\\\nI1206 06:58:06.464380 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 06:58:06.464384 6494 services_controller.go:360] Finished syncing service redhat-operators on namespace openshift-marketplace for network=default : 2.499668ms\\\\nI1206 06:58:06.464415 6494 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for network=default\\\\nF1206 06:58:06.464442 6494 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certificate\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:58:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.936331 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.951322 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.962327 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.962370 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.962380 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.962395 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.962405 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:34Z","lastTransitionTime":"2025-12-06T06:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.964756 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.978176 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:34 crc kubenswrapper[4895]: I1206 06:58:34.991079 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b39bc82b9c81dd77f354fc01d26f23e263e0bc9145abd83e8b53550b2495c785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:25Z\\\",\\\"message\\\":\\\"2025-12-06T06:57:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_415171ab-0551-4fac-9f8d-408dfd914aca\\\\n2025-12-06T06:57:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_415171ab-0551-4fac-9f8d-408dfd914aca to /host/opt/cni/bin/\\\\n2025-12-06T06:57:40Z [verbose] multus-daemon started\\\\n2025-12-06T06:57:40Z [verbose] Readiness Indicator file check\\\\n2025-12-06T06:58:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:58:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.006433 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:35Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.016009 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:35Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.050504 4895 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.050576 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.050504 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:58:35 crc kubenswrapper[4895]: E1206 06:58:35.050668 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:58:35 crc kubenswrapper[4895]: E1206 06:58:35.050752 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:58:35 crc kubenswrapper[4895]: E1206 06:58:35.050848 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.065393 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.065435 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.065446 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.065460 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.065487 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:35Z","lastTransitionTime":"2025-12-06T06:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.168450 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.168519 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.168536 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.168553 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.168562 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:35Z","lastTransitionTime":"2025-12-06T06:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.270384 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.270425 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.270438 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.270455 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.270466 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:35Z","lastTransitionTime":"2025-12-06T06:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.374115 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.374173 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.374184 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.374202 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.374215 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:35Z","lastTransitionTime":"2025-12-06T06:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.539675 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.539734 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.539753 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.539777 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.539796 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:35Z","lastTransitionTime":"2025-12-06T06:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.642624 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.642908 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.643017 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.643095 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.643154 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:35Z","lastTransitionTime":"2025-12-06T06:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.746589 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.746690 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.747093 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.747400 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.747708 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:35Z","lastTransitionTime":"2025-12-06T06:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.851618 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.851664 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.851674 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.851694 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.851705 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:35Z","lastTransitionTime":"2025-12-06T06:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.954208 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.954241 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.954252 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.954268 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:35 crc kubenswrapper[4895]: I1206 06:58:35.954279 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:35Z","lastTransitionTime":"2025-12-06T06:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.050160 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:36 crc kubenswrapper[4895]: E1206 06:58:36.050325 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.056243 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.056285 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.056295 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.056312 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.056322 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:36Z","lastTransitionTime":"2025-12-06T06:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.159845 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.159908 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.159929 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.159952 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.159970 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:36Z","lastTransitionTime":"2025-12-06T06:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.264064 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.264129 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.264146 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.264171 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.264190 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:36Z","lastTransitionTime":"2025-12-06T06:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.366876 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.366932 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.366947 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.366967 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.366982 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:36Z","lastTransitionTime":"2025-12-06T06:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.470207 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.470279 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.470289 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.470307 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.470320 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:36Z","lastTransitionTime":"2025-12-06T06:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.573552 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.573600 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.573612 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.573633 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.573647 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:36Z","lastTransitionTime":"2025-12-06T06:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.677303 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.677377 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.677413 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.677434 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.677450 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:36Z","lastTransitionTime":"2025-12-06T06:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.702585 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovnkube-controller/3.log" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.703660 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovnkube-controller/2.log" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.707005 4895 generic.go:334] "Generic (PLEG): container finished" podID="c9690808-de36-4960-8286-7079c78c491b" containerID="adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75" exitCode=1 Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.707072 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerDied","Data":"adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75"} Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.707146 4895 scope.go:117] "RemoveContainer" containerID="6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.708111 4895 scope.go:117] "RemoveContainer" containerID="adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75" Dec 06 06:58:36 crc kubenswrapper[4895]: E1206 06:58:36.708319 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mhcxk_openshift-ovn-kubernetes(c9690808-de36-4960-8286-7079c78c491b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" podUID="c9690808-de36-4960-8286-7079c78c491b" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.731214 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.746380 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b39bc82b9c81dd77f354fc01d26f23e263e0bc9145abd83e8b53550b2495c785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:25Z\\\",\\\"message\\\":\\\"2025-12-06T06:57:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_415171ab-0551-4fac-9f8d-408dfd914aca\\\\n2025-12-06T06:57:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_415171ab-0551-4fac-9f8d-408dfd914aca to /host/opt/cni/bin/\\\\n2025-12-06T06:57:40Z [verbose] multus-daemon started\\\\n2025-12-06T06:57:40Z [verbose] Readiness Indicator file check\\\\n2025-12-06T06:58:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:58:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.764423 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.780787 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.781810 4895 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.781832 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.781841 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.781857 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.781869 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:36Z","lastTransitionTime":"2025-12-06T06:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.794346 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.808121 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:36Z is after 2025-08-24T17:21:41Z"
Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.820830 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:36Z is after 2025-08-24T17:21:41Z"
Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.834228 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:36Z is after 2025-08-24T17:21:41Z"
Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.848790 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.861196 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.871812 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.883793 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dzrsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.885452 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.885524 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.885546 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.885570 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.885752 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:36Z","lastTransitionTime":"2025-12-06T06:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.898097 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f701b8-abef-4aa6-9bd6-5145dfdcb828\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61a8968ba8ec65c4a7bedb447f5292df7a5ab45942b85fa1822e4e65ec52ec8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efbbcf0eea4b447617e23045452a9c0a6181844c165be87b788880690806dd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39aac663c8c8f0cac451e9bbbca0f7fff810268e4e7981c70b23fbdd96f7ebb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.915998 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860af
ad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.930813 4895 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.947823 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677b3ec4e606af7de5469200325f6ead8d16bb69c4a6b80f72a992fbae90999f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b6a4ce8e19f3f4c7229a02fa2870fd55c2549af3834e848079146ba809520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qq8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:36Z is after 2025-08-24T17:21:41Z" Dec 06 
06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.964315 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0535dd9e-25e6-4df6-8daa-4170e90b13da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e162f2424f5aace98562e01fdfaf5814324165467190d076a3bc9d4edcbdbfbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://094405f9ea74cbd78cdd8c0c9fbd3297ab1f79a02436543f47114cf9cea5b639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://094405f9ea74cbd78cdd8c0c9fbd3297ab1f79a02436543f47114cf9cea5b639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.986779 4895 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17
4f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:07Z\\\",\\\"message\\\":\\\"206 06:58:06.464358 6494 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}\\\\nI1206 06:58:06.464380 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 06:58:06.464384 6494 services_controller.go:360] Finished syncing service redhat-operators on namespace openshift-marketplace for network=default : 2.499668ms\\\\nI1206 06:58:06.464415 6494 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for network=default\\\\nF1206 06:58:06.464442 6494 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certificate\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:58:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:36Z\\\",\\\"message\\\":\\\"Ranges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 06:58:36.008999 6846 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:35Z is after 2025-08-24T17:21:41Z]\\\\nI1206 06:58:36.009018 6846 lb_config.go:1031] Cluster endpoints for 
openshift-kube-apis\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17
4f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.988451 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.988524 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.988905 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.988940 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:36 crc kubenswrapper[4895]: I1206 06:58:36.988955 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:36Z","lastTransitionTime":"2025-12-06T06:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.050042 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.050084 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.050159 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:37 crc kubenswrapper[4895]: E1206 06:58:37.050196 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:58:37 crc kubenswrapper[4895]: E1206 06:58:37.050299 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:58:37 crc kubenswrapper[4895]: E1206 06:58:37.050538 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.092337 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.092371 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.092384 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.092400 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.092412 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:37Z","lastTransitionTime":"2025-12-06T06:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.195801 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.195867 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.195880 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.195902 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.195914 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:37Z","lastTransitionTime":"2025-12-06T06:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.298288 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.298337 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.298351 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.298369 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.298381 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:37Z","lastTransitionTime":"2025-12-06T06:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.401820 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.401897 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.401914 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.401942 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.401961 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:37Z","lastTransitionTime":"2025-12-06T06:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.504754 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.504844 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.504856 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.504873 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.504884 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:37Z","lastTransitionTime":"2025-12-06T06:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.607957 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.608016 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.608024 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.608040 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.608050 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:37Z","lastTransitionTime":"2025-12-06T06:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.710446 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.710516 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.710531 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.710552 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.710568 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:37Z","lastTransitionTime":"2025-12-06T06:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.713902 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovnkube-controller/3.log" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.814059 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.814107 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.814118 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.814139 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.814151 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:37Z","lastTransitionTime":"2025-12-06T06:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.921241 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.921301 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.921322 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.921342 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:37 crc kubenswrapper[4895]: I1206 06:58:37.922017 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:37Z","lastTransitionTime":"2025-12-06T06:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.024811 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.024877 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.024891 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.024912 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.024926 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:38Z","lastTransitionTime":"2025-12-06T06:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.050597 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:38 crc kubenswrapper[4895]: E1206 06:58:38.050818 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.065605 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.066415 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.081322 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.096213 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.108157 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b39bc82b9c81dd77f354fc01d26f23e263e0bc9145abd83e8b53550b2495c785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:25Z\\\",\\\"message\\\":\\\"2025-12-06T06:57:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_415171ab-0551-4fac-9f8d-408dfd914aca\\\\n2025-12-06T06:57:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_415171ab-0551-4fac-9f8d-408dfd914aca to /host/opt/cni/bin/\\\\n2025-12-06T06:57:40Z [verbose] multus-daemon started\\\\n2025-12-06T06:57:40Z [verbose] Readiness Indicator file check\\\\n2025-12-06T06:58:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:58:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.122659 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.127297 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.127358 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:38 crc 
kubenswrapper[4895]: I1206 06:58:38.127368 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.127388 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.127401 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:38Z","lastTransitionTime":"2025-12-06T06:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.136217 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.151322 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.165593 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 
06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.181004 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.195216 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.209586 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.221135 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.230128 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.230191 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.230207 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.230228 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.230242 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:38Z","lastTransitionTime":"2025-12-06T06:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.234320 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dzrsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.247205 4895 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f701b8-abef-4aa6-9bd6-5145dfdcb828\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61a8968ba8ec65c4a7bedb447f5292df7a5ab45942b85fa1822e4e65ec52ec8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efbbcf0eea4b447617e23045452a9c0a6181844c165be87b788880690806dd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39aac663c8c8f0cac451e9bbbca0f7fff810268e4e7981c70b23fbdd96f7ebb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.257432 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677b3ec4e606af7de5469200325f6ead8d16bb69c4a6b80f72a992fbae90999f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b6a4ce8e19f3f4c72
29a02fa2870fd55c2549af3834e848079146ba809520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qq8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.267737 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.285996 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb77e8768175534bab85528f93c114460a03f19
5695f6acb556c02e01981e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:07Z\\\",\\\"message\\\":\\\"206 06:58:06.464358 6494 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}\\\\nI1206 06:58:06.464380 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 06:58:06.464384 6494 services_controller.go:360] Finished syncing service redhat-operators on namespace openshift-marketplace for network=default : 2.499668ms\\\\nI1206 06:58:06.464415 6494 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for network=default\\\\nF1206 06:58:06.464442 6494 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:58:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:36Z\\\",\\\"message\\\":\\\"Ranges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 06:58:36.008999 6846 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:35Z is after 2025-08-24T17:21:41Z]\\\\nI1206 06:58:36.009018 6846 lb_config.go:1031] Cluster endpoints for openshift-kube-apis\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"
192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.297762 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0535dd9e-25e6-4df6-8daa-4170e90b13da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e162f2424f5aace98562e01fdfaf5814324165467190d076a3bc9d4edcbdbfbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://094405f9ea74cbd78cdd8c0c9fbd3297ab1f79a02436543f47114cf9cea5b639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://094405f9ea74cbd78cdd8c0c9fbd3297ab1f79a02436543f47114cf9cea5b639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.332960 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.333046 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.333063 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.333091 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.333109 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:38Z","lastTransitionTime":"2025-12-06T06:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.435874 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.435917 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.435928 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.435945 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.435956 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:38Z","lastTransitionTime":"2025-12-06T06:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.538371 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.538737 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.538855 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.538975 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.539152 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:38Z","lastTransitionTime":"2025-12-06T06:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.642870 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.643265 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.643354 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.643443 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.643550 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:38Z","lastTransitionTime":"2025-12-06T06:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.746883 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.746939 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.746954 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.746976 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.746992 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:38Z","lastTransitionTime":"2025-12-06T06:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.849846 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.850267 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.850430 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.850643 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:38 crc kubenswrapper[4895]: I1206 06:58:38.850788 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:38Z","lastTransitionTime":"2025-12-06T06:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
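The repeated not-ready records above are all downstream of one fault: the pod.network-node-identity.openshift.io webhook is serving a certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-06T06:58:38Z. A minimal Python sketch, using only the two timestamps copied from the x509 errors above (nothing queried live), puts a number on the gap:

    from datetime import datetime, timezone

    # Both values appear verbatim in the kubelet x509 errors above.
    FMT = "%Y-%m-%dT%H:%M:%SZ"
    now = datetime.strptime("2025-12-06T06:58:38Z", FMT).replace(tzinfo=timezone.utc)
    not_after = datetime.strptime("2025-08-24T17:21:41Z", FMT).replace(tzinfo=timezone.utc)
    # -> webhook certificate expired 103 days, 13:36:57 ago
    print(f"webhook certificate expired {now - not_after} ago")

Until that certificate is rotated, every status patch the kubelet sends is rejected by the webhook, which is why each pod above logs the identical x509 failure.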
Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.050261 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.050330 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:58:39 crc kubenswrapper[4895]: E1206 06:58:39.050809 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.050374 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj"
Dec 06 06:58:39 crc kubenswrapper[4895]: E1206 06:58:39.051247 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6"
Dec 06 06:58:39 crc kubenswrapper[4895]: E1206 06:58:39.050952 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
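The "No sandbox for pod" and "Error syncing pod" records above are the second-order symptom: ovnkube-controller (whose crash loop is quoted in the ovnkube-node-mhcxk status earlier) is blocked by the same expired webhook certificate, so no CNI configuration appears in /etc/kubernetes/cni/net.d/ and new sandboxes cannot be created. When triaging an excerpt like this, bucketing records by failure signature and pod beats eyeballing; a small Python sketch, where the filename kubelet.log is an assumption (save the excerpt there first):

    import re
    from collections import Counter

    POD = re.compile(r'pod="([^"]+)"')
    buckets = Counter()
    with open("kubelet.log") as fh:  # hypothetical path holding this excerpt
        for line in fh:
            if "failed calling webhook" in line:
                kind = "webhook TLS (expired cert)"
            elif "no CNI configuration file" in line:
                kind = "CNI not ready"
            else:
                continue
            m = POD.search(line)
            buckets[(kind, m.group(1) if m else "<node>")] += 1
    for (kind, pod), n in buckets.most_common():
        print(f"{n:4d}  {kind:28s}  {pod}")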
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.057104 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.057138 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.057145 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.057159 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.057168 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:39Z","lastTransitionTime":"2025-12-06T06:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.159694 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.159731 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.159743 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.159763 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.159778 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:39Z","lastTransitionTime":"2025-12-06T06:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.263404 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.263498 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.263518 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.263741 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.263762 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:39Z","lastTransitionTime":"2025-12-06T06:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.366887 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.366924 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.366935 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.366954 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.366965 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:39Z","lastTransitionTime":"2025-12-06T06:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.470813 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.470867 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.470883 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.470906 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.470923 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:39Z","lastTransitionTime":"2025-12-06T06:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
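Each "Node became not ready" record embeds the node's Ready condition as plain JSON after "condition=", so the reason and message can be pulled out mechanically rather than by eye. A sketch over one of the setters.go:603 records above:

    import json
    import re

    # One record copied from the log above; the payload after "condition=" is valid JSON.
    line = ('Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.057168 4895 '
            'setters.go:603] "Node became not ready" node="crc" '
            'condition={"type":"Ready","status":"False",'
            '"lastHeartbeatTime":"2025-12-06T06:58:39Z",'
            '"lastTransitionTime":"2025-12-06T06:58:39Z",'
            '"reason":"KubeletNotReady","message":"container runtime network not ready: '
            'NetworkReady=false reason:NetworkPluginNotReady message:Network plugin '
            'returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. '
            'Has your network provider started?"}')

    cond = json.loads(re.search(r'condition=(\{.*\})', line).group(1))
    print(cond["type"], cond["status"], "-", cond["reason"])  # Ready False - KubeletNotReady
    print(cond["message"])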
Has your network provider started?"} Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.574197 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.574246 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.574257 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.574274 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.574287 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:39Z","lastTransitionTime":"2025-12-06T06:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.677600 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.677638 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.677649 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.677665 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.677676 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:39Z","lastTransitionTime":"2025-12-06T06:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.780427 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.780814 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.780954 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.781109 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.781203 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:39Z","lastTransitionTime":"2025-12-06T06:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.884203 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.884254 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.884271 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.884291 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.884306 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:39Z","lastTransitionTime":"2025-12-06T06:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.987432 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.987536 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.987552 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.987574 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:39 crc kubenswrapper[4895]: I1206 06:58:39.987590 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:39Z","lastTransitionTime":"2025-12-06T06:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.050398 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:40 crc kubenswrapper[4895]: E1206 06:58:40.050889 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.089985 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.090048 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.090064 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.090086 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.090098 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:40Z","lastTransitionTime":"2025-12-06T06:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.193101 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.193736 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.193824 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.193890 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.193958 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:40Z","lastTransitionTime":"2025-12-06T06:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.297328 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.297374 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.297388 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.297439 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.297457 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:40Z","lastTransitionTime":"2025-12-06T06:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.401075 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.401342 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.401451 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.401577 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.401656 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:40Z","lastTransitionTime":"2025-12-06T06:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.504706 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.504748 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.504759 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.504777 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.504788 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:40Z","lastTransitionTime":"2025-12-06T06:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.607077 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.607118 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.607126 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.607141 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.607149 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:40Z","lastTransitionTime":"2025-12-06T06:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.710285 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.710329 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.710337 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.710351 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.710361 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:40Z","lastTransitionTime":"2025-12-06T06:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.807009 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.807110 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:40 crc kubenswrapper[4895]: E1206 06:58:40.807205 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:58:40 crc kubenswrapper[4895]: E1206 06:58:40.807283 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:59:44.807265317 +0000 UTC m=+147.208654187 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:58:40 crc kubenswrapper[4895]: E1206 06:58:40.807297 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:58:40 crc kubenswrapper[4895]: E1206 06:58:40.807418 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:59:44.807392871 +0000 UTC m=+147.208781741 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.812890 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.812944 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.812957 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.812977 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.812993 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:40Z","lastTransitionTime":"2025-12-06T06:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.908688 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.908913 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:58:40 crc kubenswrapper[4895]: E1206 06:58:40.908990 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:44.908940151 +0000 UTC m=+147.310329061 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.909089 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:58:40 crc kubenswrapper[4895]: E1206 06:58:40.909117 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:58:40 crc kubenswrapper[4895]: E1206 06:58:40.909146 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:58:40 crc kubenswrapper[4895]: E1206 06:58:40.909170 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:58:40 crc kubenswrapper[4895]: E1206 06:58:40.909260 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 06:59:44.909231439 +0000 UTC m=+147.310620339 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:58:40 crc kubenswrapper[4895]: E1206 06:58:40.909346 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:58:40 crc kubenswrapper[4895]: E1206 06:58:40.909392 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:58:40 crc kubenswrapper[4895]: E1206 06:58:40.909407 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:58:40 crc kubenswrapper[4895]: E1206 06:58:40.909506 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 06:59:44.909486016 +0000 UTC m=+147.310874886 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.915416 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.915484 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.915498 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.915520 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:40 crc kubenswrapper[4895]: I1206 06:58:40.915535 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:40Z","lastTransitionTime":"2025-12-06T06:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.018607 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.018661 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.018675 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.018694 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.018707 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:41Z","lastTransitionTime":"2025-12-06T06:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.049552 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.049552 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:58:41 crc kubenswrapper[4895]: E1206 06:58:41.049748 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.049576 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:41 crc kubenswrapper[4895]: E1206 06:58:41.049929 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:58:41 crc kubenswrapper[4895]: E1206 06:58:41.050156 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.051457 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.051534 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.051547 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.051564 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.051575 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:41Z","lastTransitionTime":"2025-12-06T06:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:41 crc kubenswrapper[4895]: E1206 06:58:41.070053 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.077927 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.077972 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.077983 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.078001 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.078015 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:41Z","lastTransitionTime":"2025-12-06T06:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:41 crc kubenswrapper[4895]: E1206 06:58:41.096391 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.101309 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.101351 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.101362 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.101379 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.101389 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:41Z","lastTransitionTime":"2025-12-06T06:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:41 crc kubenswrapper[4895]: E1206 06:58:41.118986 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.123811 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.123874 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.123892 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.123915 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.123930 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:41Z","lastTransitionTime":"2025-12-06T06:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:41 crc kubenswrapper[4895]: E1206 06:58:41.138311 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.142731 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.142777 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.142788 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.142806 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.142818 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:41Z","lastTransitionTime":"2025-12-06T06:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:41 crc kubenswrapper[4895]: E1206 06:58:41.157398 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7d2d861-d143-4cb9-9f6f-a839095839a4\\\",\\\"systemUUID\\\":\\\"01a4a5d1-647a-48ea-98ed-826c2f6d4911\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:41Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:41 crc kubenswrapper[4895]: E1206 06:58:41.157605 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.159355 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.159389 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.159400 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.159414 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.159426 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:41Z","lastTransitionTime":"2025-12-06T06:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.262528 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.262594 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.262612 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.262679 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.262696 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:41Z","lastTransitionTime":"2025-12-06T06:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.366264 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.366325 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.366339 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.366359 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.366373 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:41Z","lastTransitionTime":"2025-12-06T06:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.469095 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.469139 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.469149 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.469170 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.469181 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:41Z","lastTransitionTime":"2025-12-06T06:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.572348 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.572435 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.572448 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.572513 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.572534 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:41Z","lastTransitionTime":"2025-12-06T06:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.675576 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.675626 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.675635 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.675652 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.675662 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:41Z","lastTransitionTime":"2025-12-06T06:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.779119 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.779174 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.779186 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.779206 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.779220 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:41Z","lastTransitionTime":"2025-12-06T06:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.883271 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.883312 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.883320 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.883337 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.883348 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:41Z","lastTransitionTime":"2025-12-06T06:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.985432 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.985489 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.985509 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.985530 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:41 crc kubenswrapper[4895]: I1206 06:58:41.985546 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:41Z","lastTransitionTime":"2025-12-06T06:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.050706 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:42 crc kubenswrapper[4895]: E1206 06:58:42.050886 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.087531 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.087590 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.087611 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.087642 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.087660 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:42Z","lastTransitionTime":"2025-12-06T06:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.192013 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.192059 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.192069 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.192085 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.192095 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:42Z","lastTransitionTime":"2025-12-06T06:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.295063 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.295113 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.295122 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.295139 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.295150 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:42Z","lastTransitionTime":"2025-12-06T06:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.399618 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.400258 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.400277 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.400299 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.400313 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:42Z","lastTransitionTime":"2025-12-06T06:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.503400 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.503457 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.503499 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.503524 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.503539 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:42Z","lastTransitionTime":"2025-12-06T06:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.606603 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.606666 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.606678 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.606700 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.606714 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:42Z","lastTransitionTime":"2025-12-06T06:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.712815 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.712890 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.712905 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.712929 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.712952 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:42Z","lastTransitionTime":"2025-12-06T06:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.816083 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.816120 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.816129 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.816142 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.816153 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:42Z","lastTransitionTime":"2025-12-06T06:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.920075 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.920134 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.920144 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.920167 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:42 crc kubenswrapper[4895]: I1206 06:58:42.920178 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:42Z","lastTransitionTime":"2025-12-06T06:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.023316 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.023361 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.023370 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.023388 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.023400 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:43Z","lastTransitionTime":"2025-12-06T06:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.050508 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.050624 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:43 crc kubenswrapper[4895]: E1206 06:58:43.050693 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.050624 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:58:43 crc kubenswrapper[4895]: E1206 06:58:43.050867 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:58:43 crc kubenswrapper[4895]: E1206 06:58:43.051063 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.125676 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.125726 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.125741 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.125761 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.125773 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:43Z","lastTransitionTime":"2025-12-06T06:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.228642 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.228692 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.228706 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.228724 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.228737 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:43Z","lastTransitionTime":"2025-12-06T06:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.331811 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.331875 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.331894 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.331923 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.331942 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:43Z","lastTransitionTime":"2025-12-06T06:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.435240 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.435279 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.435292 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.435310 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.435321 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:43Z","lastTransitionTime":"2025-12-06T06:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.538579 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.538632 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.538645 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.538667 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.538679 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:43Z","lastTransitionTime":"2025-12-06T06:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.641580 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.641620 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.641629 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.641646 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.641657 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:43Z","lastTransitionTime":"2025-12-06T06:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.744558 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.744632 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.744650 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.744678 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.744697 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:43Z","lastTransitionTime":"2025-12-06T06:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.847683 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.847746 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.847759 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.847781 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.847803 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:43Z","lastTransitionTime":"2025-12-06T06:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.950689 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.950746 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.950759 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.950777 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:43 crc kubenswrapper[4895]: I1206 06:58:43.950789 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:43Z","lastTransitionTime":"2025-12-06T06:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.049920 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:44 crc kubenswrapper[4895]: E1206 06:58:44.050188 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.054378 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.054431 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.054451 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.054498 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.054522 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:44Z","lastTransitionTime":"2025-12-06T06:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.158511 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.158580 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.158595 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.158613 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.158627 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:44Z","lastTransitionTime":"2025-12-06T06:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.261205 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.261274 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.261286 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.261308 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.261341 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:44Z","lastTransitionTime":"2025-12-06T06:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.364450 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.364534 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.364549 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.364572 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.364584 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:44Z","lastTransitionTime":"2025-12-06T06:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.472358 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.472438 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.472451 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.472514 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.472531 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:44Z","lastTransitionTime":"2025-12-06T06:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.575649 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.575697 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.575708 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.575733 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.575744 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:44Z","lastTransitionTime":"2025-12-06T06:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.678498 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.678560 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.678570 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.678597 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.678610 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:44Z","lastTransitionTime":"2025-12-06T06:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.781490 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.781537 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.781547 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.781564 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.781575 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:44Z","lastTransitionTime":"2025-12-06T06:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.885152 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.885218 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.885236 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.885264 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.885283 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:44Z","lastTransitionTime":"2025-12-06T06:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.988654 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.988705 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.988722 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.988747 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:44 crc kubenswrapper[4895]: I1206 06:58:44.988765 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:44Z","lastTransitionTime":"2025-12-06T06:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.049629 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.049701 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.049705 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:58:45 crc kubenswrapper[4895]: E1206 06:58:45.049827 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:58:45 crc kubenswrapper[4895]: E1206 06:58:45.049933 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:58:45 crc kubenswrapper[4895]: E1206 06:58:45.050050 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.092171 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.092218 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.092247 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.092325 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.092338 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:45Z","lastTransitionTime":"2025-12-06T06:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.195201 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.195256 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.195267 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.195285 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.195297 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:45Z","lastTransitionTime":"2025-12-06T06:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.299695 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.299751 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.299766 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.299787 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.299800 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:45Z","lastTransitionTime":"2025-12-06T06:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.402555 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.402612 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.402626 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.402655 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.402671 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:45Z","lastTransitionTime":"2025-12-06T06:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.505237 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.505280 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.505293 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.505312 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.505325 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:45Z","lastTransitionTime":"2025-12-06T06:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.608075 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.608246 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.608292 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.608314 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.608327 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:45Z","lastTransitionTime":"2025-12-06T06:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.711243 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.711337 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.711350 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.711370 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.711384 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:45Z","lastTransitionTime":"2025-12-06T06:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.814030 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.814072 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.814084 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.814103 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.814116 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:45Z","lastTransitionTime":"2025-12-06T06:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.916528 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.916594 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.916605 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.916668 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:45 crc kubenswrapper[4895]: I1206 06:58:45.916682 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:45Z","lastTransitionTime":"2025-12-06T06:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.034267 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.034329 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.034345 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.034364 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.034377 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:46Z","lastTransitionTime":"2025-12-06T06:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.049902 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:46 crc kubenswrapper[4895]: E1206 06:58:46.050051 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.137358 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.137424 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.137435 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.137455 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.137466 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:46Z","lastTransitionTime":"2025-12-06T06:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.240813 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.240852 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.240861 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.240878 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.240889 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:46Z","lastTransitionTime":"2025-12-06T06:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.343711 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.343795 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.343813 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.343846 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.343866 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:46Z","lastTransitionTime":"2025-12-06T06:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.447200 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.447414 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.447436 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.447462 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.447502 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:46Z","lastTransitionTime":"2025-12-06T06:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.550716 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.550785 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.550804 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.550830 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.550900 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:46Z","lastTransitionTime":"2025-12-06T06:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.654383 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.654455 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.654521 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.654549 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.654611 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:46Z","lastTransitionTime":"2025-12-06T06:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.758265 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.758339 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.758350 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.758366 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.758380 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:46Z","lastTransitionTime":"2025-12-06T06:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.862107 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.862182 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.862209 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.862242 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.862265 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:46Z","lastTransitionTime":"2025-12-06T06:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.965416 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.965546 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.965557 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.965574 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:46 crc kubenswrapper[4895]: I1206 06:58:46.965588 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:46Z","lastTransitionTime":"2025-12-06T06:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.050256 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.050444 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:58:47 crc kubenswrapper[4895]: E1206 06:58:47.050511 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.050256 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:47 crc kubenswrapper[4895]: E1206 06:58:47.050712 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:58:47 crc kubenswrapper[4895]: E1206 06:58:47.050869 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.068769 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.068815 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.068823 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.068843 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.068854 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:47Z","lastTransitionTime":"2025-12-06T06:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.172348 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.172410 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.172424 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.172442 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.172455 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:47Z","lastTransitionTime":"2025-12-06T06:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.276195 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.276247 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.276262 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.276281 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.276295 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:47Z","lastTransitionTime":"2025-12-06T06:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.379074 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.379128 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.379138 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.379153 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.379165 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:47Z","lastTransitionTime":"2025-12-06T06:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.482595 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.482641 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.482653 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.482671 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.482683 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:47Z","lastTransitionTime":"2025-12-06T06:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.585579 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.585635 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.585651 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.585715 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.585732 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:47Z","lastTransitionTime":"2025-12-06T06:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.688492 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.688544 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.688555 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.688569 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.688583 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:47Z","lastTransitionTime":"2025-12-06T06:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.791610 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.791673 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.791697 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.791717 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.791730 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:47Z","lastTransitionTime":"2025-12-06T06:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.894960 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.895039 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.895052 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.895072 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.895087 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:47Z","lastTransitionTime":"2025-12-06T06:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.999301 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.999381 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.999526 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.999550 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:47 crc kubenswrapper[4895]: I1206 06:58:47.999596 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:47Z","lastTransitionTime":"2025-12-06T06:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.050607 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:48 crc kubenswrapper[4895]: E1206 06:58:48.050809 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.051821 4895 scope.go:117] "RemoveContainer" containerID="adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75" Dec 06 06:58:48 crc kubenswrapper[4895]: E1206 06:58:48.052037 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mhcxk_openshift-ovn-kubernetes(c9690808-de36-4960-8286-7079c78c491b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" podUID="c9690808-de36-4960-8286-7079c78c491b" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.064413 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677b3ec4e606af7de5469200325f6ead8d16bb69c4a6b80f72a992fbae90999f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b6a4ce8e19f3f4c7229a02fa2870fd55c2549af3834e848079146ba809520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkub
e-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qq8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.077106 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.098073 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6506234292eb2565252d996817fab836e0b511fec26c7b31f49989493926d0e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:07Z\\\",\\\"message\\\":\\\"206 06:58:06.464358 6494 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}\\\\nI1206 06:58:06.464380 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 06:58:06.464384 6494 services_controller.go:360] Finished syncing service redhat-operators on namespace openshift-marketplace for network=default : 2.499668ms\\\\nI1206 06:58:06.464415 6494 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for network=default\\\\nF1206 06:58:06.464442 6494 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certificate\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:58:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:36Z\\\",\\\"message\\\":\\\"Ranges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 06:58:36.008999 6846 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:35Z is after 2025-08-24T17:21:41Z]\\\\nI1206 06:58:36.009018 6846 lb_config.go:1031] Cluster endpoints for 
openshift-kube-apis\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17
4f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.102091 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.102129 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.102141 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.102160 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.102172 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:48Z","lastTransitionTime":"2025-12-06T06:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.111887 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0535dd9e-25e6-4df6-8daa-4170e90b13da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e162f2424f5aace98562e01fdfaf5814324165467190d076a3bc9d4edcbdbfbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://094405f9ea74cbd78cdd8c0c9fbd3297ab1f79a02436543f47114cf9cea5b639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://094405f9ea74cbd78cdd8c0c9fbd3297ab1f79a02436543f47114cf9cea5b639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.128259 4895 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.146582 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.162505 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.180145 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b39bc82b9c81dd77f354fc01d26f23e263e0bc9145abd83e8b53550b2495c785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:25Z\\\",\\\"message\\\":\\\"2025-12-06T06:57:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_415171ab-0551-4fac-9f8d-408dfd914aca\\\\n2025-12-06T06:57:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_415171ab-0551-4fac-9f8d-408dfd914aca to /host/opt/cni/bin/\\\\n2025-12-06T06:57:40Z [verbose] multus-daemon started\\\\n2025-12-06T06:57:40Z [verbose] Readiness Indicator file check\\\\n2025-12-06T06:58:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:58:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.198050 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.204515 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.204910 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:48 crc 
kubenswrapper[4895]: I1206 06:58:48.205064 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.205349 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.205534 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:48Z","lastTransitionTime":"2025-12-06T06:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.212032 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.226162 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.247391 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e418f9f-b451-452c-b36d-a61447b274b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d232e0a4d8cb2ec0eb59e404552737943a7db9341f21e9189673eab3ab98847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e216bfcdfb9b399d962c38823a40d4241971f091acb202760cc988884b0f9b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:21Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c1ed547ff9f85c0e6a122f893e6336b9063767c3505abbf6c71908abb004882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10cd7dd956ad8c4e4335c1ff7b3bc563fcb1533d4068f33cd9e276820a900e89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872c7ea63fb864424ff0ec1d2e2094117808dfebbfdbab862bb16f595b14446d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325d9a4da8dfed1c9196761de852b033e5e30e08b9364f8e91230dacd9e38bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://325d9a4da8dfed1c9196761de852b033e5e30e08b9364f8e91230dacd9e38bd6\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29aab4bcd9edb89eaa1f8948b9eefe9dda058f8df8a390dfd810483ec238df88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29aab4bcd9edb89eaa1f8948b9eefe9dda058f8df8a390dfd810483ec238df88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c25e995dd05f11f6532ccefe3caf73171fe986daf46dfa022f96b645f71e5037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25e995dd05f11f6532ccefe3caf73171fe986daf46dfa022f96b645f71e5037\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.263657 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.277207 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.289962 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.302913 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.308693 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.308740 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.308752 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.308770 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.308784 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:48Z","lastTransitionTime":"2025-12-06T06:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.316319 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.329156 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dzrsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.342513 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f701b8-abef-4aa6-9bd6-5145dfdcb828\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61a8968ba8ec65c4a7bedb447f5292df7a5ab45942b85fa1822e4e65ec52ec8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efbbcf0eea4b447617e23045452a9c0a6181844c165be87b788880690806dd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39aac663c8c8f0cac451e9bbbca0f7fff810268e4e7981c70b23fbdd96f7ebb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.355527 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de35e7551764b6ee2f78550af2d80a12079bf258e0f49e46f0a297744837b054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f16ff35b90f20e0ae19bbeb595d4d1b4268955f70729195bb4cc4adbf99b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.365950 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.377597 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k86k4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f42fc6-54ce-4f49-adbd-545e02a1f322\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b39bc82b9c81dd77f354fc01d26f23e263e0bc9145abd83e8b53550b2495c785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:25Z\\\",\\\"message\\\":\\\"2025-12-06T06:57:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_415171ab-0551-4fac-9f8d-408dfd914aca\\\\n2025-12-06T06:57:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_415171ab-0551-4fac-9f8d-408dfd914aca to /host/opt/cni/bin/\\\\n2025-12-06T06:57:40Z [verbose] multus-daemon started\\\\n2025-12-06T06:57:40Z [verbose] Readiness Indicator file check\\\\n2025-12-06T06:58:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:58:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zjtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k86k4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.392822 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b85a3f9-a505-4331-a3e2-08a6211defcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://460405ae2639b01b8eb379703adc731d0b5e246e81e86e415d02442577f0505a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6daefa0ba6e85f6a92ae3983886f235bf237b7df9550bd1550ff1e9971ac9ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e1a47cf52d8366be90e854a7c37dfdbf56e9254fbd3fd2b679b2027e063c38f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf99a114ddb182068436eb044e1b03c8b97c9fba2b365a97322315bb9ad6416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00092c3140130c6c5cbd8248921f4a5012f3d280554edeae0f8d08aeb230e14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c357e0ead52ada65f88b72dfe167a8e5955bf15586b9f5bb1df29a7f8d58fd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b02825d830382256623ac0e005e5b8684e353680e47c2a1b6b53274ca40a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bss5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lgpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.403451 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9200f6d1-bc88-4065-9985-8c6a6387404f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d07e7dd185bc03167af8e3b9bb9c7226174ba21fff8ff211417cb5c0024808c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f272\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6k7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.411556 4895 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.411885 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.411997 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.412106 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.412197 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:48Z","lastTransitionTime":"2025-12-06T06:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.433975 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.460397 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.481446 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d0ed585-fa5f-4661-a7fd-69084df17bd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:57:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:57:30.993583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:57:31.007575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072946802/tls.crt::/tmp/serving-cert-3072946802/tls.key\\\\\\\"\\\\nI1206 06:57:36.427207 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:57:36.429427 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:57:36.429446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:57:36.429488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:57:36.429494 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:57:36.434158 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1206 06:57:36.434167 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:57:36.434204 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434209 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:57:36.434214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:57:36.434217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:57:36.434219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:57:36.434222 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 06:57:36.437806 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.498238 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12095c85-55d1-4189-a302-cb65611ed3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6f6be285b3c2c6e0d09ebc139c2fd86a1fec6282e52a1a9967a73a07d55167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d1ce095a90d1528dae60f33ff18eb6a574eb9e74083adb9018b416ba08b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b0ff439fd1fd97b83f83dadfcead5a6f8542b609a15ca3bc6e6d4c063b59b4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.513600 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8068c0e2f4319bf09b841ed887a1c93cd487db23edc467edad2bc74f1975510b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.515513 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.515559 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.515571 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.515586 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.515596 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:48Z","lastTransitionTime":"2025-12-06T06:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.527375 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ab071ea57159a26d2416078cbc90c6c759540f7543172d4205d171f9b99e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.540901 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hdgqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138ef400-4714-4742-ae94-ea6b8afd73d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408af5f00bd40cc829c0a073ed25190f47ee0e332dd763d19d89204b9191699a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hdgqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.555152 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dzrsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.569828 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f701b8-abef-4aa6-9bd6-5145dfdcb828\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:58:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61a8968ba8ec65c4a7bedb447f5292df7a5ab45942b85fa1822e4e65ec52ec8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efbbcf0eea4b447617e23045452a9c0a6181844c165be87b788880690806dd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39aac663c8c8f0cac451e9bbbca0f7fff810268e4e7981c70b23fbdd96f7ebb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5590a6853fd6f29adb549bd5959abb84f0284a1bc28467a5fd3243da6dc5867\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.594285 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e418f9f-b451-452c-b36d-a61447b274b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d232e0a4d8cb2ec0eb59e404552737943a7db9341f21e9189673eab3ab98847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e216bfcdfb9b399d962c38823a40d4241971f091acb202760cc988884b0f9b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c1ed547ff9f85c0e6a122f893e6336b9063767c3505abbf6c71908abb004882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10cd7dd956ad8c4e4335c1ff7b3bc563fcb1533d4068f33cd9e276820a900e89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872c7ea63fb864424ff0ec1d2e2094117808dfebbfdbab862bb16f595b14446d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325d9a4da8dfed1c9196761de852b033e5e30e08b9364f8e91230dacd9e38bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://325d9a4da8dfed1c9196761de852b033e5e30e08b9364f8e91230dacd9e38bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29aab4bcd9edb89eaa1f8948b9eefe9dda058f8df8a390dfd810483ec238df88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29aab4bcd9edb89eaa1f8948b9eefe9dda058f8df8a390dfd810483ec238df88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c25e995dd05f11f6532ccefe3caf73171fe986daf46dfa022f96b645f71e5037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25e995dd05f11f6532ccefe3caf73171fe986daf46dfa022f96b645f71e5037\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.607614 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vtdvq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d2795a3-ed84-4f0b-828b-251d2e503864\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dc2c6df34dd93a52f15c6ecfc9af123c90d6617c62607eb5e3b909070c69256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4gn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vtdvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.618238 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.618347 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.618365 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.618449 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.618467 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:48Z","lastTransitionTime":"2025-12-06T06:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.620002 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849b6dd3-d8b0-4dc5-bf61-37c6ce394cdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677b3ec4e606af7de5469200325f6ead8d16bb69c4a6b80f72a992fbae90999f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b6a4ce8e19f3f4c7229a02fa2870fd55c2549af3834e848079146ba809520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmw74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qq8v\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.631113 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0535dd9e-25e6-4df6-8daa-4170e90b13da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e162f2424f5aace98562e01fdfaf5814324165467190d076a3bc9d4edcbdbfbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://094405f9ea74cbd78cdd8c0c9fbd3297ab1f79a02436543f47114cf9cea5b639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://094405f9ea74cbd78cdd8c0c9fbd3297ab1f79a02436543f47114cf9cea5b639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.652521 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9690808-de36-4960-8286-7079c78c491b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:58:36Z\\\",\\\"message\\\":\\\"Ranges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 06:58:36.008999 6846 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:35Z is after 2025-08-24T17:21:41Z]\\\\nI1206 06:58:36.009018 6846 lb_config.go:1031] Cluster endpoints for 
openshift-kube-apis\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:58:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mhcxk_openshift-ovn-kubernetes(c9690808-de36-4960-8286-7079c78c491b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkc5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:57:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mhcxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:58:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.721319 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.721364 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.721374 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.721394 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.721408 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:48Z","lastTransitionTime":"2025-12-06T06:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.824736 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.824795 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.824815 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.825018 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.825031 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:48Z","lastTransitionTime":"2025-12-06T06:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.927973 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.928034 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.928068 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.928106 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:48 crc kubenswrapper[4895]: I1206 06:58:48.928131 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:48Z","lastTransitionTime":"2025-12-06T06:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.031698 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.031752 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.031764 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.031785 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.031797 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:49Z","lastTransitionTime":"2025-12-06T06:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.050341 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.050341 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.050348 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:58:49 crc kubenswrapper[4895]: E1206 06:58:49.050645 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:58:49 crc kubenswrapper[4895]: E1206 06:58:49.050748 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:58:49 crc kubenswrapper[4895]: E1206 06:58:49.050863 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.134789 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.134823 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.134831 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.134846 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.134856 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:49Z","lastTransitionTime":"2025-12-06T06:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.238009 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.238055 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.238066 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.238082 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.238094 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:49Z","lastTransitionTime":"2025-12-06T06:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.340676 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.340730 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.340742 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.340764 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.340777 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:49Z","lastTransitionTime":"2025-12-06T06:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.443177 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.443241 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.443256 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.443282 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.443296 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:49Z","lastTransitionTime":"2025-12-06T06:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.546401 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.547016 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.547055 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.547089 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.547114 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:49Z","lastTransitionTime":"2025-12-06T06:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.654328 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.654427 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.654446 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.655119 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.655742 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:49Z","lastTransitionTime":"2025-12-06T06:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.759400 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.760063 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.760092 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.760206 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.760394 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:49Z","lastTransitionTime":"2025-12-06T06:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.862539 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.862573 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.862581 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.862597 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.862609 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:49Z","lastTransitionTime":"2025-12-06T06:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.964692 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.964724 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.964732 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.964745 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:49 crc kubenswrapper[4895]: I1206 06:58:49.964754 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:49Z","lastTransitionTime":"2025-12-06T06:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:50 crc kubenswrapper[4895]: I1206 06:58:50.050150 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:58:50 crc kubenswrapper[4895]: E1206 06:58:50.050324 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:58:50 crc kubenswrapper[4895]: I1206 06:58:50.066939 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:50 crc kubenswrapper[4895]: I1206 06:58:50.066978 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:50 crc kubenswrapper[4895]: I1206 06:58:50.066988 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:50 crc kubenswrapper[4895]: I1206 06:58:50.067001 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:50 crc kubenswrapper[4895]: I1206 06:58:50.067014 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:50Z","lastTransitionTime":"2025-12-06T06:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:58:50 crc kubenswrapper[4895]: I1206 06:58:50.169575 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:58:50 crc kubenswrapper[4895]: I1206 06:58:50.169623 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:58:50 crc kubenswrapper[4895]: I1206 06:58:50.169631 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:58:50 crc kubenswrapper[4895]: I1206 06:58:50.169645 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:58:50 crc kubenswrapper[4895]: I1206 06:58:50.169655 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:50Z","lastTransitionTime":"2025-12-06T06:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 06 06:58:50 crc kubenswrapper[4895]: I1206 06:58:50.272397 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:58:50 crc kubenswrapper[4895]: I1206 06:58:50.272456 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:58:50 crc kubenswrapper[4895]: I1206 06:58:50.272541 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:58:50 crc kubenswrapper[4895]: I1206 06:58:50.272590 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:58:50 crc kubenswrapper[4895]: I1206 06:58:50.272619 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:58:50Z","lastTransitionTime":"2025-12-06T06:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The same five-record cycle (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats verbatim, with only the timestamps advancing, at 06:58:50.376, 06:58:50.478, 06:58:50.582, 06:58:50.686, 06:58:50.789, 06:58:50.892 and 06:58:50.995, and again with 06:58:51 heartbeat/transition times at 06:58:51.099, 06:58:51.167 and 06:58:51.204.]
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.050108 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.050175 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.050131 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:58:51 crc kubenswrapper[4895]: E1206 06:58:51.050328 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 06:58:51 crc kubenswrapper[4895]: E1206 06:58:51.050450 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 06:58:51 crc kubenswrapper[4895]: E1206 06:58:51.050614 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.243554 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf"]
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.244256 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.247330 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.247736 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.248216 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.249971 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.262878 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-vtdvq" podStartSLOduration=74.262845053 podStartE2EDuration="1m14.262845053s" podCreationTimestamp="2025-12-06 06:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:58:51.262603675 +0000 UTC m=+93.663992575" watchObservedRunningTime="2025-12-06 06:58:51.262845053 +0000 UTC m=+93.664233973"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.282594 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qq8v" podStartSLOduration=73.282555693 podStartE2EDuration="1m13.282555693s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:58:51.282545743 +0000 UTC m=+93.683934623" watchObservedRunningTime="2025-12-06 06:58:51.282555693 +0000 UTC m=+93.683944603"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.330769 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=19.330743541 podStartE2EDuration="19.330743541s" podCreationTimestamp="2025-12-06 06:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:58:51.303588872 +0000 UTC m=+93.704977832" watchObservedRunningTime="2025-12-06 06:58:51.330743541 +0000 UTC m=+93.732132421"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.341967 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88d0ae6a-9671-4d37-9037-285ddecd89b4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cztgf\" (UID: \"88d0ae6a-9671-4d37-9037-285ddecd89b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.342088 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88d0ae6a-9671-4d37-9037-285ddecd89b4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cztgf\" (UID: \"88d0ae6a-9671-4d37-9037-285ddecd89b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.342160 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/88d0ae6a-9671-4d37-9037-285ddecd89b4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cztgf\" (UID: \"88d0ae6a-9671-4d37-9037-285ddecd89b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.342266 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/88d0ae6a-9671-4d37-9037-285ddecd89b4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cztgf\" (UID: \"88d0ae6a-9671-4d37-9037-285ddecd89b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.342311 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/88d0ae6a-9671-4d37-9037-285ddecd89b4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cztgf\" (UID: \"88d0ae6a-9671-4d37-9037-285ddecd89b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.374095 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-k86k4" podStartSLOduration=74.374067586 podStartE2EDuration="1m14.374067586s" podCreationTimestamp="2025-12-06 06:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:58:51.372016616 +0000 UTC m=+93.773405496" watchObservedRunningTime="2025-12-06 06:58:51.374067586 +0000 UTC m=+93.775456466"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.403299 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-lgpv5" podStartSLOduration=74.403274586 podStartE2EDuration="1m14.403274586s" podCreationTimestamp="2025-12-06 06:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:58:51.39863021 +0000 UTC m=+93.800019100" watchObservedRunningTime="2025-12-06 06:58:51.403274586 +0000 UTC m=+93.804663466"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.437851 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podStartSLOduration=74.437830864 podStartE2EDuration="1m14.437830864s" podCreationTimestamp="2025-12-06 06:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:58:51.421866343 +0000 UTC m=+93.823255223" watchObservedRunningTime="2025-12-06 06:58:51.437830864 +0000 UTC m=+93.839219744"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.442963 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88d0ae6a-9671-4d37-9037-285ddecd89b4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cztgf\" (UID: \"88d0ae6a-9671-4d37-9037-285ddecd89b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.443010 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/88d0ae6a-9671-4d37-9037-285ddecd89b4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cztgf\" (UID: \"88d0ae6a-9671-4d37-9037-285ddecd89b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.443086 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/88d0ae6a-9671-4d37-9037-285ddecd89b4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cztgf\" (UID: \"88d0ae6a-9671-4d37-9037-285ddecd89b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.443110 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/88d0ae6a-9671-4d37-9037-285ddecd89b4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cztgf\" (UID: \"88d0ae6a-9671-4d37-9037-285ddecd89b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.443142 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88d0ae6a-9671-4d37-9037-285ddecd89b4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cztgf\" (UID: \"88d0ae6a-9671-4d37-9037-285ddecd89b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.443185 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/88d0ae6a-9671-4d37-9037-285ddecd89b4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cztgf\" (UID: \"88d0ae6a-9671-4d37-9037-285ddecd89b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.443245 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/88d0ae6a-9671-4d37-9037-285ddecd89b4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cztgf\" (UID: \"88d0ae6a-9671-4d37-9037-285ddecd89b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.444440 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/88d0ae6a-9671-4d37-9037-285ddecd89b4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cztgf\" (UID: \"88d0ae6a-9671-4d37-9037-285ddecd89b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.450959 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88d0ae6a-9671-4d37-9037-285ddecd89b4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cztgf\" (UID: \"88d0ae6a-9671-4d37-9037-285ddecd89b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.474492 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88d0ae6a-9671-4d37-9037-285ddecd89b4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cztgf\" (UID: \"88d0ae6a-9671-4d37-9037-285ddecd89b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.529367 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-hdgqw" podStartSLOduration=74.529341067 podStartE2EDuration="1m14.529341067s" podCreationTimestamp="2025-12-06 06:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:58:51.529288476 +0000 UTC m=+93.930677416" watchObservedRunningTime="2025-12-06 06:58:51.529341067 +0000 UTC m=+93.930729937"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.558267 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=42.558244258 podStartE2EDuration="42.558244258s" podCreationTimestamp="2025-12-06 06:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:58:51.558012521 +0000 UTC m=+93.959401391" watchObservedRunningTime="2025-12-06 06:58:51.558244258 +0000 UTC m=+93.959633138"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.568340 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.583050 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=13.583030967 podStartE2EDuration="13.583030967s" podCreationTimestamp="2025-12-06 06:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:58:51.582198893 +0000 UTC m=+93.983587773" watchObservedRunningTime="2025-12-06 06:58:51.583030967 +0000 UTC m=+93.984419857"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.606980 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=74.606957291 podStartE2EDuration="1m14.606957291s" podCreationTimestamp="2025-12-06 06:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:58:51.603021756 +0000 UTC m=+94.004410636" watchObservedRunningTime="2025-12-06 06:58:51.606957291 +0000 UTC m=+94.008346161"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.621447 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=71.621424598 podStartE2EDuration="1m11.621424598s" podCreationTimestamp="2025-12-06 06:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:58:51.621096598 +0000 UTC m=+94.022485488" watchObservedRunningTime="2025-12-06 06:58:51.621424598 +0000 UTC m=+94.022813468"
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.769767 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf" event={"ID":"88d0ae6a-9671-4d37-9037-285ddecd89b4","Type":"ContainerStarted","Data":"914b85d44a192978f9027d692c865e095d8305cd4456100c5691a46769d2f96d"}
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.769856 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf" event={"ID":"88d0ae6a-9671-4d37-9037-285ddecd89b4","Type":"ContainerStarted","Data":"3a529b0dbf422948292f0610e4766686f17dddfc236dab0fd9c519e874475eae"}
Dec 06 06:58:51 crc kubenswrapper[4895]: I1206 06:58:51.784348 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cztgf" podStartSLOduration=74.784308042 podStartE2EDuration="1m14.784308042s" podCreationTimestamp="2025-12-06 06:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:58:51.78421056 +0000 UTC m=+94.185599450" watchObservedRunningTime="2025-12-06 06:58:51.784308042 +0000 UTC m=+94.185696912"
Dec 06 06:58:52 crc kubenswrapper[4895]: I1206 06:58:52.050690 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
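[Note: in every "Observed pod startup duration" record above, firstStartedPulling and lastFinishedPulling are the zero time (no image pull happened), so podStartSLOduration reduces to observedRunningTime minus podCreationTimestamp. Checking one record's arithmetic in Go:]

    // sloduration.go - verifies the node-resolver-vtdvq record above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2025-12-06 06:57:37 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2025-12-06 06:58:51.262845053 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // Prints 1m14.262845053s, matching podStartE2EDuration in the log.
        fmt.Println(running.Sub(created))
    }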
Dec 06 06:58:52 crc kubenswrapper[4895]: E1206 06:58:52.051033 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[The same "No sandbox for pod can be found. Need to start a new one" / "Error syncing pod, skipping" pair recurs verbatim on each sync retry: for network-check-source-55646444c4-trplf, network-metrics-daemon-dzrsj and network-check-target-xd92c at 06:58:53.050 and 06:58:55.050, and for networking-console-plugin-85b44fc459-gdk6g at 06:58:54.049 and 06:58:56.050.]
Dec 06 06:58:56 crc kubenswrapper[4895]: I1206 06:58:56.702043 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs\") pod \"network-metrics-daemon-dzrsj\" (UID: \"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\") " pod="openshift-multus/network-metrics-daemon-dzrsj"
Dec 06 06:58:56 crc kubenswrapper[4895]: E1206 06:58:56.702197 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 06 06:58:56 crc kubenswrapper[4895]: E1206 06:58:56.702257 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs podName:2c72bd78-81d3-48dc-96c3-50bc6bac88d6 nodeName:}" failed. No retries permitted until 2025-12-06 07:00:00.702240899 +0000 UTC m=+163.103629769 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs") pod "network-metrics-daemon-dzrsj" (UID: "2c72bd78-81d3-48dc-96c3-50bc6bac88d6") : object "openshift-multus"/"metrics-daemon-secret" not registered
[The no-sandbox/error-syncing pair repeats again for network-check-source, network-metrics-daemon and network-check-target at 06:58:57.050 and for networking-console-plugin at 06:58:58.050; at 06:58:59.050 the "No sandbox" records recur for network-metrics-daemon and network-check-source.]
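[Note: "No retries permitted until ... (durationBeforeRetry 1m4s)" above is the volume manager's exponential backoff on repeated mount failures. 1m4s = 64s is consistent with a delay that starts small and doubles on each consecutive failure (0.5s, 1s, 2s, ..., so 64s would be the eighth failure). A sketch under those assumed parameters; the exact initial delay and cap are not shown in the log:]

    // mountbackoff.go - illustrative doubling backoff; parameters are assumptions.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond           // assumed initial delay
        maxDelay := 2*time.Minute + 2*time.Second // assumed cap
        for failure := 1; failure <= 9; failure++ {
            // failure 8 prints 1m4s, matching the record above
            fmt.Printf("failure %d: durationBeforeRetry %v\n", failure, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }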
[The cycle continues: the remaining 06:58:59 records ("No sandbox" for network-check-target, then "Error syncing pod" for all three diagnostics/metrics pods), networking-console-plugin at 06:59:00.050 and 06:59:02.050, and network-metrics-daemon, network-check-target and network-check-source at 06:59:01.049 and 06:59:03.050.]
Dec 06 06:59:03 crc kubenswrapper[4895]: I1206 06:59:03.051214 4895 scope.go:117] "RemoveContainer" containerID="adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75"
Dec 06 06:59:03 crc kubenswrapper[4895]: E1206 06:59:03.051356 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mhcxk_openshift-ovn-kubernetes(c9690808-de36-4960-8286-7079c78c491b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" podUID="c9690808-de36-4960-8286-7079c78c491b"
[The CNI no-sandbox/error-syncing cycle continues for networking-console-plugin at 06:59:04.049 and for network-metrics-daemon, network-check-target and network-check-source, with their matching error-syncing records, at 06:59:05.050.]
pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:59:05 crc kubenswrapper[4895]: E1206 06:59:05.051454 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:59:06 crc kubenswrapper[4895]: I1206 06:59:06.049903 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:59:06 crc kubenswrapper[4895]: E1206 06:59:06.050154 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:59:07 crc kubenswrapper[4895]: I1206 06:59:07.049733 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:59:07 crc kubenswrapper[4895]: I1206 06:59:07.049781 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:59:07 crc kubenswrapper[4895]: I1206 06:59:07.049735 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:59:07 crc kubenswrapper[4895]: E1206 06:59:07.049872 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:59:07 crc kubenswrapper[4895]: E1206 06:59:07.050064 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:59:07 crc kubenswrapper[4895]: E1206 06:59:07.050394 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:59:08 crc kubenswrapper[4895]: I1206 06:59:08.050209 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:59:08 crc kubenswrapper[4895]: E1206 06:59:08.051331 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:59:09 crc kubenswrapper[4895]: I1206 06:59:09.049759 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:59:09 crc kubenswrapper[4895]: I1206 06:59:09.049782 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:59:09 crc kubenswrapper[4895]: E1206 06:59:09.049892 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:59:09 crc kubenswrapper[4895]: I1206 06:59:09.050079 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:59:09 crc kubenswrapper[4895]: E1206 06:59:09.050142 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:59:09 crc kubenswrapper[4895]: E1206 06:59:09.050296 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:59:10 crc kubenswrapper[4895]: I1206 06:59:10.050875 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:59:10 crc kubenswrapper[4895]: E1206 06:59:10.052574 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:59:10 crc kubenswrapper[4895]: I1206 06:59:10.051106 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:59:10 crc kubenswrapper[4895]: E1206 06:59:10.053294 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:59:11 crc kubenswrapper[4895]: I1206 06:59:11.050215 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:59:11 crc kubenswrapper[4895]: I1206 06:59:11.050288 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:59:11 crc kubenswrapper[4895]: E1206 06:59:11.051046 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:59:11 crc kubenswrapper[4895]: E1206 06:59:11.051836 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:59:12 crc kubenswrapper[4895]: I1206 06:59:12.049991 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:59:12 crc kubenswrapper[4895]: I1206 06:59:12.050023 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:59:12 crc kubenswrapper[4895]: E1206 06:59:12.050239 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:59:12 crc kubenswrapper[4895]: E1206 06:59:12.050276 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:59:13 crc kubenswrapper[4895]: I1206 06:59:13.050508 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:59:13 crc kubenswrapper[4895]: I1206 06:59:13.050594 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:59:13 crc kubenswrapper[4895]: E1206 06:59:13.050691 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:59:13 crc kubenswrapper[4895]: E1206 06:59:13.050804 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:59:13 crc kubenswrapper[4895]: I1206 06:59:13.858803 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k86k4_e1f42fc6-54ce-4f49-adbd-545e02a1f322/kube-multus/1.log" Dec 06 06:59:13 crc kubenswrapper[4895]: I1206 06:59:13.859228 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k86k4_e1f42fc6-54ce-4f49-adbd-545e02a1f322/kube-multus/0.log" Dec 06 06:59:13 crc kubenswrapper[4895]: I1206 06:59:13.859273 4895 generic.go:334] "Generic (PLEG): container finished" podID="e1f42fc6-54ce-4f49-adbd-545e02a1f322" containerID="b39bc82b9c81dd77f354fc01d26f23e263e0bc9145abd83e8b53550b2495c785" exitCode=1 Dec 06 06:59:13 crc kubenswrapper[4895]: I1206 06:59:13.859323 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k86k4" event={"ID":"e1f42fc6-54ce-4f49-adbd-545e02a1f322","Type":"ContainerDied","Data":"b39bc82b9c81dd77f354fc01d26f23e263e0bc9145abd83e8b53550b2495c785"} Dec 06 06:59:13 crc kubenswrapper[4895]: I1206 06:59:13.859391 4895 scope.go:117] "RemoveContainer" containerID="9d8bbaed5b2d136777d17bbff78787d19e6b1e63480718104d216b962cf952a4" Dec 06 06:59:13 crc kubenswrapper[4895]: I1206 06:59:13.859837 4895 scope.go:117] "RemoveContainer" containerID="b39bc82b9c81dd77f354fc01d26f23e263e0bc9145abd83e8b53550b2495c785" Dec 06 06:59:13 crc kubenswrapper[4895]: E1206 06:59:13.859992 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-k86k4_openshift-multus(e1f42fc6-54ce-4f49-adbd-545e02a1f322)\"" pod="openshift-multus/multus-k86k4" podUID="e1f42fc6-54ce-4f49-adbd-545e02a1f322" Dec 06 06:59:14 crc kubenswrapper[4895]: I1206 06:59:14.051036 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:59:14 crc kubenswrapper[4895]: I1206 06:59:14.051178 4895 util.go:30] "No sandbox for pod can be found. 
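[Note: the two files parsed above follow the CRI container-log layout, /var/log/pods/<namespace>_<podName>_<podUID>/<container>/<restartCount>.log, one file per restart; seeing both 0.log and 1.log for kube-multus is itself evidence of the restart recorded by the ContainerDied event. Rebuilding such a path from its parts:]

    // podlogpath.go - reconstructs the log paths seen above from their parts.
    package main

    import (
        "fmt"
        "path/filepath"
    )

    func podLogFile(ns, pod, uid, container string, restart int) string {
        return filepath.Join("/var/log/pods",
            fmt.Sprintf("%s_%s_%s", ns, pod, uid),
            container,
            fmt.Sprintf("%d.log", restart))
    }

    func main() {
        // Prints the exact 1.log path from the records above.
        fmt.Println(podLogFile("openshift-multus", "multus-k86k4",
            "e1f42fc6-54ce-4f49-adbd-545e02a1f322", "kube-multus", 1))
    }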
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:59:14 crc kubenswrapper[4895]: E1206 06:59:14.051287 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:59:14 crc kubenswrapper[4895]: E1206 06:59:14.051549 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:59:14 crc kubenswrapper[4895]: I1206 06:59:14.051637 4895 scope.go:117] "RemoveContainer" containerID="adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75" Dec 06 06:59:14 crc kubenswrapper[4895]: E1206 06:59:14.052249 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mhcxk_openshift-ovn-kubernetes(c9690808-de36-4960-8286-7079c78c491b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" podUID="c9690808-de36-4960-8286-7079c78c491b" Dec 06 06:59:14 crc kubenswrapper[4895]: I1206 06:59:14.866239 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k86k4_e1f42fc6-54ce-4f49-adbd-545e02a1f322/kube-multus/1.log" Dec 06 06:59:15 crc kubenswrapper[4895]: I1206 06:59:15.050616 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:59:15 crc kubenswrapper[4895]: I1206 06:59:15.050749 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:59:15 crc kubenswrapper[4895]: E1206 06:59:15.050831 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:59:15 crc kubenswrapper[4895]: E1206 06:59:15.050982 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:59:16 crc kubenswrapper[4895]: I1206 06:59:16.050331 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:59:16 crc kubenswrapper[4895]: I1206 06:59:16.050445 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:59:16 crc kubenswrapper[4895]: E1206 06:59:16.050516 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:59:16 crc kubenswrapper[4895]: E1206 06:59:16.050673 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:59:17 crc kubenswrapper[4895]: I1206 06:59:17.050005 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:59:17 crc kubenswrapper[4895]: I1206 06:59:17.050039 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:59:17 crc kubenswrapper[4895]: E1206 06:59:17.050172 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:59:17 crc kubenswrapper[4895]: E1206 06:59:17.050312 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:59:18 crc kubenswrapper[4895]: E1206 06:59:18.019463 4895 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 06 06:59:18 crc kubenswrapper[4895]: I1206 06:59:18.049590 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:59:18 crc kubenswrapper[4895]: I1206 06:59:18.049602 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:59:18 crc kubenswrapper[4895]: E1206 06:59:18.051286 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:59:18 crc kubenswrapper[4895]: E1206 06:59:18.051205 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:59:18 crc kubenswrapper[4895]: E1206 06:59:18.136431 4895 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 06 06:59:19 crc kubenswrapper[4895]: I1206 06:59:19.049952 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:59:19 crc kubenswrapper[4895]: I1206 06:59:19.049984 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:59:19 crc kubenswrapper[4895]: E1206 06:59:19.050349 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:59:19 crc kubenswrapper[4895]: E1206 06:59:19.051052 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:59:20 crc kubenswrapper[4895]: I1206 06:59:20.050668 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:59:20 crc kubenswrapper[4895]: I1206 06:59:20.050764 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:59:20 crc kubenswrapper[4895]: E1206 06:59:20.050939 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:59:20 crc kubenswrapper[4895]: E1206 06:59:20.051146 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:59:21 crc kubenswrapper[4895]: I1206 06:59:21.050009 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:59:21 crc kubenswrapper[4895]: I1206 06:59:21.050032 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:59:21 crc kubenswrapper[4895]: E1206 06:59:21.050242 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:59:21 crc kubenswrapper[4895]: E1206 06:59:21.050848 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:59:22 crc kubenswrapper[4895]: I1206 06:59:22.050397 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:59:22 crc kubenswrapper[4895]: E1206 06:59:22.050701 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:59:22 crc kubenswrapper[4895]: I1206 06:59:22.050423 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:59:22 crc kubenswrapper[4895]: E1206 06:59:22.050862 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:59:23 crc kubenswrapper[4895]: I1206 06:59:23.049936 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:59:23 crc kubenswrapper[4895]: I1206 06:59:23.050011 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:59:23 crc kubenswrapper[4895]: E1206 06:59:23.050112 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:59:23 crc kubenswrapper[4895]: E1206 06:59:23.050320 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:59:23 crc kubenswrapper[4895]: E1206 06:59:23.138565 4895 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 06 06:59:24 crc kubenswrapper[4895]: I1206 06:59:24.051848 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:59:24 crc kubenswrapper[4895]: E1206 06:59:24.051969 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:59:24 crc kubenswrapper[4895]: I1206 06:59:24.052122 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:59:24 crc kubenswrapper[4895]: E1206 06:59:24.052168 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:59:25 crc kubenswrapper[4895]: I1206 06:59:25.050278 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:59:25 crc kubenswrapper[4895]: I1206 06:59:25.050385 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:59:25 crc kubenswrapper[4895]: E1206 06:59:25.050617 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:59:25 crc kubenswrapper[4895]: E1206 06:59:25.050714 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:59:25 crc kubenswrapper[4895]: I1206 06:59:25.051116 4895 scope.go:117] "RemoveContainer" containerID="b39bc82b9c81dd77f354fc01d26f23e263e0bc9145abd83e8b53550b2495c785" Dec 06 06:59:25 crc kubenswrapper[4895]: I1206 06:59:25.911458 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k86k4_e1f42fc6-54ce-4f49-adbd-545e02a1f322/kube-multus/1.log" Dec 06 06:59:25 crc kubenswrapper[4895]: I1206 06:59:25.911942 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k86k4" event={"ID":"e1f42fc6-54ce-4f49-adbd-545e02a1f322","Type":"ContainerStarted","Data":"5a1fa872656607f4f2f6459bef2c8d3fbd88222220f1eb200e4487d2fcca1c2c"} Dec 06 06:59:26 crc kubenswrapper[4895]: I1206 06:59:26.049751 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:59:26 crc kubenswrapper[4895]: I1206 06:59:26.049826 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:59:26 crc kubenswrapper[4895]: E1206 06:59:26.050186 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:59:26 crc kubenswrapper[4895]: E1206 06:59:26.050279 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:59:26 crc kubenswrapper[4895]: I1206 06:59:26.050376 4895 scope.go:117] "RemoveContainer" containerID="adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75" Dec 06 06:59:26 crc kubenswrapper[4895]: I1206 06:59:26.759348 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dzrsj"] Dec 06 06:59:26 crc kubenswrapper[4895]: I1206 06:59:26.759566 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:59:26 crc kubenswrapper[4895]: E1206 06:59:26.759761 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:59:26 crc kubenswrapper[4895]: I1206 06:59:26.916983 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovnkube-controller/3.log" Dec 06 06:59:26 crc kubenswrapper[4895]: I1206 06:59:26.919380 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerStarted","Data":"1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0"} Dec 06 06:59:26 crc kubenswrapper[4895]: I1206 06:59:26.919844 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 06:59:26 crc kubenswrapper[4895]: I1206 06:59:26.952458 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" podStartSLOduration=109.9524362 podStartE2EDuration="1m49.9524362s" podCreationTimestamp="2025-12-06 06:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:26.952171122 +0000 UTC m=+129.353560002" watchObservedRunningTime="2025-12-06 06:59:26.9524362 +0000 UTC m=+129.353825070" Dec 06 06:59:27 crc kubenswrapper[4895]: I1206 06:59:27.049918 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:59:27 crc kubenswrapper[4895]: E1206 06:59:27.050083 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:59:28 crc kubenswrapper[4895]: I1206 06:59:28.049868 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:59:28 crc kubenswrapper[4895]: I1206 06:59:28.049866 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:59:28 crc kubenswrapper[4895]: I1206 06:59:28.049980 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:59:28 crc kubenswrapper[4895]: E1206 06:59:28.051100 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:59:28 crc kubenswrapper[4895]: E1206 06:59:28.051243 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:59:28 crc kubenswrapper[4895]: E1206 06:59:28.051626 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:59:28 crc kubenswrapper[4895]: E1206 06:59:28.139216 4895 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 06 06:59:29 crc kubenswrapper[4895]: I1206 06:59:29.049805 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:59:29 crc kubenswrapper[4895]: E1206 06:59:29.050100 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:59:30 crc kubenswrapper[4895]: I1206 06:59:30.050250 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:59:30 crc kubenswrapper[4895]: I1206 06:59:30.050250 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:59:30 crc kubenswrapper[4895]: E1206 06:59:30.050790 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:59:30 crc kubenswrapper[4895]: I1206 06:59:30.050274 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:59:30 crc kubenswrapper[4895]: E1206 06:59:30.050898 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:59:30 crc kubenswrapper[4895]: E1206 06:59:30.051004 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:59:31 crc kubenswrapper[4895]: I1206 06:59:31.050287 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:59:31 crc kubenswrapper[4895]: E1206 06:59:31.050741 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:59:32 crc kubenswrapper[4895]: I1206 06:59:32.050216 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:59:32 crc kubenswrapper[4895]: I1206 06:59:32.050264 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:59:32 crc kubenswrapper[4895]: I1206 06:59:32.050226 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:59:32 crc kubenswrapper[4895]: E1206 06:59:32.050386 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:59:32 crc kubenswrapper[4895]: E1206 06:59:32.050546 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:59:32 crc kubenswrapper[4895]: E1206 06:59:32.050745 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dzrsj" podUID="2c72bd78-81d3-48dc-96c3-50bc6bac88d6" Dec 06 06:59:33 crc kubenswrapper[4895]: I1206 06:59:33.049985 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:59:33 crc kubenswrapper[4895]: E1206 06:59:33.050118 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:59:34 crc kubenswrapper[4895]: I1206 06:59:34.050685 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:59:34 crc kubenswrapper[4895]: I1206 06:59:34.050797 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:59:34 crc kubenswrapper[4895]: I1206 06:59:34.050831 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 06:59:34 crc kubenswrapper[4895]: I1206 06:59:34.054506 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 06 06:59:34 crc kubenswrapper[4895]: I1206 06:59:34.055039 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 06 06:59:34 crc kubenswrapper[4895]: I1206 06:59:34.055185 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 06 06:59:34 crc kubenswrapper[4895]: I1206 06:59:34.055435 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 06 06:59:34 crc kubenswrapper[4895]: I1206 06:59:34.055587 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 06 06:59:34 crc kubenswrapper[4895]: I1206 06:59:34.055711 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 06 06:59:35 crc kubenswrapper[4895]: I1206 06:59:35.049880 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:59:41 crc kubenswrapper[4895]: I1206 06:59:41.905058 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 06 06:59:41 crc kubenswrapper[4895]: I1206 06:59:41.990159 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm"] Dec 06 06:59:41 crc kubenswrapper[4895]: I1206 06:59:41.990832 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm" Dec 06 06:59:41 crc kubenswrapper[4895]: I1206 06:59:41.991711 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2wjg6"] Dec 06 06:59:41 crc kubenswrapper[4895]: I1206 06:59:41.992025 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2wjg6" Dec 06 06:59:41 crc kubenswrapper[4895]: I1206 06:59:41.993401 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rrcdd"] Dec 06 06:59:41 crc kubenswrapper[4895]: I1206 06:59:41.993767 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rrcdd" Dec 06 06:59:41 crc kubenswrapper[4895]: I1206 06:59:41.995094 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-lf26x"] Dec 06 06:59:41 crc kubenswrapper[4895]: I1206 06:59:41.995593 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-lf26x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.001617 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.001740 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.001932 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.002013 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.002140 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.002303 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.002434 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.002807 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.002869 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.004332 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bnn9x"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.004713 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.004756 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.004763 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-2xrjt"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.004761 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.004852 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.005154 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-2xrjt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.012846 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.013449 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qfxpm"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.013615 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.016732 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.019378 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.019430 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.019615 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.019823 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.019378 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.020183 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.020370 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.020602 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.021861 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gjt8r"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.026432 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.030864 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.030916 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.031081 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.032184 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.033771 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.033841 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.034271 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.038295 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.038641 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.042542 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.043280 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.044665 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.045394 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwz9c"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.045941 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwz9c" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.046742 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-fgg4h"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.047755 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgg4h" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.049957 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.050889 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.053416 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.053534 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.053555 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.053594 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.053646 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.053698 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.053708 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.053715 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.053654 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.053659 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.053646 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.053985 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.054681 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.054723 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.057872 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.059492 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9mczr"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.060104 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9mczr" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.060202 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.060231 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.060985 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.061618 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.061770 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.061858 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.061974 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.062071 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cgfcs"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.062623 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cgfcs" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.063065 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-jt7gr"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.079158 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.080037 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jt7gr" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.080137 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.080438 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.080658 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.080815 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.080879 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.081272 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.082631 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.082975 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.083776 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ggcv2"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.083908 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.084350 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.084400 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.084635 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.084712 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.085087 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.086594 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.092306 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-psx67"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.106568 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-52v65"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.106940 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2f6hz"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.107405 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jbgwj"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.107866 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6k8wz"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.108239 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.108633 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4pfdd"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.109186 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4pfdd" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.109790 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-psx67" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.110106 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-52v65" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.110726 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tj94r"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.111394 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-2f6hz" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.111565 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-tj94r" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.111663 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6k8wz" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.107868 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.111804 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.108058 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.108118 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.108165 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.111399 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-g8svq"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.111625 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jbgwj" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.108372 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.108463 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.112595 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.108885 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.108942 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.108990 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.109700 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.110666 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.112544 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bwphg"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.110696 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.115657 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.116203 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.116264 4895 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.116315 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.116401 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.117234 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.119292 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.119944 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.122227 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-plggm"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.122651 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s5xjc"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.123650 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bwphg" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.124459 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hz8kp"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.124756 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.124765 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-plggm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.125207 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-lddxp"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.125490 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hz8kp" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.126558 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.127020 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-lddxp" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.127038 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.127183 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.127384 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.127465 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.127882 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.128143 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.128150 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.129679 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.130465 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.138340 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hpkds"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.139165 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hpkds" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.139783 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c2248ee7-0953-48b0-bcaf-e95d8560c4b6-stats-auth\") pod \"router-default-5444994796-2xrjt\" (UID: \"c2248ee7-0953-48b0-bcaf-e95d8560c4b6\") " pod="openshift-ingress/router-default-5444994796-2xrjt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.139945 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b01fabb-ca2e-4820-9b1c-b821a9bf4084-service-ca-bundle\") pod \"authentication-operator-69f744f599-lf26x\" (UID: \"6b01fabb-ca2e-4820-9b1c-b821a9bf4084\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lf26x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.140059 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbde9406-9da6-43ea-b1e7-b8638e8d0351-config\") pod \"controller-manager-879f6c89f-qfxpm\" (UID: \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.140654 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/80151d03-97c4-44e3-be46-169472298c7e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rrcdd\" (UID: \"80151d03-97c4-44e3-be46-169472298c7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rrcdd" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.140810 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7glg\" (UniqueName: \"kubernetes.io/projected/9dc66ecc-018b-48fe-beac-ddf62239c291-kube-api-access-z7glg\") pod \"openshift-apiserver-operator-796bbdcf4f-2wjg6\" (UID: \"9dc66ecc-018b-48fe-beac-ddf62239c291\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2wjg6" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.140923 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbde9406-9da6-43ea-b1e7-b8638e8d0351-serving-cert\") pod \"controller-manager-879f6c89f-qfxpm\" (UID: \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.141028 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80151d03-97c4-44e3-be46-169472298c7e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rrcdd\" (UID: \"80151d03-97c4-44e3-be46-169472298c7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rrcdd" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.141124 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c2248ee7-0953-48b0-bcaf-e95d8560c4b6-default-certificate\") pod 
\"router-default-5444994796-2xrjt\" (UID: \"c2248ee7-0953-48b0-bcaf-e95d8560c4b6\") " pod="openshift-ingress/router-default-5444994796-2xrjt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.141214 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b7140b73-ea35-4be4-90b5-eaa3aa946785-auth-proxy-config\") pod \"machine-config-operator-74547568cd-smwwm\" (UID: \"b7140b73-ea35-4be4-90b5-eaa3aa946785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.141307 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2248ee7-0953-48b0-bcaf-e95d8560c4b6-service-ca-bundle\") pod \"router-default-5444994796-2xrjt\" (UID: \"c2248ee7-0953-48b0-bcaf-e95d8560c4b6\") " pod="openshift-ingress/router-default-5444994796-2xrjt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.141414 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4dc1e914-43fd-450e-922c-6462f78105f9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bnn9x\" (UID: \"4dc1e914-43fd-450e-922c-6462f78105f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.141574 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80151d03-97c4-44e3-be46-169472298c7e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rrcdd\" (UID: \"80151d03-97c4-44e3-be46-169472298c7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rrcdd" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.141683 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc62v\" (UniqueName: \"kubernetes.io/projected/80151d03-97c4-44e3-be46-169472298c7e-kube-api-access-mc62v\") pod \"cluster-image-registry-operator-dc59b4c8b-rrcdd\" (UID: \"80151d03-97c4-44e3-be46-169472298c7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rrcdd" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.141788 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4dc1e914-43fd-450e-922c-6462f78105f9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bnn9x\" (UID: \"4dc1e914-43fd-450e-922c-6462f78105f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.141899 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc66ecc-018b-48fe-beac-ddf62239c291-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2wjg6\" (UID: \"9dc66ecc-018b-48fe-beac-ddf62239c291\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2wjg6" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.142053 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/b7140b73-ea35-4be4-90b5-eaa3aa946785-images\") pod \"machine-config-operator-74547568cd-smwwm\" (UID: \"b7140b73-ea35-4be4-90b5-eaa3aa946785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.142164 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqvvs\" (UniqueName: \"kubernetes.io/projected/c2248ee7-0953-48b0-bcaf-e95d8560c4b6-kube-api-access-gqvvs\") pod \"router-default-5444994796-2xrjt\" (UID: \"c2248ee7-0953-48b0-bcaf-e95d8560c4b6\") " pod="openshift-ingress/router-default-5444994796-2xrjt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.142255 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b01fabb-ca2e-4820-9b1c-b821a9bf4084-config\") pod \"authentication-operator-69f744f599-lf26x\" (UID: \"6b01fabb-ca2e-4820-9b1c-b821a9bf4084\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lf26x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.142329 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbde9406-9da6-43ea-b1e7-b8638e8d0351-client-ca\") pod \"controller-manager-879f6c89f-qfxpm\" (UID: \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.142410 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b01fabb-ca2e-4820-9b1c-b821a9bf4084-serving-cert\") pod \"authentication-operator-69f744f599-lf26x\" (UID: \"6b01fabb-ca2e-4820-9b1c-b821a9bf4084\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lf26x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.142514 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcjrz\" (UniqueName: \"kubernetes.io/projected/b7140b73-ea35-4be4-90b5-eaa3aa946785-kube-api-access-lcjrz\") pod \"machine-config-operator-74547568cd-smwwm\" (UID: \"b7140b73-ea35-4be4-90b5-eaa3aa946785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.142714 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz4xm\" (UniqueName: \"kubernetes.io/projected/6b01fabb-ca2e-4820-9b1c-b821a9bf4084-kube-api-access-hz4xm\") pod \"authentication-operator-69f744f599-lf26x\" (UID: \"6b01fabb-ca2e-4820-9b1c-b821a9bf4084\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lf26x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.142776 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc66ecc-018b-48fe-beac-ddf62239c291-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2wjg6\" (UID: \"9dc66ecc-018b-48fe-beac-ddf62239c291\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2wjg6" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.142805 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b7140b73-ea35-4be4-90b5-eaa3aa946785-proxy-tls\") pod \"machine-config-operator-74547568cd-smwwm\" (UID: \"b7140b73-ea35-4be4-90b5-eaa3aa946785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.142828 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fbde9406-9da6-43ea-b1e7-b8638e8d0351-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qfxpm\" (UID: \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.142846 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2248ee7-0953-48b0-bcaf-e95d8560c4b6-metrics-certs\") pod \"router-default-5444994796-2xrjt\" (UID: \"c2248ee7-0953-48b0-bcaf-e95d8560c4b6\") " pod="openshift-ingress/router-default-5444994796-2xrjt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.142866 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc4vj\" (UniqueName: \"kubernetes.io/projected/4dc1e914-43fd-450e-922c-6462f78105f9-kube-api-access-wc4vj\") pod \"marketplace-operator-79b997595-bnn9x\" (UID: \"4dc1e914-43fd-450e-922c-6462f78105f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.142907 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdt95\" (UniqueName: \"kubernetes.io/projected/fbde9406-9da6-43ea-b1e7-b8638e8d0351-kube-api-access-jdt95\") pod \"controller-manager-879f6c89f-qfxpm\" (UID: \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.142929 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b01fabb-ca2e-4820-9b1c-b821a9bf4084-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-lf26x\" (UID: \"6b01fabb-ca2e-4820-9b1c-b821a9bf4084\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lf26x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.149923 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.150109 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.151921 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qv9gv"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.160251 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qv9gv" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.161677 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.161831 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.166817 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.173105 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.174890 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-frjjl"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.175958 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-b9khb"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.177445 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.177843 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-frjjl" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.181119 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9khb" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.182771 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.184138 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.188132 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qnrd2"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.193874 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qnrd2" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.196111 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g4fx"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.197100 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g4fx" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.200314 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.205225 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bnn9x"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.209445 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-lf26x"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.211447 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qfxpm"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.212764 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.212905 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9mczr"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.214279 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rrcdd"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.217010 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-jt7gr"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.218120 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tj94r"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.219083 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jbgwj"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.220123 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-psx67"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.221118 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-lddxp"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.222119 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.223071 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-sqdms"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.224044 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qnrd2"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.224066 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-sqdms" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.225094 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s5xjc"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.226328 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cgfcs"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.227312 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-52v65"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.228276 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-g8svq"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.229248 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.230248 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-plggm"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.231453 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gjt8r"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.232450 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qv9gv"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.232631 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.233623 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6k8wz"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.234719 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.235760 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2f6hz"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.236683 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4pfdd"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.237872 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.240325 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ggcv2"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.242926 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-8tlz8"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.243628 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8tlz8" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.243819 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b7140b73-ea35-4be4-90b5-eaa3aa946785-proxy-tls\") pod \"machine-config-operator-74547568cd-smwwm\" (UID: \"b7140b73-ea35-4be4-90b5-eaa3aa946785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.243869 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fbde9406-9da6-43ea-b1e7-b8638e8d0351-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qfxpm\" (UID: \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.243902 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2248ee7-0953-48b0-bcaf-e95d8560c4b6-metrics-certs\") pod \"router-default-5444994796-2xrjt\" (UID: \"c2248ee7-0953-48b0-bcaf-e95d8560c4b6\") " pod="openshift-ingress/router-default-5444994796-2xrjt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.243927 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc4vj\" (UniqueName: \"kubernetes.io/projected/4dc1e914-43fd-450e-922c-6462f78105f9-kube-api-access-wc4vj\") pod \"marketplace-operator-79b997595-bnn9x\" (UID: \"4dc1e914-43fd-450e-922c-6462f78105f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.243977 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b39f78c-cc46-4be7-89b9-a7503dec0a10-metrics-tls\") pod \"ingress-operator-5b745b69d9-psx67\" (UID: \"4b39f78c-cc46-4be7-89b9-a7503dec0a10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-psx67" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244005 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdt95\" (UniqueName: \"kubernetes.io/projected/fbde9406-9da6-43ea-b1e7-b8638e8d0351-kube-api-access-jdt95\") pod \"controller-manager-879f6c89f-qfxpm\" (UID: \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244031 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b01fabb-ca2e-4820-9b1c-b821a9bf4084-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-lf26x\" (UID: \"6b01fabb-ca2e-4820-9b1c-b821a9bf4084\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lf26x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244068 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c2248ee7-0953-48b0-bcaf-e95d8560c4b6-stats-auth\") pod \"router-default-5444994796-2xrjt\" (UID: \"c2248ee7-0953-48b0-bcaf-e95d8560c4b6\") " pod="openshift-ingress/router-default-5444994796-2xrjt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244095 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b01fabb-ca2e-4820-9b1c-b821a9bf4084-service-ca-bundle\") pod \"authentication-operator-69f744f599-lf26x\" (UID: \"6b01fabb-ca2e-4820-9b1c-b821a9bf4084\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lf26x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244121 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbde9406-9da6-43ea-b1e7-b8638e8d0351-config\") pod \"controller-manager-879f6c89f-qfxpm\" (UID: \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244149 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/80151d03-97c4-44e3-be46-169472298c7e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rrcdd\" (UID: \"80151d03-97c4-44e3-be46-169472298c7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rrcdd" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244176 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7glg\" (UniqueName: \"kubernetes.io/projected/9dc66ecc-018b-48fe-beac-ddf62239c291-kube-api-access-z7glg\") pod \"openshift-apiserver-operator-796bbdcf4f-2wjg6\" (UID: \"9dc66ecc-018b-48fe-beac-ddf62239c291\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2wjg6" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244206 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbde9406-9da6-43ea-b1e7-b8638e8d0351-serving-cert\") pod \"controller-manager-879f6c89f-qfxpm\" (UID: \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244233 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80151d03-97c4-44e3-be46-169472298c7e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rrcdd\" (UID: \"80151d03-97c4-44e3-be46-169472298c7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rrcdd" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244258 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c2248ee7-0953-48b0-bcaf-e95d8560c4b6-default-certificate\") pod \"router-default-5444994796-2xrjt\" (UID: \"c2248ee7-0953-48b0-bcaf-e95d8560c4b6\") " pod="openshift-ingress/router-default-5444994796-2xrjt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244287 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9c2886-00ed-443b-8706-8157ab88a96c-config\") pod \"kube-controller-manager-operator-78b949d7b-xwz9c\" (UID: \"cd9c2886-00ed-443b-8706-8157ab88a96c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwz9c" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244313 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b7140b73-ea35-4be4-90b5-eaa3aa946785-auth-proxy-config\") pod \"machine-config-operator-74547568cd-smwwm\" (UID: \"b7140b73-ea35-4be4-90b5-eaa3aa946785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244339 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2248ee7-0953-48b0-bcaf-e95d8560c4b6-service-ca-bundle\") pod \"router-default-5444994796-2xrjt\" (UID: \"c2248ee7-0953-48b0-bcaf-e95d8560c4b6\") " pod="openshift-ingress/router-default-5444994796-2xrjt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244374 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4dc1e914-43fd-450e-922c-6462f78105f9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bnn9x\" (UID: \"4dc1e914-43fd-450e-922c-6462f78105f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244398 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kfwsb"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244407 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80151d03-97c4-44e3-be46-169472298c7e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rrcdd\" (UID: \"80151d03-97c4-44e3-be46-169472298c7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rrcdd" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244435 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc62v\" (UniqueName: \"kubernetes.io/projected/80151d03-97c4-44e3-be46-169472298c7e-kube-api-access-mc62v\") pod \"cluster-image-registry-operator-dc59b4c8b-rrcdd\" (UID: \"80151d03-97c4-44e3-be46-169472298c7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rrcdd" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244501 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4dc1e914-43fd-450e-922c-6462f78105f9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bnn9x\" (UID: \"4dc1e914-43fd-450e-922c-6462f78105f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244529 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc66ecc-018b-48fe-beac-ddf62239c291-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2wjg6\" (UID: \"9dc66ecc-018b-48fe-beac-ddf62239c291\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2wjg6" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244558 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd9c2886-00ed-443b-8706-8157ab88a96c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xwz9c\" (UID: \"cd9c2886-00ed-443b-8706-8157ab88a96c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwz9c" Dec 06 
06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244586 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b7140b73-ea35-4be4-90b5-eaa3aa946785-images\") pod \"machine-config-operator-74547568cd-smwwm\" (UID: \"b7140b73-ea35-4be4-90b5-eaa3aa946785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244631 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqvvs\" (UniqueName: \"kubernetes.io/projected/c2248ee7-0953-48b0-bcaf-e95d8560c4b6-kube-api-access-gqvvs\") pod \"router-default-5444994796-2xrjt\" (UID: \"c2248ee7-0953-48b0-bcaf-e95d8560c4b6\") " pod="openshift-ingress/router-default-5444994796-2xrjt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244670 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd9c2886-00ed-443b-8706-8157ab88a96c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xwz9c\" (UID: \"cd9c2886-00ed-443b-8706-8157ab88a96c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwz9c" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244704 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b39f78c-cc46-4be7-89b9-a7503dec0a10-bound-sa-token\") pod \"ingress-operator-5b745b69d9-psx67\" (UID: \"4b39f78c-cc46-4be7-89b9-a7503dec0a10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-psx67" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244731 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmfs2\" (UniqueName: \"kubernetes.io/projected/4b39f78c-cc46-4be7-89b9-a7503dec0a10-kube-api-access-lmfs2\") pod \"ingress-operator-5b745b69d9-psx67\" (UID: \"4b39f78c-cc46-4be7-89b9-a7503dec0a10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-psx67" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244769 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b01fabb-ca2e-4820-9b1c-b821a9bf4084-config\") pod \"authentication-operator-69f744f599-lf26x\" (UID: \"6b01fabb-ca2e-4820-9b1c-b821a9bf4084\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lf26x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244794 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbde9406-9da6-43ea-b1e7-b8638e8d0351-client-ca\") pod \"controller-manager-879f6c89f-qfxpm\" (UID: \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244820 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b01fabb-ca2e-4820-9b1c-b821a9bf4084-serving-cert\") pod \"authentication-operator-69f744f599-lf26x\" (UID: \"6b01fabb-ca2e-4820-9b1c-b821a9bf4084\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lf26x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244851 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lcjrz\" (UniqueName: \"kubernetes.io/projected/b7140b73-ea35-4be4-90b5-eaa3aa946785-kube-api-access-lcjrz\") pod \"machine-config-operator-74547568cd-smwwm\" (UID: \"b7140b73-ea35-4be4-90b5-eaa3aa946785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244885 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz4xm\" (UniqueName: \"kubernetes.io/projected/6b01fabb-ca2e-4820-9b1c-b821a9bf4084-kube-api-access-hz4xm\") pod \"authentication-operator-69f744f599-lf26x\" (UID: \"6b01fabb-ca2e-4820-9b1c-b821a9bf4084\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lf26x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244913 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b39f78c-cc46-4be7-89b9-a7503dec0a10-trusted-ca\") pod \"ingress-operator-5b745b69d9-psx67\" (UID: \"4b39f78c-cc46-4be7-89b9-a7503dec0a10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-psx67" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.244945 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc66ecc-018b-48fe-beac-ddf62239c291-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2wjg6\" (UID: \"9dc66ecc-018b-48fe-beac-ddf62239c291\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2wjg6" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.245205 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b01fabb-ca2e-4820-9b1c-b821a9bf4084-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-lf26x\" (UID: \"6b01fabb-ca2e-4820-9b1c-b821a9bf4084\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lf26x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.245394 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fbde9406-9da6-43ea-b1e7-b8638e8d0351-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qfxpm\" (UID: \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.245426 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b01fabb-ca2e-4820-9b1c-b821a9bf4084-service-ca-bundle\") pod \"authentication-operator-69f744f599-lf26x\" (UID: \"6b01fabb-ca2e-4820-9b1c-b821a9bf4084\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lf26x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.245536 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-kfwsb" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.245581 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2wjg6"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.245807 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbde9406-9da6-43ea-b1e7-b8638e8d0351-config\") pod \"controller-manager-879f6c89f-qfxpm\" (UID: \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.245908 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc66ecc-018b-48fe-beac-ddf62239c291-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2wjg6\" (UID: \"9dc66ecc-018b-48fe-beac-ddf62239c291\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2wjg6" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.246576 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b7140b73-ea35-4be4-90b5-eaa3aa946785-images\") pod \"machine-config-operator-74547568cd-smwwm\" (UID: \"b7140b73-ea35-4be4-90b5-eaa3aa946785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.247066 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80151d03-97c4-44e3-be46-169472298c7e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rrcdd\" (UID: \"80151d03-97c4-44e3-be46-169472298c7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rrcdd" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.247311 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b7140b73-ea35-4be4-90b5-eaa3aa946785-auth-proxy-config\") pod \"machine-config-operator-74547568cd-smwwm\" (UID: \"b7140b73-ea35-4be4-90b5-eaa3aa946785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.247377 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4dc1e914-43fd-450e-922c-6462f78105f9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bnn9x\" (UID: \"4dc1e914-43fd-450e-922c-6462f78105f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.247981 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b01fabb-ca2e-4820-9b1c-b821a9bf4084-config\") pod \"authentication-operator-69f744f599-lf26x\" (UID: \"6b01fabb-ca2e-4820-9b1c-b821a9bf4084\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lf26x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.248789 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2248ee7-0953-48b0-bcaf-e95d8560c4b6-service-ca-bundle\") pod \"router-default-5444994796-2xrjt\" (UID: \"c2248ee7-0953-48b0-bcaf-e95d8560c4b6\") " 
pod="openshift-ingress/router-default-5444994796-2xrjt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.248984 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbde9406-9da6-43ea-b1e7-b8638e8d0351-client-ca\") pod \"controller-manager-879f6c89f-qfxpm\" (UID: \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.249040 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hz8kp"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.250154 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c2248ee7-0953-48b0-bcaf-e95d8560c4b6-stats-auth\") pod \"router-default-5444994796-2xrjt\" (UID: \"c2248ee7-0953-48b0-bcaf-e95d8560c4b6\") " pod="openshift-ingress/router-default-5444994796-2xrjt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.250188 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g4fx"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.250369 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc66ecc-018b-48fe-beac-ddf62239c291-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2wjg6\" (UID: \"9dc66ecc-018b-48fe-beac-ddf62239c291\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2wjg6" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.250635 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b7140b73-ea35-4be4-90b5-eaa3aa946785-proxy-tls\") pod \"machine-config-operator-74547568cd-smwwm\" (UID: \"b7140b73-ea35-4be4-90b5-eaa3aa946785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.252035 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/80151d03-97c4-44e3-be46-169472298c7e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rrcdd\" (UID: \"80151d03-97c4-44e3-be46-169472298c7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rrcdd" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.252441 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4dc1e914-43fd-450e-922c-6462f78105f9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bnn9x\" (UID: \"4dc1e914-43fd-450e-922c-6462f78105f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.252439 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-frjjl"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.253333 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.253339 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fbde9406-9da6-43ea-b1e7-b8638e8d0351-serving-cert\") pod \"controller-manager-879f6c89f-qfxpm\" (UID: \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.254133 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hpkds"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.254503 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c2248ee7-0953-48b0-bcaf-e95d8560c4b6-default-certificate\") pod \"router-default-5444994796-2xrjt\" (UID: \"c2248ee7-0953-48b0-bcaf-e95d8560c4b6\") " pod="openshift-ingress/router-default-5444994796-2xrjt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.254857 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b01fabb-ca2e-4820-9b1c-b821a9bf4084-serving-cert\") pod \"authentication-operator-69f744f599-lf26x\" (UID: \"6b01fabb-ca2e-4820-9b1c-b821a9bf4084\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lf26x" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.255266 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.256570 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-b9khb"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.257501 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bwphg"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.258429 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.263547 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sqdms"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.264196 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2248ee7-0953-48b0-bcaf-e95d8560c4b6-metrics-certs\") pod \"router-default-5444994796-2xrjt\" (UID: \"c2248ee7-0953-48b0-bcaf-e95d8560c4b6\") " pod="openshift-ingress/router-default-5444994796-2xrjt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.267309 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwz9c"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.267428 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kfwsb"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.267445 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8tlz8"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.270897 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-6hhlt"] Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.271674 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6hhlt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.273139 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.293425 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.312842 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.338693 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.346405 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b39f78c-cc46-4be7-89b9-a7503dec0a10-metrics-tls\") pod \"ingress-operator-5b745b69d9-psx67\" (UID: \"4b39f78c-cc46-4be7-89b9-a7503dec0a10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-psx67" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.346541 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9c2886-00ed-443b-8706-8157ab88a96c-config\") pod \"kube-controller-manager-operator-78b949d7b-xwz9c\" (UID: \"cd9c2886-00ed-443b-8706-8157ab88a96c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwz9c" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.346617 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd9c2886-00ed-443b-8706-8157ab88a96c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xwz9c\" (UID: \"cd9c2886-00ed-443b-8706-8157ab88a96c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwz9c" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.346650 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd9c2886-00ed-443b-8706-8157ab88a96c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xwz9c\" (UID: \"cd9c2886-00ed-443b-8706-8157ab88a96c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwz9c" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.346674 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b39f78c-cc46-4be7-89b9-a7503dec0a10-bound-sa-token\") pod \"ingress-operator-5b745b69d9-psx67\" (UID: \"4b39f78c-cc46-4be7-89b9-a7503dec0a10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-psx67" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.346692 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmfs2\" (UniqueName: \"kubernetes.io/projected/4b39f78c-cc46-4be7-89b9-a7503dec0a10-kube-api-access-lmfs2\") pod \"ingress-operator-5b745b69d9-psx67\" (UID: \"4b39f78c-cc46-4be7-89b9-a7503dec0a10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-psx67" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.346728 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/4b39f78c-cc46-4be7-89b9-a7503dec0a10-trusted-ca\") pod \"ingress-operator-5b745b69d9-psx67\" (UID: \"4b39f78c-cc46-4be7-89b9-a7503dec0a10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-psx67" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.348026 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9c2886-00ed-443b-8706-8157ab88a96c-config\") pod \"kube-controller-manager-operator-78b949d7b-xwz9c\" (UID: \"cd9c2886-00ed-443b-8706-8157ab88a96c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwz9c" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.349265 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b39f78c-cc46-4be7-89b9-a7503dec0a10-trusted-ca\") pod \"ingress-operator-5b745b69d9-psx67\" (UID: \"4b39f78c-cc46-4be7-89b9-a7503dec0a10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-psx67" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.350875 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b39f78c-cc46-4be7-89b9-a7503dec0a10-metrics-tls\") pod \"ingress-operator-5b745b69d9-psx67\" (UID: \"4b39f78c-cc46-4be7-89b9-a7503dec0a10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-psx67" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.352654 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.362621 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd9c2886-00ed-443b-8706-8157ab88a96c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xwz9c\" (UID: \"cd9c2886-00ed-443b-8706-8157ab88a96c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwz9c" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.373615 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.394635 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.413260 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.433375 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.453059 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.473025 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.492394 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.513244 4895 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.533451 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.554209 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.573937 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.594462 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.614769 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.654413 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.673754 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.693672 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.713720 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.733700 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.754117 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.774303 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.793751 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.813308 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.834613 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.878872 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.879087 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.898537 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 
06:59:42.914604 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.932908 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.953059 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.974623 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 06 06:59:42 crc kubenswrapper[4895]: I1206 06:59:42.993355 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.012996 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.033116 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.054806 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.073913 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.093695 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.114216 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.131959 4895 request.go:700] Waited for 1.006263061s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0 Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.134302 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.153281 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.173150 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.193226 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.214642 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.232745 4895 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.253611 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.274009 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.293589 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.314186 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.334223 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.354255 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.373701 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.393951 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.412826 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.433409 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.453379 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.494678 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.514121 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.534357 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.554109 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.573720 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.594955 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.614209 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 
06:59:43.634545 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.653887 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.673710 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.694373 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.714256 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.737670 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.758441 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.773886 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.794317 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.813962 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.833333 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.853777 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.874137 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.893617 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.914541 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.934545 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.953702 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 06 06:59:43 crc kubenswrapper[4895]: I1206 06:59:43.992182 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc4vj\" (UniqueName: \"kubernetes.io/projected/4dc1e914-43fd-450e-922c-6462f78105f9-kube-api-access-wc4vj\") pod \"marketplace-operator-79b997595-bnn9x\" (UID: \"4dc1e914-43fd-450e-922c-6462f78105f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 
06:59:44.013568 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdt95\" (UniqueName: \"kubernetes.io/projected/fbde9406-9da6-43ea-b1e7-b8638e8d0351-kube-api-access-jdt95\") pod \"controller-manager-879f6c89f-qfxpm\" (UID: \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.014714 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.034348 4895 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.054088 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.087499 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqvvs\" (UniqueName: \"kubernetes.io/projected/c2248ee7-0953-48b0-bcaf-e95d8560c4b6-kube-api-access-gqvvs\") pod \"router-default-5444994796-2xrjt\" (UID: \"c2248ee7-0953-48b0-bcaf-e95d8560c4b6\") " pod="openshift-ingress/router-default-5444994796-2xrjt" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.093928 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.128981 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc62v\" (UniqueName: \"kubernetes.io/projected/80151d03-97c4-44e3-be46-169472298c7e-kube-api-access-mc62v\") pod \"cluster-image-registry-operator-dc59b4c8b-rrcdd\" (UID: \"80151d03-97c4-44e3-be46-169472298c7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rrcdd" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.146431 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7glg\" (UniqueName: \"kubernetes.io/projected/9dc66ecc-018b-48fe-beac-ddf62239c291-kube-api-access-z7glg\") pod \"openshift-apiserver-operator-796bbdcf4f-2wjg6\" (UID: \"9dc66ecc-018b-48fe-beac-ddf62239c291\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2wjg6" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.151225 4895 request.go:700] Waited for 1.90206511s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.170283 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz4xm\" (UniqueName: \"kubernetes.io/projected/6b01fabb-ca2e-4820-9b1c-b821a9bf4084-kube-api-access-hz4xm\") pod \"authentication-operator-69f744f599-lf26x\" (UID: \"6b01fabb-ca2e-4820-9b1c-b821a9bf4084\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lf26x" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.187715 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcjrz\" (UniqueName: \"kubernetes.io/projected/b7140b73-ea35-4be4-90b5-eaa3aa946785-kube-api-access-lcjrz\") pod \"machine-config-operator-74547568cd-smwwm\" (UID: \"b7140b73-ea35-4be4-90b5-eaa3aa946785\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.189465 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-lf26x" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.206327 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.210810 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80151d03-97c4-44e3-be46-169472298c7e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rrcdd\" (UID: \"80151d03-97c4-44e3-be46-169472298c7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rrcdd" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.213246 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.227374 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2xrjt" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.233984 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 06 06:59:44 crc kubenswrapper[4895]: W1206 06:59:44.241777 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2248ee7_0953_48b0_bcaf_e95d8560c4b6.slice/crio-9c6c05b8926f32fa0953e25fc4f531a8899184fb4be9eee8c1002b3191e1e002 WatchSource:0}: Error finding container 9c6c05b8926f32fa0953e25fc4f531a8899184fb4be9eee8c1002b3191e1e002: Status 404 returned error can't find the container with id 9c6c05b8926f32fa0953e25fc4f531a8899184fb4be9eee8c1002b3191e1e002 Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.245815 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.253505 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.334968 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd9c2886-00ed-443b-8706-8157ab88a96c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xwz9c\" (UID: \"cd9c2886-00ed-443b-8706-8157ab88a96c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwz9c" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.347014 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmfs2\" (UniqueName: \"kubernetes.io/projected/4b39f78c-cc46-4be7-89b9-a7503dec0a10-kube-api-access-lmfs2\") pod \"ingress-operator-5b745b69d9-psx67\" (UID: \"4b39f78c-cc46-4be7-89b9-a7503dec0a10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-psx67" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.349667 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b39f78c-cc46-4be7-89b9-a7503dec0a10-bound-sa-token\") pod \"ingress-operator-5b745b69d9-psx67\" (UID: \"4b39f78c-cc46-4be7-89b9-a7503dec0a10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-psx67" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.387029 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-psx67" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.389567 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/210b89a8-666c-4aab-a64d-e37987eed3f0-etcd-client\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.389616 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3864596e-56f8-46a1-95e6-3558c161cd02-console-oauth-config\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.389639 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/987ff7e9-f709-456b-bc00-f029e6a11f4c-signing-key\") pod \"service-ca-9c57cc56f-tj94r\" (UID: \"987ff7e9-f709-456b-bc00-f029e6a11f4c\") " pod="openshift-service-ca/service-ca-9c57cc56f-tj94r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.389662 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfaeafa9-95c8-4064-b167-e3a3e56790c8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cgfcs\" (UID: \"dfaeafa9-95c8-4064-b167-e3a3e56790c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cgfcs" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.389682 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-547k6\" (UniqueName: \"kubernetes.io/projected/a3ce3943-7a02-46e7-bf84-30d30080b111-kube-api-access-547k6\") pod \"control-plane-machine-set-operator-78cbb6b69f-9mczr\" (UID: \"a3ce3943-7a02-46e7-bf84-30d30080b111\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9mczr" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.389705 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p49nr\" (UniqueName: \"kubernetes.io/projected/389b4065-1992-4508-ae57-601dbfec42b6-kube-api-access-p49nr\") pod \"multus-admission-controller-857f4d67dd-bwphg\" (UID: \"389b4065-1992-4508-ae57-601dbfec42b6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bwphg" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.389727 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-console-config\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.389759 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c669ea24-666a-4152-9a1e-43f614bf8e21-registry-certificates\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.389778 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mgs8\" (UniqueName: \"kubernetes.io/projected/805f46d4-3d04-4bf2-96e9-8f19b24e65e8-kube-api-access-4mgs8\") pod \"catalog-operator-68c6474976-hz8kp\" (UID: \"805f46d4-3d04-4bf2-96e9-8f19b24e65e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hz8kp" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.389805 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.389841 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c669ea24-666a-4152-9a1e-43f614bf8e21-ca-trust-extracted\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390054 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390078 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/805f46d4-3d04-4bf2-96e9-8f19b24e65e8-profile-collector-cert\") pod \"catalog-operator-68c6474976-hz8kp\" (UID: \"805f46d4-3d04-4bf2-96e9-8f19b24e65e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hz8kp" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390100 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/210b89a8-666c-4aab-a64d-e37987eed3f0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390120 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ee4a193d-89fb-4c16-9aed-3c5868b417c3-etcd-serving-ca\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390142 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/dfdea3f4-194e-49e8-88f2-4170d677cee9-etcd-ca\") pod \"etcd-operator-b45778765-g8svq\" (UID: \"dfdea3f4-194e-49e8-88f2-4170d677cee9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390164 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90cdf070-0b48-4632-ad95-c1dc562a00aa-config\") pod \"machine-approver-56656f9798-fgg4h\" (UID: \"90cdf070-0b48-4632-ad95-c1dc562a00aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgg4h" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390197 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtg54\" (UniqueName: \"kubernetes.io/projected/6ebeda1e-dfdd-42d2-8359-32902e14273b-kube-api-access-jtg54\") pod \"migrator-59844c95c7-jt7gr\" (UID: \"6ebeda1e-dfdd-42d2-8359-32902e14273b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jt7gr" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390221 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44tw4\" (UniqueName: \"kubernetes.io/projected/8a8b089f-b74b-4d12-ad6f-0e611b078120-kube-api-access-44tw4\") pod \"openshift-controller-manager-operator-756b6f6bc6-plggm\" (UID: \"8a8b089f-b74b-4d12-ad6f-0e611b078120\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-plggm" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390268 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/dfdea3f4-194e-49e8-88f2-4170d677cee9-etcd-service-ca\") pod \"etcd-operator-b45778765-g8svq\" (UID: \"dfdea3f4-194e-49e8-88f2-4170d677cee9\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390287 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210b89a8-666c-4aab-a64d-e37987eed3f0-serving-cert\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390306 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7c167c9f-5de9-44d9-8367-56975116a496-tmpfs\") pod \"packageserver-d55dfcdfc-994mt\" (UID: \"7c167c9f-5de9-44d9-8367-56975116a496\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390329 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5de77d1-fbbb-4108-adc9-bf13d86dca4a-config\") pod \"service-ca-operator-777779d784-6k8wz\" (UID: \"e5de77d1-fbbb-4108-adc9-bf13d86dca4a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6k8wz" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390350 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/987ff7e9-f709-456b-bc00-f029e6a11f4c-signing-cabundle\") pod \"service-ca-9c57cc56f-tj94r\" (UID: \"987ff7e9-f709-456b-bc00-f029e6a11f4c\") " pod="openshift-service-ca/service-ca-9c57cc56f-tj94r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390370 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv55h\" (UniqueName: \"kubernetes.io/projected/ee4a193d-89fb-4c16-9aed-3c5868b417c3-kube-api-access-kv55h\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390395 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/210b89a8-666c-4aab-a64d-e37987eed3f0-encryption-config\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390422 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-service-ca\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390465 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/210b89a8-666c-4aab-a64d-e37987eed3f0-audit-policies\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390561 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-trusted-ca-bundle\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390590 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16ca85ad-de88-4503-b27c-cfeaa96ae436-serving-cert\") pod \"route-controller-manager-6576b87f9c-rhs4b\" (UID: \"16ca85ad-de88-4503-b27c-cfeaa96ae436\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390616 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvqv5\" (UniqueName: \"kubernetes.io/projected/16ca85ad-de88-4503-b27c-cfeaa96ae436-kube-api-access-tvqv5\") pod \"route-controller-manager-6576b87f9c-rhs4b\" (UID: \"16ca85ad-de88-4503-b27c-cfeaa96ae436\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390643 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv2ds\" (UniqueName: \"kubernetes.io/projected/aff87238-1a39-4885-a738-2e3a8c674c8c-kube-api-access-gv2ds\") pod \"kube-storage-version-migrator-operator-b67b599dd-jbgwj\" (UID: \"aff87238-1a39-4885-a738-2e3a8c674c8c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jbgwj" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390672 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c669ea24-666a-4152-9a1e-43f614bf8e21-registry-tls\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390696 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wrbd\" (UniqueName: \"kubernetes.io/projected/f740e915-a41a-4bfb-a4fa-1b33903fecd6-kube-api-access-5wrbd\") pod \"downloads-7954f5f757-lddxp\" (UID: \"f740e915-a41a-4bfb-a4fa-1b33903fecd6\") " pod="openshift-console/downloads-7954f5f757-lddxp" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390786 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6966575-116d-4708-a4dc-4aa061e9b665-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qv9gv\" (UID: \"b6966575-116d-4708-a4dc-4aa061e9b665\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qv9gv" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390816 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3864596e-56f8-46a1-95e6-3558c161cd02-console-serving-cert\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390841 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fcc5343c-2ae2-4edf-b975-96cc492ca434-images\") pod \"machine-api-operator-5694c8668f-2f6hz\" (UID: \"fcc5343c-2ae2-4edf-b975-96cc492ca434\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2f6hz" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390870 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8zkh\" (UniqueName: \"kubernetes.io/projected/9f9c91df-cb75-43ed-9677-9f2409edba07-kube-api-access-j8zkh\") pod \"collect-profiles-29416725-khndp\" (UID: \"9f9c91df-cb75-43ed-9677-9f2409edba07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390896 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90cdf070-0b48-4632-ad95-c1dc562a00aa-auth-proxy-config\") pod \"machine-approver-56656f9798-fgg4h\" (UID: \"90cdf070-0b48-4632-ad95-c1dc562a00aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgg4h" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390934 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fcc5343c-2ae2-4edf-b975-96cc492ca434-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2f6hz\" (UID: \"fcc5343c-2ae2-4edf-b975-96cc492ca434\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2f6hz" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.390961 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv87g\" (UniqueName: \"kubernetes.io/projected/7c167c9f-5de9-44d9-8367-56975116a496-kube-api-access-jv87g\") pod \"packageserver-d55dfcdfc-994mt\" (UID: \"7c167c9f-5de9-44d9-8367-56975116a496\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391010 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlfx9\" (UniqueName: \"kubernetes.io/projected/dfdea3f4-194e-49e8-88f2-4170d677cee9-kube-api-access-zlfx9\") pod \"etcd-operator-b45778765-g8svq\" (UID: \"dfdea3f4-194e-49e8-88f2-4170d677cee9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391034 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391058 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: 
I1206 06:59:44.391081 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aff87238-1a39-4885-a738-2e3a8c674c8c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jbgwj\" (UID: \"aff87238-1a39-4885-a738-2e3a8c674c8c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jbgwj" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391108 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmnxq\" (UniqueName: \"kubernetes.io/projected/90cdf070-0b48-4632-ad95-c1dc562a00aa-kube-api-access-dmnxq\") pod \"machine-approver-56656f9798-fgg4h\" (UID: \"90cdf070-0b48-4632-ad95-c1dc562a00aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgg4h" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391132 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c669ea24-666a-4152-9a1e-43f614bf8e21-trusted-ca\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391347 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6966575-116d-4708-a4dc-4aa061e9b665-proxy-tls\") pod \"machine-config-controller-84d6567774-qv9gv\" (UID: \"b6966575-116d-4708-a4dc-4aa061e9b665\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qv9gv" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391373 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-audit-policies\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391396 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/90cdf070-0b48-4632-ad95-c1dc562a00aa-machine-approver-tls\") pod \"machine-approver-56656f9798-fgg4h\" (UID: \"90cdf070-0b48-4632-ad95-c1dc562a00aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgg4h" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391421 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391449 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5de77d1-fbbb-4108-adc9-bf13d86dca4a-serving-cert\") pod \"service-ca-operator-777779d784-6k8wz\" (UID: \"e5de77d1-fbbb-4108-adc9-bf13d86dca4a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6k8wz" 
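The reconciler_common.go:245 and operation_generator.go:637 entries throughout this stretch come from the kubelet volume manager's reconciliation loop: it compares the volumes each scheduled pod needs (the desired state of world) against what is currently attached and mounted (the actual state of world), then drives VerifyControllerAttachedVolume and MountVolume operations for the difference, which is why each "MountVolume started" line is later paired with a "MountVolume.SetUp succeeded" line for the same UniqueName. The Go sketch below is a minimal illustration of that desired-vs-actual pattern only, not the kubelet's real implementation; every name in it (VolumeSpec, reconcile, the sample volume strings) is invented for the example.

package main

import "fmt"

// VolumeSpec names one volume a pod expects, keyed the way the log's
// UniqueName field is. Both fields are illustrative placeholders.
type VolumeSpec struct {
	UniqueName string // e.g. "kubernetes.io/secret/<pod-uid>-serving-cert"
	Pod        string // e.g. "namespace/pod-name"
}

// reconcile compares the desired world (volumes pods need) against the
// actual world (volumes already set up) and starts a mount for each gap,
// mirroring the "MountVolume started" / "MountVolume.SetUp succeeded"
// pairs in the log. The real kubelet hands each operation to an
// asynchronous operation executor; this sketch mounts inline.
func reconcile(desired []VolumeSpec, actual map[string]bool) {
	for _, v := range desired {
		if actual[v.UniqueName] {
			continue // already mounted; nothing to reconcile
		}
		fmt.Printf("MountVolume started for volume %q pod %q\n", v.UniqueName, v.Pod)
		actual[v.UniqueName] = true // pretend SetUp ran and succeeded
		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.UniqueName, v.Pod)
	}
}

func main() {
	actual := map[string]bool{} // nothing mounted yet
	desired := []VolumeSpec{
		{UniqueName: "kubernetes.io/configmap/example-trusted-ca", Pod: "openshift-ingress-operator/ingress-operator-5b745b69d9-psx67"},
		{UniqueName: "kubernetes.io/secret/example-serving-cert", Pod: "openshift-etcd-operator/etcd-operator-b45778765-g8svq"},
	}
	reconcile(desired, actual)
}

Running the loop a second time over the same inputs prints nothing, which is the steady state the reconciler converges to once every desired volume appears in the actual state.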
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391486 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2l4n\" (UniqueName: \"kubernetes.io/projected/b6966575-116d-4708-a4dc-4aa061e9b665-kube-api-access-g2l4n\") pod \"machine-config-controller-84d6567774-qv9gv\" (UID: \"b6966575-116d-4708-a4dc-4aa061e9b665\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qv9gv" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391512 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ee4a193d-89fb-4c16-9aed-3c5868b417c3-etcd-client\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391550 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd5ws\" (UniqueName: \"kubernetes.io/projected/688a9d65-4700-47fb-a150-723f9c21b054-kube-api-access-rd5ws\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391574 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c167c9f-5de9-44d9-8367-56975116a496-apiservice-cert\") pod \"packageserver-d55dfcdfc-994mt\" (UID: \"7c167c9f-5de9-44d9-8367-56975116a496\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391600 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16ca85ad-de88-4503-b27c-cfeaa96ae436-client-ca\") pod \"route-controller-manager-6576b87f9c-rhs4b\" (UID: \"16ca85ad-de88-4503-b27c-cfeaa96ae436\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391626 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfaeafa9-95c8-4064-b167-e3a3e56790c8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cgfcs\" (UID: \"dfaeafa9-95c8-4064-b167-e3a3e56790c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cgfcs" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391651 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfaeafa9-95c8-4064-b167-e3a3e56790c8-config\") pod \"kube-apiserver-operator-766d6c64bb-cgfcs\" (UID: \"dfaeafa9-95c8-4064-b167-e3a3e56790c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cgfcs" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391688 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3ce3943-7a02-46e7-bf84-30d30080b111-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9mczr\" (UID: \"a3ce3943-7a02-46e7-bf84-30d30080b111\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9mczr" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391717 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aff87238-1a39-4885-a738-2e3a8c674c8c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jbgwj\" (UID: \"aff87238-1a39-4885-a738-2e3a8c674c8c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jbgwj" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391752 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/210b89a8-666c-4aab-a64d-e37987eed3f0-audit-dir\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391774 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-oauth-serving-cert\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391794 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4ntp\" (UniqueName: \"kubernetes.io/projected/3864596e-56f8-46a1-95e6-3558c161cd02-kube-api-access-q4ntp\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391815 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c167c9f-5de9-44d9-8367-56975116a496-webhook-cert\") pod \"packageserver-d55dfcdfc-994mt\" (UID: \"7c167c9f-5de9-44d9-8367-56975116a496\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391835 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5fd79927-1699-47f7-932e-30961a406e41-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hpkds\" (UID: \"5fd79927-1699-47f7-932e-30961a406e41\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hpkds" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391858 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391882 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ee4a193d-89fb-4c16-9aed-3c5868b417c3-image-import-ca\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " 
pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391920 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391942 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc5343c-2ae2-4edf-b975-96cc492ca434-config\") pod \"machine-api-operator-5694c8668f-2f6hz\" (UID: \"fcc5343c-2ae2-4edf-b975-96cc492ca434\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2f6hz" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391963 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee4a193d-89fb-4c16-9aed-3c5868b417c3-audit-dir\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.391995 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pp7x\" (UniqueName: \"kubernetes.io/projected/fcc5343c-2ae2-4edf-b975-96cc492ca434-kube-api-access-5pp7x\") pod \"machine-api-operator-5694c8668f-2f6hz\" (UID: \"fcc5343c-2ae2-4edf-b975-96cc492ca434\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2f6hz" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.392054 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/688a9d65-4700-47fb-a150-723f9c21b054-audit-dir\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.392083 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ee4a193d-89fb-4c16-9aed-3c5868b417c3-audit\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.392107 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/89296d2e-cc06-49b2-98d3-61fbf8ac3a77-metrics-tls\") pod \"dns-operator-744455d44c-4pfdd\" (UID: \"89296d2e-cc06-49b2-98d3-61fbf8ac3a77\") " pod="openshift-dns-operator/dns-operator-744455d44c-4pfdd" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.392131 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ppgh\" (UniqueName: \"kubernetes.io/projected/e5de77d1-fbbb-4108-adc9-bf13d86dca4a-kube-api-access-5ppgh\") pod \"service-ca-operator-777779d784-6k8wz\" (UID: \"e5de77d1-fbbb-4108-adc9-bf13d86dca4a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6k8wz" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.392159 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfdea3f4-194e-49e8-88f2-4170d677cee9-config\") pod \"etcd-operator-b45778765-g8svq\" (UID: \"dfdea3f4-194e-49e8-88f2-4170d677cee9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.392182 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phxbc\" (UniqueName: \"kubernetes.io/projected/210b89a8-666c-4aab-a64d-e37987eed3f0-kube-api-access-phxbc\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.392205 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f9c91df-cb75-43ed-9677-9f2409edba07-secret-volume\") pod \"collect-profiles-29416725-khndp\" (UID: \"9f9c91df-cb75-43ed-9677-9f2409edba07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.392228 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfdea3f4-194e-49e8-88f2-4170d677cee9-serving-cert\") pod \"etcd-operator-b45778765-g8svq\" (UID: \"dfdea3f4-194e-49e8-88f2-4170d677cee9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.392251 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8b089f-b74b-4d12-ad6f-0e611b078120-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-plggm\" (UID: \"8a8b089f-b74b-4d12-ad6f-0e611b078120\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-plggm" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.392276 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.392418 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee4a193d-89fb-4c16-9aed-3c5868b417c3-config\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.392489 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16ca85ad-de88-4503-b27c-cfeaa96ae436-config\") pod \"route-controller-manager-6576b87f9c-rhs4b\" (UID: \"16ca85ad-de88-4503-b27c-cfeaa96ae436\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.392537 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f9c91df-cb75-43ed-9677-9f2409edba07-config-volume\") pod \"collect-profiles-29416725-khndp\" (UID: \"9f9c91df-cb75-43ed-9677-9f2409edba07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.392567 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ee4a193d-89fb-4c16-9aed-3c5868b417c3-encryption-config\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.392594 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv2kv\" (UniqueName: \"kubernetes.io/projected/5fd79927-1699-47f7-932e-30961a406e41-kube-api-access-kv2kv\") pod \"cluster-samples-operator-665b6dd947-hpkds\" (UID: \"5fd79927-1699-47f7-932e-30961a406e41\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hpkds" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.392618 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee4a193d-89fb-4c16-9aed-3c5868b417c3-serving-cert\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.393346 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dfdea3f4-194e-49e8-88f2-4170d677cee9-etcd-client\") pod \"etcd-operator-b45778765-g8svq\" (UID: \"dfdea3f4-194e-49e8-88f2-4170d677cee9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.393380 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: E1206 06:59:44.393453 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:44.893435789 +0000 UTC m=+147.294824849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.393652 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6ds5\" (UniqueName: \"kubernetes.io/projected/c669ea24-666a-4152-9a1e-43f614bf8e21-kube-api-access-w6ds5\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.393735 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/805f46d4-3d04-4bf2-96e9-8f19b24e65e8-srv-cert\") pod \"catalog-operator-68c6474976-hz8kp\" (UID: \"805f46d4-3d04-4bf2-96e9-8f19b24e65e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hz8kp"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.393981 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.394304 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.394346 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czddh\" (UniqueName: \"kubernetes.io/projected/89296d2e-cc06-49b2-98d3-61fbf8ac3a77-kube-api-access-czddh\") pod \"dns-operator-744455d44c-4pfdd\" (UID: \"89296d2e-cc06-49b2-98d3-61fbf8ac3a77\") " pod="openshift-dns-operator/dns-operator-744455d44c-4pfdd"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.394430 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/389b4065-1992-4508-ae57-601dbfec42b6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bwphg\" (UID: \"389b4065-1992-4508-ae57-601dbfec42b6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bwphg"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.394547 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee4a193d-89fb-4c16-9aed-3c5868b417c3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.394616 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/210b89a8-666c-4aab-a64d-e37987eed3f0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.394674 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.394706 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a8b089f-b74b-4d12-ad6f-0e611b078120-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-plggm\" (UID: \"8a8b089f-b74b-4d12-ad6f-0e611b078120\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-plggm"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.394733 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc9zd\" (UniqueName: \"kubernetes.io/projected/987ff7e9-f709-456b-bc00-f029e6a11f4c-kube-api-access-bc9zd\") pod \"service-ca-9c57cc56f-tj94r\" (UID: \"987ff7e9-f709-456b-bc00-f029e6a11f4c\") " pod="openshift-service-ca/service-ca-9c57cc56f-tj94r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.395017 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ee4a193d-89fb-4c16-9aed-3c5868b417c3-node-pullsecrets\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.395041 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c669ea24-666a-4152-9a1e-43f614bf8e21-installation-pull-secrets\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.395060 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c669ea24-666a-4152-9a1e-43f614bf8e21-bound-sa-token\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.423637 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.440983 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2wjg6"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.458990 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rrcdd"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.466120 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bnn9x"]
Dec 06 06:59:44 crc kubenswrapper[4895]: W1206 06:59:44.484387 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dc1e914_43fd_450e_922c_6462f78105f9.slice/crio-06e98f62be8d971557eaf82d11c83af6366c980308b310cbd2734998858bc8f6 WatchSource:0}: Error finding container 06e98f62be8d971557eaf82d11c83af6366c980308b310cbd2734998858bc8f6: Status 404 returned error can't find the container with id 06e98f62be8d971557eaf82d11c83af6366c980308b310cbd2734998858bc8f6
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.496370 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:44 crc kubenswrapper[4895]: E1206 06:59:44.496670 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:44.996631169 +0000 UTC m=+147.398020029 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.496875 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfaeafa9-95c8-4064-b167-e3a3e56790c8-config\") pod \"kube-apiserver-operator-766d6c64bb-cgfcs\" (UID: \"dfaeafa9-95c8-4064-b167-e3a3e56790c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cgfcs"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.496919 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3ce3943-7a02-46e7-bf84-30d30080b111-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9mczr\" (UID: \"a3ce3943-7a02-46e7-bf84-30d30080b111\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9mczr"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.496945 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aff87238-1a39-4885-a738-2e3a8c674c8c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jbgwj\" (UID: \"aff87238-1a39-4885-a738-2e3a8c674c8c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jbgwj"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.496975 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtnnq\" (UniqueName: \"kubernetes.io/projected/5c27a6f4-148d-4dd5-b746-4cb8dfb1f66e-kube-api-access-wtnnq\") pod \"package-server-manager-789f6589d5-5g4fx\" (UID: \"5c27a6f4-148d-4dd5-b746-4cb8dfb1f66e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g4fx"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497004 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/210b89a8-666c-4aab-a64d-e37987eed3f0-audit-dir\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497029 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-oauth-serving-cert\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497048 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4ntp\" (UniqueName: \"kubernetes.io/projected/3864596e-56f8-46a1-95e6-3558c161cd02-kube-api-access-q4ntp\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497068 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c167c9f-5de9-44d9-8367-56975116a496-webhook-cert\") pod \"packageserver-d55dfcdfc-994mt\" (UID: \"7c167c9f-5de9-44d9-8367-56975116a496\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497088 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5fd79927-1699-47f7-932e-30961a406e41-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hpkds\" (UID: \"5fd79927-1699-47f7-932e-30961a406e41\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hpkds"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497108 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497128 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ee4a193d-89fb-4c16-9aed-3c5868b417c3-image-import-ca\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497156 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497174 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9ec3a749-a236-448b-97f1-4e92cd1ade7f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-b9khb\" (UID: \"9ec3a749-a236-448b-97f1-4e92cd1ade7f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9khb"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497194 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc5343c-2ae2-4edf-b975-96cc492ca434-config\") pod \"machine-api-operator-5694c8668f-2f6hz\" (UID: \"fcc5343c-2ae2-4edf-b975-96cc492ca434\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2f6hz"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497216 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee4a193d-89fb-4c16-9aed-3c5868b417c3-audit-dir\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497234 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a2dec9b-f30f-4d6a-a6ae-7a82bd9452dc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-frjjl\" (UID: \"7a2dec9b-f30f-4d6a-a6ae-7a82bd9452dc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-frjjl"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497257 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pp7x\" (UniqueName: \"kubernetes.io/projected/fcc5343c-2ae2-4edf-b975-96cc492ca434-kube-api-access-5pp7x\") pod \"machine-api-operator-5694c8668f-2f6hz\" (UID: \"fcc5343c-2ae2-4edf-b975-96cc492ca434\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2f6hz"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497273 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/688a9d65-4700-47fb-a150-723f9c21b054-audit-dir\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497293 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ee4a193d-89fb-4c16-9aed-3c5868b417c3-audit\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497320 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/89296d2e-cc06-49b2-98d3-61fbf8ac3a77-metrics-tls\") pod \"dns-operator-744455d44c-4pfdd\" (UID: \"89296d2e-cc06-49b2-98d3-61fbf8ac3a77\") " pod="openshift-dns-operator/dns-operator-744455d44c-4pfdd"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497340 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/27c5c363-c056-4033-8f87-20e0221d9e04-cert\") pod \"ingress-canary-8tlz8\" (UID: \"27c5c363-c056-4033-8f87-20e0221d9e04\") " pod="openshift-ingress-canary/ingress-canary-8tlz8"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497362 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ppgh\" (UniqueName: \"kubernetes.io/projected/e5de77d1-fbbb-4108-adc9-bf13d86dca4a-kube-api-access-5ppgh\") pod \"service-ca-operator-777779d784-6k8wz\" (UID: \"e5de77d1-fbbb-4108-adc9-bf13d86dca4a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6k8wz"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497384 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfdea3f4-194e-49e8-88f2-4170d677cee9-config\") pod \"etcd-operator-b45778765-g8svq\" (UID: \"dfdea3f4-194e-49e8-88f2-4170d677cee9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497407 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phxbc\" (UniqueName: \"kubernetes.io/projected/210b89a8-666c-4aab-a64d-e37987eed3f0-kube-api-access-phxbc\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497427 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f9c91df-cb75-43ed-9677-9f2409edba07-secret-volume\") pod \"collect-profiles-29416725-khndp\" (UID: \"9f9c91df-cb75-43ed-9677-9f2409edba07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497451 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6463f080-fa44-4fd8-b559-11f5056ffd0a-config-volume\") pod \"dns-default-sqdms\" (UID: \"6463f080-fa44-4fd8-b559-11f5056ffd0a\") " pod="openshift-dns/dns-default-sqdms"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497491 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfdea3f4-194e-49e8-88f2-4170d677cee9-serving-cert\") pod \"etcd-operator-b45778765-g8svq\" (UID: \"dfdea3f4-194e-49e8-88f2-4170d677cee9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497517 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8b089f-b74b-4d12-ad6f-0e611b078120-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-plggm\" (UID: \"8a8b089f-b74b-4d12-ad6f-0e611b078120\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-plggm"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497541 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497566 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee4a193d-89fb-4c16-9aed-3c5868b417c3-audit-dir\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497561 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r55dh\" (UniqueName: \"kubernetes.io/projected/6463f080-fa44-4fd8-b559-11f5056ffd0a-kube-api-access-r55dh\") pod \"dns-default-sqdms\" (UID: \"6463f080-fa44-4fd8-b559-11f5056ffd0a\") " pod="openshift-dns/dns-default-sqdms"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497642 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee4a193d-89fb-4c16-9aed-3c5868b417c3-config\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497669 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16ca85ad-de88-4503-b27c-cfeaa96ae436-config\") pod \"route-controller-manager-6576b87f9c-rhs4b\" (UID: \"16ca85ad-de88-4503-b27c-cfeaa96ae436\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497691 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ec3a749-a236-448b-97f1-4e92cd1ade7f-serving-cert\") pod \"openshift-config-operator-7777fb866f-b9khb\" (UID: \"9ec3a749-a236-448b-97f1-4e92cd1ade7f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9khb"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497725 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f9c91df-cb75-43ed-9677-9f2409edba07-config-volume\") pod \"collect-profiles-29416725-khndp\" (UID: \"9f9c91df-cb75-43ed-9677-9f2409edba07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497742 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ee4a193d-89fb-4c16-9aed-3c5868b417c3-encryption-config\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497764 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/486a83f8-907b-441c-aae7-428a6e22d689-srv-cert\") pod \"olm-operator-6b444d44fb-mvldw\" (UID: \"486a83f8-907b-441c-aae7-428a6e22d689\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497783 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv2kv\" (UniqueName: \"kubernetes.io/projected/5fd79927-1699-47f7-932e-30961a406e41-kube-api-access-kv2kv\") pod \"cluster-samples-operator-665b6dd947-hpkds\" (UID: \"5fd79927-1699-47f7-932e-30961a406e41\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hpkds"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497801 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee4a193d-89fb-4c16-9aed-3c5868b417c3-serving-cert\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497819 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a2dec9b-f30f-4d6a-a6ae-7a82bd9452dc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-frjjl\" (UID: \"7a2dec9b-f30f-4d6a-a6ae-7a82bd9452dc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-frjjl"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497842 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dfdea3f4-194e-49e8-88f2-4170d677cee9-etcd-client\") pod \"etcd-operator-b45778765-g8svq\" (UID: \"dfdea3f4-194e-49e8-88f2-4170d677cee9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497861 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497909 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5f1ba1fe-57e7-45af-abc1-79dbb564f3b0-node-bootstrap-token\") pod \"machine-config-server-6hhlt\" (UID: \"5f1ba1fe-57e7-45af-abc1-79dbb564f3b0\") " pod="openshift-machine-config-operator/machine-config-server-6hhlt"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497930 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6ds5\" (UniqueName: \"kubernetes.io/projected/c669ea24-666a-4152-9a1e-43f614bf8e21-kube-api-access-w6ds5\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497949 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/805f46d4-3d04-4bf2-96e9-8f19b24e65e8-srv-cert\") pod \"catalog-operator-68c6474976-hz8kp\" (UID: \"805f46d4-3d04-4bf2-96e9-8f19b24e65e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hz8kp"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497967 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.497987 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498004 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czddh\" (UniqueName: \"kubernetes.io/projected/89296d2e-cc06-49b2-98d3-61fbf8ac3a77-kube-api-access-czddh\") pod \"dns-operator-744455d44c-4pfdd\" (UID: \"89296d2e-cc06-49b2-98d3-61fbf8ac3a77\") " pod="openshift-dns-operator/dns-operator-744455d44c-4pfdd"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498023 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/389b4065-1992-4508-ae57-601dbfec42b6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bwphg\" (UID: \"389b4065-1992-4508-ae57-601dbfec42b6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bwphg"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498048 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee4a193d-89fb-4c16-9aed-3c5868b417c3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498065 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v22np\" (UniqueName: \"kubernetes.io/projected/27c5c363-c056-4033-8f87-20e0221d9e04-kube-api-access-v22np\") pod \"ingress-canary-8tlz8\" (UID: \"27c5c363-c056-4033-8f87-20e0221d9e04\") " pod="openshift-ingress-canary/ingress-canary-8tlz8"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498082 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/210b89a8-666c-4aab-a64d-e37987eed3f0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498099 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498109 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aff87238-1a39-4885-a738-2e3a8c674c8c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jbgwj\" (UID: \"aff87238-1a39-4885-a738-2e3a8c674c8c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jbgwj"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498123 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/95951ed2-c176-4e12-8dc3-58c9a19e9406-registration-dir\") pod \"csi-hostpathplugin-kfwsb\" (UID: \"95951ed2-c176-4e12-8dc3-58c9a19e9406\") " pod="hostpath-provisioner/csi-hostpathplugin-kfwsb"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498179 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a8b089f-b74b-4d12-ad6f-0e611b078120-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-plggm\" (UID: \"8a8b089f-b74b-4d12-ad6f-0e611b078120\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-plggm"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498214 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc9zd\" (UniqueName: \"kubernetes.io/projected/987ff7e9-f709-456b-bc00-f029e6a11f4c-kube-api-access-bc9zd\") pod \"service-ca-9c57cc56f-tj94r\" (UID: \"987ff7e9-f709-456b-bc00-f029e6a11f4c\") " pod="openshift-service-ca/service-ca-9c57cc56f-tj94r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498233 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ee4a193d-89fb-4c16-9aed-3c5868b417c3-node-pullsecrets\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498259 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a2dec9b-f30f-4d6a-a6ae-7a82bd9452dc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-frjjl\" (UID: \"7a2dec9b-f30f-4d6a-a6ae-7a82bd9452dc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-frjjl"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498281 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c669ea24-666a-4152-9a1e-43f614bf8e21-installation-pull-secrets\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498299 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c669ea24-666a-4152-9a1e-43f614bf8e21-bound-sa-token\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498318 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/95951ed2-c176-4e12-8dc3-58c9a19e9406-csi-data-dir\") pod \"csi-hostpathplugin-kfwsb\" (UID: \"95951ed2-c176-4e12-8dc3-58c9a19e9406\") " pod="hostpath-provisioner/csi-hostpathplugin-kfwsb"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498339 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/95951ed2-c176-4e12-8dc3-58c9a19e9406-socket-dir\") pod \"csi-hostpathplugin-kfwsb\" (UID: \"95951ed2-c176-4e12-8dc3-58c9a19e9406\") " pod="hostpath-provisioner/csi-hostpathplugin-kfwsb"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498360 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/210b89a8-666c-4aab-a64d-e37987eed3f0-etcd-client\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498362 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/688a9d65-4700-47fb-a150-723f9c21b054-audit-dir\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498377 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3864596e-56f8-46a1-95e6-3558c161cd02-console-oauth-config\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498401 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/987ff7e9-f709-456b-bc00-f029e6a11f4c-signing-key\") pod \"service-ca-9c57cc56f-tj94r\" (UID: \"987ff7e9-f709-456b-bc00-f029e6a11f4c\") " pod="openshift-service-ca/service-ca-9c57cc56f-tj94r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498420 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfaeafa9-95c8-4064-b167-e3a3e56790c8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cgfcs\" (UID: \"dfaeafa9-95c8-4064-b167-e3a3e56790c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cgfcs"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498438 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-547k6\" (UniqueName: \"kubernetes.io/projected/a3ce3943-7a02-46e7-bf84-30d30080b111-kube-api-access-547k6\") pod \"control-plane-machine-set-operator-78cbb6b69f-9mczr\" (UID: \"a3ce3943-7a02-46e7-bf84-30d30080b111\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9mczr"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498457 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p49nr\" (UniqueName: \"kubernetes.io/projected/389b4065-1992-4508-ae57-601dbfec42b6-kube-api-access-p49nr\") pod \"multus-admission-controller-857f4d67dd-bwphg\" (UID: \"389b4065-1992-4508-ae57-601dbfec42b6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bwphg"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498511 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjdrf\" (UniqueName: \"kubernetes.io/projected/95951ed2-c176-4e12-8dc3-58c9a19e9406-kube-api-access-zjdrf\") pod \"csi-hostpathplugin-kfwsb\" (UID: \"95951ed2-c176-4e12-8dc3-58c9a19e9406\") " pod="hostpath-provisioner/csi-hostpathplugin-kfwsb"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498530 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c64e03bd-fcd5-4b0d-84b9-841278f6560d-config\") pod \"console-operator-58897d9998-qnrd2\" (UID: \"c64e03bd-fcd5-4b0d-84b9-841278f6560d\") " pod="openshift-console-operator/console-operator-58897d9998-qnrd2"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498573 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-console-config\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498591 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c669ea24-666a-4152-9a1e-43f614bf8e21-registry-certificates\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498610 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mgs8\" (UniqueName: \"kubernetes.io/projected/805f46d4-3d04-4bf2-96e9-8f19b24e65e8-kube-api-access-4mgs8\") pod \"catalog-operator-68c6474976-hz8kp\" (UID: \"805f46d4-3d04-4bf2-96e9-8f19b24e65e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hz8kp"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498629 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498645 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c669ea24-666a-4152-9a1e-43f614bf8e21-ca-trust-extracted\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498662 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/95951ed2-c176-4e12-8dc3-58c9a19e9406-mountpoint-dir\") pod \"csi-hostpathplugin-kfwsb\" (UID: \"95951ed2-c176-4e12-8dc3-58c9a19e9406\") " pod="hostpath-provisioner/csi-hostpathplugin-kfwsb"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498684 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498701 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/805f46d4-3d04-4bf2-96e9-8f19b24e65e8-profile-collector-cert\") pod \"catalog-operator-68c6474976-hz8kp\" (UID: \"805f46d4-3d04-4bf2-96e9-8f19b24e65e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hz8kp"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498718 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/210b89a8-666c-4aab-a64d-e37987eed3f0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498745 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ee4a193d-89fb-4c16-9aed-3c5868b417c3-etcd-serving-ca\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498774 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/dfdea3f4-194e-49e8-88f2-4170d677cee9-etcd-ca\") pod \"etcd-operator-b45778765-g8svq\" (UID: \"dfdea3f4-194e-49e8-88f2-4170d677cee9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498793 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90cdf070-0b48-4632-ad95-c1dc562a00aa-config\") pod \"machine-approver-56656f9798-fgg4h\" (UID: \"90cdf070-0b48-4632-ad95-c1dc562a00aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgg4h"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498809 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/95951ed2-c176-4e12-8dc3-58c9a19e9406-plugins-dir\") pod \"csi-hostpathplugin-kfwsb\" (UID: \"95951ed2-c176-4e12-8dc3-58c9a19e9406\") " pod="hostpath-provisioner/csi-hostpathplugin-kfwsb"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498830 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtg54\" (UniqueName: \"kubernetes.io/projected/6ebeda1e-dfdd-42d2-8359-32902e14273b-kube-api-access-jtg54\") pod \"migrator-59844c95c7-jt7gr\" (UID: \"6ebeda1e-dfdd-42d2-8359-32902e14273b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jt7gr"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498850 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44tw4\" (UniqueName: \"kubernetes.io/projected/8a8b089f-b74b-4d12-ad6f-0e611b078120-kube-api-access-44tw4\") pod \"openshift-controller-manager-operator-756b6f6bc6-plggm\" (UID: \"8a8b089f-b74b-4d12-ad6f-0e611b078120\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-plggm"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498868 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c64e03bd-fcd5-4b0d-84b9-841278f6560d-serving-cert\") pod \"console-operator-58897d9998-qnrd2\" (UID: \"c64e03bd-fcd5-4b0d-84b9-841278f6560d\") " pod="openshift-console-operator/console-operator-58897d9998-qnrd2"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498890 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/dfdea3f4-194e-49e8-88f2-4170d677cee9-etcd-service-ca\") pod \"etcd-operator-b45778765-g8svq\" (UID: \"dfdea3f4-194e-49e8-88f2-4170d677cee9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498904 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210b89a8-666c-4aab-a64d-e37987eed3f0-serving-cert\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498921 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7c167c9f-5de9-44d9-8367-56975116a496-tmpfs\") pod \"packageserver-d55dfcdfc-994mt\" (UID: \"7c167c9f-5de9-44d9-8367-56975116a496\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498938 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5de77d1-fbbb-4108-adc9-bf13d86dca4a-config\") pod \"service-ca-operator-777779d784-6k8wz\" (UID: \"e5de77d1-fbbb-4108-adc9-bf13d86dca4a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6k8wz"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.498952 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/987ff7e9-f709-456b-bc00-f029e6a11f4c-signing-cabundle\") pod \"service-ca-9c57cc56f-tj94r\" (UID: \"987ff7e9-f709-456b-bc00-f029e6a11f4c\") " pod="openshift-service-ca/service-ca-9c57cc56f-tj94r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.499029 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ee4a193d-89fb-4c16-9aed-3c5868b417c3-audit\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.499961 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfaeafa9-95c8-4064-b167-e3a3e56790c8-config\") pod \"kube-apiserver-operator-766d6c64bb-cgfcs\" (UID: \"dfaeafa9-95c8-4064-b167-e3a3e56790c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cgfcs"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.500112 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/210b89a8-666c-4aab-a64d-e37987eed3f0-audit-dir\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.500752 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-oauth-serving-cert\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501132 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv55h\" (UniqueName: \"kubernetes.io/projected/ee4a193d-89fb-4c16-9aed-3c5868b417c3-kube-api-access-kv55h\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501159 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/210b89a8-666c-4aab-a64d-e37987eed3f0-encryption-config\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501176 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-service-ca\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501197 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5f1ba1fe-57e7-45af-abc1-79dbb564f3b0-certs\") pod \"machine-config-server-6hhlt\" (UID: \"5f1ba1fe-57e7-45af-abc1-79dbb564f3b0\") " pod="openshift-machine-config-operator/machine-config-server-6hhlt"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501232 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/210b89a8-666c-4aab-a64d-e37987eed3f0-audit-policies\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501252 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c64e03bd-fcd5-4b0d-84b9-841278f6560d-trusted-ca\") pod \"console-operator-58897d9998-qnrd2\" (UID: \"c64e03bd-fcd5-4b0d-84b9-841278f6560d\") " pod="openshift-console-operator/console-operator-58897d9998-qnrd2"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501269 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-trusted-ca-bundle\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501286 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16ca85ad-de88-4503-b27c-cfeaa96ae436-serving-cert\") pod \"route-controller-manager-6576b87f9c-rhs4b\" (UID: \"16ca85ad-de88-4503-b27c-cfeaa96ae436\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501302 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvqv5\" (UniqueName: \"kubernetes.io/projected/16ca85ad-de88-4503-b27c-cfeaa96ae436-kube-api-access-tvqv5\") pod \"route-controller-manager-6576b87f9c-rhs4b\" (UID: \"16ca85ad-de88-4503-b27c-cfeaa96ae436\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501317 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv2ds\" (UniqueName: \"kubernetes.io/projected/aff87238-1a39-4885-a738-2e3a8c674c8c-kube-api-access-gv2ds\") pod \"kube-storage-version-migrator-operator-b67b599dd-jbgwj\" (UID: \"aff87238-1a39-4885-a738-2e3a8c674c8c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jbgwj"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501334 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6463f080-fa44-4fd8-b559-11f5056ffd0a-metrics-tls\") pod \"dns-default-sqdms\" (UID: \"6463f080-fa44-4fd8-b559-11f5056ffd0a\") " pod="openshift-dns/dns-default-sqdms"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501353 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c669ea24-666a-4152-9a1e-43f614bf8e21-registry-tls\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501370 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wrbd\" (UniqueName: \"kubernetes.io/projected/f740e915-a41a-4bfb-a4fa-1b33903fecd6-kube-api-access-5wrbd\") pod \"downloads-7954f5f757-lddxp\" (UID: \"f740e915-a41a-4bfb-a4fa-1b33903fecd6\") " pod="openshift-console/downloads-7954f5f757-lddxp"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501387 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2642\" (UniqueName: \"kubernetes.io/projected/9ec3a749-a236-448b-97f1-4e92cd1ade7f-kube-api-access-m2642\") pod \"openshift-config-operator-7777fb866f-b9khb\" (UID: \"9ec3a749-a236-448b-97f1-4e92cd1ade7f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9khb"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501416 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90cdf070-0b48-4632-ad95-c1dc562a00aa-auth-proxy-config\") pod \"machine-approver-56656f9798-fgg4h\" (UID: \"90cdf070-0b48-4632-ad95-c1dc562a00aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgg4h"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501434 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c27a6f4-148d-4dd5-b746-4cb8dfb1f66e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5g4fx\" (UID: \"5c27a6f4-148d-4dd5-b746-4cb8dfb1f66e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g4fx"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501464 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6966575-116d-4708-a4dc-4aa061e9b665-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qv9gv\" (UID: \"b6966575-116d-4708-a4dc-4aa061e9b665\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qv9gv"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501500 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3864596e-56f8-46a1-95e6-3558c161cd02-console-serving-cert\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501526 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fcc5343c-2ae2-4edf-b975-96cc492ca434-images\") pod \"machine-api-operator-5694c8668f-2f6hz\" (UID: \"fcc5343c-2ae2-4edf-b975-96cc492ca434\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2f6hz"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501551 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8zkh\" (UniqueName: \"kubernetes.io/projected/9f9c91df-cb75-43ed-9677-9f2409edba07-kube-api-access-j8zkh\") pod \"collect-profiles-29416725-khndp\" (UID: \"9f9c91df-cb75-43ed-9677-9f2409edba07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp"
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501576 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbprj\" (UniqueName: \"kubernetes.io/projected/c64e03bd-fcd5-4b0d-84b9-841278f6560d-kube-api-access-tbprj\") pod \"console-operator-58897d9998-qnrd2\" (UID: \"c64e03bd-fcd5-4b0d-84b9-841278f6560d\") " pod="openshift-console-operator/console-operator-58897d9998-qnrd2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.501926 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7c167c9f-5de9-44d9-8367-56975116a496-tmpfs\") pod \"packageserver-d55dfcdfc-994mt\" (UID: \"7c167c9f-5de9-44d9-8367-56975116a496\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.502499 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5de77d1-fbbb-4108-adc9-bf13d86dca4a-config\") pod \"service-ca-operator-777779d784-6k8wz\" (UID: \"e5de77d1-fbbb-4108-adc9-bf13d86dca4a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6k8wz" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.503091 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/987ff7e9-f709-456b-bc00-f029e6a11f4c-signing-cabundle\") pod \"service-ca-9c57cc56f-tj94r\" (UID: \"987ff7e9-f709-456b-bc00-f029e6a11f4c\") " pod="openshift-service-ca/service-ca-9c57cc56f-tj94r" Dec 06 06:59:44 crc kubenswrapper[4895]: E1206 06:59:44.504273 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:45.004251803 +0000 UTC m=+147.405640673 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.505039 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6966575-116d-4708-a4dc-4aa061e9b665-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qv9gv\" (UID: \"b6966575-116d-4708-a4dc-4aa061e9b665\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qv9gv" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.505557 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.505866 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fcc5343c-2ae2-4edf-b975-96cc492ca434-images\") pod \"machine-api-operator-5694c8668f-2f6hz\" (UID: \"fcc5343c-2ae2-4edf-b975-96cc492ca434\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2f6hz" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.505981 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-console-config\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.506033 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dfdea3f4-194e-49e8-88f2-4170d677cee9-etcd-client\") pod \"etcd-operator-b45778765-g8svq\" (UID: \"dfdea3f4-194e-49e8-88f2-4170d677cee9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.506210 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee4a193d-89fb-4c16-9aed-3c5868b417c3-config\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.506536 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c669ea24-666a-4152-9a1e-43f614bf8e21-ca-trust-extracted\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.506934 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fcc5343c-2ae2-4edf-b975-96cc492ca434-machine-api-operator-tls\") pod 
\"machine-api-operator-5694c8668f-2f6hz\" (UID: \"fcc5343c-2ae2-4edf-b975-96cc492ca434\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2f6hz" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.506972 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv87g\" (UniqueName: \"kubernetes.io/projected/7c167c9f-5de9-44d9-8367-56975116a496-kube-api-access-jv87g\") pod \"packageserver-d55dfcdfc-994mt\" (UID: \"7c167c9f-5de9-44d9-8367-56975116a496\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.507006 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlfx9\" (UniqueName: \"kubernetes.io/projected/dfdea3f4-194e-49e8-88f2-4170d677cee9-kube-api-access-zlfx9\") pod \"etcd-operator-b45778765-g8svq\" (UID: \"dfdea3f4-194e-49e8-88f2-4170d677cee9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.507539 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/210b89a8-666c-4aab-a64d-e37987eed3f0-audit-policies\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.507025 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.507985 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16ca85ad-de88-4503-b27c-cfeaa96ae436-config\") pod \"route-controller-manager-6576b87f9c-rhs4b\" (UID: \"16ca85ad-de88-4503-b27c-cfeaa96ae436\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.507996 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.508020 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aff87238-1a39-4885-a738-2e3a8c674c8c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jbgwj\" (UID: \"aff87238-1a39-4885-a738-2e3a8c674c8c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jbgwj" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.508044 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/486a83f8-907b-441c-aae7-428a6e22d689-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mvldw\" (UID: 
\"486a83f8-907b-441c-aae7-428a6e22d689\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.508294 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c669ea24-666a-4152-9a1e-43f614bf8e21-registry-certificates\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.508445 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmnxq\" (UniqueName: \"kubernetes.io/projected/90cdf070-0b48-4632-ad95-c1dc562a00aa-kube-api-access-dmnxq\") pod \"machine-approver-56656f9798-fgg4h\" (UID: \"90cdf070-0b48-4632-ad95-c1dc562a00aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgg4h" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.508484 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c669ea24-666a-4152-9a1e-43f614bf8e21-trusted-ca\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.508511 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6966575-116d-4708-a4dc-4aa061e9b665-proxy-tls\") pod \"machine-config-controller-84d6567774-qv9gv\" (UID: \"b6966575-116d-4708-a4dc-4aa061e9b665\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qv9gv" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.508534 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-audit-policies\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.508551 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/90cdf070-0b48-4632-ad95-c1dc562a00aa-machine-approver-tls\") pod \"machine-approver-56656f9798-fgg4h\" (UID: \"90cdf070-0b48-4632-ad95-c1dc562a00aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgg4h" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.508569 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.508589 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5de77d1-fbbb-4108-adc9-bf13d86dca4a-serving-cert\") pod \"service-ca-operator-777779d784-6k8wz\" (UID: \"e5de77d1-fbbb-4108-adc9-bf13d86dca4a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6k8wz" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 
06:59:44.508605 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2l4n\" (UniqueName: \"kubernetes.io/projected/b6966575-116d-4708-a4dc-4aa061e9b665-kube-api-access-g2l4n\") pod \"machine-config-controller-84d6567774-qv9gv\" (UID: \"b6966575-116d-4708-a4dc-4aa061e9b665\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qv9gv" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.508634 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcl5q\" (UniqueName: \"kubernetes.io/projected/486a83f8-907b-441c-aae7-428a6e22d689-kube-api-access-mcl5q\") pod \"olm-operator-6b444d44fb-mvldw\" (UID: \"486a83f8-907b-441c-aae7-428a6e22d689\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.508670 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ee4a193d-89fb-4c16-9aed-3c5868b417c3-etcd-client\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.508722 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd5ws\" (UniqueName: \"kubernetes.io/projected/688a9d65-4700-47fb-a150-723f9c21b054-kube-api-access-rd5ws\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.508740 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c167c9f-5de9-44d9-8367-56975116a496-apiservice-cert\") pod \"packageserver-d55dfcdfc-994mt\" (UID: \"7c167c9f-5de9-44d9-8367-56975116a496\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.508742 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a8b089f-b74b-4d12-ad6f-0e611b078120-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-plggm\" (UID: \"8a8b089f-b74b-4d12-ad6f-0e611b078120\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-plggm" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.508758 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16ca85ad-de88-4503-b27c-cfeaa96ae436-client-ca\") pod \"route-controller-manager-6576b87f9c-rhs4b\" (UID: \"16ca85ad-de88-4503-b27c-cfeaa96ae436\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.508827 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfaeafa9-95c8-4064-b167-e3a3e56790c8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cgfcs\" (UID: \"dfaeafa9-95c8-4064-b167-e3a3e56790c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cgfcs" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.508867 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-2nff9\" (UniqueName: \"kubernetes.io/projected/5f1ba1fe-57e7-45af-abc1-79dbb564f3b0-kube-api-access-2nff9\") pod \"machine-config-server-6hhlt\" (UID: \"5f1ba1fe-57e7-45af-abc1-79dbb564f3b0\") " pod="openshift-machine-config-operator/machine-config-server-6hhlt" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.517938 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16ca85ad-de88-4503-b27c-cfeaa96ae436-client-ca\") pod \"route-controller-manager-6576b87f9c-rhs4b\" (UID: \"16ca85ad-de88-4503-b27c-cfeaa96ae436\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.518383 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfdea3f4-194e-49e8-88f2-4170d677cee9-config\") pod \"etcd-operator-b45778765-g8svq\" (UID: \"dfdea3f4-194e-49e8-88f2-4170d677cee9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.518570 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/dfdea3f4-194e-49e8-88f2-4170d677cee9-etcd-service-ca\") pod \"etcd-operator-b45778765-g8svq\" (UID: \"dfdea3f4-194e-49e8-88f2-4170d677cee9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.518902 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/210b89a8-666c-4aab-a64d-e37987eed3f0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.519206 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-trusted-ca-bundle\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.519600 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c167c9f-5de9-44d9-8367-56975116a496-webhook-cert\") pod \"packageserver-d55dfcdfc-994mt\" (UID: \"7c167c9f-5de9-44d9-8367-56975116a496\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.519655 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.519957 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 
06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.519998 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ee4a193d-89fb-4c16-9aed-3c5868b417c3-node-pullsecrets\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.520268 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3ce3943-7a02-46e7-bf84-30d30080b111-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9mczr\" (UID: \"a3ce3943-7a02-46e7-bf84-30d30080b111\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9mczr" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.520425 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210b89a8-666c-4aab-a64d-e37987eed3f0-serving-cert\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.520664 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ee4a193d-89fb-4c16-9aed-3c5868b417c3-encryption-config\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.520889 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee4a193d-89fb-4c16-9aed-3c5868b417c3-serving-cert\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.521156 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/89296d2e-cc06-49b2-98d3-61fbf8ac3a77-metrics-tls\") pod \"dns-operator-744455d44c-4pfdd\" (UID: \"89296d2e-cc06-49b2-98d3-61fbf8ac3a77\") " pod="openshift-dns-operator/dns-operator-744455d44c-4pfdd" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.521765 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/805f46d4-3d04-4bf2-96e9-8f19b24e65e8-srv-cert\") pod \"catalog-operator-68c6474976-hz8kp\" (UID: \"805f46d4-3d04-4bf2-96e9-8f19b24e65e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hz8kp" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.521966 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.522231 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/210b89a8-666c-4aab-a64d-e37987eed3f0-trusted-ca-bundle\") pod 
\"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.522574 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfaeafa9-95c8-4064-b167-e3a3e56790c8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cgfcs\" (UID: \"dfaeafa9-95c8-4064-b167-e3a3e56790c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cgfcs" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.522841 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f9c91df-cb75-43ed-9677-9f2409edba07-config-volume\") pod \"collect-profiles-29416725-khndp\" (UID: \"9f9c91df-cb75-43ed-9677-9f2409edba07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.522847 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c669ea24-666a-4152-9a1e-43f614bf8e21-registry-tls\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.522935 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/210b89a8-666c-4aab-a64d-e37987eed3f0-encryption-config\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.523256 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3864596e-56f8-46a1-95e6-3558c161cd02-console-oauth-config\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.523364 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ee4a193d-89fb-4c16-9aed-3c5868b417c3-etcd-serving-ca\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.524140 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/dfdea3f4-194e-49e8-88f2-4170d677cee9-etcd-ca\") pod \"etcd-operator-b45778765-g8svq\" (UID: \"dfdea3f4-194e-49e8-88f2-4170d677cee9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.524325 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/210b89a8-666c-4aab-a64d-e37987eed3f0-etcd-client\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.524356 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.525585 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5fd79927-1699-47f7-932e-30961a406e41-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hpkds\" (UID: \"5fd79927-1699-47f7-932e-30961a406e41\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hpkds" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.528124 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90cdf070-0b48-4632-ad95-c1dc562a00aa-auth-proxy-config\") pod \"machine-approver-56656f9798-fgg4h\" (UID: \"90cdf070-0b48-4632-ad95-c1dc562a00aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgg4h" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.530794 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8b089f-b74b-4d12-ad6f-0e611b078120-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-plggm\" (UID: \"8a8b089f-b74b-4d12-ad6f-0e611b078120\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-plggm" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.533207 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfdea3f4-194e-49e8-88f2-4170d677cee9-serving-cert\") pod \"etcd-operator-b45778765-g8svq\" (UID: \"dfdea3f4-194e-49e8-88f2-4170d677cee9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.534310 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc5343c-2ae2-4edf-b975-96cc492ca434-config\") pod \"machine-api-operator-5694c8668f-2f6hz\" (UID: \"fcc5343c-2ae2-4edf-b975-96cc492ca434\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2f6hz" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.535805 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16ca85ad-de88-4503-b27c-cfeaa96ae436-serving-cert\") pod \"route-controller-manager-6576b87f9c-rhs4b\" (UID: \"16ca85ad-de88-4503-b27c-cfeaa96ae436\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.536011 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee4a193d-89fb-4c16-9aed-3c5868b417c3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.536040 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3864596e-56f8-46a1-95e6-3558c161cd02-console-serving-cert\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65" 
Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.537975 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.539173 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c669ea24-666a-4152-9a1e-43f614bf8e21-installation-pull-secrets\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.539609 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c669ea24-666a-4152-9a1e-43f614bf8e21-trusted-ca\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.540349 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-audit-policies\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.540808 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5de77d1-fbbb-4108-adc9-bf13d86dca4a-serving-cert\") pod \"service-ca-operator-777779d784-6k8wz\" (UID: \"e5de77d1-fbbb-4108-adc9-bf13d86dca4a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6k8wz" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.541276 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ee4a193d-89fb-4c16-9aed-3c5868b417c3-image-import-ca\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.541598 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-service-ca\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.541612 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.542693 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f9c91df-cb75-43ed-9677-9f2409edba07-secret-volume\") pod \"collect-profiles-29416725-khndp\" (UID: 
\"9f9c91df-cb75-43ed-9677-9f2409edba07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.543018 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.543412 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90cdf070-0b48-4632-ad95-c1dc562a00aa-config\") pod \"machine-approver-56656f9798-fgg4h\" (UID: \"90cdf070-0b48-4632-ad95-c1dc562a00aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgg4h" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.545270 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/987ff7e9-f709-456b-bc00-f029e6a11f4c-signing-key\") pod \"service-ca-9c57cc56f-tj94r\" (UID: \"987ff7e9-f709-456b-bc00-f029e6a11f4c\") " pod="openshift-service-ca/service-ca-9c57cc56f-tj94r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.547489 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/389b4065-1992-4508-ae57-601dbfec42b6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bwphg\" (UID: \"389b4065-1992-4508-ae57-601dbfec42b6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bwphg" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.548072 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/90cdf070-0b48-4632-ad95-c1dc562a00aa-machine-approver-tls\") pod \"machine-approver-56656f9798-fgg4h\" (UID: \"90cdf070-0b48-4632-ad95-c1dc562a00aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgg4h" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.548564 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6966575-116d-4708-a4dc-4aa061e9b665-proxy-tls\") pod \"machine-config-controller-84d6567774-qv9gv\" (UID: \"b6966575-116d-4708-a4dc-4aa061e9b665\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qv9gv" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.549164 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qfxpm"] Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.550784 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ee4a193d-89fb-4c16-9aed-3c5868b417c3-etcd-client\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.550810 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-547k6\" (UniqueName: \"kubernetes.io/projected/a3ce3943-7a02-46e7-bf84-30d30080b111-kube-api-access-547k6\") pod \"control-plane-machine-set-operator-78cbb6b69f-9mczr\" (UID: \"a3ce3943-7a02-46e7-bf84-30d30080b111\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9mczr" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.555676 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pp7x\" (UniqueName: \"kubernetes.io/projected/fcc5343c-2ae2-4edf-b975-96cc492ca434-kube-api-access-5pp7x\") pod \"machine-api-operator-5694c8668f-2f6hz\" (UID: \"fcc5343c-2ae2-4edf-b975-96cc492ca434\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2f6hz" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.557204 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.557445 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c167c9f-5de9-44d9-8367-56975116a496-apiservice-cert\") pod \"packageserver-d55dfcdfc-994mt\" (UID: \"7c167c9f-5de9-44d9-8367-56975116a496\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.557618 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fcc5343c-2ae2-4edf-b975-96cc492ca434-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2f6hz\" (UID: \"fcc5343c-2ae2-4edf-b975-96cc492ca434\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2f6hz" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.563218 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/805f46d4-3d04-4bf2-96e9-8f19b24e65e8-profile-collector-cert\") pod \"catalog-operator-68c6474976-hz8kp\" (UID: \"805f46d4-3d04-4bf2-96e9-8f19b24e65e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hz8kp" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.563414 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aff87238-1a39-4885-a738-2e3a8c674c8c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jbgwj\" (UID: \"aff87238-1a39-4885-a738-2e3a8c674c8c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jbgwj" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.564081 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.565936 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 
06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.578667 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwz9c" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.586265 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4ntp\" (UniqueName: \"kubernetes.io/projected/3864596e-56f8-46a1-95e6-3558c161cd02-kube-api-access-q4ntp\") pod \"console-f9d7485db-52v65\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " pod="openshift-console/console-f9d7485db-52v65" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.599319 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv55h\" (UniqueName: \"kubernetes.io/projected/ee4a193d-89fb-4c16-9aed-3c5868b417c3-kube-api-access-kv55h\") pod \"apiserver-76f77b778f-gjt8r\" (UID: \"ee4a193d-89fb-4c16-9aed-3c5868b417c3\") " pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.611033 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ppgh\" (UniqueName: \"kubernetes.io/projected/e5de77d1-fbbb-4108-adc9-bf13d86dca4a-kube-api-access-5ppgh\") pod \"service-ca-operator-777779d784-6k8wz\" (UID: \"e5de77d1-fbbb-4108-adc9-bf13d86dca4a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6k8wz" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.611242 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9mczr" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.618553 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:44 crc kubenswrapper[4895]: E1206 06:59:44.618865 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:45.118825602 +0000 UTC m=+147.520214472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619002 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcl5q\" (UniqueName: \"kubernetes.io/projected/486a83f8-907b-441c-aae7-428a6e22d689-kube-api-access-mcl5q\") pod \"olm-operator-6b444d44fb-mvldw\" (UID: \"486a83f8-907b-441c-aae7-428a6e22d689\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619086 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nff9\" (UniqueName: \"kubernetes.io/projected/5f1ba1fe-57e7-45af-abc1-79dbb564f3b0-kube-api-access-2nff9\") pod \"machine-config-server-6hhlt\" (UID: \"5f1ba1fe-57e7-45af-abc1-79dbb564f3b0\") " pod="openshift-machine-config-operator/machine-config-server-6hhlt" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619138 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtnnq\" (UniqueName: \"kubernetes.io/projected/5c27a6f4-148d-4dd5-b746-4cb8dfb1f66e-kube-api-access-wtnnq\") pod \"package-server-manager-789f6589d5-5g4fx\" (UID: \"5c27a6f4-148d-4dd5-b746-4cb8dfb1f66e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g4fx" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619225 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9ec3a749-a236-448b-97f1-4e92cd1ade7f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-b9khb\" (UID: \"9ec3a749-a236-448b-97f1-4e92cd1ade7f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9khb" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619253 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a2dec9b-f30f-4d6a-a6ae-7a82bd9452dc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-frjjl\" (UID: \"7a2dec9b-f30f-4d6a-a6ae-7a82bd9452dc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-frjjl" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619311 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/27c5c363-c056-4033-8f87-20e0221d9e04-cert\") pod \"ingress-canary-8tlz8\" (UID: \"27c5c363-c056-4033-8f87-20e0221d9e04\") " pod="openshift-ingress-canary/ingress-canary-8tlz8" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619337 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6463f080-fa44-4fd8-b559-11f5056ffd0a-config-volume\") pod \"dns-default-sqdms\" (UID: \"6463f080-fa44-4fd8-b559-11f5056ffd0a\") " pod="openshift-dns/dns-default-sqdms" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619395 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r55dh\" 
(UniqueName: \"kubernetes.io/projected/6463f080-fa44-4fd8-b559-11f5056ffd0a-kube-api-access-r55dh\") pod \"dns-default-sqdms\" (UID: \"6463f080-fa44-4fd8-b559-11f5056ffd0a\") " pod="openshift-dns/dns-default-sqdms" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619421 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ec3a749-a236-448b-97f1-4e92cd1ade7f-serving-cert\") pod \"openshift-config-operator-7777fb866f-b9khb\" (UID: \"9ec3a749-a236-448b-97f1-4e92cd1ade7f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9khb" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619494 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/486a83f8-907b-441c-aae7-428a6e22d689-srv-cert\") pod \"olm-operator-6b444d44fb-mvldw\" (UID: \"486a83f8-907b-441c-aae7-428a6e22d689\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619531 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a2dec9b-f30f-4d6a-a6ae-7a82bd9452dc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-frjjl\" (UID: \"7a2dec9b-f30f-4d6a-a6ae-7a82bd9452dc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-frjjl" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619587 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5f1ba1fe-57e7-45af-abc1-79dbb564f3b0-node-bootstrap-token\") pod \"machine-config-server-6hhlt\" (UID: \"5f1ba1fe-57e7-45af-abc1-79dbb564f3b0\") " pod="openshift-machine-config-operator/machine-config-server-6hhlt" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619662 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v22np\" (UniqueName: \"kubernetes.io/projected/27c5c363-c056-4033-8f87-20e0221d9e04-kube-api-access-v22np\") pod \"ingress-canary-8tlz8\" (UID: \"27c5c363-c056-4033-8f87-20e0221d9e04\") " pod="openshift-ingress-canary/ingress-canary-8tlz8" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619692 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/95951ed2-c176-4e12-8dc3-58c9a19e9406-registration-dir\") pod \"csi-hostpathplugin-kfwsb\" (UID: \"95951ed2-c176-4e12-8dc3-58c9a19e9406\") " pod="hostpath-provisioner/csi-hostpathplugin-kfwsb" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619727 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9ec3a749-a236-448b-97f1-4e92cd1ade7f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-b9khb\" (UID: \"9ec3a749-a236-448b-97f1-4e92cd1ade7f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9khb" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619752 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a2dec9b-f30f-4d6a-a6ae-7a82bd9452dc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-frjjl\" (UID: \"7a2dec9b-f30f-4d6a-a6ae-7a82bd9452dc\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-frjjl" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619802 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/95951ed2-c176-4e12-8dc3-58c9a19e9406-csi-data-dir\") pod \"csi-hostpathplugin-kfwsb\" (UID: \"95951ed2-c176-4e12-8dc3-58c9a19e9406\") " pod="hostpath-provisioner/csi-hostpathplugin-kfwsb" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619830 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/95951ed2-c176-4e12-8dc3-58c9a19e9406-socket-dir\") pod \"csi-hostpathplugin-kfwsb\" (UID: \"95951ed2-c176-4e12-8dc3-58c9a19e9406\") " pod="hostpath-provisioner/csi-hostpathplugin-kfwsb" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619856 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjdrf\" (UniqueName: \"kubernetes.io/projected/95951ed2-c176-4e12-8dc3-58c9a19e9406-kube-api-access-zjdrf\") pod \"csi-hostpathplugin-kfwsb\" (UID: \"95951ed2-c176-4e12-8dc3-58c9a19e9406\") " pod="hostpath-provisioner/csi-hostpathplugin-kfwsb" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619880 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c64e03bd-fcd5-4b0d-84b9-841278f6560d-config\") pod \"console-operator-58897d9998-qnrd2\" (UID: \"c64e03bd-fcd5-4b0d-84b9-841278f6560d\") " pod="openshift-console-operator/console-operator-58897d9998-qnrd2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619908 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/95951ed2-c176-4e12-8dc3-58c9a19e9406-mountpoint-dir\") pod \"csi-hostpathplugin-kfwsb\" (UID: \"95951ed2-c176-4e12-8dc3-58c9a19e9406\") " pod="hostpath-provisioner/csi-hostpathplugin-kfwsb" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619928 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619947 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/95951ed2-c176-4e12-8dc3-58c9a19e9406-plugins-dir\") pod \"csi-hostpathplugin-kfwsb\" (UID: \"95951ed2-c176-4e12-8dc3-58c9a19e9406\") " pod="hostpath-provisioner/csi-hostpathplugin-kfwsb" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619977 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c64e03bd-fcd5-4b0d-84b9-841278f6560d-serving-cert\") pod \"console-operator-58897d9998-qnrd2\" (UID: \"c64e03bd-fcd5-4b0d-84b9-841278f6560d\") " pod="openshift-console-operator/console-operator-58897d9998-qnrd2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.619999 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5f1ba1fe-57e7-45af-abc1-79dbb564f3b0-certs\") pod 
\"machine-config-server-6hhlt\" (UID: \"5f1ba1fe-57e7-45af-abc1-79dbb564f3b0\") " pod="openshift-machine-config-operator/machine-config-server-6hhlt" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.620018 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c64e03bd-fcd5-4b0d-84b9-841278f6560d-trusted-ca\") pod \"console-operator-58897d9998-qnrd2\" (UID: \"c64e03bd-fcd5-4b0d-84b9-841278f6560d\") " pod="openshift-console-operator/console-operator-58897d9998-qnrd2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.620046 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2642\" (UniqueName: \"kubernetes.io/projected/9ec3a749-a236-448b-97f1-4e92cd1ade7f-kube-api-access-m2642\") pod \"openshift-config-operator-7777fb866f-b9khb\" (UID: \"9ec3a749-a236-448b-97f1-4e92cd1ade7f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9khb" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.620061 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6463f080-fa44-4fd8-b559-11f5056ffd0a-metrics-tls\") pod \"dns-default-sqdms\" (UID: \"6463f080-fa44-4fd8-b559-11f5056ffd0a\") " pod="openshift-dns/dns-default-sqdms" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.620099 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c27a6f4-148d-4dd5-b746-4cb8dfb1f66e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5g4fx\" (UID: \"5c27a6f4-148d-4dd5-b746-4cb8dfb1f66e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g4fx" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.620143 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbprj\" (UniqueName: \"kubernetes.io/projected/c64e03bd-fcd5-4b0d-84b9-841278f6560d-kube-api-access-tbprj\") pod \"console-operator-58897d9998-qnrd2\" (UID: \"c64e03bd-fcd5-4b0d-84b9-841278f6560d\") " pod="openshift-console-operator/console-operator-58897d9998-qnrd2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.620178 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/486a83f8-907b-441c-aae7-428a6e22d689-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mvldw\" (UID: \"486a83f8-907b-441c-aae7-428a6e22d689\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.620554 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6463f080-fa44-4fd8-b559-11f5056ffd0a-config-volume\") pod \"dns-default-sqdms\" (UID: \"6463f080-fa44-4fd8-b559-11f5056ffd0a\") " pod="openshift-dns/dns-default-sqdms" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.620893 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/95951ed2-c176-4e12-8dc3-58c9a19e9406-registration-dir\") pod \"csi-hostpathplugin-kfwsb\" (UID: \"95951ed2-c176-4e12-8dc3-58c9a19e9406\") " pod="hostpath-provisioner/csi-hostpathplugin-kfwsb" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.621596 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a2dec9b-f30f-4d6a-a6ae-7a82bd9452dc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-frjjl\" (UID: \"7a2dec9b-f30f-4d6a-a6ae-7a82bd9452dc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-frjjl" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.622341 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/95951ed2-c176-4e12-8dc3-58c9a19e9406-socket-dir\") pod \"csi-hostpathplugin-kfwsb\" (UID: \"95951ed2-c176-4e12-8dc3-58c9a19e9406\") " pod="hostpath-provisioner/csi-hostpathplugin-kfwsb" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.622344 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/95951ed2-c176-4e12-8dc3-58c9a19e9406-plugins-dir\") pod \"csi-hostpathplugin-kfwsb\" (UID: \"95951ed2-c176-4e12-8dc3-58c9a19e9406\") " pod="hostpath-provisioner/csi-hostpathplugin-kfwsb" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.622418 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/95951ed2-c176-4e12-8dc3-58c9a19e9406-csi-data-dir\") pod \"csi-hostpathplugin-kfwsb\" (UID: \"95951ed2-c176-4e12-8dc3-58c9a19e9406\") " pod="hostpath-provisioner/csi-hostpathplugin-kfwsb" Dec 06 06:59:44 crc kubenswrapper[4895]: E1206 06:59:44.622710 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:45.122686571 +0000 UTC m=+147.524075441 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.623347 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/27c5c363-c056-4033-8f87-20e0221d9e04-cert\") pod \"ingress-canary-8tlz8\" (UID: \"27c5c363-c056-4033-8f87-20e0221d9e04\") " pod="openshift-ingress-canary/ingress-canary-8tlz8" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.623463 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/95951ed2-c176-4e12-8dc3-58c9a19e9406-mountpoint-dir\") pod \"csi-hostpathplugin-kfwsb\" (UID: \"95951ed2-c176-4e12-8dc3-58c9a19e9406\") " pod="hostpath-provisioner/csi-hostpathplugin-kfwsb" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.624336 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a2dec9b-f30f-4d6a-a6ae-7a82bd9452dc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-frjjl\" (UID: \"7a2dec9b-f30f-4d6a-a6ae-7a82bd9452dc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-frjjl" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.626132 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c64e03bd-fcd5-4b0d-84b9-841278f6560d-serving-cert\") pod \"console-operator-58897d9998-qnrd2\" (UID: \"c64e03bd-fcd5-4b0d-84b9-841278f6560d\") " pod="openshift-console-operator/console-operator-58897d9998-qnrd2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.626881 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6463f080-fa44-4fd8-b559-11f5056ffd0a-metrics-tls\") pod \"dns-default-sqdms\" (UID: \"6463f080-fa44-4fd8-b559-11f5056ffd0a\") " pod="openshift-dns/dns-default-sqdms" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.626887 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c27a6f4-148d-4dd5-b746-4cb8dfb1f66e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5g4fx\" (UID: \"5c27a6f4-148d-4dd5-b746-4cb8dfb1f66e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g4fx" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.627290 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5f1ba1fe-57e7-45af-abc1-79dbb564f3b0-node-bootstrap-token\") pod \"machine-config-server-6hhlt\" (UID: \"5f1ba1fe-57e7-45af-abc1-79dbb564f3b0\") " pod="openshift-machine-config-operator/machine-config-server-6hhlt" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.649750 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-psx67"] Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.659735 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv2ds\" (UniqueName: \"kubernetes.io/projected/aff87238-1a39-4885-a738-2e3a8c674c8c-kube-api-access-gv2ds\") pod \"kube-storage-version-migrator-operator-b67b599dd-jbgwj\" (UID: \"aff87238-1a39-4885-a738-2e3a8c674c8c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jbgwj" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.670930 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc9zd\" (UniqueName: \"kubernetes.io/projected/987ff7e9-f709-456b-bc00-f029e6a11f4c-kube-api-access-bc9zd\") pod \"service-ca-9c57cc56f-tj94r\" (UID: \"987ff7e9-f709-456b-bc00-f029e6a11f4c\") " pod="openshift-service-ca/service-ca-9c57cc56f-tj94r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.678686 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-lf26x"] Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.694067 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-52v65" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.697647 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p49nr\" (UniqueName: \"kubernetes.io/projected/389b4065-1992-4508-ae57-601dbfec42b6-kube-api-access-p49nr\") pod \"multus-admission-controller-857f4d67dd-bwphg\" (UID: \"389b4065-1992-4508-ae57-601dbfec42b6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bwphg" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.701464 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-2f6hz" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.711226 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-tj94r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.715629 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8zkh\" (UniqueName: \"kubernetes.io/projected/9f9c91df-cb75-43ed-9677-9f2409edba07-kube-api-access-j8zkh\") pod \"collect-profiles-29416725-khndp\" (UID: \"9f9c91df-cb75-43ed-9677-9f2409edba07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.723747 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:44 crc kubenswrapper[4895]: E1206 06:59:44.724433 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:45.224409005 +0000 UTC m=+147.625797875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.724904 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6k8wz" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.730361 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jbgwj" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.733053 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm"] Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.733291 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv2kv\" (UniqueName: \"kubernetes.io/projected/5fd79927-1699-47f7-932e-30961a406e41-kube-api-access-kv2kv\") pod \"cluster-samples-operator-665b6dd947-hpkds\" (UID: \"5fd79927-1699-47f7-932e-30961a406e41\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hpkds" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.747859 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mgs8\" (UniqueName: \"kubernetes.io/projected/805f46d4-3d04-4bf2-96e9-8f19b24e65e8-kube-api-access-4mgs8\") pod \"catalog-operator-68c6474976-hz8kp\" (UID: \"805f46d4-3d04-4bf2-96e9-8f19b24e65e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hz8kp" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.749569 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bwphg" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.767365 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtg54\" (UniqueName: \"kubernetes.io/projected/6ebeda1e-dfdd-42d2-8359-32902e14273b-kube-api-access-jtg54\") pod \"migrator-59844c95c7-jt7gr\" (UID: \"6ebeda1e-dfdd-42d2-8359-32902e14273b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jt7gr" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.787033 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hz8kp" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.787415 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwz9c"] Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.789702 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44tw4\" (UniqueName: \"kubernetes.io/projected/8a8b089f-b74b-4d12-ad6f-0e611b078120-kube-api-access-44tw4\") pod \"openshift-controller-manager-operator-756b6f6bc6-plggm\" (UID: \"8a8b089f-b74b-4d12-ad6f-0e611b078120\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-plggm" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.804738 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.811305 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6ds5\" (UniqueName: \"kubernetes.io/projected/c669ea24-666a-4152-9a1e-43f614bf8e21-kube-api-access-w6ds5\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.821052 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hpkds" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.825773 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.825962 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.826049 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:59:44 crc kubenswrapper[4895]: E1206 06:59:44.826430 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:45.326408819 +0000 UTC m=+147.727797689 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.827667 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phxbc\" (UniqueName: \"kubernetes.io/projected/210b89a8-666c-4aab-a64d-e37987eed3f0-kube-api-access-phxbc\") pod \"apiserver-7bbb656c7d-r7x6r\" (UID: \"210b89a8-666c-4aab-a64d-e37987eed3f0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.829458 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.850403 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlfx9\" (UniqueName: \"kubernetes.io/projected/dfdea3f4-194e-49e8-88f2-4170d677cee9-kube-api-access-zlfx9\") pod \"etcd-operator-b45778765-g8svq\" (UID: \"dfdea3f4-194e-49e8-88f2-4170d677cee9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.855405 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.868401 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czddh\" (UniqueName: \"kubernetes.io/projected/89296d2e-cc06-49b2-98d3-61fbf8ac3a77-kube-api-access-czddh\") pod \"dns-operator-744455d44c-4pfdd\" (UID: \"89296d2e-cc06-49b2-98d3-61fbf8ac3a77\") " pod="openshift-dns-operator/dns-operator-744455d44c-4pfdd" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.886921 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfaeafa9-95c8-4064-b167-e3a3e56790c8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cgfcs\" (UID: \"dfaeafa9-95c8-4064-b167-e3a3e56790c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cgfcs" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.910105 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvqv5\" (UniqueName: \"kubernetes.io/projected/16ca85ad-de88-4503-b27c-cfeaa96ae436-kube-api-access-tvqv5\") pod \"route-controller-manager-6576b87f9c-rhs4b\" (UID: \"16ca85ad-de88-4503-b27c-cfeaa96ae436\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.917911 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.918089 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/486a83f8-907b-441c-aae7-428a6e22d689-srv-cert\") pod \"olm-operator-6b444d44fb-mvldw\" (UID: \"486a83f8-907b-441c-aae7-428a6e22d689\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.917951 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c64e03bd-fcd5-4b0d-84b9-841278f6560d-config\") pod \"console-operator-58897d9998-qnrd2\" (UID: \"c64e03bd-fcd5-4b0d-84b9-841278f6560d\") " pod="openshift-console-operator/console-operator-58897d9998-qnrd2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.918108 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ec3a749-a236-448b-97f1-4e92cd1ade7f-serving-cert\") pod \"openshift-config-operator-7777fb866f-b9khb\" (UID: \"9ec3a749-a236-448b-97f1-4e92cd1ade7f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9khb" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.918768 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c64e03bd-fcd5-4b0d-84b9-841278f6560d-trusted-ca\") pod \"console-operator-58897d9998-qnrd2\" (UID: \"c64e03bd-fcd5-4b0d-84b9-841278f6560d\") " pod="openshift-console-operator/console-operator-58897d9998-qnrd2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.918775 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/486a83f8-907b-441c-aae7-428a6e22d689-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mvldw\" (UID: \"486a83f8-907b-441c-aae7-428a6e22d689\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.921184 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.927523 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:44 crc kubenswrapper[4895]: E1206 06:59:44.927920 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:45.427891007 +0000 UTC m=+147.829279887 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.928286 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.928551 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.928742 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:59:44 crc kubenswrapper[4895]: E1206 06:59:44.928840 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:45.428812564 +0000 UTC m=+147.830201624 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.929511 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cgfcs" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.930262 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmnxq\" (UniqueName: \"kubernetes.io/projected/90cdf070-0b48-4632-ad95-c1dc562a00aa-kube-api-access-dmnxq\") pod \"machine-approver-56656f9798-fgg4h\" (UID: \"90cdf070-0b48-4632-ad95-c1dc562a00aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgg4h" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.934336 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5f1ba1fe-57e7-45af-abc1-79dbb564f3b0-certs\") pod \"machine-config-server-6hhlt\" (UID: \"5f1ba1fe-57e7-45af-abc1-79dbb564f3b0\") " pod="openshift-machine-config-operator/machine-config-server-6hhlt" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.936122 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jt7gr" Dec 06 06:59:44 crc kubenswrapper[4895]: W1206 06:59:44.939099 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd9c2886_00ed_443b_8706_8157ab88a96c.slice/crio-edd282ea25eb19dd58e84ce005d6645ef74d5fa752e0d6575b2e601d6649daf0 WatchSource:0}: Error finding container edd282ea25eb19dd58e84ce005d6645ef74d5fa752e0d6575b2e601d6649daf0: Status 404 returned error can't find the container with id edd282ea25eb19dd58e84ce005d6645ef74d5fa752e0d6575b2e601d6649daf0 Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.949323 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.949786 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.953486 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2wjg6"] Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.958270 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c669ea24-666a-4152-9a1e-43f614bf8e21-bound-sa-token\") pod 
\"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.968313 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.971780 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd5ws\" (UniqueName: \"kubernetes.io/projected/688a9d65-4700-47fb-a150-723f9c21b054-kube-api-access-rd5ws\") pod \"oauth-openshift-558db77b4-ggcv2\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.972684 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rrcdd"] Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.978849 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4pfdd" Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.984036 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwz9c" event={"ID":"cd9c2886-00ed-443b-8706-8157ab88a96c","Type":"ContainerStarted","Data":"edd282ea25eb19dd58e84ce005d6645ef74d5fa752e0d6575b2e601d6649daf0"} Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.985771 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" event={"ID":"fbde9406-9da6-43ea-b1e7-b8638e8d0351","Type":"ContainerStarted","Data":"715633f35e9d93dacaebadc0d0bd53a9361204159d17de2a479c78bd7153b608"} Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.986891 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" event={"ID":"4dc1e914-43fd-450e-922c-6462f78105f9","Type":"ContainerStarted","Data":"06e98f62be8d971557eaf82d11c83af6366c980308b310cbd2734998858bc8f6"} Dec 06 06:59:44 crc kubenswrapper[4895]: W1206 06:59:44.986941 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dc66ecc_018b_48fe_beac_ddf62239c291.slice/crio-7a548b261088892e7264174918286f9be739505f24604950112709b849c95883 WatchSource:0}: Error finding container 7a548b261088892e7264174918286f9be739505f24604950112709b849c95883: Status 404 returned error can't find the container with id 7a548b261088892e7264174918286f9be739505f24604950112709b849c95883 Dec 06 06:59:44 crc kubenswrapper[4895]: I1206 06:59:44.988102 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm" event={"ID":"b7140b73-ea35-4be4-90b5-eaa3aa946785","Type":"ContainerStarted","Data":"99ce4b305700f374c3fef6967fe93fc16725800db95a40779d252a6feab5de50"} Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.004781 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wrbd\" (UniqueName: \"kubernetes.io/projected/f740e915-a41a-4bfb-a4fa-1b33903fecd6-kube-api-access-5wrbd\") pod \"downloads-7954f5f757-lddxp\" (UID: \"f740e915-a41a-4bfb-a4fa-1b33903fecd6\") " pod="openshift-console/downloads-7954f5f757-lddxp" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 
06:59:45.005186 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-lf26x" event={"ID":"6b01fabb-ca2e-4820-9b1c-b821a9bf4084","Type":"ContainerStarted","Data":"2032a5d49ed93dbbc4dd2927abc4347c1316c89c6929cd1d7a6a0feac6d07bd3"} Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.010231 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-psx67" event={"ID":"4b39f78c-cc46-4be7-89b9-a7503dec0a10","Type":"ContainerStarted","Data":"3d2b589163d6d4398e292b363b19f1081eda33789d816b9eb66c536f28270b72"} Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.013372 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2xrjt" event={"ID":"c2248ee7-0953-48b0-bcaf-e95d8560c4b6","Type":"ContainerStarted","Data":"e614166037a4a72520ca5239f4465f5642ebd2823c9476ebb1c454e36a90ca91"} Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.013410 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2xrjt" event={"ID":"c2248ee7-0953-48b0-bcaf-e95d8560c4b6","Type":"ContainerStarted","Data":"9c6c05b8926f32fa0953e25fc4f531a8899184fb4be9eee8c1002b3191e1e002"} Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.015936 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2l4n\" (UniqueName: \"kubernetes.io/projected/b6966575-116d-4708-a4dc-4aa061e9b665-kube-api-access-g2l4n\") pod \"machine-config-controller-84d6567774-qv9gv\" (UID: \"b6966575-116d-4708-a4dc-4aa061e9b665\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qv9gv" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.030560 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:45 crc kubenswrapper[4895]: E1206 06:59:45.031129 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:45.531089996 +0000 UTC m=+147.932478876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.031391 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv87g\" (UniqueName: \"kubernetes.io/projected/7c167c9f-5de9-44d9-8367-56975116a496-kube-api-access-jv87g\") pod \"packageserver-d55dfcdfc-994mt\" (UID: \"7c167c9f-5de9-44d9-8367-56975116a496\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.040920 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.084958 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-plggm" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.089564 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcl5q\" (UniqueName: \"kubernetes.io/projected/486a83f8-907b-441c-aae7-428a6e22d689-kube-api-access-mcl5q\") pod \"olm-operator-6b444d44fb-mvldw\" (UID: \"486a83f8-907b-441c-aae7-428a6e22d689\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.095852 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-lddxp" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.098039 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtnnq\" (UniqueName: \"kubernetes.io/projected/5c27a6f4-148d-4dd5-b746-4cb8dfb1f66e-kube-api-access-wtnnq\") pod \"package-server-manager-789f6589d5-5g4fx\" (UID: \"5c27a6f4-148d-4dd5-b746-4cb8dfb1f66e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g4fx" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.111936 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.129683 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qv9gv" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.133677 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nff9\" (UniqueName: \"kubernetes.io/projected/5f1ba1fe-57e7-45af-abc1-79dbb564f3b0-kube-api-access-2nff9\") pod \"machine-config-server-6hhlt\" (UID: \"5f1ba1fe-57e7-45af-abc1-79dbb564f3b0\") " pod="openshift-machine-config-operator/machine-config-server-6hhlt" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.133913 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:45 crc kubenswrapper[4895]: E1206 06:59:45.134368 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:45.634348488 +0000 UTC m=+148.035737358 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.139516 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a2dec9b-f30f-4d6a-a6ae-7a82bd9452dc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-frjjl\" (UID: \"7a2dec9b-f30f-4d6a-a6ae-7a82bd9452dc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-frjjl" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.154971 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.159533 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v22np\" (UniqueName: \"kubernetes.io/projected/27c5c363-c056-4033-8f87-20e0221d9e04-kube-api-access-v22np\") pod \"ingress-canary-8tlz8\" (UID: \"27c5c363-c056-4033-8f87-20e0221d9e04\") " pod="openshift-ingress-canary/ingress-canary-8tlz8" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.170048 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g4fx" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.190422 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8tlz8" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.191078 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.191446 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.197775 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgg4h" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.213385 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6hhlt" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.214569 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjdrf\" (UniqueName: \"kubernetes.io/projected/95951ed2-c176-4e12-8dc3-58c9a19e9406-kube-api-access-zjdrf\") pod \"csi-hostpathplugin-kfwsb\" (UID: \"95951ed2-c176-4e12-8dc3-58c9a19e9406\") " pod="hostpath-provisioner/csi-hostpathplugin-kfwsb" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.215823 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r55dh\" (UniqueName: \"kubernetes.io/projected/6463f080-fa44-4fd8-b559-11f5056ffd0a-kube-api-access-r55dh\") pod \"dns-default-sqdms\" (UID: \"6463f080-fa44-4fd8-b559-11f5056ffd0a\") " pod="openshift-dns/dns-default-sqdms" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.227778 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2642\" (UniqueName: \"kubernetes.io/projected/9ec3a749-a236-448b-97f1-4e92cd1ade7f-kube-api-access-m2642\") pod \"openshift-config-operator-7777fb866f-b9khb\" (UID: \"9ec3a749-a236-448b-97f1-4e92cd1ade7f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9khb" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.228221 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-2xrjt" Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.235904 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:45 crc kubenswrapper[4895]: E1206 06:59:45.236190 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:45.736146686 +0000 UTC m=+148.137535556 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.236736 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:45 crc kubenswrapper[4895]: E1206 06:59:45.237317 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-06 06:59:45.737296671 +0000 UTC m=+148.138685541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.244391 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2"
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.246773 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbprj\" (UniqueName: \"kubernetes.io/projected/c64e03bd-fcd5-4b0d-84b9-841278f6560d-kube-api-access-tbprj\") pod \"console-operator-58897d9998-qnrd2\" (UID: \"c64e03bd-fcd5-4b0d-84b9-841278f6560d\") " pod="openshift-console-operator/console-operator-58897d9998-qnrd2"
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.294756 4895 patch_prober.go:28] interesting pod/router-default-5444994796-2xrjt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 06 06:59:45 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld
Dec 06 06:59:45 crc kubenswrapper[4895]: [+]process-running ok
Dec 06 06:59:45 crc kubenswrapper[4895]: healthz check failed
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.294833 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2xrjt" podUID="c2248ee7-0953-48b0-bcaf-e95d8560c4b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.310040 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gjt8r"]
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.316292 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt"
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.340680 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:45 crc kubenswrapper[4895]: E1206 06:59:45.340913 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:45.840855093 +0000 UTC m=+148.242243963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.341492 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:45 crc kubenswrapper[4895]: E1206 06:59:45.341991 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:45.841966646 +0000 UTC m=+148.243355526 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.437883 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-frjjl"
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.442851 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:45 crc kubenswrapper[4895]: E1206 06:59:45.443294 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:45.943260368 +0000 UTC m=+148.344649238 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.449245 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9khb"
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.461614 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qnrd2"
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.478221 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-sqdms"
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.508060 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-kfwsb"
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.544845 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:45 crc kubenswrapper[4895]: E1206 06:59:45.545308 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:46.045290723 +0000 UTC m=+148.446679593 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.617262 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6k8wz"]
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.645773 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:45 crc kubenswrapper[4895]: E1206 06:59:45.646568 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:46.14643835 +0000 UTC m=+148.547827220 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.730380 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cgfcs"]
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.748262 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:45 crc kubenswrapper[4895]: E1206 06:59:45.748733 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:46.248717851 +0000 UTC m=+148.650106721 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:45 crc kubenswrapper[4895]: W1206 06:59:45.832442 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5de77d1_fbbb_4108_adc9_bf13d86dca4a.slice/crio-de59d31f76add22b36ae41154965ba3fd7fbdc63f14a74a32a72ddddc5beb6cf WatchSource:0}: Error finding container de59d31f76add22b36ae41154965ba3fd7fbdc63f14a74a32a72ddddc5beb6cf: Status 404 returned error can't find the container with id de59d31f76add22b36ae41154965ba3fd7fbdc63f14a74a32a72ddddc5beb6cf
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.850641 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:45 crc kubenswrapper[4895]: E1206 06:59:45.850938 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:46.35089814 +0000 UTC m=+148.752287020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.851099 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:45 crc kubenswrapper[4895]: E1206 06:59:45.851524 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:46.351505508 +0000 UTC m=+148.752894378 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:45 crc kubenswrapper[4895]: W1206 06:59:45.871367 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfaeafa9_95c8_4064_b167_e3a3e56790c8.slice/crio-2932fbe6eb6dfabeb2d676cf80378a6e08db6dae5019f720aedc7892af1ab4e9 WatchSource:0}: Error finding container 2932fbe6eb6dfabeb2d676cf80378a6e08db6dae5019f720aedc7892af1ab4e9: Status 404 returned error can't find the container with id 2932fbe6eb6dfabeb2d676cf80378a6e08db6dae5019f720aedc7892af1ab4e9
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.943325 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b"]
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.945308 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jbgwj"]
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.951260 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-52v65"]
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.951934 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:45 crc kubenswrapper[4895]: E1206 06:59:45.958428 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:46.458347641 +0000 UTC m=+148.859736521 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.963003 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tj94r"]
Dec 06 06:59:45 crc kubenswrapper[4895]: I1206 06:59:45.979178 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2f6hz"]
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.015296 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-2xrjt" podStartSLOduration=128.015266599 podStartE2EDuration="2m8.015266599s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:45.995189923 +0000 UTC m=+148.396578793" watchObservedRunningTime="2025-12-06 06:59:46.015266599 +0000 UTC m=+148.416655489"
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.017733 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp"]
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.061285 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:46 crc kubenswrapper[4895]: E1206 06:59:46.078996 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:46.578968666 +0000 UTC m=+148.980357536 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.135249 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cgfcs" event={"ID":"dfaeafa9-95c8-4064-b167-e3a3e56790c8","Type":"ContainerStarted","Data":"2932fbe6eb6dfabeb2d676cf80378a6e08db6dae5019f720aedc7892af1ab4e9"}
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.204699 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:46 crc kubenswrapper[4895]: E1206 06:59:46.205557 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:46.705535794 +0000 UTC m=+149.106924664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.286937 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hpkds"]
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.309778 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:46 crc kubenswrapper[4895]: E1206 06:59:46.310239 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:46.8102168 +0000 UTC m=+149.211605670 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:46 crc kubenswrapper[4895]: W1206 06:59:46.310891 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcc5343c_2ae2_4edf_b975_96cc492ca434.slice/crio-1ecabecd608edcba1ba73ca12e6f84bc9f07025bd38cf2f95f6c4912666e4311 WatchSource:0}: Error finding container 1ecabecd608edcba1ba73ca12e6f84bc9f07025bd38cf2f95f6c4912666e4311: Status 404 returned error can't find the container with id 1ecabecd608edcba1ba73ca12e6f84bc9f07025bd38cf2f95f6c4912666e4311
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.318989 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" event={"ID":"4dc1e914-43fd-450e-922c-6462f78105f9","Type":"ContainerStarted","Data":"bc9df5aef41004da062d982e12bc5f1d5872d255d54499027d180eaf7cf067de"}
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.320251 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x"
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.332690 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6hhlt" event={"ID":"5f1ba1fe-57e7-45af-abc1-79dbb564f3b0","Type":"ContainerStarted","Data":"8af83db750f696f8ac5d8c9e25122a05780d8fdbc4424af42f20c658d95c1719"}
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.353096 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-lf26x" event={"ID":"6b01fabb-ca2e-4820-9b1c-b821a9bf4084","Type":"ContainerStarted","Data":"718d98488d0b351413a27554b39175dae1e62e3141a168398eed13133f8ac5ea"}
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.359180 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" event={"ID":"ee4a193d-89fb-4c16-9aed-3c5868b417c3","Type":"ContainerStarted","Data":"1b521e8604e41e66f9cbf1315773a98918a7bc6dd31fe94158a9e960234e6c87"}
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.378724 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6k8wz" event={"ID":"e5de77d1-fbbb-4108-adc9-bf13d86dca4a","Type":"ContainerStarted","Data":"de59d31f76add22b36ae41154965ba3fd7fbdc63f14a74a32a72ddddc5beb6cf"}
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.397080 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r"]
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.400164 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm" event={"ID":"b7140b73-ea35-4be4-90b5-eaa3aa946785","Type":"ContainerStarted","Data":"3e65cbc4c299907edc29662fa1e49b9041807ea4fd2e62ff249b1b08e2f6092a"}
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.403527 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgg4h" event={"ID":"90cdf070-0b48-4632-ad95-c1dc562a00aa","Type":"ContainerStarted","Data":"21234a13d35a04d49fde88f719aaac881c445acd330b5bf60241b82ad2e67d8f"}
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.412899 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:46 crc kubenswrapper[4895]: E1206 06:59:46.414099 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:46.91408064 +0000 UTC m=+149.315469510 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.423007 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2wjg6" event={"ID":"9dc66ecc-018b-48fe-beac-ddf62239c291","Type":"ContainerStarted","Data":"ec02cc0ca63db8b597370eaa9c819c260848240720d62b1adc091bfbf1afe058"}
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.423075 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2wjg6" event={"ID":"9dc66ecc-018b-48fe-beac-ddf62239c291","Type":"ContainerStarted","Data":"7a548b261088892e7264174918286f9be739505f24604950112709b849c95883"}
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.429696 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" event={"ID":"fbde9406-9da6-43ea-b1e7-b8638e8d0351","Type":"ContainerStarted","Data":"ec6cdaa0d40d2b32b759a3bdc57aa3fb3e9195a072803a4cc914e3ea5d62df4f"}
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.430338 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm"
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.446843 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-psx67" event={"ID":"4b39f78c-cc46-4be7-89b9-a7503dec0a10","Type":"ContainerStarted","Data":"e02f9447954902a01f1c991c3bc512a5bbd4c4dad85b21985c7f4f4ff1711d44"}
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.456833 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rrcdd" event={"ID":"80151d03-97c4-44e3-be46-169472298c7e","Type":"ContainerStarted","Data":"7d654d32a7feab897922f1db297d8485f16a913d76b0fd25f11f3d283c16bc23"}
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.456930 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rrcdd" event={"ID":"80151d03-97c4-44e3-be46-169472298c7e","Type":"ContainerStarted","Data":"9e06347e9d6ff69db2c4e82893fefcbc470cbfce3d3fac5fbe76ee10233f2e98"}
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.515514 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:46 crc kubenswrapper[4895]: E1206 06:59:46.517346 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:47.017330762 +0000 UTC m=+149.418719632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.526146 4895 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bnn9x container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/healthz\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.526209 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" podUID="4dc1e914-43fd-450e-922c-6462f78105f9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.8:8080/healthz\": dial tcp 10.217.0.8:8080: connect: connection refused"
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.538740 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm"
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.617232 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:46 crc kubenswrapper[4895]: E1206 06:59:46.617637 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:47.117604522 +0000 UTC m=+149.518993392 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.689759 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-lf26x" podStartSLOduration=129.689734988 podStartE2EDuration="2m9.689734988s" podCreationTimestamp="2025-12-06 06:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:46.689383368 +0000 UTC m=+149.090772258" watchObservedRunningTime="2025-12-06 06:59:46.689734988 +0000 UTC m=+149.091123858"
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.719195 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:46 crc kubenswrapper[4895]: E1206 06:59:46.719803 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:47.219785251 +0000 UTC m=+149.621174121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.776878 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" podStartSLOduration=129.776854734 podStartE2EDuration="2m9.776854734s" podCreationTimestamp="2025-12-06 06:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:46.747919576 +0000 UTC m=+149.149308466" watchObservedRunningTime="2025-12-06 06:59:46.776854734 +0000 UTC m=+149.178243604"
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.804813 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" podStartSLOduration=128.804790202 podStartE2EDuration="2m8.804790202s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:46.777972549 +0000 UTC m=+149.179361429" watchObservedRunningTime="2025-12-06 06:59:46.804790202 +0000 UTC m=+149.206179072"
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.820065 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:46 crc kubenswrapper[4895]: E1206 06:59:46.820532 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:47.320508155 +0000 UTC m=+149.721897025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.850109 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2wjg6" podStartSLOduration=129.850087413 podStartE2EDuration="2m9.850087413s" podCreationTimestamp="2025-12-06 06:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:46.848927419 +0000 UTC m=+149.250316309" watchObservedRunningTime="2025-12-06 06:59:46.850087413 +0000 UTC m=+149.251476303"
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.852107 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rrcdd" podStartSLOduration=128.852095606 podStartE2EDuration="2m8.852095606s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:46.806942429 +0000 UTC m=+149.208331299" watchObservedRunningTime="2025-12-06 06:59:46.852095606 +0000 UTC m=+149.253484486"
Dec 06 06:59:46 crc kubenswrapper[4895]: I1206 06:59:46.922412 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:46 crc kubenswrapper[4895]: E1206 06:59:46.922926 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:47.422906311 +0000 UTC m=+149.824295181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.024076 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:47 crc kubenswrapper[4895]: E1206 06:59:47.024359 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:47.524314866 +0000 UTC m=+149.925703736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.024763 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:47 crc kubenswrapper[4895]: E1206 06:59:47.025181 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:47.525173113 +0000 UTC m=+149.926561983 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.125875 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:47 crc kubenswrapper[4895]: E1206 06:59:47.126525 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:47.626505495 +0000 UTC m=+150.027894365 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.230101 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:47 crc kubenswrapper[4895]: E1206 06:59:47.230624 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:47.730607713 +0000 UTC m=+150.131996583 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.259828 4895 patch_prober.go:28] interesting pod/router-default-5444994796-2xrjt container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.259931 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2xrjt" podUID="c2248ee7-0953-48b0-bcaf-e95d8560c4b6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.300398 4895 patch_prober.go:28] interesting pod/router-default-5444994796-2xrjt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 06 06:59:47 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld
Dec 06 06:59:47 crc kubenswrapper[4895]: [+]process-running ok
Dec 06 06:59:47 crc kubenswrapper[4895]: healthz check failed
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.300944 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2xrjt" podUID="c2248ee7-0953-48b0-bcaf-e95d8560c4b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.335651 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:47 crc kubenswrapper[4895]: E1206 06:59:47.336550 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:47.836496076 +0000 UTC m=+150.237884956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.438085 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:47 crc kubenswrapper[4895]: E1206 06:59:47.438803 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:47.938784958 +0000 UTC m=+150.340173828 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.489638 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2f6hz" event={"ID":"fcc5343c-2ae2-4edf-b975-96cc492ca434","Type":"ContainerStarted","Data":"1ecabecd608edcba1ba73ca12e6f84bc9f07025bd38cf2f95f6c4912666e4311"}
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.502167 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-plggm"]
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.520452 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwz9c" event={"ID":"cd9c2886-00ed-443b-8706-8157ab88a96c","Type":"ContainerStarted","Data":"7c46005739173a3bcd960eb0cf83de8438cd03554e9508e3b266373d5c78d2a5"}
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.535148 4895 generic.go:334] "Generic (PLEG): container finished" podID="ee4a193d-89fb-4c16-9aed-3c5868b417c3" containerID="eb0ff8920ea27498cede0d183bc38c353d83af4c44a03a1442ef075430c595f5" exitCode=0
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.535253 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" event={"ID":"ee4a193d-89fb-4c16-9aed-3c5868b417c3","Type":"ContainerDied","Data":"eb0ff8920ea27498cede0d183bc38c353d83af4c44a03a1442ef075430c595f5"}
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.539004 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:47 crc kubenswrapper[4895]: E1206 06:59:47.539575 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:48.039547203 +0000 UTC m=+150.440936073 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.563270 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6hhlt" event={"ID":"5f1ba1fe-57e7-45af-abc1-79dbb564f3b0","Type":"ContainerStarted","Data":"8accbf7c68a53f81b5fb01cb2cf06fd2393950005204a43b6d62ffacc22242cc"}
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.574279 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwz9c" podStartSLOduration=129.57425251 podStartE2EDuration="2m9.57425251s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:47.559358882 +0000 UTC m=+149.960747752" watchObservedRunningTime="2025-12-06 06:59:47.57425251 +0000 UTC m=+149.975641370"
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.595781 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-psx67" event={"ID":"4b39f78c-cc46-4be7-89b9-a7503dec0a10","Type":"ContainerStarted","Data":"1e940e905d778ef752ccd312d7b7d911449979f26df477a8a9b70c156497ee38"}
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.643091 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:47 crc kubenswrapper[4895]: E1206 06:59:47.644105 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:48.144088495 +0000 UTC m=+150.545477365 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.651764 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6k8wz" event={"ID":"e5de77d1-fbbb-4108-adc9-bf13d86dca4a","Type":"ContainerStarted","Data":"98d1c031395032b15c1c431a7b2ea95a3ed43e21282159c8d606ac00e3f35774"}
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.710440 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-psx67" podStartSLOduration=129.710414082 podStartE2EDuration="2m9.710414082s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:47.710257657 +0000 UTC m=+150.111646527" watchObservedRunningTime="2025-12-06 06:59:47.710414082 +0000 UTC m=+150.111802952"
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.727817 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-tj94r" event={"ID":"987ff7e9-f709-456b-bc00-f029e6a11f4c","Type":"ContainerStarted","Data":"d8d638ecc549f09b408d967c8ce0f5b2a6af6fb76ed8bddf6799a3065dd710fb"}
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.746884 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:47 crc kubenswrapper[4895]: E1206 06:59:47.748986 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:48.248951095 +0000 UTC m=+150.650340135 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.759266 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-6hhlt" podStartSLOduration=5.759238192 podStartE2EDuration="5.759238192s" podCreationTimestamp="2025-12-06 06:59:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:47.757310762 +0000 UTC m=+150.158699642" watchObservedRunningTime="2025-12-06 06:59:47.759238192 +0000 UTC m=+150.160627062"
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.786980 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6k8wz" podStartSLOduration=129.786955524 podStartE2EDuration="2m9.786955524s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:47.786354225 +0000 UTC m=+150.187743095" watchObservedRunningTime="2025-12-06 06:59:47.786955524 +0000 UTC m=+150.188344394"
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.797861 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5487ec5af28c8f1de63ae3b22b8a647b0894e58a777f2b37e6b74e579adee789"}
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.818865 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm" event={"ID":"b7140b73-ea35-4be4-90b5-eaa3aa946785","Type":"ContainerStarted","Data":"854f8df5dd0cdd918ab2efd32b4b7de0e2712f3ca683d7871ca96e1862bdf2bc"}
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.841940 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" event={"ID":"210b89a8-666c-4aab-a64d-e37987eed3f0","Type":"ContainerStarted","Data":"14f1d037c693c3cf58344ad2370ac56d57ffc76b256139d568dba68adebdb039"}
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.851509 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:47 crc kubenswrapper[4895]: E1206 06:59:47.852972 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:48.352952311 +0000 UTC m=+150.754341181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.857914 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp" event={"ID":"9f9c91df-cb75-43ed-9677-9f2409edba07","Type":"ContainerStarted","Data":"a352ba67e26f113fbae1c86deb376b14fca8a8c470ba40490fd40454db411a86"}
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.913895 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-smwwm" podStartSLOduration=129.913868622 podStartE2EDuration="2m9.913868622s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:47.872375998 +0000 UTC m=+150.273764878" watchObservedRunningTime="2025-12-06 06:59:47.913868622 +0000 UTC m=+150.315257512"
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.916297 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp" podStartSLOduration=130.916275206 podStartE2EDuration="2m10.916275206s" podCreationTimestamp="2025-12-06 06:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:47.915719789 +0000 UTC m=+150.317108659" watchObservedRunningTime="2025-12-06 06:59:47.916275206 +0000 UTC m=+150.317664076"
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.921530 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" event={"ID":"16ca85ad-de88-4503-b27c-cfeaa96ae436","Type":"ContainerStarted","Data":"a1abc63851e965b11004ea27a8a78eb004c31499444a2fdfc2a847f277047ee6"}
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.945737 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-52v65" event={"ID":"3864596e-56f8-46a1-95e6-3558c161cd02","Type":"ContainerStarted","Data":"cf098386792cbcf5172ec12d6de2bfc9397cf076fd6d9c62a238811591570614"}
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.952871 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:47 crc kubenswrapper[4895]: E1206 06:59:47.953592 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:48.453571862 +0000 UTC m=+150.854960732 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:47 crc kubenswrapper[4895]: I1206 06:59:47.988717 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jbgwj" event={"ID":"aff87238-1a39-4885-a738-2e3a8c674c8c","Type":"ContainerStarted","Data":"43cbaeb7cd181a15f17df35454a13df0ee4615e392deb19fa770abd2a2283619"}
Dec 06 06:59:48 crc kubenswrapper[4895]: I1206 06:59:48.009149 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x"
Dec 06 06:59:48 crc kubenswrapper[4895]: I1206 06:59:48.031426 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jbgwj" podStartSLOduration=130.031406562 podStartE2EDuration="2m10.031406562s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:48.028690869 +0000 UTC m=+150.430079739" watchObservedRunningTime="2025-12-06 06:59:48.031406562 +0000 UTC m=+150.432795432"
Dec 06 06:59:48 crc kubenswrapper[4895]: I1206 06:59:48.074028 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:48 crc kubenswrapper[4895]: E1206 06:59:48.076304 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:48.576285101 +0000 UTC m=+150.977673971 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:48 crc kubenswrapper[4895]: I1206 06:59:48.193691 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:48 crc kubenswrapper[4895]: E1206 06:59:48.194375 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:48.694355739 +0000 UTC m=+151.095744609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:48 crc kubenswrapper[4895]: I1206 06:59:48.236995 4895 patch_prober.go:28] interesting pod/router-default-5444994796-2xrjt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 06 06:59:48 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld
Dec 06 06:59:48 crc kubenswrapper[4895]: [+]process-running ok
Dec 06 06:59:48 crc kubenswrapper[4895]: healthz check failed
Dec 06 06:59:48 crc kubenswrapper[4895]: I1206 06:59:48.237612 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2xrjt" podUID="c2248ee7-0953-48b0-bcaf-e95d8560c4b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 06 06:59:48 crc kubenswrapper[4895]: I1206 06:59:48.295407 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:48 crc kubenswrapper[4895]: E1206 06:59:48.295841 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:48.795823795 +0000 UTC m=+151.197212665 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:48 crc kubenswrapper[4895]: I1206 06:59:48.373335 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk"
Dec 06 06:59:48 crc kubenswrapper[4895]: I1206 06:59:48.396855 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:48 crc kubenswrapper[4895]: E1206 06:59:48.409724 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:48.897459318 +0000 UTC m=+151.298848188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:48 crc kubenswrapper[4895]: I1206 06:59:48.500277 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:48 crc kubenswrapper[4895]: E1206 06:59:48.502188 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:49.002172213 +0000 UTC m=+151.403561083 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:48 crc kubenswrapper[4895]: I1206 06:59:48.603102 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:48 crc kubenswrapper[4895]: E1206 06:59:48.603922 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:49.103897689 +0000 UTC m=+151.505286559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:48 crc kubenswrapper[4895]: I1206 06:59:48.706337 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:48 crc kubenswrapper[4895]: E1206 06:59:48.706868 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:49.206847252 +0000 UTC m=+151.608236122 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:48 crc kubenswrapper[4895]: I1206 06:59:48.809832 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:48 crc kubenswrapper[4895]: E1206 06:59:48.810350 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:49.3103124 +0000 UTC m=+151.711701280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:48 crc kubenswrapper[4895]: I1206 06:59:48.912723 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:48 crc kubenswrapper[4895]: E1206 06:59:48.913130 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:49.413114467 +0000 UTC m=+151.814503347 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:48 crc kubenswrapper[4895]: I1206 06:59:48.949032 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qv9gv"] Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.001907 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jbgwj" event={"ID":"aff87238-1a39-4885-a738-2e3a8c674c8c","Type":"ContainerStarted","Data":"e8b8e17f6c6792d781c035052e07112a3482937accf18e134e9727e2bc360508"} Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.015401 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:49 crc kubenswrapper[4895]: E1206 06:59:49.016119 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:49.516100181 +0000 UTC m=+151.917489051 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.041015 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cgfcs" event={"ID":"dfaeafa9-95c8-4064-b167-e3a3e56790c8","Type":"ContainerStarted","Data":"c2a00ebdf9f7aee1a855418d858a2d68f7e638ffada006aa6e8ce01429bf9d32"} Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.067209 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4pfdd"] Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.082557 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hz8kp"] Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.088334 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-jt7gr"] Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.093179 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g4fx"] Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.093896 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2f6hz" event={"ID":"fcc5343c-2ae2-4edf-b975-96cc492ca434","Type":"ContainerStarted","Data":"8770e701299fb52e3a3493fd80cfc661781933073a66297aca6c8cb725e78d7a"} Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.093953 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2f6hz" event={"ID":"fcc5343c-2ae2-4edf-b975-96cc492ca434","Type":"ContainerStarted","Data":"72d16d0b3696cb8c8836ac724e9b16c4396f0cb82a937f5b96fb6865e503216b"} Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.100562 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bwphg"] Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.116950 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-b9khb"] Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.118939 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:49 crc kubenswrapper[4895]: E1206 06:59:49.121267 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:49.621249892 +0000 UTC m=+152.022638762 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.144218 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cgfcs" podStartSLOduration=131.143786354 podStartE2EDuration="2m11.143786354s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:49.128314339 +0000 UTC m=+151.529703219" watchObservedRunningTime="2025-12-06 06:59:49.143786354 +0000 UTC m=+151.545175224" Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.158925 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" event={"ID":"16ca85ad-de88-4503-b27c-cfeaa96ae436","Type":"ContainerStarted","Data":"acd9e960d5c9ca1ec8734781cae72578b676a6e295ffb88df297405e7ca40c5c"} Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.160939 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.177784 4895 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-rhs4b container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.186726 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" podUID="16ca85ad-de88-4503-b27c-cfeaa96ae436" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.216383 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-tj94r" event={"ID":"987ff7e9-f709-456b-bc00-f029e6a11f4c","Type":"ContainerStarted","Data":"bfdb9c042b0e836674d7064a911ff3817a24e4cb731d5de474a89242ed8827f0"} Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.225191 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:49 crc kubenswrapper[4895]: E1206 06:59:49.226777 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:49.726754312 +0000 UTC m=+152.128143172 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.227007 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:49 crc kubenswrapper[4895]: E1206 06:59:49.231621 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:49.731606111 +0000 UTC m=+152.132994981 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.261621 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6bac359ccc21ed17572469b651eb7113de4f887f3dde096da42ccdcd63f12049"} Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.262436 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.275160 4895 patch_prober.go:28] interesting pod/router-default-5444994796-2xrjt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:59:49 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Dec 06 06:59:49 crc kubenswrapper[4895]: [+]process-running ok Dec 06 06:59:49 crc kubenswrapper[4895]: healthz check failed Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.275232 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2xrjt" podUID="c2248ee7-0953-48b0-bcaf-e95d8560c4b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.313916 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt"] Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.313985 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9mczr"] Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.318279 4895 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-2f6hz" podStartSLOduration=131.318252253 podStartE2EDuration="2m11.318252253s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:49.232396946 +0000 UTC m=+151.633785816" watchObservedRunningTime="2025-12-06 06:59:49.318252253 +0000 UTC m=+151.719641123" Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.319858 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-g8svq"] Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.323643 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-plggm" event={"ID":"8a8b089f-b74b-4d12-ad6f-0e611b078120","Type":"ContainerStarted","Data":"3d6d3887b1dc9bb25b5daa0dab604e51c2e81c861fcb7a7cd955c044d2cb2161"} Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.323731 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-plggm" event={"ID":"8a8b089f-b74b-4d12-ad6f-0e611b078120","Type":"ContainerStarted","Data":"7cfe37ebf297fab8add1e91f45f6c120adca0ea5761158927b5822c83052f851"} Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.328864 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:49 crc kubenswrapper[4895]: E1206 06:59:49.330974 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:49.830953933 +0000 UTC m=+152.232342803 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.333668 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ggcv2"] Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.335140 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" podStartSLOduration=131.335121441 podStartE2EDuration="2m11.335121441s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:49.311813325 +0000 UTC m=+151.713202215" watchObservedRunningTime="2025-12-06 06:59:49.335121441 +0000 UTC m=+151.736510311" Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.349321 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-lddxp"] Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.349863 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgg4h" event={"ID":"90cdf070-0b48-4632-ad95-c1dc562a00aa","Type":"ContainerStarted","Data":"f02bfa790925b235267fd33a5767531285fc00b381f54dee435b50735e4eb14d"} Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.349904 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgg4h" event={"ID":"90cdf070-0b48-4632-ad95-c1dc562a00aa","Type":"ContainerStarted","Data":"41273fa9df5f4a2f314492ea05c614cabd79b954b9ed78fccbf72b968b7df7b5"} Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.357957 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-tj94r" podStartSLOduration=131.357930602 podStartE2EDuration="2m11.357930602s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:49.349020248 +0000 UTC m=+151.750409118" watchObservedRunningTime="2025-12-06 06:59:49.357930602 +0000 UTC m=+151.759319462" Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.376288 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp" event={"ID":"9f9c91df-cb75-43ed-9677-9f2409edba07","Type":"ContainerStarted","Data":"2c9ca7857cdec08dcb4421458f568c475f4bf791fd65f964c728bc638a73b2d7"} Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.430800 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:49 crc kubenswrapper[4895]: E1206 06:59:49.431911 4895 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:49.931887073 +0000 UTC m=+152.333275943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.447623 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sqdms"] Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.456763 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw"] Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.460689 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-plggm" podStartSLOduration=131.460658027 podStartE2EDuration="2m11.460658027s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:49.440957072 +0000 UTC m=+151.842345942" watchObservedRunningTime="2025-12-06 06:59:49.460658027 +0000 UTC m=+151.862046897" Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.462748 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" event={"ID":"ee4a193d-89fb-4c16-9aed-3c5868b417c3","Type":"ContainerStarted","Data":"bf8cc37c0671056e2c3d0f1c178af540088cbdcd89a9cc3c897342281d82ef2b"} Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.502801 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgg4h" podStartSLOduration=132.502779531 podStartE2EDuration="2m12.502779531s" podCreationTimestamp="2025-12-06 06:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:49.50109441 +0000 UTC m=+151.902483280" watchObservedRunningTime="2025-12-06 06:59:49.502779531 +0000 UTC m=+151.904168401" Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.510968 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8tlz8"] Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.517299 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-52v65" event={"ID":"3864596e-56f8-46a1-95e6-3558c161cd02","Type":"ContainerStarted","Data":"af1e2e6a2640257b5d7d91bb0b4f26c9a46a5de698f31f49a56f60a102785c59"} Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.532320 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:49 crc 
kubenswrapper[4895]: E1206 06:59:49.532748 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:50.032724761 +0000 UTC m=+152.434113631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.533116 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qnrd2"] Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.583938 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-frjjl"] Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.611385 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kfwsb"] Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.628279 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hpkds" event={"ID":"5fd79927-1699-47f7-932e-30961a406e41","Type":"ContainerStarted","Data":"c2dd972142babb9223bac80a9d0aedf396873570e57e98a37b7bc5545da57f0c"} Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.628452 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hpkds" event={"ID":"5fd79927-1699-47f7-932e-30961a406e41","Type":"ContainerStarted","Data":"96423bc01d56170a5d203a1143e51cd05d5a09ca134d5b1f1524e950e1b992f7"} Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.635180 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:49 crc kubenswrapper[4895]: W1206 06:59:49.648149 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95951ed2_c176_4e12_8dc3_58c9a19e9406.slice/crio-1e69a11dd35e2541e1cb8210f75c6ee49b2a014d195b9da0edc0e73284c56e4a WatchSource:0}: Error finding container 1e69a11dd35e2541e1cb8210f75c6ee49b2a014d195b9da0edc0e73284c56e4a: Status 404 returned error can't find the container with id 1e69a11dd35e2541e1cb8210f75c6ee49b2a014d195b9da0edc0e73284c56e4a Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.648769 4895 generic.go:334] "Generic (PLEG): container finished" podID="210b89a8-666c-4aab-a64d-e37987eed3f0" containerID="e18a15cbf7328e6fc78a36edba4d88bca255e7aefb64e74496328a8fe77fb323" exitCode=0 Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.650268 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" 
event={"ID":"210b89a8-666c-4aab-a64d-e37987eed3f0","Type":"ContainerDied","Data":"e18a15cbf7328e6fc78a36edba4d88bca255e7aefb64e74496328a8fe77fb323"} Dec 06 06:59:49 crc kubenswrapper[4895]: E1206 06:59:49.656291 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:50.156272777 +0000 UTC m=+152.557661647 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.664753 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-52v65" podStartSLOduration=131.664724196 podStartE2EDuration="2m11.664724196s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:49.620976782 +0000 UTC m=+152.022365652" watchObservedRunningTime="2025-12-06 06:59:49.664724196 +0000 UTC m=+152.066113066" Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.690955 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hpkds" podStartSLOduration=132.69092415 podStartE2EDuration="2m12.69092415s" podCreationTimestamp="2025-12-06 06:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:49.685576047 +0000 UTC m=+152.086964917" watchObservedRunningTime="2025-12-06 06:59:49.69092415 +0000 UTC m=+152.092313030" Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.736368 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:49 crc kubenswrapper[4895]: E1206 06:59:49.739289 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:50.239258846 +0000 UTC m=+152.640647716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.844362 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:49 crc kubenswrapper[4895]: E1206 06:59:49.845875 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:50.34585872 +0000 UTC m=+152.747247680 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:49 crc kubenswrapper[4895]: I1206 06:59:49.951924 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:49 crc kubenswrapper[4895]: E1206 06:59:49.952285 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:50.452266169 +0000 UTC m=+152.853655039 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.052875 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:50 crc kubenswrapper[4895]: E1206 06:59:50.053571 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:50.55354177 +0000 UTC m=+152.954930640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.156218 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:50 crc kubenswrapper[4895]: E1206 06:59:50.156673 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:50.656651677 +0000 UTC m=+153.058040547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.233514 4895 patch_prober.go:28] interesting pod/router-default-5444994796-2xrjt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:59:50 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Dec 06 06:59:50 crc kubenswrapper[4895]: [+]process-running ok Dec 06 06:59:50 crc kubenswrapper[4895]: healthz check failed Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.233598 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2xrjt" podUID="c2248ee7-0953-48b0-bcaf-e95d8560c4b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.262584 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:50 crc kubenswrapper[4895]: E1206 06:59:50.263013 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:50.762998484 +0000 UTC m=+153.164387354 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.363590 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:50 crc kubenswrapper[4895]: E1206 06:59:50.363823 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:50.863801601 +0000 UTC m=+153.265190471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.363866 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:50 crc kubenswrapper[4895]: E1206 06:59:50.364301 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:50.864291955 +0000 UTC m=+153.265680825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.468363 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:50 crc kubenswrapper[4895]: E1206 06:59:50.468675 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:50.968639461 +0000 UTC m=+153.370028331 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.469439 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:50 crc kubenswrapper[4895]: E1206 06:59:50.469903 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:50.96988295 +0000 UTC m=+153.371271820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.570387 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:50 crc kubenswrapper[4895]: E1206 06:59:50.570968 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:51.070944954 +0000 UTC m=+153.472333834 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.672717 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:50 crc kubenswrapper[4895]: E1206 06:59:50.673195 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:51.173178474 +0000 UTC m=+153.574567344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.680110 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-frjjl" event={"ID":"7a2dec9b-f30f-4d6a-a6ae-7a82bd9452dc","Type":"ContainerStarted","Data":"32c69152cdf14d98d338c5d4ef83066f840242ed6781c1a5d25ae8dcf319aaf1"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.692195 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" event={"ID":"ee4a193d-89fb-4c16-9aed-3c5868b417c3","Type":"ContainerStarted","Data":"059811f4e959d247a568141d0c9b4bc459a5118ade9078dc7ecb7b9a316a80a0"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.699290 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw" event={"ID":"486a83f8-907b-441c-aae7-428a6e22d689","Type":"ContainerStarted","Data":"1f7f6209182eb891f15261245c600bb16f8205df6c02912b95e36f935edf981e"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.726322 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" podStartSLOduration=133.726297696 podStartE2EDuration="2m13.726297696s" podCreationTimestamp="2025-12-06 06:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:50.725162241 +0000 UTC m=+153.126551121" watchObservedRunningTime="2025-12-06 06:59:50.726297696 +0000 UTC m=+153.127686566"
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.772071 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4pfdd" event={"ID":"89296d2e-cc06-49b2-98d3-61fbf8ac3a77","Type":"ContainerStarted","Data":"627ad361248f4c9858a6c1ad478acd79617153b85106f1530ca8e23072645748"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.772158 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4pfdd" event={"ID":"89296d2e-cc06-49b2-98d3-61fbf8ac3a77","Type":"ContainerStarted","Data":"de940f1c9ad3f244bd1dc3d38a4f8de1dc6a373a701d1c02fbfeb1e74733bcc0"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.774361 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:50 crc kubenswrapper[4895]: E1206 06:59:50.775952 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:51.27592998 +0000 UTC m=+153.677318850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.789199 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sqdms" event={"ID":"6463f080-fa44-4fd8-b559-11f5056ffd0a","Type":"ContainerStarted","Data":"854c51edcdf3d1fc7fd4602602e8eec99edbe1b299231774358e2d2e81ab4263"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.807304 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9mczr" event={"ID":"a3ce3943-7a02-46e7-bf84-30d30080b111","Type":"ContainerStarted","Data":"ac3d27330c97dd7ae12b4be2f27e80411a31de33c9d4cee739b0e194b2a8e7ca"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.807373 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9mczr" event={"ID":"a3ce3943-7a02-46e7-bf84-30d30080b111","Type":"ContainerStarted","Data":"6a01312fe71c59c5b6237f9129a6deb6de25f0413dcb3b3c37daba14ec673e0d"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.825741 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hz8kp" event={"ID":"805f46d4-3d04-4bf2-96e9-8f19b24e65e8","Type":"ContainerStarted","Data":"457853fed450516534665533bd20a52acd1a5d55f008bcaecbf474b2d32bbf2f"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.825804 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hz8kp" event={"ID":"805f46d4-3d04-4bf2-96e9-8f19b24e65e8","Type":"ContainerStarted","Data":"a1bd54a030998ab9bf525dfba384f6e06f0ef5eff2f95aa9f8a0abf12e0df5ab"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.827012 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hz8kp"
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.829672 4895 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hz8kp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.829753 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hz8kp" podUID="805f46d4-3d04-4bf2-96e9-8f19b24e65e8" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.836884 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9mczr" podStartSLOduration=132.836860222 podStartE2EDuration="2m12.836860222s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:50.834924523 +0000 UTC m=+153.236313403" watchObservedRunningTime="2025-12-06 06:59:50.836860222 +0000 UTC m=+153.238249092"
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.853716 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bwphg" event={"ID":"389b4065-1992-4508-ae57-601dbfec42b6","Type":"ContainerStarted","Data":"6d60a41c66a8568b852d198e03df1aa3f71e9af9087302e19e78d44ddcbe4da7"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.865723 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ba8010a4b08900b46dfa7209ab4eca7267b4a5d0e44f1aa3b28cd4c096affd5e"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.875507 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:50 crc kubenswrapper[4895]: E1206 06:59:50.877884 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:51.377868532 +0000 UTC m=+153.779257392 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.888954 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qv9gv" event={"ID":"b6966575-116d-4708-a4dc-4aa061e9b665","Type":"ContainerStarted","Data":"c23440aee7765a743bbd8c362d9be8299cabcaf8c70eddcebfd1e90f62eacc72"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.889027 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qv9gv" event={"ID":"b6966575-116d-4708-a4dc-4aa061e9b665","Type":"ContainerStarted","Data":"c95298aa32dff55085c268e09473fa203557d4061a98665c0d5f5350d00e2c78"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.898608 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq" event={"ID":"dfdea3f4-194e-49e8-88f2-4170d677cee9","Type":"ContainerStarted","Data":"bbed59cb710c924f689c7cd44a2ed96de509014b30097f09908fb6565fb7db5d"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.912139 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lddxp" event={"ID":"f740e915-a41a-4bfb-a4fa-1b33903fecd6","Type":"ContainerStarted","Data":"bf7e1887294a5cbadf9f5d74063950e38923dd4818cdea796e30c786a92f1ae2"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.927862 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt" event={"ID":"7c167c9f-5de9-44d9-8367-56975116a496","Type":"ContainerStarted","Data":"1d42bb98398242f8d06a5fd336af20264039303c2bc97268052eb947fd20792b"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.927922 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt" event={"ID":"7c167c9f-5de9-44d9-8367-56975116a496","Type":"ContainerStarted","Data":"01fad0290f629fd9d947b990824d2225a5d3aa0ef51196cd0b840b9814b52f94"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.928732 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt"
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.956350 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9khb" event={"ID":"9ec3a749-a236-448b-97f1-4e92cd1ade7f","Type":"ContainerStarted","Data":"f8af06859d62a08e636d5904577231fa70bf7a78e4d0b03996581b96b6785538"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.965549 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hz8kp" podStartSLOduration=132.965528255 podStartE2EDuration="2m12.965528255s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:50.878349127 +0000 UTC m=+153.279737997" watchObservedRunningTime="2025-12-06 06:59:50.965528255 +0000 UTC m=+153.366917125"
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.970733 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hpkds" event={"ID":"5fd79927-1699-47f7-932e-30961a406e41","Type":"ContainerStarted","Data":"bed23685eeb973cac26999653fffed0cbad1c1b2de270287df20c5ab4f7bdfd2"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.980134 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:50 crc kubenswrapper[4895]: E1206 06:59:50.981405 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:51.481375882 +0000 UTC m=+153.882764782 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.992084 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" event={"ID":"688a9d65-4700-47fb-a150-723f9c21b054","Type":"ContainerStarted","Data":"d02e5283e94b86c942b8d0517d11a5b965c31a62c9ffcda616d1c90795539cf5"}
Dec 06 06:59:50 crc kubenswrapper[4895]: I1206 06:59:50.993304 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2"
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.002343 4895 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ggcv2 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" start-of-body=
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.002433 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" podUID="688a9d65-4700-47fb-a150-723f9c21b054" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused"
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.008871 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"eb054c1e2a8513db9ec3b494ee8169bcb6b3d62860190afbca7e3939f40ef5d9"}
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.017143 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt" podStartSLOduration=133.01711618 podStartE2EDuration="2m13.01711618s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:50.968121475 +0000 UTC m=+153.369510365" watchObservedRunningTime="2025-12-06 06:59:51.01711618 +0000 UTC m=+153.418505050"
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.039889 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8tlz8" event={"ID":"27c5c363-c056-4033-8f87-20e0221d9e04","Type":"ContainerStarted","Data":"56d10ce45e5a19366e72ea6eb22d467ec1babb7c8674b967fbea8227d788d05a"}
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.082948 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:51 crc kubenswrapper[4895]: E1206 06:59:51.085065 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:51.585042076 +0000 UTC m=+153.986431036 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.113872 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kfwsb" event={"ID":"95951ed2-c176-4e12-8dc3-58c9a19e9406","Type":"ContainerStarted","Data":"1e69a11dd35e2541e1cb8210f75c6ee49b2a014d195b9da0edc0e73284c56e4a"}
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.143133 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qnrd2" event={"ID":"c64e03bd-fcd5-4b0d-84b9-841278f6560d","Type":"ContainerStarted","Data":"451d15f4142b1608565cd1a111df760b42485cc373c7aa858515d5f7dcc80665"}
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.227996 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:51 crc kubenswrapper[4895]: E1206 06:59:51.228678 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:51.728653038 +0000 UTC m=+154.130041908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.233264 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jt7gr" event={"ID":"6ebeda1e-dfdd-42d2-8359-32902e14273b","Type":"ContainerStarted","Data":"278e0d2edc111bed8db4bc2b9629c8889047ea5958659ee4d884d38ebe5f5136"}
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.237532 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jt7gr" event={"ID":"6ebeda1e-dfdd-42d2-8359-32902e14273b","Type":"ContainerStarted","Data":"20e55c54840047c03c87991149456a7aa5db0e6f0a7eeac85e471e6f9785e859"}
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.235147 4895 patch_prober.go:28] interesting pod/router-default-5444994796-2xrjt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 06 06:59:51 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld
Dec 06 06:59:51 crc kubenswrapper[4895]: [+]process-running ok
Dec 06 06:59:51 crc kubenswrapper[4895]: healthz check failed
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.237914 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2xrjt" podUID="c2248ee7-0953-48b0-bcaf-e95d8560c4b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.252248 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g4fx" event={"ID":"5c27a6f4-148d-4dd5-b746-4cb8dfb1f66e","Type":"ContainerStarted","Data":"d24fbe2276145fc36f75b7a1d0bd75602a41704aa4950f1c2b53454cab9630ca"}
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.252294 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g4fx" event={"ID":"5c27a6f4-148d-4dd5-b746-4cb8dfb1f66e","Type":"ContainerStarted","Data":"24feea3a2002821ba5e0c29adc7e55bce67529d9531484bf89dfec60921d3874"}
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.332199 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:51 crc kubenswrapper[4895]: E1206 06:59:51.332607 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:51.832591901 +0000 UTC m=+154.233980771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
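Every MountVolume.MountDevice and UnmountVolume.TearDown failure above has the same root cause: at this point in startup the kubelet has no registered CSI plugin named kubevirt.io.hostpath-provisioner, because the csi-hostpathplugin-kfwsb pod (whose ContainerStarted event appears just above) has not yet re-registered the driver with the kubelet. One way to confirm what a node has actually registered is to read its CSINode object, which the kubelet keeps in sync with plugin registration. Below is a minimal sketch using client-go; the kubeconfig path and the node name "crc" are assumptions for this environment, not values taken from the log.

```go
// csidrivers.go — a minimal sketch (assumes cluster access via a kubeconfig;
// flag defaults below are placeholders, adjust for your environment).
package main

import (
	"context"
	"flag"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := flag.String("kubeconfig", "/root/.kube/config", "path to kubeconfig (assumed)")
	nodeName := flag.String("node", "crc", "node whose CSINode object to inspect (assumed)")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// The CSINode object mirrors the kubelet's plugin-registration state; a
	// driver missing from Spec.Drivers is exactly what produces the
	// "driver name ... not found in the list of registered CSI drivers" errors above.
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), *nodeName, metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered CSI driver:", d.Name)
	}
}
```

Once the hostpath plugin finishes starting and registers itself, kubevirt.io.hostpath-provisioner would appear in this list and the retries below succeed on their own; no manual intervention is implied by the log.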
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.452880 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:51 crc kubenswrapper[4895]: E1206 06:59:51.453218 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:51.953189445 +0000 UTC m=+154.354578315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.454344 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:51 crc kubenswrapper[4895]: E1206 06:59:51.471146 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:51.971122446 +0000 UTC m=+154.372511316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.552185 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b"
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.555377 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:51 crc kubenswrapper[4895]: E1206 06:59:51.555799 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:52.055780366 +0000 UTC m=+154.457169236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.684717 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:51 crc kubenswrapper[4895]: E1206 06:59:51.685128 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:52.18511635 +0000 UTC m=+154.586505220 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.869943 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" podStartSLOduration=134.869922627 podStartE2EDuration="2m14.869922627s" podCreationTimestamp="2025-12-06 06:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:51.066785875 +0000 UTC m=+153.468174745" watchObservedRunningTime="2025-12-06 06:59:51.869922627 +0000 UTC m=+154.271311497"
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.871358 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.877262 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.879708 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.879865 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.982633 4895 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-994mt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.982723 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt" podUID="7c167c9f-5de9-44d9-8367-56975116a496" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.983192 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.983388 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9579a8c8-23da-4499-b78d-96835a68b578-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9579a8c8-23da-4499-b78d-96835a68b578\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 06 06:59:51 crc kubenswrapper[4895]: I1206 06:59:51.983447 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9579a8c8-23da-4499-b78d-96835a68b578-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9579a8c8-23da-4499-b78d-96835a68b578\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 06 06:59:51 crc kubenswrapper[4895]: E1206 06:59:51.983587 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:52.483565068 +0000 UTC m=+154.884953938 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.013991 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.085235 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9579a8c8-23da-4499-b78d-96835a68b578-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9579a8c8-23da-4499-b78d-96835a68b578\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.085327 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.085378 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9579a8c8-23da-4499-b78d-96835a68b578-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9579a8c8-23da-4499-b78d-96835a68b578\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.085572 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9579a8c8-23da-4499-b78d-96835a68b578-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9579a8c8-23da-4499-b78d-96835a68b578\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 06 06:59:52 crc kubenswrapper[4895]: E1206 06:59:52.085983 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:52.585962773 +0000 UTC m=+154.987351653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.188518 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:52 crc kubenswrapper[4895]: E1206 06:59:52.189203 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:52.689183864 +0000 UTC m=+155.090572734 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.257129 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9579a8c8-23da-4499-b78d-96835a68b578-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9579a8c8-23da-4499-b78d-96835a68b578\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.257584 4895 patch_prober.go:28] interesting pod/router-default-5444994796-2xrjt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 06 06:59:52 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld
Dec 06 06:59:52 crc kubenswrapper[4895]: [+]process-running ok
Dec 06 06:59:52 crc kubenswrapper[4895]: healthz check failed
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.257643 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2xrjt" podUID="c2248ee7-0953-48b0-bcaf-e95d8560c4b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.281523 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.309668 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:52 crc kubenswrapper[4895]: E1206 06:59:52.310317 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:52.810303364 +0000 UTC m=+155.211692234 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.413905 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:52 crc kubenswrapper[4895]: E1206 06:59:52.414375 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:52.91433307 +0000 UTC m=+155.315721940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.463435 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lddxp" event={"ID":"f740e915-a41a-4bfb-a4fa-1b33903fecd6","Type":"ContainerStarted","Data":"bce2d9c18fc42d732011d498f1549896dcc9b1927b463271fe317f5a2b202e50"}
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.466484 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-lddxp"
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.473704 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-lddxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.473790 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lddxp" podUID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused"
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.514367 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-lddxp" podStartSLOduration=134.514339962 podStartE2EDuration="2m14.514339962s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:52.514131796 +0000 UTC m=+154.915520666" watchObservedRunningTime="2025-12-06 06:59:52.514339962 +0000 UTC m=+154.915728832"
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.514542 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bwphg" event={"ID":"389b4065-1992-4508-ae57-601dbfec42b6","Type":"ContainerStarted","Data":"72189ae38af8b4acee6d28f5deab13b0cd562124d4adb81b5dfdb05521883cab"}
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.516531 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:52 crc kubenswrapper[4895]: E1206 06:59:52.517065 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:53.017045895 +0000 UTC m=+155.418434765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.527194 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qv9gv" event={"ID":"b6966575-116d-4708-a4dc-4aa061e9b665","Type":"ContainerStarted","Data":"d612d66052bf8f4254348e43495e5f5029c9c24d8d0ee77d2d7dec8a0e9ca0dd"}
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.626965 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:52 crc kubenswrapper[4895]: E1206 06:59:52.628626 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:53.128606313 +0000 UTC m=+155.529995183 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.727442 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qv9gv" podStartSLOduration=134.727420358 podStartE2EDuration="2m14.727420358s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:52.726395506 +0000 UTC m=+155.127784376" watchObservedRunningTime="2025-12-06 06:59:52.727420358 +0000 UTC m=+155.128809228"
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.729266 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:52 crc kubenswrapper[4895]: E1206 06:59:52.729922 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:53.229895714 +0000 UTC m=+155.631284584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.761074 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qnrd2" event={"ID":"c64e03bd-fcd5-4b0d-84b9-841278f6560d","Type":"ContainerStarted","Data":"64af96c13c11bfe73581cd08376430f52c8fceea3470cd7e65ad10b6419323a0"}
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.762551 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-qnrd2"
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.825050 4895 patch_prober.go:28] interesting pod/console-operator-58897d9998-qnrd2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.825139 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-qnrd2" podUID="c64e03bd-fcd5-4b0d-84b9-841278f6560d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.840963 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:52 crc kubenswrapper[4895]: E1206 06:59:52.842665 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:53.342638317 +0000 UTC m=+155.744027187 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
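The nestedpendingoperations.go:348 lines show how the kubelet's volume manager throttles these retries: after each failure the operation is locked out until a deadline ("No retries permitted until ..."), with the delay ("durationBeforeRetry") starting at 500ms; in this excerpt every attempt is still at that initial value. The sketch below is a simplified illustration of that bookkeeping under an assumed doubling-with-cap backoff; it is not the kubelet's actual implementation, and the 2-minute cap is an assumption for the illustration.

```go
// backoff.go — an illustrative sketch of the retry-deadline bookkeeping
// suggested by the "No retries permitted until <deadline>
// (durationBeforeRetry <delay>)" log lines. Assumptions: 500ms initial
// delay, doubling on repeated failures, capped at 2 minutes.
package main

import (
	"fmt"
	"time"
)

const (
	initialDelay = 500 * time.Millisecond // matches "durationBeforeRetry 500ms" in the log
	maxDelay     = 2 * time.Minute        // assumed cap for the illustration
)

// nextRetry returns the updated backoff delay and the earliest time the
// operation may be attempted again after a failure at failedAt.
func nextRetry(failedAt time.Time, delay time.Duration) (time.Duration, time.Time) {
	if delay == 0 {
		delay = initialDelay // first failure: start at the initial delay
	} else {
		delay *= 2 // back off exponentially on repeated failures...
		if delay > maxDelay {
			delay = maxDelay // ...up to the assumed cap
		}
	}
	return delay, failedAt.Add(delay)
}

func main() {
	// Seed with the timestamp of the first failure in this excerpt.
	t := time.Date(2025, 12, 6, 6, 59, 50, 673195000, time.UTC)
	var delay time.Duration
	for i := 0; i < 4; i++ {
		var deadline time.Time
		delay, deadline = nextRetry(t, delay)
		fmt.Printf("attempt %d failed at %s; no retries permitted until %s (durationBeforeRetry %s)\n",
			i+1, t.Format("15:04:05.000000"), deadline.Format("15:04:05.000000"), delay)
		t = deadline // assume the retry runs at the deadline and fails again
	}
}
```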
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.845342 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" event={"ID":"688a9d65-4700-47fb-a150-723f9c21b054","Type":"ContainerStarted","Data":"b7a78b341c59e943f393b1823b2aecb69f477d7be1f1a4d1dff9a1c8b191f267"} Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.848432 4895 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ggcv2 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" start-of-body= Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.848561 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" podUID="688a9d65-4700-47fb-a150-723f9c21b054" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.857511 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"df984e74e73fd691303c5a2ba8ce94caf68e519903295133f524cc6656ce9c53"} Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.878885 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d9d243c4e0dfadc2b75ffbf84eaabd53703206152ab3cb21df0b6dc5d620f5e2"} Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.881882 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zwn6t"] Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.883224 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8tlz8" event={"ID":"27c5c363-c056-4033-8f87-20e0221d9e04","Type":"ContainerStarted","Data":"78251704471cd4e50eb4fec63523e42f5916fd4191cba12292bb69064e637133"} Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.883358 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zwn6t" Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.883614 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw" event={"ID":"486a83f8-907b-441c-aae7-428a6e22d689","Type":"ContainerStarted","Data":"9b877805a600811bf23cef1bb1ede856fd66cba107ff44d75c5d5a2775b8bea8"} Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.884134 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw" Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.892069 4895 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mvldw container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.892167 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw" podUID="486a83f8-907b-441c-aae7-428a6e22d689" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.922048 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.942320 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6ec729e-b8e3-42ad-84a0-ded336274afd-catalog-content\") pod \"certified-operators-zwn6t\" (UID: \"c6ec729e-b8e3-42ad-84a0-ded336274afd\") " pod="openshift-marketplace/certified-operators-zwn6t" Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.942378 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6ec729e-b8e3-42ad-84a0-ded336274afd-utilities\") pod \"certified-operators-zwn6t\" (UID: \"c6ec729e-b8e3-42ad-84a0-ded336274afd\") " pod="openshift-marketplace/certified-operators-zwn6t" Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.942653 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8552\" (UniqueName: \"kubernetes.io/projected/c6ec729e-b8e3-42ad-84a0-ded336274afd-kube-api-access-n8552\") pod \"certified-operators-zwn6t\" (UID: \"c6ec729e-b8e3-42ad-84a0-ded336274afd\") " pod="openshift-marketplace/certified-operators-zwn6t" Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.942844 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:52 crc kubenswrapper[4895]: E1206 06:59:52.943282 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-06 06:59:53.443266468 +0000 UTC m=+155.844655338 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.948947 4895 generic.go:334] "Generic (PLEG): container finished" podID="9ec3a749-a236-448b-97f1-4e92cd1ade7f" containerID="daff6ae57da647efd2a8da5b29041526d1a738e1c3d19bfd1f8002680e43c18e" exitCode=0 Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.951540 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9khb" event={"ID":"9ec3a749-a236-448b-97f1-4e92cd1ade7f","Type":"ContainerDied","Data":"daff6ae57da647efd2a8da5b29041526d1a738e1c3d19bfd1f8002680e43c18e"} Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.977399 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-qnrd2" podStartSLOduration=135.977369966 podStartE2EDuration="2m15.977369966s" podCreationTimestamp="2025-12-06 06:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:52.922105418 +0000 UTC m=+155.323494288" watchObservedRunningTime="2025-12-06 06:59:52.977369966 +0000 UTC m=+155.378758836" Dec 06 06:59:52 crc kubenswrapper[4895]: I1206 06:59:52.978460 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zwn6t"] Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.045461 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.045799 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8552\" (UniqueName: \"kubernetes.io/projected/c6ec729e-b8e3-42ad-84a0-ded336274afd-kube-api-access-n8552\") pod \"certified-operators-zwn6t\" (UID: \"c6ec729e-b8e3-42ad-84a0-ded336274afd\") " pod="openshift-marketplace/certified-operators-zwn6t" Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.045964 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6ec729e-b8e3-42ad-84a0-ded336274afd-catalog-content\") pod \"certified-operators-zwn6t\" (UID: \"c6ec729e-b8e3-42ad-84a0-ded336274afd\") " pod="openshift-marketplace/certified-operators-zwn6t" Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.046000 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6ec729e-b8e3-42ad-84a0-ded336274afd-utilities\") pod \"certified-operators-zwn6t\" (UID: \"c6ec729e-b8e3-42ad-84a0-ded336274afd\") " pod="openshift-marketplace/certified-operators-zwn6t" Dec 06 06:59:53 crc 
kubenswrapper[4895]: I1206 06:59:53.046419 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6ec729e-b8e3-42ad-84a0-ded336274afd-utilities\") pod \"certified-operators-zwn6t\" (UID: \"c6ec729e-b8e3-42ad-84a0-ded336274afd\") " pod="openshift-marketplace/certified-operators-zwn6t"
Dec 06 06:59:53 crc kubenswrapper[4895]: E1206 06:59:53.046517 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:53.546496669 +0000 UTC m=+155.947885529 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.052905 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hz8kp"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.056195 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6ec729e-b8e3-42ad-84a0-ded336274afd-catalog-content\") pod \"certified-operators-zwn6t\" (UID: \"c6ec729e-b8e3-42ad-84a0-ded336274afd\") " pod="openshift-marketplace/certified-operators-zwn6t"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.087609 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2j9kw"]
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.088964 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2j9kw"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.092225 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2j9kw"]
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.098284 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.153447 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad3d9f3b-cc45-4169-9476-b15937334205-catalog-content\") pod \"community-operators-2j9kw\" (UID: \"ad3d9f3b-cc45-4169-9476-b15937334205\") " pod="openshift-marketplace/community-operators-2j9kw"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.153552 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.153601 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad3d9f3b-cc45-4169-9476-b15937334205-utilities\") pod \"community-operators-2j9kw\" (UID: \"ad3d9f3b-cc45-4169-9476-b15937334205\") " pod="openshift-marketplace/community-operators-2j9kw"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.153700 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs9sf\" (UniqueName: \"kubernetes.io/projected/ad3d9f3b-cc45-4169-9476-b15937334205-kube-api-access-rs9sf\") pod \"community-operators-2j9kw\" (UID: \"ad3d9f3b-cc45-4169-9476-b15937334205\") " pod="openshift-marketplace/community-operators-2j9kw"
Dec 06 06:59:53 crc kubenswrapper[4895]: E1206 06:59:53.154137 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:53.654119025 +0000 UTC m=+156.055507895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.165276 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z8gpr"]
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.168947 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8gpr"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.180538 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8552\" (UniqueName: \"kubernetes.io/projected/c6ec729e-b8e3-42ad-84a0-ded336274afd-kube-api-access-n8552\") pod \"certified-operators-zwn6t\" (UID: \"c6ec729e-b8e3-42ad-84a0-ded336274afd\") " pod="openshift-marketplace/certified-operators-zwn6t"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.186729 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw" podStartSLOduration=135.186697436 podStartE2EDuration="2m15.186697436s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:53.170922282 +0000 UTC m=+155.572311152" watchObservedRunningTime="2025-12-06 06:59:53.186697436 +0000 UTC m=+155.588086306"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.200080 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8gpr"]
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.228984 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-8tlz8" podStartSLOduration=11.228955133 podStartE2EDuration="11.228955133s" podCreationTimestamp="2025-12-06 06:59:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:53.228809969 +0000 UTC m=+155.630198839" watchObservedRunningTime="2025-12-06 06:59:53.228955133 +0000 UTC m=+155.630344003"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.240261 4895 patch_prober.go:28] interesting pod/router-default-5444994796-2xrjt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 06 06:59:53 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld
Dec 06 06:59:53 crc kubenswrapper[4895]: [+]process-running ok
Dec 06 06:59:53 crc kubenswrapper[4895]: healthz check failed
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.240339 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2xrjt" podUID="c2248ee7-0953-48b0-bcaf-e95d8560c4b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.260048 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:53 crc kubenswrapper[4895]: E1206 06:59:53.260290 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:53.760247826 +0000 UTC m=+156.161636696 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
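
Annotation: the recurring "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" failures above mean the kubelet's CSI plugin has no node-local registration for that driver, so every mount and unmount of pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 is refused and requeued. Registrations the kubelet does know about are published on the node's CSINode object. A minimal client-go sketch for listing them (the kubeconfig path and the node name "crc", taken from the log prefix, are assumptions for illustration):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes a kubeconfig at the default location; adjust as needed.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // "crc" is the node name from the journal prefix above.
        csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, d := range csiNode.Spec.Drivers {
            fmt.Println("registered:", d.Name)
        }
    }

If kubevirt.io.hostpath-provisioner is absent from that list, the provisioner's node plugin has not (yet) registered with the kubelet, which is consistent with the retry loop seen throughout this excerpt.
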
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.260688 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwq76\" (UniqueName: \"kubernetes.io/projected/dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8-kube-api-access-xwq76\") pod \"certified-operators-z8gpr\" (UID: \"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8\") " pod="openshift-marketplace/certified-operators-z8gpr"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.260816 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad3d9f3b-cc45-4169-9476-b15937334205-utilities\") pod \"community-operators-2j9kw\" (UID: \"ad3d9f3b-cc45-4169-9476-b15937334205\") " pod="openshift-marketplace/community-operators-2j9kw"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.260937 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8-utilities\") pod \"certified-operators-z8gpr\" (UID: \"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8\") " pod="openshift-marketplace/certified-operators-z8gpr"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.261079 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs9sf\" (UniqueName: \"kubernetes.io/projected/ad3d9f3b-cc45-4169-9476-b15937334205-kube-api-access-rs9sf\") pod \"community-operators-2j9kw\" (UID: \"ad3d9f3b-cc45-4169-9476-b15937334205\") " pod="openshift-marketplace/community-operators-2j9kw"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.261223 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8-catalog-content\") pod \"certified-operators-z8gpr\" (UID: \"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8\") " pod="openshift-marketplace/certified-operators-z8gpr"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.261300 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad3d9f3b-cc45-4169-9476-b15937334205-catalog-content\") pod \"community-operators-2j9kw\" (UID: \"ad3d9f3b-cc45-4169-9476-b15937334205\") " pod="openshift-marketplace/community-operators-2j9kw"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.261378 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.261555 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad3d9f3b-cc45-4169-9476-b15937334205-utilities\") pod \"community-operators-2j9kw\" (UID: \"ad3d9f3b-cc45-4169-9476-b15937334205\") " pod="openshift-marketplace/community-operators-2j9kw"
Dec 06 06:59:53 crc kubenswrapper[4895]: E1206 06:59:53.261801 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:53.761782073 +0000 UTC m=+156.163170943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.262173 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad3d9f3b-cc45-4169-9476-b15937334205-catalog-content\") pod \"community-operators-2j9kw\" (UID: \"ad3d9f3b-cc45-4169-9476-b15937334205\") " pod="openshift-marketplace/community-operators-2j9kw"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.286917 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs9sf\" (UniqueName: \"kubernetes.io/projected/ad3d9f3b-cc45-4169-9476-b15937334205-kube-api-access-rs9sf\") pod \"community-operators-2j9kw\" (UID: \"ad3d9f3b-cc45-4169-9476-b15937334205\") " pod="openshift-marketplace/community-operators-2j9kw"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.341466 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zwn6t"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.349757 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dlp2v"]
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.350950 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlp2v"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.374101 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:53 crc kubenswrapper[4895]: E1206 06:59:53.374669 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:53.874639729 +0000 UTC m=+156.276028599 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.374815 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8-utilities\") pod \"certified-operators-z8gpr\" (UID: \"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8\") " pod="openshift-marketplace/certified-operators-z8gpr"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.374903 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8-catalog-content\") pod \"certified-operators-z8gpr\" (UID: \"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8\") " pod="openshift-marketplace/certified-operators-z8gpr"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.374936 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.374956 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwq76\" (UniqueName: \"kubernetes.io/projected/dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8-kube-api-access-xwq76\") pod \"certified-operators-z8gpr\" (UID: \"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8\") " pod="openshift-marketplace/certified-operators-z8gpr"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.375731 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8-utilities\") pod \"certified-operators-z8gpr\" (UID: \"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8\") " pod="openshift-marketplace/certified-operators-z8gpr"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.375866 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8-catalog-content\") pod \"certified-operators-z8gpr\" (UID: \"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8\") " pod="openshift-marketplace/certified-operators-z8gpr"
Dec 06 06:59:53 crc kubenswrapper[4895]: E1206 06:59:53.376060 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:53.876045032 +0000 UTC m=+156.277433952 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.376270 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dlp2v"]
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.444460 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwq76\" (UniqueName: \"kubernetes.io/projected/dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8-kube-api-access-xwq76\") pod \"certified-operators-z8gpr\" (UID: \"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8\") " pod="openshift-marketplace/certified-operators-z8gpr"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.457283 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2j9kw"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.476607 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.476808 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txvpq\" (UniqueName: \"kubernetes.io/projected/b5924c97-fec5-45b0-a9a2-8f851c88dfbf-kube-api-access-txvpq\") pod \"community-operators-dlp2v\" (UID: \"b5924c97-fec5-45b0-a9a2-8f851c88dfbf\") " pod="openshift-marketplace/community-operators-dlp2v"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.476897 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5924c97-fec5-45b0-a9a2-8f851c88dfbf-utilities\") pod \"community-operators-dlp2v\" (UID: \"b5924c97-fec5-45b0-a9a2-8f851c88dfbf\") " pod="openshift-marketplace/community-operators-dlp2v"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.476924 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5924c97-fec5-45b0-a9a2-8f851c88dfbf-catalog-content\") pod \"community-operators-dlp2v\" (UID: \"b5924c97-fec5-45b0-a9a2-8f851c88dfbf\") " pod="openshift-marketplace/community-operators-dlp2v"
Dec 06 06:59:53 crc kubenswrapper[4895]: E1206 06:59:53.477034 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:53.977014985 +0000 UTC m=+156.378403855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
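
Annotation: each E1206 nestedpendingoperations.go:348 line above ends with "No retries permitted until <t> (durationBeforeRetry 500ms)": failed volume operations are fenced off per volume and only re-attempted once a backoff window has elapsed. A self-contained sketch of that gating logic follows; only the 500ms base appears in this log, so the doubling step and the cap are assumptions modeled on the kubelet's exponential backoff, not values taken from here:

    package main

    import (
        "fmt"
        "time"
    )

    type backoff struct {
        lastError           time.Time
        durationBeforeRetry time.Duration
    }

    const (
        initialDurationBeforeRetry = 500 * time.Millisecond // matches "durationBeforeRetry 500ms" above
        maxDurationBeforeRetry     = 2 * time.Minute        // assumed cap, not shown in this log
    )

    // fail records a failure and widens the retry window.
    func (b *backoff) fail(now time.Time) {
        if b.durationBeforeRetry == 0 {
            b.durationBeforeRetry = initialDurationBeforeRetry
        } else if d := 2 * b.durationBeforeRetry; d <= maxDurationBeforeRetry {
            b.durationBeforeRetry = d
        }
        b.lastError = now
    }

    // allowed reports whether a retry is permitted yet, mirroring the
    // "No retries permitted until <t>" message.
    func (b *backoff) allowed(now time.Time) bool {
        return now.After(b.lastError.Add(b.durationBeforeRetry))
    }

    func main() {
        var b backoff
        now := time.Now()
        b.fail(now)
        fmt.Println(b.allowed(now))                              // false: still inside the window
        fmt.Println(b.allowed(now.Add(501 * time.Millisecond)))  // true: window elapsed
    }
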
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.583104 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5924c97-fec5-45b0-a9a2-8f851c88dfbf-catalog-content\") pod \"community-operators-dlp2v\" (UID: \"b5924c97-fec5-45b0-a9a2-8f851c88dfbf\") " pod="openshift-marketplace/community-operators-dlp2v"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.583142 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5924c97-fec5-45b0-a9a2-8f851c88dfbf-utilities\") pod \"community-operators-dlp2v\" (UID: \"b5924c97-fec5-45b0-a9a2-8f851c88dfbf\") " pod="openshift-marketplace/community-operators-dlp2v"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.583264 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txvpq\" (UniqueName: \"kubernetes.io/projected/b5924c97-fec5-45b0-a9a2-8f851c88dfbf-kube-api-access-txvpq\") pod \"community-operators-dlp2v\" (UID: \"b5924c97-fec5-45b0-a9a2-8f851c88dfbf\") " pod="openshift-marketplace/community-operators-dlp2v"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.583330 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:53 crc kubenswrapper[4895]: E1206 06:59:53.583861 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:54.083841656 +0000 UTC m=+156.485230526 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.583990 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5924c97-fec5-45b0-a9a2-8f851c88dfbf-utilities\") pod \"community-operators-dlp2v\" (UID: \"b5924c97-fec5-45b0-a9a2-8f851c88dfbf\") " pod="openshift-marketplace/community-operators-dlp2v"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.584310 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5924c97-fec5-45b0-a9a2-8f851c88dfbf-catalog-content\") pod \"community-operators-dlp2v\" (UID: \"b5924c97-fec5-45b0-a9a2-8f851c88dfbf\") " pod="openshift-marketplace/community-operators-dlp2v"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.661239 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8gpr"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.696608 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:53 crc kubenswrapper[4895]: E1206 06:59:53.697083 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:54.197060634 +0000 UTC m=+156.598449504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.754490 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-994mt"
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.801202 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:53 crc kubenswrapper[4895]: E1206 06:59:53.804884 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:54.304861076 +0000 UTC m=+156.706249956 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.902814 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:53 crc kubenswrapper[4895]: E1206 06:59:53.903210 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:54.403188836 +0000 UTC m=+156.804577706 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:53 crc kubenswrapper[4895]: I1206 06:59:53.918642 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txvpq\" (UniqueName: \"kubernetes.io/projected/b5924c97-fec5-45b0-a9a2-8f851c88dfbf-kube-api-access-txvpq\") pod \"community-operators-dlp2v\" (UID: \"b5924c97-fec5-45b0-a9a2-8f851c88dfbf\") " pod="openshift-marketplace/community-operators-dlp2v"
Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.004508 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:54 crc kubenswrapper[4895]: E1206 06:59:54.004902 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:54.50488464 +0000 UTC m=+156.906273510 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.034515 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlp2v"
Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.106603 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:54 crc kubenswrapper[4895]: E1206 06:59:54.107430 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:54.607400019 +0000 UTC m=+157.008788889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.212787 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:54 crc kubenswrapper[4895]: E1206 06:59:54.213618 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:54.713599392 +0000 UTC m=+157.114988262 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.284282 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-2xrjt"
Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.284927 4895 patch_prober.go:28] interesting pod/router-default-5444994796-2xrjt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 06 06:59:54 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld
Dec 06 06:59:54 crc kubenswrapper[4895]: [+]process-running ok
Dec 06 06:59:54 crc kubenswrapper[4895]: healthz check failed
Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.285020 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2xrjt" podUID="c2248ee7-0953-48b0-bcaf-e95d8560c4b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.311339 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-frjjl" podStartSLOduration=136.311318063 podStartE2EDuration="2m16.311318063s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:54.310278831 +0000 UTC m=+156.711667711" watchObservedRunningTime="2025-12-06 06:59:54.311318063 +0000 UTC m=+156.712706943"
Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.317607 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
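
Annotation: the reconciler_common.go messages trace a fixed per-volume progression: VerifyControllerAttachedVolume, then MountVolume (a device-level MountDevice for attachable/CSI volumes, then a per-pod SetUp). The empty-dir and projected volumes above sail through SetUp, while the CSI-backed PVC never gets past the device step. A toy model of that progression (state names and the transition rule are illustrative, not kubelet source; for empty-dir the device step is effectively a no-op):

    package main

    import "fmt"

    type volumeState int

    const (
        verifyingAttach volumeState = iota // "VerifyControllerAttachedVolume started"
        mountingDevice                     // MountVolume.MountDevice (CSI/attachable volumes)
        settingUp                          // MountVolume.SetUp (per-pod mount)
        mounted                            // "MountVolume.SetUp succeeded"
    )

    // advance moves one step on success and stays put on failure,
    // leaving retry timing to the backoff sketched earlier.
    func advance(s volumeState, ok bool) volumeState {
        if !ok || s == mounted {
            return s
        }
        return s + 1
    }

    func main() {
        // empty-dir "utilities" volume: every step succeeds.
        s := verifyingAttach
        for _, ok := range []bool{true, true, true} {
            s = advance(s, ok)
        }
        fmt.Println("empty-dir mounted:", s == mounted) // true

        // CSI PVC with an unregistered driver: stuck at the device step.
        c := advance(verifyingAttach, true) // attach verified
        c = advance(c, false)               // newCsiDriverClient fails
        fmt.Println("csi stuck at MountDevice:", c == mountingDevice) // true
    }
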
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:54 crc kubenswrapper[4895]: E1206 06:59:54.317969 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:54.817950597 +0000 UTC m=+157.219339457 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.420457 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:54 crc kubenswrapper[4895]: E1206 06:59:54.423943 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:54.923916672 +0000 UTC m=+157.325305572 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.521705 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:54 crc kubenswrapper[4895]: E1206 06:59:54.522185 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:55.022157869 +0000 UTC m=+157.423546739 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.623445 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:54 crc kubenswrapper[4895]: E1206 06:59:54.624021 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:55.124004639 +0000 UTC m=+157.525393509 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.725426 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:54 crc kubenswrapper[4895]: E1206 06:59:54.726038 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:55.226012212 +0000 UTC m=+157.627401082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.827819 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:54 crc kubenswrapper[4895]: E1206 06:59:54.828313 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:55.328298484 +0000 UTC m=+157.729687354 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.910831 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g4fx" event={"ID":"5c27a6f4-148d-4dd5-b746-4cb8dfb1f66e","Type":"ContainerStarted","Data":"f788154fe5eb42bab7bc2b77b087ed8606e9ed9dd08f98dba02de1458f9b81e2"} Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.911952 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g4fx" Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.913493 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bwphg" event={"ID":"389b4065-1992-4508-ae57-601dbfec42b6","Type":"ContainerStarted","Data":"f211a5594341661ffe6500888f86ca150ee25eac98f11ce04a43adab841900ed"} Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.914886 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sqdms" event={"ID":"6463f080-fa44-4fd8-b559-11f5056ffd0a","Type":"ContainerStarted","Data":"1c2107f247fcfe4cf7b30d5b5f18399ec4aa4b431c0900fd8813cdedbcd7c305"} Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.915739 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq" event={"ID":"dfdea3f4-194e-49e8-88f2-4170d677cee9","Type":"ContainerStarted","Data":"6f86f61f2bd4a661bcc272f0c6b3334b65a916afa76a47e573a63dd2f77841a6"} Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.917295 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" 
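
Annotation: the "SyncLoop (PLEG): event for pod" lines serialize a pod lifecycle event with three fields: ID (the pod UID), Type, and Data (here a started container or sandbox ID). A struct of that shape, inferred from the printed fields rather than copied from the kubelet's pleg package, reproduces the event= format:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Shape inferred from the log output above; treat as an approximation.
    type PodLifecycleEvent struct {
        ID   string `json:"ID"`   // pod UID
        Type string `json:"Type"` // e.g. "ContainerStarted"
        Data string `json:"Data"` // container or sandbox ID for start events
    }

    func main() {
        ev := PodLifecycleEvent{
            ID:   "5c27a6f4-148d-4dd5-b746-4cb8dfb1f66e", // package-server-manager pod above
            Type: "ContainerStarted",
            Data: "f788154fe5eb42bab7bc2b77b087ed8606e9ed9dd08f98dba02de1458f9b81e2",
        }
        b, err := json.Marshal(ev)
        if err != nil {
            panic(err)
        }
        fmt.Println("event=" + string(b))
    }
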
event={"ID":"210b89a8-666c-4aab-a64d-e37987eed3f0","Type":"ContainerStarted","Data":"884158ab5b5746fcb0f1e81e78c11a0cb5559b202156e6bceb412111dff56ae1"} Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.918613 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jt7gr" event={"ID":"6ebeda1e-dfdd-42d2-8359-32902e14273b","Type":"ContainerStarted","Data":"83ac68784b1e4bd763d1c697c58688a60b781d2e3f82a95d3c7d9e2252e1ebc2"} Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.920285 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9khb" event={"ID":"9ec3a749-a236-448b-97f1-4e92cd1ade7f","Type":"ContainerStarted","Data":"c3ace1352f48494e73dfb175da0d847cea1088164b7e6396df93f6f48fd4df00"} Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.920665 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9khb" Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.923741 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4pfdd" event={"ID":"89296d2e-cc06-49b2-98d3-61fbf8ac3a77","Type":"ContainerStarted","Data":"988af2f8c1ed708e789b774884e25d189d3e841d081c87e8e3a9d01e82318959"} Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.928311 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:54 crc kubenswrapper[4895]: E1206 06:59:54.928723 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:55.428706048 +0000 UTC m=+157.830094918 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.934555 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.934614 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-52v65" Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.934631 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-52v65" Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.936483 4895 patch_prober.go:28] interesting pod/console-operator-58897d9998-qnrd2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.936569 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-qnrd2" podUID="c64e03bd-fcd5-4b0d-84b9-841278f6560d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.937636 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.938955 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-lddxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.939014 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lddxp" podUID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.941111 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw" Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.956884 4895 patch_prober.go:28] interesting pod/console-f9d7485db-52v65 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Dec 06 06:59:54 crc kubenswrapper[4895]: I1206 06:59:54.956946 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-52v65" podUID="3864596e-56f8-46a1-95e6-3558c161cd02" containerName="console" probeResult="failure" output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" Dec 06 06:59:55 
crc kubenswrapper[4895]: I1206 06:59:55.015952 4895 patch_prober.go:28] interesting pod/apiserver-76f77b778f-gjt8r container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 06 06:59:55 crc kubenswrapper[4895]: [+]log ok Dec 06 06:59:55 crc kubenswrapper[4895]: [+]etcd ok Dec 06 06:59:55 crc kubenswrapper[4895]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 06 06:59:55 crc kubenswrapper[4895]: [+]poststarthook/generic-apiserver-start-informers ok Dec 06 06:59:55 crc kubenswrapper[4895]: [+]poststarthook/max-in-flight-filter ok Dec 06 06:59:55 crc kubenswrapper[4895]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 06 06:59:55 crc kubenswrapper[4895]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 06 06:59:55 crc kubenswrapper[4895]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 06 06:59:55 crc kubenswrapper[4895]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 06 06:59:55 crc kubenswrapper[4895]: [+]poststarthook/project.openshift.io-projectcache ok Dec 06 06:59:55 crc kubenswrapper[4895]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 06 06:59:55 crc kubenswrapper[4895]: [+]poststarthook/openshift.io-startinformers ok Dec 06 06:59:55 crc kubenswrapper[4895]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 06 06:59:55 crc kubenswrapper[4895]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 06 06:59:55 crc kubenswrapper[4895]: livez check failed Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.016029 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" podUID="ee4a193d-89fb-4c16-9aed-3c5868b417c3" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.032622 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:55 crc kubenswrapper[4895]: E1206 06:59:55.040070 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:55.540052669 +0000 UTC m=+157.941441539 (durationBeforeRetry 500ms). 
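
Annotation: three probe outcomes are interleaved above: HTTP 500 with a per-check breakdown (the router and openshift-apiserver health endpoints), plain "connection refused" while a server is still coming up (console, downloads, oauth-apiserver), and eventual "ready". The kubelet counts any 2xx/3xx response as success and everything else, including transport errors, as failure. A minimal probe in that style (the URL and timeout are illustrative, not the kubelet's configuration):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probe mimics the success rule applied to HTTP probes: 2xx/3xx passes,
    // everything else (including "connect: connection refused") fails.
    func probe(url string) (bool, string) {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return false, err.Error()
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            return true, resp.Status
        }
        return false, fmt.Sprintf("HTTP probe failed with statuscode: %d", resp.StatusCode)
    }

    func main() {
        // Readiness URL of the downloads pod, taken from the log above.
        ok, detail := probe("http://10.217.0.19:8080/")
        fmt.Println(ok, detail)
    }
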
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.078072 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9khb" podStartSLOduration=138.078051806 podStartE2EDuration="2m18.078051806s" podCreationTimestamp="2025-12-06 06:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:55.076076916 +0000 UTC m=+157.477465786" watchObservedRunningTime="2025-12-06 06:59:55.078051806 +0000 UTC m=+157.479440676"
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.079819 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g4fx" podStartSLOduration=137.07980919 podStartE2EDuration="2m17.07980919s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:55.015623598 +0000 UTC m=+157.417012468" watchObservedRunningTime="2025-12-06 06:59:55.07980919 +0000 UTC m=+157.481198060"
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.111652 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-lddxp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.111752 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lddxp" podUID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused"
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.112317 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-lddxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.112343 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lddxp" podUID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused"
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.122700 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r"
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.122750 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r"
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.122824 4895 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-r7x6r container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.29:8443/livez\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.122860 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" podUID="210b89a8-666c-4aab-a64d-e37987eed3f0" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.29:8443/livez\": dial tcp 10.217.0.29:8443: connect: connection refused"
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.145596 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:59:55 crc kubenswrapper[4895]: E1206 06:59:55.148827 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:55.648789529 +0000 UTC m=+158.050178399 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.150650 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" podStartSLOduration=137.150613415 podStartE2EDuration="2m17.150613415s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:55.120424937 +0000 UTC m=+157.521813807" watchObservedRunningTime="2025-12-06 06:59:55.150613415 +0000 UTC m=+157.552002285"
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.190594 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xmdtv"]
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.192537 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xmdtv"
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.195104 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.215217 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.219295 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmdtv"]
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.232873 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jt7gr" podStartSLOduration=137.232835131 podStartE2EDuration="2m17.232835131s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:55.216606262 +0000 UTC m=+157.617995132" watchObservedRunningTime="2025-12-06 06:59:55.232835131 +0000 UTC m=+157.634224001"
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.251102 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c57d6deb-02dd-473c-b644-7ef7a1f8e500-catalog-content\") pod \"redhat-marketplace-xmdtv\" (UID: \"c57d6deb-02dd-473c-b644-7ef7a1f8e500\") " pod="openshift-marketplace/redhat-marketplace-xmdtv"
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.251171 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc"
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.251203 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zcb8\" (UniqueName: \"kubernetes.io/projected/c57d6deb-02dd-473c-b644-7ef7a1f8e500-kube-api-access-4zcb8\") pod \"redhat-marketplace-xmdtv\" (UID: \"c57d6deb-02dd-473c-b644-7ef7a1f8e500\") " pod="openshift-marketplace/redhat-marketplace-xmdtv"
Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.251257 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c57d6deb-02dd-473c-b644-7ef7a1f8e500-utilities\") pod \"redhat-marketplace-xmdtv\" (UID: \"c57d6deb-02dd-473c-b644-7ef7a1f8e500\") " pod="openshift-marketplace/redhat-marketplace-xmdtv"
Dec 06 06:59:55 crc kubenswrapper[4895]: E1206 06:59:55.251687 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:55.751667819 +0000 UTC m=+158.153056689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
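
Annotation: in the pod_startup_latency_tracker.go:104 entries, podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and since the image-pull timestamps here are all zero, podStartSLOduration (in seconds) comes out equal to it. Checking the arithmetic against the package-server-manager entry:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the package-server-manager entry above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2025-12-06 06:57:38 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2025-12-06 06:59:55.07980919 +0000 UTC")
        if err != nil {
            panic(err)
        }
        d := observed.Sub(created)
        fmt.Println(d)           // 2m17.07980919s == podStartE2EDuration
        fmt.Println(d.Seconds()) // 137.07980919  == podStartSLOduration
    }
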
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.287867 4895 patch_prober.go:28] interesting pod/router-default-5444994796-2xrjt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:59:55 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Dec 06 06:59:55 crc kubenswrapper[4895]: [+]process-running ok Dec 06 06:59:55 crc kubenswrapper[4895]: healthz check failed Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.287955 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2xrjt" podUID="c2248ee7-0953-48b0-bcaf-e95d8560c4b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.352397 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.353445 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.353859 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c57d6deb-02dd-473c-b644-7ef7a1f8e500-utilities\") pod \"redhat-marketplace-xmdtv\" (UID: \"c57d6deb-02dd-473c-b644-7ef7a1f8e500\") " pod="openshift-marketplace/redhat-marketplace-xmdtv" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.353944 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c57d6deb-02dd-473c-b644-7ef7a1f8e500-catalog-content\") pod \"redhat-marketplace-xmdtv\" (UID: \"c57d6deb-02dd-473c-b644-7ef7a1f8e500\") " pod="openshift-marketplace/redhat-marketplace-xmdtv" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.353967 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zcb8\" (UniqueName: \"kubernetes.io/projected/c57d6deb-02dd-473c-b644-7ef7a1f8e500-kube-api-access-4zcb8\") pod \"redhat-marketplace-xmdtv\" (UID: \"c57d6deb-02dd-473c-b644-7ef7a1f8e500\") " pod="openshift-marketplace/redhat-marketplace-xmdtv" Dec 06 06:59:55 crc kubenswrapper[4895]: E1206 06:59:55.354325 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:55.854308302 +0000 UTC m=+158.255697172 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.408023 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c57d6deb-02dd-473c-b644-7ef7a1f8e500-utilities\") pod \"redhat-marketplace-xmdtv\" (UID: \"c57d6deb-02dd-473c-b644-7ef7a1f8e500\") " pod="openshift-marketplace/redhat-marketplace-xmdtv" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.408309 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c57d6deb-02dd-473c-b644-7ef7a1f8e500-catalog-content\") pod \"redhat-marketplace-xmdtv\" (UID: \"c57d6deb-02dd-473c-b644-7ef7a1f8e500\") " pod="openshift-marketplace/redhat-marketplace-xmdtv" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.455995 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:55 crc kubenswrapper[4895]: E1206 06:59:55.456589 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:55.956571783 +0000 UTC m=+158.357960653 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.466351 4895 patch_prober.go:28] interesting pod/console-operator-58897d9998-qnrd2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.466443 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-qnrd2" podUID="c64e03bd-fcd5-4b0d-84b9-841278f6560d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.466587 4895 patch_prober.go:28] interesting pod/console-operator-58897d9998-qnrd2 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.466613 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-qnrd2" podUID="c64e03bd-fcd5-4b0d-84b9-841278f6560d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.470230 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zcb8\" (UniqueName: \"kubernetes.io/projected/c57d6deb-02dd-473c-b644-7ef7a1f8e500-kube-api-access-4zcb8\") pod \"redhat-marketplace-xmdtv\" (UID: \"c57d6deb-02dd-473c-b644-7ef7a1f8e500\") " pod="openshift-marketplace/redhat-marketplace-xmdtv" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.510060 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-bwphg" podStartSLOduration=137.510040027 podStartE2EDuration="2m17.510040027s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:55.507847019 +0000 UTC m=+157.909235879" watchObservedRunningTime="2025-12-06 06:59:55.510040027 +0000 UTC m=+157.911428897" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.510465 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-4pfdd" podStartSLOduration=137.510459529 podStartE2EDuration="2m17.510459529s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:55.31354168 +0000 UTC m=+157.714930550" watchObservedRunningTime="2025-12-06 06:59:55.510459529 +0000 UTC m=+157.911848389" Dec 06 06:59:55 crc 
kubenswrapper[4895]: I1206 06:59:55.565560 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:55 crc kubenswrapper[4895]: E1206 06:59:55.566091 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:56.066064498 +0000 UTC m=+158.467453368 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.579970 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xmdtv" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.673037 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:55 crc kubenswrapper[4895]: E1206 06:59:55.673984 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:56.173965582 +0000 UTC m=+158.575354452 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.722311 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-stqzm"] Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.756345 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-stqzm" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.760740 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-g8svq" podStartSLOduration=137.760706087 podStartE2EDuration="2m17.760706087s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:55.755733884 +0000 UTC m=+158.157122754" watchObservedRunningTime="2025-12-06 06:59:55.760706087 +0000 UTC m=+158.162094967" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.774771 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.781879 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8gg4\" (UniqueName: \"kubernetes.io/projected/e559f522-c700-4e90-82c5-6f2643185c9a-kube-api-access-k8gg4\") pod \"redhat-marketplace-stqzm\" (UID: \"e559f522-c700-4e90-82c5-6f2643185c9a\") " pod="openshift-marketplace/redhat-marketplace-stqzm" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.781978 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e559f522-c700-4e90-82c5-6f2643185c9a-catalog-content\") pod \"redhat-marketplace-stqzm\" (UID: \"e559f522-c700-4e90-82c5-6f2643185c9a\") " pod="openshift-marketplace/redhat-marketplace-stqzm" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.782006 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e559f522-c700-4e90-82c5-6f2643185c9a-utilities\") pod \"redhat-marketplace-stqzm\" (UID: \"e559f522-c700-4e90-82c5-6f2643185c9a\") " pod="openshift-marketplace/redhat-marketplace-stqzm" Dec 06 06:59:55 crc kubenswrapper[4895]: E1206 06:59:55.782361 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:56.282340401 +0000 UTC m=+158.683729271 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.824380 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-stqzm"] Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.889557 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8gg4\" (UniqueName: \"kubernetes.io/projected/e559f522-c700-4e90-82c5-6f2643185c9a-kube-api-access-k8gg4\") pod \"redhat-marketplace-stqzm\" (UID: \"e559f522-c700-4e90-82c5-6f2643185c9a\") " pod="openshift-marketplace/redhat-marketplace-stqzm" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.889629 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e559f522-c700-4e90-82c5-6f2643185c9a-catalog-content\") pod \"redhat-marketplace-stqzm\" (UID: \"e559f522-c700-4e90-82c5-6f2643185c9a\") " pod="openshift-marketplace/redhat-marketplace-stqzm" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.889694 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e559f522-c700-4e90-82c5-6f2643185c9a-utilities\") pod \"redhat-marketplace-stqzm\" (UID: \"e559f522-c700-4e90-82c5-6f2643185c9a\") " pod="openshift-marketplace/redhat-marketplace-stqzm" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.889756 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:55 crc kubenswrapper[4895]: E1206 06:59:55.890163 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:56.390147813 +0000 UTC m=+158.791536683 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.892125 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e559f522-c700-4e90-82c5-6f2643185c9a-catalog-content\") pod \"redhat-marketplace-stqzm\" (UID: \"e559f522-c700-4e90-82c5-6f2643185c9a\") " pod="openshift-marketplace/redhat-marketplace-stqzm" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.892360 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e559f522-c700-4e90-82c5-6f2643185c9a-utilities\") pod \"redhat-marketplace-stqzm\" (UID: \"e559f522-c700-4e90-82c5-6f2643185c9a\") " pod="openshift-marketplace/redhat-marketplace-stqzm" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.970748 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8gg4\" (UniqueName: \"kubernetes.io/projected/e559f522-c700-4e90-82c5-6f2643185c9a-kube-api-access-k8gg4\") pod \"redhat-marketplace-stqzm\" (UID: \"e559f522-c700-4e90-82c5-6f2643185c9a\") " pod="openshift-marketplace/redhat-marketplace-stqzm" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.972997 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9ddnd"] Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.979365 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9ddnd" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.989294 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 06 06:59:55 crc kubenswrapper[4895]: I1206 06:59:55.990822 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:55 crc kubenswrapper[4895]: E1206 06:59:55.991427 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:56.491406093 +0000 UTC m=+158.892794963 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.018497 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-frjjl" event={"ID":"7a2dec9b-f30f-4d6a-a6ae-7a82bd9452dc","Type":"ContainerStarted","Data":"b853e3c76a7d466fed90df3720f80de16bffc64aa4eee2f5f531e2a849afbecf"} Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.022952 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9ddnd"] Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.092926 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-stqzm" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.094033 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.094227 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7424c4ea-3bfe-4af7-afc3-e2ed98d36e75-utilities\") pod \"redhat-operators-9ddnd\" (UID: \"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75\") " pod="openshift-marketplace/redhat-operators-9ddnd" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.094319 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7424c4ea-3bfe-4af7-afc3-e2ed98d36e75-catalog-content\") pod \"redhat-operators-9ddnd\" (UID: \"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75\") " pod="openshift-marketplace/redhat-operators-9ddnd" Dec 06 06:59:56 crc kubenswrapper[4895]: E1206 06:59:56.094528 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:56.59451021 +0000 UTC m=+158.995899080 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.094677 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f55kb\" (UniqueName: \"kubernetes.io/projected/7424c4ea-3bfe-4af7-afc3-e2ed98d36e75-kube-api-access-f55kb\") pod \"redhat-operators-9ddnd\" (UID: \"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75\") " pod="openshift-marketplace/redhat-operators-9ddnd" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.099404 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sqdms" event={"ID":"6463f080-fa44-4fd8-b559-11f5056ffd0a","Type":"ContainerStarted","Data":"d8bc4be61111a4a2455c751ec33c04669c703ed3d21aed5c76d91490330e27e2"} Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.099528 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-sqdms" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.108938 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kfwsb" event={"ID":"95951ed2-c176-4e12-8dc3-58c9a19e9406","Type":"ContainerStarted","Data":"f47b5c8dae8e68995c2f62694b668ca07596c1c1e08e3f9e53a58db3d039b070"} Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.138575 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-sqdms" podStartSLOduration=15.138541252 podStartE2EDuration="15.138541252s" podCreationTimestamp="2025-12-06 06:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:56.130043641 +0000 UTC m=+158.531432511" watchObservedRunningTime="2025-12-06 06:59:56.138541252 +0000 UTC m=+158.539930122" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.143492 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9579a8c8-23da-4499-b78d-96835a68b578","Type":"ContainerStarted","Data":"a059a89ffd8a0510832674c7ce0b24857958fc59a41d38a69330e241a862bd40"} Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.144434 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-lddxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.144521 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lddxp" podUID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.174436 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2j9kw"] Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.186779 4895 patch_prober.go:28] interesting 
pod/openshift-config-operator-7777fb866f-b9khb container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.186845 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9khb" podUID="9ec3a749-a236-448b-97f1-4e92cd1ade7f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.218980 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.219338 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f55kb\" (UniqueName: \"kubernetes.io/projected/7424c4ea-3bfe-4af7-afc3-e2ed98d36e75-kube-api-access-f55kb\") pod \"redhat-operators-9ddnd\" (UID: \"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75\") " pod="openshift-marketplace/redhat-operators-9ddnd" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.219873 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7424c4ea-3bfe-4af7-afc3-e2ed98d36e75-utilities\") pod \"redhat-operators-9ddnd\" (UID: \"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75\") " pod="openshift-marketplace/redhat-operators-9ddnd" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.220007 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7424c4ea-3bfe-4af7-afc3-e2ed98d36e75-catalog-content\") pod \"redhat-operators-9ddnd\" (UID: \"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75\") " pod="openshift-marketplace/redhat-operators-9ddnd" Dec 06 06:59:56 crc kubenswrapper[4895]: E1206 06:59:56.221512 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:56.72145918 +0000 UTC m=+159.122848050 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.236953 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7424c4ea-3bfe-4af7-afc3-e2ed98d36e75-utilities\") pod \"redhat-operators-9ddnd\" (UID: \"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75\") " pod="openshift-marketplace/redhat-operators-9ddnd" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.242713 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7424c4ea-3bfe-4af7-afc3-e2ed98d36e75-catalog-content\") pod \"redhat-operators-9ddnd\" (UID: \"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75\") " pod="openshift-marketplace/redhat-operators-9ddnd" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.276109 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f55kb\" (UniqueName: \"kubernetes.io/projected/7424c4ea-3bfe-4af7-afc3-e2ed98d36e75-kube-api-access-f55kb\") pod \"redhat-operators-9ddnd\" (UID: \"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75\") " pod="openshift-marketplace/redhat-operators-9ddnd" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.283298 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6wtxh"] Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.284805 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6wtxh" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.300418 4895 patch_prober.go:28] interesting pod/router-default-5444994796-2xrjt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:59:56 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Dec 06 06:59:56 crc kubenswrapper[4895]: [+]process-running ok Dec 06 06:59:56 crc kubenswrapper[4895]: healthz check failed Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.300799 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2xrjt" podUID="c2248ee7-0953-48b0-bcaf-e95d8560c4b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.319045 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6wtxh"] Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.368104 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e74e8b-145a-45bc-a163-4fbd502bc155-catalog-content\") pod \"redhat-operators-6wtxh\" (UID: \"34e74e8b-145a-45bc-a163-4fbd502bc155\") " pod="openshift-marketplace/redhat-operators-6wtxh" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.368460 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e74e8b-145a-45bc-a163-4fbd502bc155-utilities\") pod \"redhat-operators-6wtxh\" (UID: \"34e74e8b-145a-45bc-a163-4fbd502bc155\") " pod="openshift-marketplace/redhat-operators-6wtxh" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.368610 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.368748 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plq9m\" (UniqueName: \"kubernetes.io/projected/34e74e8b-145a-45bc-a163-4fbd502bc155-kube-api-access-plq9m\") pod \"redhat-operators-6wtxh\" (UID: \"34e74e8b-145a-45bc-a163-4fbd502bc155\") " pod="openshift-marketplace/redhat-operators-6wtxh" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.374836 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8gpr"] Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.378950 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9ddnd" Dec 06 06:59:56 crc kubenswrapper[4895]: E1206 06:59:56.386996 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:56.886966314 +0000 UTC m=+159.288355194 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.446557 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dlp2v"] Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.463034 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zwn6t"] Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.473512 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.473764 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e74e8b-145a-45bc-a163-4fbd502bc155-catalog-content\") pod \"redhat-operators-6wtxh\" (UID: \"34e74e8b-145a-45bc-a163-4fbd502bc155\") " pod="openshift-marketplace/redhat-operators-6wtxh" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.473792 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e74e8b-145a-45bc-a163-4fbd502bc155-utilities\") pod \"redhat-operators-6wtxh\" (UID: \"34e74e8b-145a-45bc-a163-4fbd502bc155\") " pod="openshift-marketplace/redhat-operators-6wtxh" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.473826 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plq9m\" (UniqueName: \"kubernetes.io/projected/34e74e8b-145a-45bc-a163-4fbd502bc155-kube-api-access-plq9m\") pod \"redhat-operators-6wtxh\" (UID: \"34e74e8b-145a-45bc-a163-4fbd502bc155\") " pod="openshift-marketplace/redhat-operators-6wtxh" Dec 06 06:59:56 crc kubenswrapper[4895]: E1206 06:59:56.474460 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:56.97440685 +0000 UTC m=+159.375795720 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.478078 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e74e8b-145a-45bc-a163-4fbd502bc155-catalog-content\") pod \"redhat-operators-6wtxh\" (UID: \"34e74e8b-145a-45bc-a163-4fbd502bc155\") " pod="openshift-marketplace/redhat-operators-6wtxh" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.478409 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e74e8b-145a-45bc-a163-4fbd502bc155-utilities\") pod \"redhat-operators-6wtxh\" (UID: \"34e74e8b-145a-45bc-a163-4fbd502bc155\") " pod="openshift-marketplace/redhat-operators-6wtxh" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.518703 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plq9m\" (UniqueName: \"kubernetes.io/projected/34e74e8b-145a-45bc-a163-4fbd502bc155-kube-api-access-plq9m\") pod \"redhat-operators-6wtxh\" (UID: \"34e74e8b-145a-45bc-a163-4fbd502bc155\") " pod="openshift-marketplace/redhat-operators-6wtxh" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.575744 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:56 crc kubenswrapper[4895]: E1206 06:59:56.576274 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:57.076260059 +0000 UTC m=+159.477648919 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.668061 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmdtv"] Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.678174 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:56 crc kubenswrapper[4895]: E1206 06:59:56.678457 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:57.178425637 +0000 UTC m=+159.579814507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.678547 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:56 crc kubenswrapper[4895]: E1206 06:59:56.678953 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:57.178937323 +0000 UTC m=+159.580326193 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.684746 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6wtxh" Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.779943 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:56 crc kubenswrapper[4895]: E1206 06:59:56.780199 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:57.280171993 +0000 UTC m=+159.681560853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.780718 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:56 crc kubenswrapper[4895]: E1206 06:59:56.781274 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:57.281258946 +0000 UTC m=+159.682647816 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.894062 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:56 crc kubenswrapper[4895]: E1206 06:59:56.894500 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:57.394461133 +0000 UTC m=+159.795850003 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.947871 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-stqzm"] Dec 06 06:59:56 crc kubenswrapper[4895]: W1206 06:59:56.958661 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode559f522_c700_4e90_82c5_6f2643185c9a.slice/crio-8acdcffe42af80ddd71e4c8c8906a3acb46ac151ef09294613ed16cd17258f7a WatchSource:0}: Error finding container 8acdcffe42af80ddd71e4c8c8906a3acb46ac151ef09294613ed16cd17258f7a: Status 404 returned error can't find the container with id 8acdcffe42af80ddd71e4c8c8906a3acb46ac151ef09294613ed16cd17258f7a Dec 06 06:59:56 crc kubenswrapper[4895]: I1206 06:59:56.997644 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:57 crc kubenswrapper[4895]: E1206 06:59:57.001007 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:57.500978775 +0000 UTC m=+159.902367645 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.100559 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:57 crc kubenswrapper[4895]: E1206 06:59:57.100992 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:57.600956627 +0000 UTC m=+160.002345507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.147902 4895 patch_prober.go:28] interesting pod/console-operator-58897d9998-qnrd2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.147954 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-qnrd2" podUID="c64e03bd-fcd5-4b0d-84b9-841278f6560d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.163523 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8gpr" event={"ID":"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8","Type":"ContainerStarted","Data":"14584348c65f947e99b66213b590032aec5cfbd4ea72d934d83d5aa894a60aab"} Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.164866 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stqzm" event={"ID":"e559f522-c700-4e90-82c5-6f2643185c9a","Type":"ContainerStarted","Data":"8acdcffe42af80ddd71e4c8c8906a3acb46ac151ef09294613ed16cd17258f7a"} Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.166513 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9ddnd"] Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.168109 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j9kw" event={"ID":"ad3d9f3b-cc45-4169-9476-b15937334205","Type":"ContainerStarted","Data":"29a7d5c58dcbe5f4b597ed541283960df2bedb16d565eeadfb02c697153cf36f"} Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.173622 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zwn6t" event={"ID":"c6ec729e-b8e3-42ad-84a0-ded336274afd","Type":"ContainerStarted","Data":"78e944e8e944db6e735cb6a69ae7cd5a3f89004c1bf7e0264841c11ee4a0c105"} Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.175563 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9579a8c8-23da-4499-b78d-96835a68b578","Type":"ContainerStarted","Data":"6d0a7b68ecab4f672dcf43a8deaa140ea3ce84abcf582aceddd6cb06344553f3"} Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.176603 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmdtv" event={"ID":"c57d6deb-02dd-473c-b644-7ef7a1f8e500","Type":"ContainerStarted","Data":"a4d0e2bdef3aea6817f817b37e44e66ff8ca88edb97fa38bd2e6669663544143"} Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.177766 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlp2v" 
event={"ID":"b5924c97-fec5-45b0-a9a2-8f851c88dfbf","Type":"ContainerStarted","Data":"52a99497bb45f0a6124269222f006fa569e8e2d7aff3421baca4d1e950de1c43"} Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.202765 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:57 crc kubenswrapper[4895]: E1206 06:59:57.203221 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:57.703204088 +0000 UTC m=+160.104592958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.214768 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=6.214741552 podStartE2EDuration="6.214741552s" podCreationTimestamp="2025-12-06 06:59:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:59:57.208907283 +0000 UTC m=+159.610296153" watchObservedRunningTime="2025-12-06 06:59:57.214741552 +0000 UTC m=+159.616130422" Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.218380 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6wtxh"] Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.233219 4895 patch_prober.go:28] interesting pod/router-default-5444994796-2xrjt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:59:57 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Dec 06 06:59:57 crc kubenswrapper[4895]: [+]process-running ok Dec 06 06:59:57 crc kubenswrapper[4895]: healthz check failed Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.233829 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2xrjt" podUID="c2248ee7-0953-48b0-bcaf-e95d8560c4b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.303806 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:57 crc kubenswrapper[4895]: E1206 06:59:57.303939 4895 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:57.803917272 +0000 UTC m=+160.205306142 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.305705 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:57 crc kubenswrapper[4895]: E1206 06:59:57.306366 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:57.806335035 +0000 UTC m=+160.207723975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:57 crc kubenswrapper[4895]: W1206 06:59:57.332443 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34e74e8b_145a_45bc_a163_4fbd502bc155.slice/crio-bec812505272707747c69bc654415427b1c96b5bdc3bc24be4d8ee8689cf8768 WatchSource:0}: Error finding container bec812505272707747c69bc654415427b1c96b5bdc3bc24be4d8ee8689cf8768: Status 404 returned error can't find the container with id bec812505272707747c69bc654415427b1c96b5bdc3bc24be4d8ee8689cf8768 Dec 06 06:59:57 crc kubenswrapper[4895]: W1206 06:59:57.335756 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7424c4ea_3bfe_4af7_afc3_e2ed98d36e75.slice/crio-d52683215c65cb717282c1e1d421f2ff36cf526709a83062b5c370659941a86b WatchSource:0}: Error finding container d52683215c65cb717282c1e1d421f2ff36cf526709a83062b5c370659941a86b: Status 404 returned error can't find the container with id d52683215c65cb717282c1e1d421f2ff36cf526709a83062b5c370659941a86b Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.407180 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:57 crc kubenswrapper[4895]: E1206 06:59:57.407729 4895 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:57.90770698 +0000 UTC m=+160.309095850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.428276 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9khb" Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.509606 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:57 crc kubenswrapper[4895]: E1206 06:59:57.510061 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:58.010047814 +0000 UTC m=+160.411436684 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.626584 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:57 crc kubenswrapper[4895]: E1206 06:59:57.627223 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:58.127203663 +0000 UTC m=+160.528592533 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.770569 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:57 crc kubenswrapper[4895]: E1206 06:59:57.771020 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:58.270994229 +0000 UTC m=+160.672383099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.858593 4895 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.871625 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:57 crc kubenswrapper[4895]: E1206 06:59:57.871896 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:58.371856778 +0000 UTC m=+160.773245658 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.872603 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:57 crc kubenswrapper[4895]: E1206 06:59:57.873179 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:58.373160888 +0000 UTC m=+160.774549758 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:57 crc kubenswrapper[4895]: I1206 06:59:57.973623 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:57 crc kubenswrapper[4895]: E1206 06:59:57.974385 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:58.474345347 +0000 UTC m=+160.875734217 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.075226 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:58 crc kubenswrapper[4895]: E1206 06:59:58.075894 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:58.575876955 +0000 UTC m=+160.977265815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.148281 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.152015 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.155746 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.157978 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.177656 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:58 crc kubenswrapper[4895]: E1206 06:59:58.177884 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:58.677556099 +0000 UTC m=+161.078944969 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.183050 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8203165-b01f-4f0d-907b-898ad878f5de-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e8203165-b01f-4f0d-907b-898ad878f5de\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.185992 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8203165-b01f-4f0d-907b-898ad878f5de-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e8203165-b01f-4f0d-907b-898ad878f5de\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.189607 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:58 crc kubenswrapper[4895]: E1206 06:59:58.190458 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:58.690413854 +0000 UTC m=+161.091802724 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.191579 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.203197 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ddnd" event={"ID":"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75","Type":"ContainerStarted","Data":"d52683215c65cb717282c1e1d421f2ff36cf526709a83062b5c370659941a86b"} Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.212410 4895 generic.go:334] "Generic (PLEG): container finished" podID="ad3d9f3b-cc45-4169-9476-b15937334205" containerID="6364b9b146e8f8b504e14777abe398f4207fde607a976da5bb244b1b1f60ef9b" exitCode=0 Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.212502 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j9kw" event={"ID":"ad3d9f3b-cc45-4169-9476-b15937334205","Type":"ContainerDied","Data":"6364b9b146e8f8b504e14777abe398f4207fde607a976da5bb244b1b1f60ef9b"} Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.214043 4895 generic.go:334] "Generic (PLEG): container finished" podID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" containerID="d85f4525280512d8721506853b36c79e6d2d55e6ce09141df359be413123c443" exitCode=0 Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.214183 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmdtv" event={"ID":"c57d6deb-02dd-473c-b644-7ef7a1f8e500","Type":"ContainerDied","Data":"d85f4525280512d8721506853b36c79e6d2d55e6ce09141df359be413123c443"} Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.215739 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.219808 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6wtxh" event={"ID":"34e74e8b-145a-45bc-a163-4fbd502bc155","Type":"ContainerStarted","Data":"bec812505272707747c69bc654415427b1c96b5bdc3bc24be4d8ee8689cf8768"} Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.223258 4895 generic.go:334] "Generic (PLEG): container finished" podID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" containerID="d32133dd1a77f11f1730a3c0ba9cd5da8335cde95dbf8009498b1e2a69fbb752" exitCode=0 Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.223389 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlp2v" event={"ID":"b5924c97-fec5-45b0-a9a2-8f851c88dfbf","Type":"ContainerDied","Data":"d32133dd1a77f11f1730a3c0ba9cd5da8335cde95dbf8009498b1e2a69fbb752"} Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.232118 4895 generic.go:334] "Generic (PLEG): container finished" podID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" containerID="628c77aaadf66095c7dbea0d176657778931bac1d603afc078a7a62679dde74a" exitCode=0 Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.232214 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-z8gpr" event={"ID":"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8","Type":"ContainerDied","Data":"628c77aaadf66095c7dbea0d176657778931bac1d603afc078a7a62679dde74a"} Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.240101 4895 patch_prober.go:28] interesting pod/router-default-5444994796-2xrjt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:59:58 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Dec 06 06:59:58 crc kubenswrapper[4895]: [+]process-running ok Dec 06 06:59:58 crc kubenswrapper[4895]: healthz check failed Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.240161 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2xrjt" podUID="c2248ee7-0953-48b0-bcaf-e95d8560c4b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.247151 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kfwsb" event={"ID":"95951ed2-c176-4e12-8dc3-58c9a19e9406","Type":"ContainerStarted","Data":"496871284f703007bc6e76ed67055301ec1c46c097a9adeea7abbf9cda3d2067"} Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.250191 4895 generic.go:334] "Generic (PLEG): container finished" podID="c6ec729e-b8e3-42ad-84a0-ded336274afd" containerID="2d07d427e3c9fcac5716f08aefab874d4138ff27963b69a4c9e3554a9c9d11c7" exitCode=0 Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.250279 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zwn6t" event={"ID":"c6ec729e-b8e3-42ad-84a0-ded336274afd","Type":"ContainerDied","Data":"2d07d427e3c9fcac5716f08aefab874d4138ff27963b69a4c9e3554a9c9d11c7"} Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.253845 4895 generic.go:334] "Generic (PLEG): container finished" podID="9579a8c8-23da-4499-b78d-96835a68b578" containerID="6d0a7b68ecab4f672dcf43a8deaa140ea3ce84abcf582aceddd6cb06344553f3" exitCode=0 Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.255030 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9579a8c8-23da-4499-b78d-96835a68b578","Type":"ContainerDied","Data":"6d0a7b68ecab4f672dcf43a8deaa140ea3ce84abcf582aceddd6cb06344553f3"} Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.280903 4895 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-06T06:59:57.858637282Z","Handler":null,"Name":""} Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.291161 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:58 crc kubenswrapper[4895]: E1206 06:59:58.291321 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-06 06:59:58.791296193 +0000 UTC m=+161.192685063 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.291606 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.291770 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8203165-b01f-4f0d-907b-898ad878f5de-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e8203165-b01f-4f0d-907b-898ad878f5de\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.291871 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8203165-b01f-4f0d-907b-898ad878f5de-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e8203165-b01f-4f0d-907b-898ad878f5de\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 06:59:58 crc kubenswrapper[4895]: E1206 06:59:58.292021 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:58.792013614 +0000 UTC m=+161.193402484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.292193 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8203165-b01f-4f0d-907b-898ad878f5de-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e8203165-b01f-4f0d-907b-898ad878f5de\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.322096 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8203165-b01f-4f0d-907b-898ad878f5de-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e8203165-b01f-4f0d-907b-898ad878f5de\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.393908 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:58 crc kubenswrapper[4895]: E1206 06:59:58.394095 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:58.894061649 +0000 UTC m=+161.295450529 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.394288 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:58 crc kubenswrapper[4895]: E1206 06:59:58.394745 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:58.89473235 +0000 UTC m=+161.296121220 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.496205 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:58 crc kubenswrapper[4895]: E1206 06:59:58.496432 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:58.996390892 +0000 UTC m=+161.397779762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.496662 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:58 crc kubenswrapper[4895]: E1206 06:59:58.497113 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:58.997092714 +0000 UTC m=+161.398481584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.502214 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.599868 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:58 crc kubenswrapper[4895]: E1206 06:59:58.600918 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:59:59.100814701 +0000 UTC m=+161.502203571 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.601423 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:58 crc kubenswrapper[4895]: E1206 06:59:58.601856 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:59:59.101835992 +0000 UTC m=+161.503224862 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s5xjc" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.676016 4895 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.676097 4895 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.703507 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.709226 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.806495 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.818112 4895 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 06 06:59:58 crc kubenswrapper[4895]: I1206 06:59:58.818166 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.023164 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 06 06:59:59 crc kubenswrapper[4895]: W1206 06:59:59.039422 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode8203165_b01f_4f0d_907b_898ad878f5de.slice/crio-0ebb119588f6f8509921493464bec6083583ed19bd9965837e08af6b20db0c8c WatchSource:0}: Error finding container 0ebb119588f6f8509921493464bec6083583ed19bd9965837e08af6b20db0c8c: Status 404 returned error can't find the container with id 0ebb119588f6f8509921493464bec6083583ed19bd9965837e08af6b20db0c8c Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.056210 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s5xjc\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") " pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.170660 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.234071 4895 patch_prober.go:28] interesting pod/router-default-5444994796-2xrjt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:59:59 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Dec 06 06:59:59 crc kubenswrapper[4895]: [+]process-running ok Dec 06 06:59:59 crc kubenswrapper[4895]: healthz check failed Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.235036 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2xrjt" podUID="c2248ee7-0953-48b0-bcaf-e95d8560c4b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.266212 4895 generic.go:334] "Generic (PLEG): container finished" podID="e559f522-c700-4e90-82c5-6f2643185c9a" containerID="476c3c519b3826255c1b289373e089e492574c06d3d0e454d537ebef13baac8c" exitCode=0 Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.266320 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stqzm" event={"ID":"e559f522-c700-4e90-82c5-6f2643185c9a","Type":"ContainerDied","Data":"476c3c519b3826255c1b289373e089e492574c06d3d0e454d537ebef13baac8c"} Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.269702 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kfwsb" event={"ID":"95951ed2-c176-4e12-8dc3-58c9a19e9406","Type":"ContainerStarted","Data":"6f45aaa6e59fa1b42ba289785c14e0550a029dcba3d767db2bda327815ed9bc0"} Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.272857 4895 generic.go:334] "Generic (PLEG): container finished" podID="9f9c91df-cb75-43ed-9677-9f2409edba07" containerID="2c9ca7857cdec08dcb4421458f568c475f4bf791fd65f964c728bc638a73b2d7" exitCode=0 Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.272942 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp" event={"ID":"9f9c91df-cb75-43ed-9677-9f2409edba07","Type":"ContainerDied","Data":"2c9ca7857cdec08dcb4421458f568c475f4bf791fd65f964c728bc638a73b2d7"} Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.279279 4895 generic.go:334] "Generic (PLEG): container finished" podID="34e74e8b-145a-45bc-a163-4fbd502bc155" containerID="e5cc064999aa852e86195a3e63210d2d9da81c48afd49bf3f71d36504ccee831" exitCode=0 Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.279503 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6wtxh" event={"ID":"34e74e8b-145a-45bc-a163-4fbd502bc155","Type":"ContainerDied","Data":"e5cc064999aa852e86195a3e63210d2d9da81c48afd49bf3f71d36504ccee831"} Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.296200 4895 generic.go:334] "Generic (PLEG): container finished" podID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" containerID="44e47111f8d2d809d4054ec2df48af831ad0fddea70be4156115cc27abb61945" exitCode=0 Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.296277 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ddnd" 
event={"ID":"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75","Type":"ContainerDied","Data":"44e47111f8d2d809d4054ec2df48af831ad0fddea70be4156115cc27abb61945"} Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.337692 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e8203165-b01f-4f0d-907b-898ad878f5de","Type":"ContainerStarted","Data":"0ebb119588f6f8509921493464bec6083583ed19bd9965837e08af6b20db0c8c"} Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.674911 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s5xjc"] Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.695538 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.695608 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:59:59 crc kubenswrapper[4895]: W1206 06:59:59.709960 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc669ea24_666a_4152_9a1e_43f614bf8e21.slice/crio-66fe34e19b16bfcbebf8935f75fbd1f670417790d3eecd2dd25e44916c8ebc5b WatchSource:0}: Error finding container 66fe34e19b16bfcbebf8935f75fbd1f670417790d3eecd2dd25e44916c8ebc5b: Status 404 returned error can't find the container with id 66fe34e19b16bfcbebf8935f75fbd1f670417790d3eecd2dd25e44916c8ebc5b Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.763186 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.840222 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9579a8c8-23da-4499-b78d-96835a68b578-kube-api-access\") pod \"9579a8c8-23da-4499-b78d-96835a68b578\" (UID: \"9579a8c8-23da-4499-b78d-96835a68b578\") " Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.840302 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9579a8c8-23da-4499-b78d-96835a68b578-kubelet-dir\") pod \"9579a8c8-23da-4499-b78d-96835a68b578\" (UID: \"9579a8c8-23da-4499-b78d-96835a68b578\") " Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.840716 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9579a8c8-23da-4499-b78d-96835a68b578-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9579a8c8-23da-4499-b78d-96835a68b578" (UID: "9579a8c8-23da-4499-b78d-96835a68b578"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.849721 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9579a8c8-23da-4499-b78d-96835a68b578-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9579a8c8-23da-4499-b78d-96835a68b578" (UID: "9579a8c8-23da-4499-b78d-96835a68b578"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.864597 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.871283 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-gjt8r" Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.942864 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9579a8c8-23da-4499-b78d-96835a68b578-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 06:59:59 crc kubenswrapper[4895]: I1206 06:59:59.942910 4895 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9579a8c8-23da-4499-b78d-96835a68b578-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.083303 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.135487 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp"] Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.136174 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.144554 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7x6r" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.266534 4895 patch_prober.go:28] interesting pod/router-default-5444994796-2xrjt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 07:00:00 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Dec 06 07:00:00 crc kubenswrapper[4895]: [+]process-running ok Dec 06 07:00:00 crc kubenswrapper[4895]: healthz check failed Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.267128 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2xrjt" podUID="c2248ee7-0953-48b0-bcaf-e95d8560c4b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.343091 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q"] Dec 06 07:00:00 crc kubenswrapper[4895]: E1206 07:00:00.343487 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9579a8c8-23da-4499-b78d-96835a68b578" containerName="pruner" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.343505 4895 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="9579a8c8-23da-4499-b78d-96835a68b578" containerName="pruner" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.343616 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="9579a8c8-23da-4499-b78d-96835a68b578" containerName="pruner" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.344227 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.346082 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q"] Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.383937 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e8203165-b01f-4f0d-907b-898ad878f5de","Type":"ContainerStarted","Data":"972d8b1f29680a2ef75191a9a6940be3f06d5e1f3378be9f73f44681e3d0989f"} Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.399841 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" event={"ID":"c669ea24-666a-4152-9a1e-43f614bf8e21","Type":"ContainerStarted","Data":"c6c0960cb740b035234175a95c137d4bb7e81f5d9a5b6d05cb4ac874f92860a7"} Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.399906 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" event={"ID":"c669ea24-666a-4152-9a1e-43f614bf8e21","Type":"ContainerStarted","Data":"66fe34e19b16bfcbebf8935f75fbd1f670417790d3eecd2dd25e44916c8ebc5b"} Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.400712 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.412954 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.412928316 podStartE2EDuration="2.412928316s" podCreationTimestamp="2025-12-06 06:59:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:00:00.412828753 +0000 UTC m=+162.814217623" watchObservedRunningTime="2025-12-06 07:00:00.412928316 +0000 UTC m=+162.814317176" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.432005 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kfwsb" event={"ID":"95951ed2-c176-4e12-8dc3-58c9a19e9406","Type":"ContainerStarted","Data":"284c3cb6ce6d444440ca8d4cd552a0d6417388c70d8fed761eca1f8218772f5c"} Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.461692 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" podStartSLOduration=142.461666924 podStartE2EDuration="2m22.461666924s" podCreationTimestamp="2025-12-06 06:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:00:00.460086115 +0000 UTC m=+162.861475005" watchObservedRunningTime="2025-12-06 07:00:00.461666924 +0000 UTC m=+162.863055794" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.478636 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.479768 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9579a8c8-23da-4499-b78d-96835a68b578","Type":"ContainerDied","Data":"a059a89ffd8a0510832674c7ce0b24857958fc59a41d38a69330e241a862bd40"} Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.479808 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a059a89ffd8a0510832674c7ce0b24857958fc59a41d38a69330e241a862bd40" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.507677 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-kfwsb" podStartSLOduration=19.507652176 podStartE2EDuration="19.507652176s" podCreationTimestamp="2025-12-06 06:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:00:00.504493879 +0000 UTC m=+162.905882749" watchObservedRunningTime="2025-12-06 07:00:00.507652176 +0000 UTC m=+162.909041046" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.563523 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/651dca05-74b9-4632-8e1f-6ceb648d2b23-config-volume\") pod \"collect-profiles-29416740-nsn6q\" (UID: \"651dca05-74b9-4632-8e1f-6ceb648d2b23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.563617 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm658\" (UniqueName: \"kubernetes.io/projected/651dca05-74b9-4632-8e1f-6ceb648d2b23-kube-api-access-zm658\") pod \"collect-profiles-29416740-nsn6q\" (UID: \"651dca05-74b9-4632-8e1f-6ceb648d2b23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.563987 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/651dca05-74b9-4632-8e1f-6ceb648d2b23-secret-volume\") pod \"collect-profiles-29416740-nsn6q\" (UID: \"651dca05-74b9-4632-8e1f-6ceb648d2b23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.667942 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/651dca05-74b9-4632-8e1f-6ceb648d2b23-config-volume\") pod \"collect-profiles-29416740-nsn6q\" (UID: \"651dca05-74b9-4632-8e1f-6ceb648d2b23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.668028 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm658\" (UniqueName: \"kubernetes.io/projected/651dca05-74b9-4632-8e1f-6ceb648d2b23-kube-api-access-zm658\") pod \"collect-profiles-29416740-nsn6q\" (UID: \"651dca05-74b9-4632-8e1f-6ceb648d2b23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.668430 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/651dca05-74b9-4632-8e1f-6ceb648d2b23-secret-volume\") pod \"collect-profiles-29416740-nsn6q\" (UID: \"651dca05-74b9-4632-8e1f-6ceb648d2b23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.670262 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/651dca05-74b9-4632-8e1f-6ceb648d2b23-config-volume\") pod \"collect-profiles-29416740-nsn6q\" (UID: \"651dca05-74b9-4632-8e1f-6ceb648d2b23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.676272 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/651dca05-74b9-4632-8e1f-6ceb648d2b23-secret-volume\") pod \"collect-profiles-29416740-nsn6q\" (UID: \"651dca05-74b9-4632-8e1f-6ceb648d2b23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.688905 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm658\" (UniqueName: \"kubernetes.io/projected/651dca05-74b9-4632-8e1f-6ceb648d2b23-kube-api-access-zm658\") pod \"collect-profiles-29416740-nsn6q\" (UID: \"651dca05-74b9-4632-8e1f-6ceb648d2b23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.694738 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.773904 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs\") pod \"network-metrics-daemon-dzrsj\" (UID: \"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\") " pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 07:00:00 crc kubenswrapper[4895]: I1206 07:00:00.787049 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c72bd78-81d3-48dc-96c3-50bc6bac88d6-metrics-certs\") pod \"network-metrics-daemon-dzrsj\" (UID: \"2c72bd78-81d3-48dc-96c3-50bc6bac88d6\") " pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.084485 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzrsj" Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.234537 4895 patch_prober.go:28] interesting pod/router-default-5444994796-2xrjt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 07:00:01 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Dec 06 07:00:01 crc kubenswrapper[4895]: [+]process-running ok Dec 06 07:00:01 crc kubenswrapper[4895]: healthz check failed Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.235121 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2xrjt" podUID="c2248ee7-0953-48b0-bcaf-e95d8560c4b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.339297 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp" Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.418851 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f9c91df-cb75-43ed-9677-9f2409edba07-secret-volume\") pod \"9f9c91df-cb75-43ed-9677-9f2409edba07\" (UID: \"9f9c91df-cb75-43ed-9677-9f2409edba07\") " Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.444400 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f9c91df-cb75-43ed-9677-9f2409edba07-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9f9c91df-cb75-43ed-9677-9f2409edba07" (UID: "9f9c91df-cb75-43ed-9677-9f2409edba07"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.521106 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f9c91df-cb75-43ed-9677-9f2409edba07-config-volume\") pod \"9f9c91df-cb75-43ed-9677-9f2409edba07\" (UID: \"9f9c91df-cb75-43ed-9677-9f2409edba07\") " Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.521251 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8zkh\" (UniqueName: \"kubernetes.io/projected/9f9c91df-cb75-43ed-9677-9f2409edba07-kube-api-access-j8zkh\") pod \"9f9c91df-cb75-43ed-9677-9f2409edba07\" (UID: \"9f9c91df-cb75-43ed-9677-9f2409edba07\") " Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.521595 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f9c91df-cb75-43ed-9677-9f2409edba07-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.522824 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9c91df-cb75-43ed-9677-9f2409edba07-config-volume" (OuterVolumeSpecName: "config-volume") pod "9f9c91df-cb75-43ed-9677-9f2409edba07" (UID: "9f9c91df-cb75-43ed-9677-9f2409edba07"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.532384 4895 generic.go:334] "Generic (PLEG): container finished" podID="e8203165-b01f-4f0d-907b-898ad878f5de" containerID="972d8b1f29680a2ef75191a9a6940be3f06d5e1f3378be9f73f44681e3d0989f" exitCode=0 Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.533079 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e8203165-b01f-4f0d-907b-898ad878f5de","Type":"ContainerDied","Data":"972d8b1f29680a2ef75191a9a6940be3f06d5e1f3378be9f73f44681e3d0989f"} Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.535783 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f9c91df-cb75-43ed-9677-9f2409edba07-kube-api-access-j8zkh" (OuterVolumeSpecName: "kube-api-access-j8zkh") pod "9f9c91df-cb75-43ed-9677-9f2409edba07" (UID: "9f9c91df-cb75-43ed-9677-9f2409edba07"). InnerVolumeSpecName "kube-api-access-j8zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.555398 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp" event={"ID":"9f9c91df-cb75-43ed-9677-9f2409edba07","Type":"ContainerDied","Data":"a352ba67e26f113fbae1c86deb376b14fca8a8c470ba40490fd40454db411a86"} Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.555502 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a352ba67e26f113fbae1c86deb376b14fca8a8c470ba40490fd40454db411a86" Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.555640 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp" Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.611114 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp"] Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.618659 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416725-khndp"] Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.622983 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f9c91df-cb75-43ed-9677-9f2409edba07-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.623276 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8zkh\" (UniqueName: \"kubernetes.io/projected/9f9c91df-cb75-43ed-9677-9f2409edba07-kube-api-access-j8zkh\") on node \"crc\" DevicePath \"\"" Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.928675 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dzrsj"] Dec 06 07:00:01 crc kubenswrapper[4895]: I1206 07:00:01.939837 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q"] Dec 06 07:00:01 crc kubenswrapper[4895]: W1206 07:00:01.944324 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c72bd78_81d3_48dc_96c3_50bc6bac88d6.slice/crio-fc15f915687f890f0259d9a023f6f62429348a469620bc031f99c0a4df6a8616 WatchSource:0}: Error finding container 
Dec 06 07:00:01 crc kubenswrapper[4895]: W1206 07:00:01.944324 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c72bd78_81d3_48dc_96c3_50bc6bac88d6.slice/crio-fc15f915687f890f0259d9a023f6f62429348a469620bc031f99c0a4df6a8616 WatchSource:0}: Error finding container fc15f915687f890f0259d9a023f6f62429348a469620bc031f99c0a4df6a8616: Status 404 returned error can't find the container with id fc15f915687f890f0259d9a023f6f62429348a469620bc031f99c0a4df6a8616 Dec 06 07:00:01 crc kubenswrapper[4895]: W1206 07:00:01.962575 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod651dca05_74b9_4632_8e1f_6ceb648d2b23.slice/crio-76ed2cdc528759e0462551dfc637f8aca8d6e5660d61081dae787899144f80e1 WatchSource:0}: Error finding container 76ed2cdc528759e0462551dfc637f8aca8d6e5660d61081dae787899144f80e1: Status 404 returned error can't find the container with id 76ed2cdc528759e0462551dfc637f8aca8d6e5660d61081dae787899144f80e1 Dec 06 07:00:02 crc kubenswrapper[4895]: I1206 07:00:02.066295 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f9c91df-cb75-43ed-9677-9f2409edba07" path="/var/lib/kubelet/pods/9f9c91df-cb75-43ed-9677-9f2409edba07/volumes" Dec 06 07:00:02 crc kubenswrapper[4895]: I1206 07:00:02.230964 4895 patch_prober.go:28] interesting pod/router-default-5444994796-2xrjt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 07:00:02 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Dec 06 07:00:02 crc kubenswrapper[4895]: [+]process-running ok Dec 06 07:00:02 crc kubenswrapper[4895]: healthz check failed Dec 06 07:00:02 crc kubenswrapper[4895]: I1206 07:00:02.231033 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2xrjt" podUID="c2248ee7-0953-48b0-bcaf-e95d8560c4b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 07:00:02 crc kubenswrapper[4895]: I1206 07:00:02.582431 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" event={"ID":"2c72bd78-81d3-48dc-96c3-50bc6bac88d6","Type":"ContainerStarted","Data":"fc15f915687f890f0259d9a023f6f62429348a469620bc031f99c0a4df6a8616"} Dec 06 07:00:02 crc kubenswrapper[4895]: I1206 07:00:02.587625 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q" event={"ID":"651dca05-74b9-4632-8e1f-6ceb648d2b23","Type":"ContainerStarted","Data":"76ed2cdc528759e0462551dfc637f8aca8d6e5660d61081dae787899144f80e1"} Dec 06 07:00:03 crc kubenswrapper[4895]: I1206 07:00:03.087711 4895 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 07:00:03 crc kubenswrapper[4895]: I1206 07:00:03.233428 4895 patch_prober.go:28] interesting pod/router-default-5444994796-2xrjt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 07:00:03 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Dec 06 07:00:03 crc kubenswrapper[4895]: [+]process-running ok Dec 06 07:00:03 crc kubenswrapper[4895]: healthz check failed Dec 06 07:00:03 crc kubenswrapper[4895]: I1206 07:00:03.233531 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2xrjt" podUID="c2248ee7-0953-48b0-bcaf-e95d8560c4b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 07:00:03 crc kubenswrapper[4895]: I1206 07:00:03.294906 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8203165-b01f-4f0d-907b-898ad878f5de-kube-api-access\") pod \"e8203165-b01f-4f0d-907b-898ad878f5de\" (UID: \"e8203165-b01f-4f0d-907b-898ad878f5de\") " Dec 06 07:00:03 crc kubenswrapper[4895]: I1206 07:00:03.295003 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8203165-b01f-4f0d-907b-898ad878f5de-kubelet-dir\") pod \"e8203165-b01f-4f0d-907b-898ad878f5de\" (UID: \"e8203165-b01f-4f0d-907b-898ad878f5de\") " Dec 06 07:00:03 crc kubenswrapper[4895]: I1206 07:00:03.297526 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8203165-b01f-4f0d-907b-898ad878f5de-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e8203165-b01f-4f0d-907b-898ad878f5de" (UID: "e8203165-b01f-4f0d-907b-898ad878f5de"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:00:03 crc kubenswrapper[4895]: I1206 07:00:03.325639 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8203165-b01f-4f0d-907b-898ad878f5de-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e8203165-b01f-4f0d-907b-898ad878f5de" (UID: "e8203165-b01f-4f0d-907b-898ad878f5de"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:00:03 crc kubenswrapper[4895]: I1206 07:00:03.409629 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8203165-b01f-4f0d-907b-898ad878f5de-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 07:00:03 crc kubenswrapper[4895]: I1206 07:00:03.410213 4895 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8203165-b01f-4f0d-907b-898ad878f5de-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 06 07:00:03 crc kubenswrapper[4895]: I1206 07:00:03.490985 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-sqdms" Dec 06 07:00:03 crc kubenswrapper[4895]: I1206 07:00:03.614266 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 07:00:03 crc kubenswrapper[4895]: I1206 07:00:03.615064 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e8203165-b01f-4f0d-907b-898ad878f5de","Type":"ContainerDied","Data":"0ebb119588f6f8509921493464bec6083583ed19bd9965837e08af6b20db0c8c"} Dec 06 07:00:03 crc kubenswrapper[4895]: I1206 07:00:03.615132 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ebb119588f6f8509921493464bec6083583ed19bd9965837e08af6b20db0c8c" Dec 06 07:00:03 crc kubenswrapper[4895]: I1206 07:00:03.620061 4895 generic.go:334] "Generic (PLEG): container finished" podID="651dca05-74b9-4632-8e1f-6ceb648d2b23" containerID="d4ad5fe27a9a153b6e7e2252e9bfd84e15490a5eeb914fda265608901b678a5a" exitCode=0 Dec 06 07:00:03 crc kubenswrapper[4895]: I1206 07:00:03.620149 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q" event={"ID":"651dca05-74b9-4632-8e1f-6ceb648d2b23","Type":"ContainerDied","Data":"d4ad5fe27a9a153b6e7e2252e9bfd84e15490a5eeb914fda265608901b678a5a"} Dec 06 07:00:03 crc kubenswrapper[4895]: I1206 07:00:03.633362 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" event={"ID":"2c72bd78-81d3-48dc-96c3-50bc6bac88d6","Type":"ContainerStarted","Data":"f493cb36ec1645c6d1b1ca223a0d4b8d526ae7c69e12c1e03cb690e606258c56"} Dec 06 07:00:04 crc kubenswrapper[4895]: I1206 07:00:04.311088 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-2xrjt" Dec 06 07:00:04 crc kubenswrapper[4895]: I1206 07:00:04.315132 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-2xrjt" Dec 06 07:00:04 crc kubenswrapper[4895]: I1206 07:00:04.701461 4895 patch_prober.go:28] interesting pod/console-f9d7485db-52v65 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Dec 06 07:00:04 crc kubenswrapper[4895]: I1206 07:00:04.702125 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-52v65" podUID="3864596e-56f8-46a1-95e6-3558c161cd02" containerName="console" probeResult="failure" output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" Dec 06 07:00:05 crc kubenswrapper[4895]: I1206 07:00:05.100235 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-lddxp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Dec 06 07:00:05 crc kubenswrapper[4895]: I1206 07:00:05.100291 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-lddxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Dec 06 07:00:05 crc kubenswrapper[4895]: I1206 07:00:05.100364 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lddxp" podUID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Dec 06 07:00:05 crc kubenswrapper[4895]: I1206 07:00:05.100374 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lddxp" podUID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Dec 06 07:00:05 crc kubenswrapper[4895]: I1206 07:00:05.518396 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q" Dec 06 07:00:05 crc kubenswrapper[4895]: I1206 07:00:05.538291 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-qnrd2" Dec 06 07:00:05 crc kubenswrapper[4895]: I1206 07:00:05.660600 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm658\" (UniqueName: \"kubernetes.io/projected/651dca05-74b9-4632-8e1f-6ceb648d2b23-kube-api-access-zm658\") pod \"651dca05-74b9-4632-8e1f-6ceb648d2b23\" (UID: \"651dca05-74b9-4632-8e1f-6ceb648d2b23\") " Dec 06 07:00:05 crc kubenswrapper[4895]: I1206 07:00:05.660692 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/651dca05-74b9-4632-8e1f-6ceb648d2b23-config-volume\") pod \"651dca05-74b9-4632-8e1f-6ceb648d2b23\" (UID: \"651dca05-74b9-4632-8e1f-6ceb648d2b23\") " Dec 06 07:00:05 crc kubenswrapper[4895]: I1206 07:00:05.660724 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/651dca05-74b9-4632-8e1f-6ceb648d2b23-secret-volume\") pod \"651dca05-74b9-4632-8e1f-6ceb648d2b23\" (UID: \"651dca05-74b9-4632-8e1f-6ceb648d2b23\") " Dec 06 07:00:05 crc kubenswrapper[4895]: I1206 07:00:05.666969 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/651dca05-74b9-4632-8e1f-6ceb648d2b23-config-volume" (OuterVolumeSpecName: "config-volume") pod "651dca05-74b9-4632-8e1f-6ceb648d2b23" (UID: "651dca05-74b9-4632-8e1f-6ceb648d2b23"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:00:05 crc kubenswrapper[4895]: I1206 07:00:05.720255 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/651dca05-74b9-4632-8e1f-6ceb648d2b23-kube-api-access-zm658" (OuterVolumeSpecName: "kube-api-access-zm658") pod "651dca05-74b9-4632-8e1f-6ceb648d2b23" (UID: "651dca05-74b9-4632-8e1f-6ceb648d2b23"). InnerVolumeSpecName "kube-api-access-zm658". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:00:05 crc kubenswrapper[4895]: I1206 07:00:05.722816 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/651dca05-74b9-4632-8e1f-6ceb648d2b23-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "651dca05-74b9-4632-8e1f-6ceb648d2b23" (UID: "651dca05-74b9-4632-8e1f-6ceb648d2b23"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:00:05 crc kubenswrapper[4895]: I1206 07:00:05.755141 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q" event={"ID":"651dca05-74b9-4632-8e1f-6ceb648d2b23","Type":"ContainerDied","Data":"76ed2cdc528759e0462551dfc637f8aca8d6e5660d61081dae787899144f80e1"} Dec 06 07:00:05 crc kubenswrapper[4895]: I1206 07:00:05.755201 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76ed2cdc528759e0462551dfc637f8aca8d6e5660d61081dae787899144f80e1" Dec 06 07:00:05 crc kubenswrapper[4895]: I1206 07:00:05.755283 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q" Dec 06 07:00:05 crc kubenswrapper[4895]: I1206 07:00:05.772092 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm658\" (UniqueName: \"kubernetes.io/projected/651dca05-74b9-4632-8e1f-6ceb648d2b23-kube-api-access-zm658\") on node \"crc\" DevicePath \"\"" Dec 06 07:00:05 crc kubenswrapper[4895]: I1206 07:00:05.772133 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/651dca05-74b9-4632-8e1f-6ceb648d2b23-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 07:00:05 crc kubenswrapper[4895]: I1206 07:00:05.772146 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/651dca05-74b9-4632-8e1f-6ceb648d2b23-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 07:00:05 crc kubenswrapper[4895]: I1206 07:00:05.775071 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dzrsj" event={"ID":"2c72bd78-81d3-48dc-96c3-50bc6bac88d6","Type":"ContainerStarted","Data":"4ca12fff04681abb4eea5f7f9b8006071531c1625d8ac8412be322f7e04c9f74"} Dec 06 07:00:06 crc kubenswrapper[4895]: I1206 07:00:06.564931 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-dzrsj" podStartSLOduration=149.564872266 podStartE2EDuration="2m29.564872266s" podCreationTimestamp="2025-12-06 06:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:00:05.801189487 +0000 UTC m=+168.202578367" watchObservedRunningTime="2025-12-06 07:00:06.564872266 +0000 UTC m=+168.966261136" Dec 06 07:00:14 crc kubenswrapper[4895]: I1206 07:00:14.710361 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-52v65" Dec 06 07:00:14 crc kubenswrapper[4895]: I1206 07:00:14.715254 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-52v65" Dec 06 07:00:15 crc kubenswrapper[4895]: I1206 07:00:15.096979 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-lddxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Dec 06 07:00:15 crc kubenswrapper[4895]: I1206 07:00:15.097580 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lddxp" podUID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Dec 06 07:00:15 crc kubenswrapper[4895]: I1206 07:00:15.098049 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-lddxp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Dec 06 07:00:15 crc kubenswrapper[4895]: I1206 07:00:15.098119 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lddxp" podUID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Dec 06 07:00:15 crc kubenswrapper[4895]: I1206 07:00:15.098171 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-lddxp" Dec 06 07:00:15 crc kubenswrapper[4895]: I1206 07:00:15.098846 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-lddxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Dec 06 07:00:15 crc kubenswrapper[4895]: I1206 07:00:15.098896 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lddxp" podUID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Dec 06 07:00:15 crc kubenswrapper[4895]: I1206 07:00:15.098908 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"bce2d9c18fc42d732011d498f1549896dcc9b1927b463271fe317f5a2b202e50"} pod="openshift-console/downloads-7954f5f757-lddxp" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 06 07:00:15 crc kubenswrapper[4895]: I1206 07:00:15.099007 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-lddxp" podUID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerName="download-server" containerID="cri-o://bce2d9c18fc42d732011d498f1549896dcc9b1927b463271fe317f5a2b202e50" gracePeriod=2 Dec 06 07:00:17 crc kubenswrapper[4895]: I1206 07:00:17.004827 4895 generic.go:334] "Generic (PLEG): container finished" podID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerID="bce2d9c18fc42d732011d498f1549896dcc9b1927b463271fe317f5a2b202e50" exitCode=0 Dec 06 07:00:17 crc kubenswrapper[4895]: I1206 07:00:17.004886 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lddxp" event={"ID":"f740e915-a41a-4bfb-a4fa-1b33903fecd6","Type":"ContainerDied","Data":"bce2d9c18fc42d732011d498f1549896dcc9b1927b463271fe317f5a2b202e50"} Dec 06 07:00:19 crc kubenswrapper[4895]: I1206 07:00:19.176348 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" Dec 06 07:00:24 crc kubenswrapper[4895]: I1206 07:00:24.974143 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 07:00:25 crc kubenswrapper[4895]: I1206 07:00:25.097653 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-lddxp 
Dec 06 07:00:25 crc kubenswrapper[4895]: I1206 07:00:25.097653 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-lddxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Dec 06 07:00:25 crc kubenswrapper[4895]: I1206 07:00:25.097730 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lddxp" podUID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Dec 06 07:00:25 crc kubenswrapper[4895]: I1206 07:00:25.174522 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g4fx" Dec 06 07:00:29 crc kubenswrapper[4895]: I1206 07:00:29.696383 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:00:29 crc kubenswrapper[4895]: I1206 07:00:29.696759 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:00:32 crc kubenswrapper[4895]: I1206 07:00:32.088631 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 06 07:00:32 crc kubenswrapper[4895]: E1206 07:00:32.088930 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="651dca05-74b9-4632-8e1f-6ceb648d2b23" containerName="collect-profiles" Dec 06 07:00:32 crc kubenswrapper[4895]: I1206 07:00:32.088950 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="651dca05-74b9-4632-8e1f-6ceb648d2b23" containerName="collect-profiles" Dec 06 07:00:32 crc kubenswrapper[4895]: E1206 07:00:32.088964 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9c91df-cb75-43ed-9677-9f2409edba07" containerName="collect-profiles" Dec 06 07:00:32 crc kubenswrapper[4895]: I1206 07:00:32.088974 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9c91df-cb75-43ed-9677-9f2409edba07" containerName="collect-profiles" Dec 06 07:00:32 crc kubenswrapper[4895]: E1206 07:00:32.088998 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8203165-b01f-4f0d-907b-898ad878f5de" containerName="pruner" Dec 06 07:00:32 crc kubenswrapper[4895]: I1206 07:00:32.089008 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8203165-b01f-4f0d-907b-898ad878f5de" containerName="pruner" Dec 06 07:00:32 crc kubenswrapper[4895]: I1206 07:00:32.089146 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="651dca05-74b9-4632-8e1f-6ceb648d2b23" containerName="collect-profiles" Dec 06 07:00:32 crc kubenswrapper[4895]: I1206 07:00:32.089165 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8203165-b01f-4f0d-907b-898ad878f5de" containerName="pruner" Dec 06 07:00:32 crc kubenswrapper[4895]: I1206 07:00:32.089176 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f9c91df-cb75-43ed-9677-9f2409edba07" containerName="collect-profiles"
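
The RemoveStaleState entries above show the CPU and memory managers, while admitting the new revision-pruner-9-crc pod, purging per-container resource assignments left behind by pods that no longer exist (the finished collect-profiles and pruner pods). A simplified sketch of that pruning, with a plain map standing in for the managers' state:

package main

import "fmt"

// Simplified sketch of RemoveStaleState: before admitting a new pod,
// drop resource assignments recorded for pods that no longer exist.
// The real managers track CPU sets and memory blocks; a map stands in.
func removeStaleState(assignments map[string]string, livePods map[string]bool) {
	for podUID, container := range assignments {
		if !livePods[podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				podUID, container)
			delete(assignments, podUID) // deleting during range is safe in Go
		}
	}
}

func main() {
	assignments := map[string]string{
		"651dca05-74b9-4632-8e1f-6ceb648d2b23": "collect-profiles",
		"e8203165-b01f-4f0d-907b-898ad878f5de": "pruner",
	}
	removeStaleState(assignments, map[string]bool{}) // neither pod is live anymore
}
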
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 07:00:32 crc kubenswrapper[4895]: I1206 07:00:32.093673 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 06 07:00:32 crc kubenswrapper[4895]: I1206 07:00:32.093789 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 06 07:00:32 crc kubenswrapper[4895]: I1206 07:00:32.098921 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 06 07:00:32 crc kubenswrapper[4895]: I1206 07:00:32.164920 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1a1bb89-7730-4593-8868-fcb642fe155f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b1a1bb89-7730-4593-8868-fcb642fe155f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 07:00:32 crc kubenswrapper[4895]: I1206 07:00:32.165071 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1a1bb89-7730-4593-8868-fcb642fe155f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b1a1bb89-7730-4593-8868-fcb642fe155f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 07:00:32 crc kubenswrapper[4895]: I1206 07:00:32.266419 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1a1bb89-7730-4593-8868-fcb642fe155f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b1a1bb89-7730-4593-8868-fcb642fe155f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 07:00:32 crc kubenswrapper[4895]: I1206 07:00:32.266637 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1a1bb89-7730-4593-8868-fcb642fe155f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b1a1bb89-7730-4593-8868-fcb642fe155f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 07:00:32 crc kubenswrapper[4895]: I1206 07:00:32.266708 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1a1bb89-7730-4593-8868-fcb642fe155f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b1a1bb89-7730-4593-8868-fcb642fe155f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 07:00:32 crc kubenswrapper[4895]: I1206 07:00:32.292907 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1a1bb89-7730-4593-8868-fcb642fe155f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b1a1bb89-7730-4593-8868-fcb642fe155f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 07:00:32 crc kubenswrapper[4895]: I1206 07:00:32.413538 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 07:00:35 crc kubenswrapper[4895]: I1206 07:00:35.100194 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-lddxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Dec 06 07:00:35 crc kubenswrapper[4895]: I1206 07:00:35.100279 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lddxp" podUID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Dec 06 07:00:37 crc kubenswrapper[4895]: I1206 07:00:37.881621 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 06 07:00:37 crc kubenswrapper[4895]: I1206 07:00:37.882677 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 06 07:00:37 crc kubenswrapper[4895]: I1206 07:00:37.894564 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 06 07:00:37 crc kubenswrapper[4895]: I1206 07:00:37.944216 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d916d0cd-06ea-488b-a8aa-cd43f0069b13-kube-api-access\") pod \"installer-9-crc\" (UID: \"d916d0cd-06ea-488b-a8aa-cd43f0069b13\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 07:00:37 crc kubenswrapper[4895]: I1206 07:00:37.944289 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d916d0cd-06ea-488b-a8aa-cd43f0069b13-var-lock\") pod \"installer-9-crc\" (UID: \"d916d0cd-06ea-488b-a8aa-cd43f0069b13\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 07:00:37 crc kubenswrapper[4895]: I1206 07:00:37.944326 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d916d0cd-06ea-488b-a8aa-cd43f0069b13-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d916d0cd-06ea-488b-a8aa-cd43f0069b13\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 07:00:38 crc kubenswrapper[4895]: I1206 07:00:38.045868 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d916d0cd-06ea-488b-a8aa-cd43f0069b13-kube-api-access\") pod \"installer-9-crc\" (UID: \"d916d0cd-06ea-488b-a8aa-cd43f0069b13\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 07:00:38 crc kubenswrapper[4895]: I1206 07:00:38.045940 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d916d0cd-06ea-488b-a8aa-cd43f0069b13-var-lock\") pod \"installer-9-crc\" (UID: \"d916d0cd-06ea-488b-a8aa-cd43f0069b13\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 07:00:38 crc kubenswrapper[4895]: I1206 07:00:38.045968 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d916d0cd-06ea-488b-a8aa-cd43f0069b13-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d916d0cd-06ea-488b-a8aa-cd43f0069b13\") " 
pod="openshift-kube-apiserver/installer-9-crc" Dec 06 07:00:38 crc kubenswrapper[4895]: I1206 07:00:38.046088 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d916d0cd-06ea-488b-a8aa-cd43f0069b13-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d916d0cd-06ea-488b-a8aa-cd43f0069b13\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 07:00:38 crc kubenswrapper[4895]: I1206 07:00:38.046102 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d916d0cd-06ea-488b-a8aa-cd43f0069b13-var-lock\") pod \"installer-9-crc\" (UID: \"d916d0cd-06ea-488b-a8aa-cd43f0069b13\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 07:00:38 crc kubenswrapper[4895]: I1206 07:00:38.068505 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d916d0cd-06ea-488b-a8aa-cd43f0069b13-kube-api-access\") pod \"installer-9-crc\" (UID: \"d916d0cd-06ea-488b-a8aa-cd43f0069b13\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 07:00:38 crc kubenswrapper[4895]: I1206 07:00:38.200699 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 06 07:00:45 crc kubenswrapper[4895]: I1206 07:00:45.097413 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-lddxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Dec 06 07:00:45 crc kubenswrapper[4895]: I1206 07:00:45.098138 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lddxp" podUID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Dec 06 07:00:51 crc kubenswrapper[4895]: E1206 07:00:51.397410 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 06 07:00:51 crc kubenswrapper[4895]: E1206 07:00:51.398223 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8gg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-stqzm_openshift-marketplace(e559f522-c700-4e90-82c5-6f2643185c9a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 07:00:51 crc kubenswrapper[4895]: E1206 07:00:51.399439 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-stqzm" podUID="e559f522-c700-4e90-82c5-6f2643185c9a" Dec 06 07:00:55 crc kubenswrapper[4895]: I1206 07:00:55.097716 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-lddxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Dec 06 07:00:55 crc kubenswrapper[4895]: I1206 07:00:55.098215 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lddxp" podUID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Dec 06 07:00:59 crc kubenswrapper[4895]: I1206 07:00:59.696257 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:00:59 crc kubenswrapper[4895]: I1206 07:00:59.696622 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:00:59 crc kubenswrapper[4895]: I1206 07:00:59.696674 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 07:00:59 crc kubenswrapper[4895]: I1206 07:00:59.697253 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:00:59 crc kubenswrapper[4895]: I1206 07:00:59.697301 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57" gracePeriod=600 Dec 06 07:01:02 crc kubenswrapper[4895]: E1206 07:01:02.252114 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 06 07:01:02 crc kubenswrapper[4895]: E1206 07:01:02.253076 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f55kb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9ddnd_openshift-marketplace(7424c4ea-3bfe-4af7-afc3-e2ed98d36e75): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 07:01:02 crc kubenswrapper[4895]: E1206 07:01:02.254512 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-9ddnd" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" Dec 06 07:01:02 crc 
kubenswrapper[4895]: I1206 07:01:02.432224 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57" exitCode=0 Dec 06 07:01:02 crc kubenswrapper[4895]: I1206 07:01:02.432294 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57"} Dec 06 07:01:03 crc kubenswrapper[4895]: E1206 07:01:03.932909 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9ddnd" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" Dec 06 07:01:04 crc kubenswrapper[4895]: E1206 07:01:04.015741 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 06 07:01:04 crc kubenswrapper[4895]: E1206 07:01:04.015911 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rs9sf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2j9kw_openshift-marketplace(ad3d9f3b-cc45-4169-9476-b15937334205): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 07:01:04 crc kubenswrapper[4895]: E1206 07:01:04.017271 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-2j9kw" 
podUID="ad3d9f3b-cc45-4169-9476-b15937334205" Dec 06 07:01:04 crc kubenswrapper[4895]: E1206 07:01:04.031570 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 06 07:01:04 crc kubenswrapper[4895]: E1206 07:01:04.031759 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-txvpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-dlp2v_openshift-marketplace(b5924c97-fec5-45b0-a9a2-8f851c88dfbf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 07:01:04 crc kubenswrapper[4895]: E1206 07:01:04.032998 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-dlp2v" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" Dec 06 07:01:04 crc kubenswrapper[4895]: E1206 07:01:04.051649 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 06 07:01:04 crc kubenswrapper[4895]: E1206 07:01:04.051792 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zcb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-xmdtv_openshift-marketplace(c57d6deb-02dd-473c-b644-7ef7a1f8e500): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 07:01:04 crc kubenswrapper[4895]: E1206 07:01:04.053004 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-xmdtv" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" Dec 06 07:01:04 crc kubenswrapper[4895]: E1206 07:01:04.057666 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 06 07:01:04 crc kubenswrapper[4895]: E1206 07:01:04.057860 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plq9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-6wtxh_openshift-marketplace(34e74e8b-145a-45bc-a163-4fbd502bc155): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 07:01:04 crc kubenswrapper[4895]: E1206 07:01:04.059119 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-6wtxh" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" Dec 06 07:01:05 crc kubenswrapper[4895]: I1206 07:01:05.097386 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-lddxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Dec 06 07:01:05 crc kubenswrapper[4895]: I1206 07:01:05.097521 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lddxp" podUID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Dec 06 07:01:05 crc kubenswrapper[4895]: E1206 07:01:05.740750 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-xmdtv" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" Dec 06 07:01:05 crc kubenswrapper[4895]: E1206 07:01:05.740786 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-dlp2v" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf"
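
Each canceled pull above surfaces first as ErrImagePull and then as ImagePullBackOff while the kubelet waits out a growing retry delay. A sketch of that back-off schedule; the 10s initial delay, doubling, and 5m cap are kubelet's commonly cited defaults, assumed here rather than read from this log:

package main

import (
	"fmt"
	"time"
)

// Sketch of the image pull back-off schedule: each failed pull doubles
// the retry delay up to a cap. 10s initial and 5m max are assumptions
// based on kubelet's commonly documented defaults.
func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("pull attempt %d failed (ErrImagePull); next retry in %v (ImagePullBackOff)\n",
			attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
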
pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2j9kw" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" Dec 06 07:01:05 crc kubenswrapper[4895]: E1206 07:01:05.741624 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6wtxh" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" Dec 06 07:01:05 crc kubenswrapper[4895]: E1206 07:01:05.835846 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 06 07:01:05 crc kubenswrapper[4895]: E1206 07:01:05.836602 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n8552,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-zwn6t_openshift-marketplace(c6ec729e-b8e3-42ad-84a0-ded336274afd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 07:01:05 crc kubenswrapper[4895]: E1206 07:01:05.837842 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-zwn6t" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" Dec 06 07:01:05 crc kubenswrapper[4895]: E1206 07:01:05.965711 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 06 07:01:05 crc kubenswrapper[4895]: E1206 07:01:05.966059 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xwq76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-z8gpr_openshift-marketplace(dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 07:01:05 crc kubenswrapper[4895]: E1206 07:01:05.967223 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-z8gpr" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" Dec 06 07:01:06 crc kubenswrapper[4895]: I1206 07:01:06.070311 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 06 07:01:06 crc kubenswrapper[4895]: W1206 07:01:06.214194 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb1a1bb89_7730_4593_8868_fcb642fe155f.slice/crio-780a28d55dd0cc01d01ef7cb7167bb9fc019dc127333569800472275245b5085 WatchSource:0}: Error finding container 780a28d55dd0cc01d01ef7cb7167bb9fc019dc127333569800472275245b5085: Status 404 returned error can't find the container with id 780a28d55dd0cc01d01ef7cb7167bb9fc019dc127333569800472275245b5085 Dec 06 07:01:06 crc kubenswrapper[4895]: I1206 07:01:06.217184 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 06 07:01:06 crc kubenswrapper[4895]: I1206 07:01:06.470912 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" 
event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"c18110f799eaeab538c5a5c8ddde83eb9d7b4f8f79d73523c855405598e83192"} Dec 06 07:01:06 crc kubenswrapper[4895]: I1206 07:01:06.473827 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stqzm" event={"ID":"e559f522-c700-4e90-82c5-6f2643185c9a","Type":"ContainerStarted","Data":"3bada894d8362e7d72580d12b9e54b50277ee6081acdcf0e5d2bf306910bc8ee"} Dec 06 07:01:06 crc kubenswrapper[4895]: I1206 07:01:06.475291 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d916d0cd-06ea-488b-a8aa-cd43f0069b13","Type":"ContainerStarted","Data":"061ea0f536f2caacb00633dce1dde0f80b282024e9a9853374f66af332281a9e"} Dec 06 07:01:06 crc kubenswrapper[4895]: I1206 07:01:06.477545 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b1a1bb89-7730-4593-8868-fcb642fe155f","Type":"ContainerStarted","Data":"780a28d55dd0cc01d01ef7cb7167bb9fc019dc127333569800472275245b5085"} Dec 06 07:01:06 crc kubenswrapper[4895]: I1206 07:01:06.481006 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lddxp" event={"ID":"f740e915-a41a-4bfb-a4fa-1b33903fecd6","Type":"ContainerStarted","Data":"6cbb2c49377e9486cbf80d2aae760ee1a54845df9ff45780936550f40cd0d09e"} Dec 06 07:01:06 crc kubenswrapper[4895]: I1206 07:01:06.481448 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-lddxp" Dec 06 07:01:06 crc kubenswrapper[4895]: I1206 07:01:06.481661 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-lddxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Dec 06 07:01:06 crc kubenswrapper[4895]: I1206 07:01:06.481708 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lddxp" podUID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Dec 06 07:01:06 crc kubenswrapper[4895]: E1206 07:01:06.482828 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zwn6t" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" Dec 06 07:01:06 crc kubenswrapper[4895]: E1206 07:01:06.483056 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z8gpr" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" Dec 06 07:01:07 crc kubenswrapper[4895]: I1206 07:01:07.491629 4895 generic.go:334] "Generic (PLEG): container finished" podID="e559f522-c700-4e90-82c5-6f2643185c9a" containerID="3bada894d8362e7d72580d12b9e54b50277ee6081acdcf0e5d2bf306910bc8ee" exitCode=0 Dec 06 07:01:07 crc kubenswrapper[4895]: I1206 07:01:07.492098 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stqzm" 
event={"ID":"e559f522-c700-4e90-82c5-6f2643185c9a","Type":"ContainerDied","Data":"3bada894d8362e7d72580d12b9e54b50277ee6081acdcf0e5d2bf306910bc8ee"} Dec 06 07:01:07 crc kubenswrapper[4895]: I1206 07:01:07.681860 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d916d0cd-06ea-488b-a8aa-cd43f0069b13","Type":"ContainerStarted","Data":"8e9ae69bddd5287422feb2f8f4e0e20b35effca6e1bfd97dd4e056dfc606c504"} Dec 06 07:01:07 crc kubenswrapper[4895]: I1206 07:01:07.693336 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b1a1bb89-7730-4593-8868-fcb642fe155f","Type":"ContainerStarted","Data":"0584889e0379a35337329965d2bbbaf8177765a1b34142e4cbf7b83bb50c02b1"} Dec 06 07:01:07 crc kubenswrapper[4895]: I1206 07:01:07.693971 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-lddxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Dec 06 07:01:07 crc kubenswrapper[4895]: I1206 07:01:07.694021 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lddxp" podUID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Dec 06 07:01:07 crc kubenswrapper[4895]: I1206 07:01:07.717223 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=30.717203863 podStartE2EDuration="30.717203863s" podCreationTimestamp="2025-12-06 07:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:01:07.715275011 +0000 UTC m=+230.116663871" watchObservedRunningTime="2025-12-06 07:01:07.717203863 +0000 UTC m=+230.118592733" Dec 06 07:01:07 crc kubenswrapper[4895]: I1206 07:01:07.751165 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=35.751144836 podStartE2EDuration="35.751144836s" podCreationTimestamp="2025-12-06 07:00:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:01:07.748207963 +0000 UTC m=+230.149596833" watchObservedRunningTime="2025-12-06 07:01:07.751144836 +0000 UTC m=+230.152533706" Dec 06 07:01:08 crc kubenswrapper[4895]: I1206 07:01:08.702974 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stqzm" event={"ID":"e559f522-c700-4e90-82c5-6f2643185c9a","Type":"ContainerStarted","Data":"60386c6aca44eb306d1e4a6c5908572f074b84ef399d8d90d26e28ea69ecbc70"} Dec 06 07:01:08 crc kubenswrapper[4895]: I1206 07:01:08.705318 4895 generic.go:334] "Generic (PLEG): container finished" podID="b1a1bb89-7730-4593-8868-fcb642fe155f" containerID="0584889e0379a35337329965d2bbbaf8177765a1b34142e4cbf7b83bb50c02b1" exitCode=0 Dec 06 07:01:08 crc kubenswrapper[4895]: I1206 07:01:08.705399 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b1a1bb89-7730-4593-8868-fcb642fe155f","Type":"ContainerDied","Data":"0584889e0379a35337329965d2bbbaf8177765a1b34142e4cbf7b83bb50c02b1"} Dec 06 07:01:08 
crc kubenswrapper[4895]: I1206 07:01:08.732374 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-stqzm" podStartSLOduration=4.800018912 podStartE2EDuration="1m13.732352791s" podCreationTimestamp="2025-12-06 06:59:55 +0000 UTC" firstStartedPulling="2025-12-06 06:59:59.269943446 +0000 UTC m=+161.671332316" lastFinishedPulling="2025-12-06 07:01:08.202277325 +0000 UTC m=+230.603666195" observedRunningTime="2025-12-06 07:01:08.728668004 +0000 UTC m=+231.130056874" watchObservedRunningTime="2025-12-06 07:01:08.732352791 +0000 UTC m=+231.133741661" Dec 06 07:01:10 crc kubenswrapper[4895]: I1206 07:01:10.056643 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 07:01:10 crc kubenswrapper[4895]: I1206 07:01:10.168456 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1a1bb89-7730-4593-8868-fcb642fe155f-kubelet-dir\") pod \"b1a1bb89-7730-4593-8868-fcb642fe155f\" (UID: \"b1a1bb89-7730-4593-8868-fcb642fe155f\") " Dec 06 07:01:10 crc kubenswrapper[4895]: I1206 07:01:10.168535 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1a1bb89-7730-4593-8868-fcb642fe155f-kube-api-access\") pod \"b1a1bb89-7730-4593-8868-fcb642fe155f\" (UID: \"b1a1bb89-7730-4593-8868-fcb642fe155f\") " Dec 06 07:01:10 crc kubenswrapper[4895]: I1206 07:01:10.168632 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1a1bb89-7730-4593-8868-fcb642fe155f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b1a1bb89-7730-4593-8868-fcb642fe155f" (UID: "b1a1bb89-7730-4593-8868-fcb642fe155f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:01:10 crc kubenswrapper[4895]: I1206 07:01:10.169400 4895 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1a1bb89-7730-4593-8868-fcb642fe155f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:10 crc kubenswrapper[4895]: I1206 07:01:10.176230 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1a1bb89-7730-4593-8868-fcb642fe155f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b1a1bb89-7730-4593-8868-fcb642fe155f" (UID: "b1a1bb89-7730-4593-8868-fcb642fe155f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:01:10 crc kubenswrapper[4895]: I1206 07:01:10.270264 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1a1bb89-7730-4593-8868-fcb642fe155f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:10 crc kubenswrapper[4895]: I1206 07:01:10.732334 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b1a1bb89-7730-4593-8868-fcb642fe155f","Type":"ContainerDied","Data":"780a28d55dd0cc01d01ef7cb7167bb9fc019dc127333569800472275245b5085"} Dec 06 07:01:10 crc kubenswrapper[4895]: I1206 07:01:10.732385 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="780a28d55dd0cc01d01ef7cb7167bb9fc019dc127333569800472275245b5085" Dec 06 07:01:10 crc kubenswrapper[4895]: I1206 07:01:10.732464 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 07:01:15 crc kubenswrapper[4895]: I1206 07:01:15.097881 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-lddxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Dec 06 07:01:15 crc kubenswrapper[4895]: I1206 07:01:15.097881 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-lddxp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Dec 06 07:01:15 crc kubenswrapper[4895]: I1206 07:01:15.098291 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lddxp" podUID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Dec 06 07:01:15 crc kubenswrapper[4895]: I1206 07:01:15.098365 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lddxp" podUID="f740e915-a41a-4bfb-a4fa-1b33903fecd6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Dec 06 07:01:16 crc kubenswrapper[4895]: I1206 07:01:16.094409 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-stqzm" Dec 06 07:01:16 crc kubenswrapper[4895]: I1206 07:01:16.094553 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-stqzm" Dec 06 07:01:16 crc kubenswrapper[4895]: I1206 07:01:16.672110 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-stqzm" Dec 06 07:01:16 crc kubenswrapper[4895]: I1206 07:01:16.814353 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-stqzm" Dec 06 07:01:16 crc kubenswrapper[4895]: I1206 07:01:16.901290 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-stqzm"] Dec 06 07:01:18 crc kubenswrapper[4895]: I1206 07:01:18.778684 4895 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-stqzm" podUID="e559f522-c700-4e90-82c5-6f2643185c9a" containerName="registry-server" containerID="cri-o://60386c6aca44eb306d1e4a6c5908572f074b84ef399d8d90d26e28ea69ecbc70" gracePeriod=2 Dec 06 07:01:19 crc kubenswrapper[4895]: I1206 07:01:19.787706 4895 generic.go:334] "Generic (PLEG): container finished" podID="e559f522-c700-4e90-82c5-6f2643185c9a" containerID="60386c6aca44eb306d1e4a6c5908572f074b84ef399d8d90d26e28ea69ecbc70" exitCode=0 Dec 06 07:01:19 crc kubenswrapper[4895]: I1206 07:01:19.787808 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stqzm" event={"ID":"e559f522-c700-4e90-82c5-6f2643185c9a","Type":"ContainerDied","Data":"60386c6aca44eb306d1e4a6c5908572f074b84ef399d8d90d26e28ea69ecbc70"} Dec 06 07:01:21 crc kubenswrapper[4895]: I1206 07:01:21.927756 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-stqzm" Dec 06 07:01:22 crc kubenswrapper[4895]: I1206 07:01:22.046883 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e559f522-c700-4e90-82c5-6f2643185c9a-utilities\") pod \"e559f522-c700-4e90-82c5-6f2643185c9a\" (UID: \"e559f522-c700-4e90-82c5-6f2643185c9a\") " Dec 06 07:01:22 crc kubenswrapper[4895]: I1206 07:01:22.046944 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e559f522-c700-4e90-82c5-6f2643185c9a-catalog-content\") pod \"e559f522-c700-4e90-82c5-6f2643185c9a\" (UID: \"e559f522-c700-4e90-82c5-6f2643185c9a\") " Dec 06 07:01:22 crc kubenswrapper[4895]: I1206 07:01:22.047055 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8gg4\" (UniqueName: \"kubernetes.io/projected/e559f522-c700-4e90-82c5-6f2643185c9a-kube-api-access-k8gg4\") pod \"e559f522-c700-4e90-82c5-6f2643185c9a\" (UID: \"e559f522-c700-4e90-82c5-6f2643185c9a\") " Dec 06 07:01:22 crc kubenswrapper[4895]: I1206 07:01:22.048213 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e559f522-c700-4e90-82c5-6f2643185c9a-utilities" (OuterVolumeSpecName: "utilities") pod "e559f522-c700-4e90-82c5-6f2643185c9a" (UID: "e559f522-c700-4e90-82c5-6f2643185c9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:01:22 crc kubenswrapper[4895]: I1206 07:01:22.053043 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e559f522-c700-4e90-82c5-6f2643185c9a-kube-api-access-k8gg4" (OuterVolumeSpecName: "kube-api-access-k8gg4") pod "e559f522-c700-4e90-82c5-6f2643185c9a" (UID: "e559f522-c700-4e90-82c5-6f2643185c9a"). InnerVolumeSpecName "kube-api-access-k8gg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:01:22 crc kubenswrapper[4895]: I1206 07:01:22.068589 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e559f522-c700-4e90-82c5-6f2643185c9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e559f522-c700-4e90-82c5-6f2643185c9a" (UID: "e559f522-c700-4e90-82c5-6f2643185c9a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:01:22 crc kubenswrapper[4895]: I1206 07:01:22.150631 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8gg4\" (UniqueName: \"kubernetes.io/projected/e559f522-c700-4e90-82c5-6f2643185c9a-kube-api-access-k8gg4\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:22 crc kubenswrapper[4895]: I1206 07:01:22.150685 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e559f522-c700-4e90-82c5-6f2643185c9a-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:22 crc kubenswrapper[4895]: I1206 07:01:22.150704 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e559f522-c700-4e90-82c5-6f2643185c9a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:22 crc kubenswrapper[4895]: I1206 07:01:22.810083 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stqzm" event={"ID":"e559f522-c700-4e90-82c5-6f2643185c9a","Type":"ContainerDied","Data":"8acdcffe42af80ddd71e4c8c8906a3acb46ac151ef09294613ed16cd17258f7a"} Dec 06 07:01:22 crc kubenswrapper[4895]: I1206 07:01:22.810164 4895 scope.go:117] "RemoveContainer" containerID="60386c6aca44eb306d1e4a6c5908572f074b84ef399d8d90d26e28ea69ecbc70" Dec 06 07:01:22 crc kubenswrapper[4895]: I1206 07:01:22.810364 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-stqzm" Dec 06 07:01:22 crc kubenswrapper[4895]: I1206 07:01:22.844381 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-stqzm"] Dec 06 07:01:22 crc kubenswrapper[4895]: I1206 07:01:22.848215 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-stqzm"] Dec 06 07:01:24 crc kubenswrapper[4895]: I1206 07:01:24.062050 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e559f522-c700-4e90-82c5-6f2643185c9a" path="/var/lib/kubelet/pods/e559f522-c700-4e90-82c5-6f2643185c9a/volumes" Dec 06 07:01:25 crc kubenswrapper[4895]: I1206 07:01:25.106959 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-lddxp" Dec 06 07:01:32 crc kubenswrapper[4895]: I1206 07:01:32.598594 4895 scope.go:117] "RemoveContainer" containerID="3bada894d8362e7d72580d12b9e54b50277ee6081acdcf0e5d2bf306910bc8ee" Dec 06 07:01:32 crc kubenswrapper[4895]: I1206 07:01:32.755693 4895 scope.go:117] "RemoveContainer" containerID="476c3c519b3826255c1b289373e089e492574c06d3d0e454d537ebef13baac8c" Dec 06 07:01:34 crc kubenswrapper[4895]: I1206 07:01:34.258521 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ggcv2"] Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.203854 4895 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 06 07:01:44 crc kubenswrapper[4895]: E1206 07:01:44.204922 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e559f522-c700-4e90-82c5-6f2643185c9a" containerName="extract-content" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.204942 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e559f522-c700-4e90-82c5-6f2643185c9a" containerName="extract-content" Dec 06 07:01:44 crc kubenswrapper[4895]: E1206 07:01:44.204955 4895 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e559f522-c700-4e90-82c5-6f2643185c9a" containerName="extract-utilities" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.204964 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e559f522-c700-4e90-82c5-6f2643185c9a" containerName="extract-utilities" Dec 06 07:01:44 crc kubenswrapper[4895]: E1206 07:01:44.204983 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e559f522-c700-4e90-82c5-6f2643185c9a" containerName="registry-server" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.204989 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e559f522-c700-4e90-82c5-6f2643185c9a" containerName="registry-server" Dec 06 07:01:44 crc kubenswrapper[4895]: E1206 07:01:44.205003 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a1bb89-7730-4593-8868-fcb642fe155f" containerName="pruner" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.205009 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a1bb89-7730-4593-8868-fcb642fe155f" containerName="pruner" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.205139 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1a1bb89-7730-4593-8868-fcb642fe155f" containerName="pruner" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.205157 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e559f522-c700-4e90-82c5-6f2643185c9a" containerName="registry-server" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.205602 4895 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.205815 4895 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.205887 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999" gracePeriod=15 Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.205944 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.206021 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954" gracePeriod=15 Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.205924 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf" gracePeriod=15 Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.206007 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4" gracePeriod=15 Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.206050 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f" gracePeriod=15 Dec 06 07:01:44 crc kubenswrapper[4895]: E1206 07:01:44.206429 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.206446 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 06 07:01:44 crc kubenswrapper[4895]: E1206 07:01:44.206455 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.206463 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 06 07:01:44 crc kubenswrapper[4895]: E1206 07:01:44.206514 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.206523 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 06 07:01:44 crc kubenswrapper[4895]: E1206 07:01:44.206535 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.206543 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 06 07:01:44 crc kubenswrapper[4895]: E1206 07:01:44.206554 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.206561 4895 
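
The SyncLoop ADD/REMOVE events with source="file" above come from the kubelet's static-pod file source, which watches a manifest directory on disk (conventionally /etc/kubernetes/manifests on this kind of node; the path here is an assumption). A toy polling stand-in for that source, just to show the idea:

package main

import (
	"fmt"
	"os"
	"time"
)

// pollManifests is a toy stand-in for the kubelet's "file" pod source:
// it polls a static-pod manifest directory and reports additions and
// removals, which the kubelet surfaces as SyncLoop ADD/REMOVE events.
func pollManifests(dir string, interval time.Duration) {
	seen := map[string]bool{}
	for {
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Println("read dir:", err)
			time.Sleep(interval)
			continue
		}
		current := map[string]bool{}
		for _, e := range entries {
			current[e.Name()] = true
			if !seen[e.Name()] {
				fmt.Println("SyncLoop ADD (file):", e.Name())
			}
		}
		for name := range seen {
			if !current[name] {
				fmt.Println("SyncLoop REMOVE (file):", name)
			}
		}
		seen = current
		time.Sleep(interval)
	}
}

func main() {
	// Assumed conventional static-pod directory; adjust for the host.
	pollManifests("/etc/kubernetes/manifests", 20*time.Second)
}
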
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 06 07:01:44 crc kubenswrapper[4895]: E1206 07:01:44.206571 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.206579 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 06 07:01:44 crc kubenswrapper[4895]: E1206 07:01:44.206593 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.206604 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.206717 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.206726 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.206737 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.206745 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.206754 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.206763 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.253533 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.362193 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.362266 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.362311 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.362681 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.362812 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.362857 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.363022 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.363137 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.464162 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.464238 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.464258 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.464280 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.464300 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.464329 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.464352 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.464392 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.464466 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.464526 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.464550 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.464575 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.464599 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.464620 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.464643 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.464669 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 07:01:44 crc kubenswrapper[4895]: I1206 07:01:44.549415 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 07:01:48 crc kubenswrapper[4895]: I1206 07:01:48.057666 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:51 crc kubenswrapper[4895]: E1206 07:01:51.313365 4895 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.132:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-zwn6t.187e8e406bb31616 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-zwn6t,UID:c6ec729e-b8e3-42ad-84a0-ded336274afd,APIVersion:v1,ResourceVersion:28237,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 30.257s (30.257s including waiting). 
Image size: 1222075732 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-06 07:01:51.311762966 +0000 UTC m=+273.713151836,LastTimestamp:2025-12-06 07:01:51.311762966 +0000 UTC m=+273.713151836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 06 07:01:51 crc kubenswrapper[4895]: E1206 07:01:51.662558 4895 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:51 crc kubenswrapper[4895]: E1206 07:01:51.663838 4895 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:51 crc kubenswrapper[4895]: E1206 07:01:51.664178 4895 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:51 crc kubenswrapper[4895]: I1206 07:01:51.664343 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 06 07:01:51 crc kubenswrapper[4895]: E1206 07:01:51.664685 4895 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:51 crc kubenswrapper[4895]: E1206 07:01:51.664941 4895 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:51 crc kubenswrapper[4895]: I1206 07:01:51.664971 4895 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 06 07:01:51 crc kubenswrapper[4895]: E1206 07:01:51.665294 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" interval="200ms" Dec 06 07:01:51 crc kubenswrapper[4895]: I1206 07:01:51.669077 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 06 07:01:51 crc kubenswrapper[4895]: E1206 07:01:51.719054 4895 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.132:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-zwn6t.187e8e406bb31616 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-zwn6t,UID:c6ec729e-b8e3-42ad-84a0-ded336274afd,APIVersion:v1,ResourceVersion:28237,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled 
image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 30.257s (30.257s including waiting). Image size: 1222075732 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-06 07:01:51.311762966 +0000 UTC m=+273.713151836,LastTimestamp:2025-12-06 07:01:51.311762966 +0000 UTC m=+273.713151836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 06 07:01:51 crc kubenswrapper[4895]: E1206 07:01:51.867167 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" interval="400ms" Dec 06 07:01:52 crc kubenswrapper[4895]: I1206 07:01:52.182565 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf" exitCode=0 Dec 06 07:01:52 crc kubenswrapper[4895]: I1206 07:01:52.182617 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954" exitCode=0 Dec 06 07:01:52 crc kubenswrapper[4895]: I1206 07:01:52.182629 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4" exitCode=0 Dec 06 07:01:52 crc kubenswrapper[4895]: I1206 07:01:52.182639 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f" exitCode=2 Dec 06 07:01:52 crc kubenswrapper[4895]: I1206 07:01:52.182692 4895 scope.go:117] "RemoveContainer" containerID="672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479" Dec 06 07:01:52 crc kubenswrapper[4895]: I1206 07:01:52.207326 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 06 07:01:52 crc kubenswrapper[4895]: I1206 07:01:52.208390 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:01:52 crc kubenswrapper[4895]: I1206 07:01:52.209635 4895 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:52 crc kubenswrapper[4895]: I1206 07:01:52.210367 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:52 crc kubenswrapper[4895]: E1206 07:01:52.268410 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" interval="800ms" Dec 06 07:01:52 crc kubenswrapper[4895]: I1206 07:01:52.375293 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 06 07:01:52 crc kubenswrapper[4895]: I1206 07:01:52.375363 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 06 07:01:52 crc kubenswrapper[4895]: I1206 07:01:52.375442 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 06 07:01:52 crc kubenswrapper[4895]: I1206 07:01:52.375520 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:01:52 crc kubenswrapper[4895]: I1206 07:01:52.375539 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:01:52 crc kubenswrapper[4895]: I1206 07:01:52.375701 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:01:52 crc kubenswrapper[4895]: I1206 07:01:52.376211 4895 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:52 crc kubenswrapper[4895]: I1206 07:01:52.376235 4895 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:52 crc kubenswrapper[4895]: I1206 07:01:52.376248 4895 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:53 crc kubenswrapper[4895]: E1206 07:01:53.069550 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" interval="1.6s" Dec 06 07:01:53 crc kubenswrapper[4895]: I1206 07:01:53.194098 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 06 07:01:53 crc kubenswrapper[4895]: I1206 07:01:53.195206 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999" exitCode=0 Dec 06 07:01:53 crc kubenswrapper[4895]: I1206 07:01:53.195309 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:01:53 crc kubenswrapper[4895]: I1206 07:01:53.198087 4895 generic.go:334] "Generic (PLEG): container finished" podID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" containerID="8e9ae69bddd5287422feb2f8f4e0e20b35effca6e1bfd97dd4e056dfc606c504" exitCode=0 Dec 06 07:01:53 crc kubenswrapper[4895]: I1206 07:01:53.198146 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d916d0cd-06ea-488b-a8aa-cd43f0069b13","Type":"ContainerDied","Data":"8e9ae69bddd5287422feb2f8f4e0e20b35effca6e1bfd97dd4e056dfc606c504"} Dec 06 07:01:53 crc kubenswrapper[4895]: I1206 07:01:53.198993 4895 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:53 crc kubenswrapper[4895]: I1206 07:01:53.199794 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:53 crc kubenswrapper[4895]: I1206 07:01:53.200687 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 
38.129.56.132:6443: connect: connection refused" Dec 06 07:01:53 crc kubenswrapper[4895]: I1206 07:01:53.210939 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:53 crc kubenswrapper[4895]: I1206 07:01:53.211560 4895 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:53 crc kubenswrapper[4895]: I1206 07:01:53.211912 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.058202 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.437884 4895 scope.go:117] "RemoveContainer" containerID="0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.612676 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.613649 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.614127 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:54 crc kubenswrapper[4895]: E1206 07:01:54.671940 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" interval="3.2s" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.705120 4895 scope.go:117] "RemoveContainer" containerID="672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479" Dec 06 07:01:54 crc kubenswrapper[4895]: E1206 07:01:54.706060 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\": container with ID starting with 672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479 not found: ID does not exist" 
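
The lease-renewal failures above show the kubelet retrying against the downed apiserver with a doubling interval (200ms, 400ms, 800ms, 1.6s, 3.2s) and then, after five failed update attempts, falling back to ensuring the lease exists. A minimal sketch of that retry shape (renewLease is a placeholder for the real PUT to .../kube-node-lease/leases/crc, not client-go):

package main

import (
	"errors"
	"fmt"
	"time"
)

// renewLease stands in for the real API call; here it always fails,
// as it would while the apiserver is refusing connections.
func renewLease() error {
	return errors.New("dial tcp 38.129.56.132:6443: connect: connection refused")
}

func main() {
	interval := 200 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		if err := renewLease(); err != nil {
			fmt.Printf("attempt %d failed (%v); retrying in %s\n", attempt, err, interval)
			time.Sleep(interval)
			interval *= 2 // 200ms, 400ms, 800ms, 1.6s, 3.2s -- matching the log
			continue
		}
		fmt.Println("lease renewed")
		return
	}
	fmt.Println("failed 5 attempts to update lease; falling back to ensure lease")
}
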
containerID="672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.706130 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479"} err="failed to get container status \"672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\": rpc error: code = NotFound desc = could not find container \"672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\": container with ID starting with 672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479 not found: ID does not exist" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.706175 4895 scope.go:117] "RemoveContainer" containerID="699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.762679 4895 scope.go:117] "RemoveContainer" containerID="2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4" Dec 06 07:01:54 crc kubenswrapper[4895]: E1206 07:01:54.790719 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T07:01:54Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T07:01:54Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T07:01:54Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T07:01:54Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:15adb3b2133604b064893f8009a74145e4c8bb5b134d111346dcccbdd2aa9bc2\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:164fc35a19aa6cc886c8015c8ee3eba4895e76b1152cb9d795e4f3154a8533a3\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1610512706},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:544a01170a4aa6cf8322d5bffa5817113efd696e3c3e9bac6a29d2da9f9451e5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:67f42a86b99b69b357285a6845977f967e6c825de2049c19620a78eaf99cebf3\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1222075732},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:3ed403bbcdfb639292182990019eb8534e3bd23e3e34b9dcfc80787fec74b49f\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:7a807fc4e9e777d39e6248ec7b91e4c24fc8778eeb1bfae25ca162d0302e12bb\\\",\\\"registry.redhat.io/redhat/community-operator-index:v
4.18\\\"],\\\"sizeBytes\\\":1201958734},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1b1026c62413fa239fa4ff6541fe8bda656c1281867ad6ee2c848feccb13c97e\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:2b633ebdc901d19290af4dc2d09e2b59c504c0fc15a3fba410b0ce098e2d5753\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1141987142},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\
\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:54 crc kubenswrapper[4895]: E1206 07:01:54.791515 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:54 crc kubenswrapper[4895]: E1206 07:01:54.791697 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:54 crc kubenswrapper[4895]: E1206 07:01:54.791908 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:54 crc kubenswrapper[4895]: E1206 07:01:54.792096 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:54 crc kubenswrapper[4895]: E1206 07:01:54.792114 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.800198 4895 scope.go:117] "RemoveContainer" containerID="c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.811273 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d916d0cd-06ea-488b-a8aa-cd43f0069b13-kube-api-access\") pod \"d916d0cd-06ea-488b-a8aa-cd43f0069b13\" (UID: \"d916d0cd-06ea-488b-a8aa-cd43f0069b13\") " Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.811323 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d916d0cd-06ea-488b-a8aa-cd43f0069b13-kubelet-dir\") pod \"d916d0cd-06ea-488b-a8aa-cd43f0069b13\" (UID: \"d916d0cd-06ea-488b-a8aa-cd43f0069b13\") " Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.811362 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d916d0cd-06ea-488b-a8aa-cd43f0069b13-var-lock\") pod \"d916d0cd-06ea-488b-a8aa-cd43f0069b13\" (UID: 
\"d916d0cd-06ea-488b-a8aa-cd43f0069b13\") " Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.811562 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d916d0cd-06ea-488b-a8aa-cd43f0069b13-var-lock" (OuterVolumeSpecName: "var-lock") pod "d916d0cd-06ea-488b-a8aa-cd43f0069b13" (UID: "d916d0cd-06ea-488b-a8aa-cd43f0069b13"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.811961 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d916d0cd-06ea-488b-a8aa-cd43f0069b13-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d916d0cd-06ea-488b-a8aa-cd43f0069b13" (UID: "d916d0cd-06ea-488b-a8aa-cd43f0069b13"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.822943 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d916d0cd-06ea-488b-a8aa-cd43f0069b13-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d916d0cd-06ea-488b-a8aa-cd43f0069b13" (UID: "d916d0cd-06ea-488b-a8aa-cd43f0069b13"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.852652 4895 scope.go:117] "RemoveContainer" containerID="b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.893831 4895 scope.go:117] "RemoveContainer" containerID="6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.912278 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d916d0cd-06ea-488b-a8aa-cd43f0069b13-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.912303 4895 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d916d0cd-06ea-488b-a8aa-cd43f0069b13-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.912314 4895 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d916d0cd-06ea-488b-a8aa-cd43f0069b13-var-lock\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.918398 4895 scope.go:117] "RemoveContainer" containerID="0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf" Dec 06 07:01:54 crc kubenswrapper[4895]: E1206 07:01:54.918966 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\": container with ID starting with 0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf not found: ID does not exist" containerID="0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.919012 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf"} err="failed to get container status \"0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\": rpc error: code = NotFound desc = could not find container 
\"0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf\": container with ID starting with 0f8faec219511892e8e350605d5cee61b31b317381e6bf4accab594746212dbf not found: ID does not exist" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.919035 4895 scope.go:117] "RemoveContainer" containerID="672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.919631 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479"} err="failed to get container status \"672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\": rpc error: code = NotFound desc = could not find container \"672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479\": container with ID starting with 672be2e5cf6e573363dffe6b3e81abb5bb666bc6cbef66177960a5d581a8b479 not found: ID does not exist" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.919675 4895 scope.go:117] "RemoveContainer" containerID="699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954" Dec 06 07:01:54 crc kubenswrapper[4895]: E1206 07:01:54.920319 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\": container with ID starting with 699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954 not found: ID does not exist" containerID="699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.920347 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954"} err="failed to get container status \"699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\": rpc error: code = NotFound desc = could not find container \"699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954\": container with ID starting with 699badb926bbc63b0cd58230d325e27cce6328e4c783a421bbd501a00e6af954 not found: ID does not exist" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.920362 4895 scope.go:117] "RemoveContainer" containerID="2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4" Dec 06 07:01:54 crc kubenswrapper[4895]: E1206 07:01:54.920617 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\": container with ID starting with 2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4 not found: ID does not exist" containerID="2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.920639 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4"} err="failed to get container status \"2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\": rpc error: code = NotFound desc = could not find container \"2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4\": container with ID starting with 2b60f1327b52ad2843106708401a9946303535202c9a26bb860afad1aa3080a4 not found: ID does not exist" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.920652 4895 scope.go:117] "RemoveContainer" 
containerID="c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f" Dec 06 07:01:54 crc kubenswrapper[4895]: E1206 07:01:54.921138 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\": container with ID starting with c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f not found: ID does not exist" containerID="c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.921161 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f"} err="failed to get container status \"c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\": rpc error: code = NotFound desc = could not find container \"c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f\": container with ID starting with c9bf09a3bae2fa23ac6aa4c199f79a2c597bd01cce3510b63b0f026dff08ee1f not found: ID does not exist" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.921174 4895 scope.go:117] "RemoveContainer" containerID="b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999" Dec 06 07:01:54 crc kubenswrapper[4895]: E1206 07:01:54.921738 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\": container with ID starting with b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999 not found: ID does not exist" containerID="b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.921791 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999"} err="failed to get container status \"b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\": rpc error: code = NotFound desc = could not find container \"b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999\": container with ID starting with b57a29db64dc27a03c2687e2c92f76635a547200b0660b875ba776a5342aa999 not found: ID does not exist" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.921809 4895 scope.go:117] "RemoveContainer" containerID="6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2" Dec 06 07:01:54 crc kubenswrapper[4895]: E1206 07:01:54.922083 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\": container with ID starting with 6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2 not found: ID does not exist" containerID="6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2" Dec 06 07:01:54 crc kubenswrapper[4895]: I1206 07:01:54.922111 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2"} err="failed to get container status \"6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\": rpc error: code = NotFound desc = could not find container \"6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2\": container with ID starting with 
6e8caa27359130860fa3cb86884c1e3cea4cd90beeeedaeabf50a36510edc4d2 not found: ID does not exist" Dec 06 07:01:55 crc kubenswrapper[4895]: I1206 07:01:55.219818 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 06 07:01:55 crc kubenswrapper[4895]: I1206 07:01:55.220176 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d916d0cd-06ea-488b-a8aa-cd43f0069b13","Type":"ContainerDied","Data":"061ea0f536f2caacb00633dce1dde0f80b282024e9a9853374f66af332281a9e"} Dec 06 07:01:55 crc kubenswrapper[4895]: I1206 07:01:55.220574 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="061ea0f536f2caacb00633dce1dde0f80b282024e9a9853374f66af332281a9e" Dec 06 07:01:55 crc kubenswrapper[4895]: I1206 07:01:55.231726 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0ccfc93dd4611812b103c51840b3ea5c0536e4e1e726c91724662e6411fc743b"} Dec 06 07:01:55 crc kubenswrapper[4895]: I1206 07:01:55.235775 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:55 crc kubenswrapper[4895]: I1206 07:01:55.236460 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:55 crc kubenswrapper[4895]: I1206 07:01:55.239518 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlp2v" event={"ID":"b5924c97-fec5-45b0-a9a2-8f851c88dfbf","Type":"ContainerStarted","Data":"b09e3f11d600da445f87d06d0b72e252870b78d5b69399f386905f8f5f90a411"} Dec 06 07:01:55 crc kubenswrapper[4895]: I1206 07:01:55.245200 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j9kw" event={"ID":"ad3d9f3b-cc45-4169-9476-b15937334205","Type":"ContainerStarted","Data":"03b0c7b7187aa2f9342894a9e567a35de43977b2ffd8b9d1c58f469e6f967eb6"} Dec 06 07:01:55 crc kubenswrapper[4895]: I1206 07:01:55.252408 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zwn6t" event={"ID":"c6ec729e-b8e3-42ad-84a0-ded336274afd","Type":"ContainerStarted","Data":"419f4b441bb996779e15606f1a7cf9543e782abe52f4fb95335d53d92b68b572"} Dec 06 07:01:55 crc kubenswrapper[4895]: I1206 07:01:55.257523 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6wtxh" event={"ID":"34e74e8b-145a-45bc-a163-4fbd502bc155","Type":"ContainerStarted","Data":"4b5b4f848c415cbc3143ca7a36d3fd15a67824eafc891f378747af70f8844bfd"} Dec 06 07:01:55 crc kubenswrapper[4895]: I1206 07:01:55.259191 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:55 crc kubenswrapper[4895]: I1206 07:01:55.259455 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:55 crc kubenswrapper[4895]: I1206 07:01:55.260592 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ddnd" event={"ID":"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75","Type":"ContainerStarted","Data":"4d1dca11ac49e7119a5d20d66e05dd31597629ff2edfc131a9d5c3622146e54d"} Dec 06 07:01:55 crc kubenswrapper[4895]: I1206 07:01:55.261062 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:55 crc kubenswrapper[4895]: I1206 07:01:55.262807 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:55 crc kubenswrapper[4895]: I1206 07:01:55.263338 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:55 crc kubenswrapper[4895]: I1206 07:01:55.263603 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:55 crc kubenswrapper[4895]: I1206 07:01:55.264039 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:55 crc kubenswrapper[4895]: I1206 07:01:55.265307 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmdtv" event={"ID":"c57d6deb-02dd-473c-b644-7ef7a1f8e500","Type":"ContainerStarted","Data":"42d87fe5ad9f66e22b70125fc95763264186389d20120f67b49bb9f62839c98c"} Dec 06 07:01:55 crc kubenswrapper[4895]: I1206 07:01:55.267834 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8gpr" event={"ID":"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8","Type":"ContainerStarted","Data":"76feb2fc710cab21a8d6ab5c9464544045b719ed478dc97cde4359caaae29c2d"} Dec 06 07:01:56 crc kubenswrapper[4895]: 
I1206 07:01:56.275351 4895 generic.go:334] "Generic (PLEG): container finished" podID="ad3d9f3b-cc45-4169-9476-b15937334205" containerID="03b0c7b7187aa2f9342894a9e567a35de43977b2ffd8b9d1c58f469e6f967eb6" exitCode=0 Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.275427 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j9kw" event={"ID":"ad3d9f3b-cc45-4169-9476-b15937334205","Type":"ContainerDied","Data":"03b0c7b7187aa2f9342894a9e567a35de43977b2ffd8b9d1c58f469e6f967eb6"} Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.276270 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.276712 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.276924 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.277209 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.277653 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.279536 4895 generic.go:334] "Generic (PLEG): container finished" podID="c6ec729e-b8e3-42ad-84a0-ded336274afd" containerID="419f4b441bb996779e15606f1a7cf9543e782abe52f4fb95335d53d92b68b572" exitCode=0 Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.279605 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zwn6t" event={"ID":"c6ec729e-b8e3-42ad-84a0-ded336274afd","Type":"ContainerDied","Data":"419f4b441bb996779e15606f1a7cf9543e782abe52f4fb95335d53d92b68b572"} Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.280351 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.280703 4895 
status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.281135 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.281455 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.281845 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.282162 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.282548 4895 generic.go:334] "Generic (PLEG): container finished" podID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" containerID="42d87fe5ad9f66e22b70125fc95763264186389d20120f67b49bb9f62839c98c" exitCode=0 Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.282635 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmdtv" event={"ID":"c57d6deb-02dd-473c-b644-7ef7a1f8e500","Type":"ContainerDied","Data":"42d87fe5ad9f66e22b70125fc95763264186389d20120f67b49bb9f62839c98c"} Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.283224 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.283459 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.283809 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.284104 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.284351 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.284586 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.284660 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"613e533fa9a0136c7c9aa8069af19fcdc8e37bfc91e3b7fb203dbe0f4b23941f"} Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.284877 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.285298 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.285644 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.285836 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.286012 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.286184 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.286442 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.286780 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.289609 4895 generic.go:334] "Generic (PLEG): container finished" podID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" containerID="b09e3f11d600da445f87d06d0b72e252870b78d5b69399f386905f8f5f90a411" exitCode=0 Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.289671 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlp2v" event={"ID":"b5924c97-fec5-45b0-a9a2-8f851c88dfbf","Type":"ContainerDied","Data":"b09e3f11d600da445f87d06d0b72e252870b78d5b69399f386905f8f5f90a411"} Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.290203 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.290413 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.291244 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.291604 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.292402 
4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.292843 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.293093 4895 status_manager.go:851] "Failed to get status for pod" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" pod="openshift-marketplace/community-operators-dlp2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlp2v\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.293276 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.293992 4895 generic.go:334] "Generic (PLEG): container finished" podID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" containerID="76feb2fc710cab21a8d6ab5c9464544045b719ed478dc97cde4359caaae29c2d" exitCode=0 Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.294033 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8gpr" event={"ID":"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8","Type":"ContainerDied","Data":"76feb2fc710cab21a8d6ab5c9464544045b719ed478dc97cde4359caaae29c2d"} Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.294731 4895 status_manager.go:851] "Failed to get status for pod" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" pod="openshift-marketplace/certified-operators-z8gpr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z8gpr\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.295068 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.295405 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.295775 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.296116 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.296353 4895 status_manager.go:851] "Failed to get status for pod" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" pod="openshift-marketplace/community-operators-dlp2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlp2v\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.296639 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.299195 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:56 crc kubenswrapper[4895]: I1206 07:01:56.299921 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.301170 4895 generic.go:334] "Generic (PLEG): container finished" podID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" containerID="4d1dca11ac49e7119a5d20d66e05dd31597629ff2edfc131a9d5c3622146e54d" exitCode=0 Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.301322 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ddnd" event={"ID":"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75","Type":"ContainerDied","Data":"4d1dca11ac49e7119a5d20d66e05dd31597629ff2edfc131a9d5c3622146e54d"} Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.302361 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.302973 4895 status_manager.go:851] "Failed to get status for pod" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" pod="openshift-marketplace/certified-operators-z8gpr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z8gpr\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 
07:01:57.303752 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.304039 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.304068 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6wtxh" event={"ID":"34e74e8b-145a-45bc-a163-4fbd502bc155","Type":"ContainerDied","Data":"4b5b4f848c415cbc3143ca7a36d3fd15a67824eafc891f378747af70f8844bfd"} Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.304037 4895 generic.go:334] "Generic (PLEG): container finished" podID="34e74e8b-145a-45bc-a163-4fbd502bc155" containerID="4b5b4f848c415cbc3143ca7a36d3fd15a67824eafc891f378747af70f8844bfd" exitCode=0 Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.304220 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.304415 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.304679 4895 status_manager.go:851] "Failed to get status for pod" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" pod="openshift-marketplace/community-operators-dlp2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlp2v\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.304923 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.305174 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.305586 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.306016 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.306273 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.306633 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.307116 4895 status_manager.go:851] "Failed to get status for pod" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" pod="openshift-marketplace/community-operators-dlp2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlp2v\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.307447 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.308224 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.308431 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:57 crc kubenswrapper[4895]: I1206 07:01:57.308717 4895 status_manager.go:851] "Failed to get status for pod" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" pod="openshift-marketplace/certified-operators-z8gpr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z8gpr\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:57 crc kubenswrapper[4895]: E1206 07:01:57.873058 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" interval="6.4s" Dec 06 07:01:58 crc kubenswrapper[4895]: I1206 07:01:58.052800 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:58 crc kubenswrapper[4895]: I1206 07:01:58.053424 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:58 crc kubenswrapper[4895]: I1206 07:01:58.053846 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:58 crc kubenswrapper[4895]: I1206 07:01:58.054327 4895 status_manager.go:851] "Failed to get status for pod" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" pod="openshift-marketplace/community-operators-dlp2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlp2v\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:58 crc kubenswrapper[4895]: I1206 07:01:58.054560 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:58 crc kubenswrapper[4895]: I1206 07:01:58.054726 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:58 crc kubenswrapper[4895]: I1206 07:01:58.055038 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:58 crc kubenswrapper[4895]: I1206 07:01:58.055337 4895 status_manager.go:851] "Failed to get status for pod" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" pod="openshift-marketplace/certified-operators-z8gpr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z8gpr\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:58 crc kubenswrapper[4895]: I1206 07:01:58.055787 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" 
pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:59 crc kubenswrapper[4895]: I1206 07:01:59.049917 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:01:59 crc kubenswrapper[4895]: I1206 07:01:59.051307 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:59 crc kubenswrapper[4895]: I1206 07:01:59.052355 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:59 crc kubenswrapper[4895]: I1206 07:01:59.053120 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:59 crc kubenswrapper[4895]: I1206 07:01:59.053657 4895 status_manager.go:851] "Failed to get status for pod" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" pod="openshift-marketplace/community-operators-dlp2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlp2v\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:59 crc kubenswrapper[4895]: I1206 07:01:59.053983 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:59 crc kubenswrapper[4895]: I1206 07:01:59.054337 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:59 crc kubenswrapper[4895]: I1206 07:01:59.054748 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:59 crc kubenswrapper[4895]: I1206 07:01:59.055058 4895 status_manager.go:851] "Failed to get status for pod" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" pod="openshift-marketplace/certified-operators-z8gpr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z8gpr\": dial tcp 
38.129.56.132:6443: connect: connection refused" Dec 06 07:01:59 crc kubenswrapper[4895]: I1206 07:01:59.055585 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:01:59 crc kubenswrapper[4895]: I1206 07:01:59.068097 4895 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d0ed585-fa5f-4661-a7fd-69084df17bd9" Dec 06 07:01:59 crc kubenswrapper[4895]: I1206 07:01:59.068157 4895 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d0ed585-fa5f-4661-a7fd-69084df17bd9" Dec 06 07:01:59 crc kubenswrapper[4895]: E1206 07:01:59.069003 4895 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:01:59 crc kubenswrapper[4895]: I1206 07:01:59.069569 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:01:59 crc kubenswrapper[4895]: I1206 07:01:59.466536 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" podUID="688a9d65-4700-47fb-a150-723f9c21b054" containerName="oauth-openshift" containerID="cri-o://b7a78b341c59e943f393b1823b2aecb69f477d7be1f1a4d1dff9a1c8b191f267" gracePeriod=15 Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:01:59.547289 4895 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:01:59.547382 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: E1206 07:02:01.721096 4895 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.132:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-zwn6t.187e8e406bb31616 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-zwn6t,UID:c6ec729e-b8e3-42ad-84a0-ded336274afd,APIVersion:v1,ResourceVersion:28237,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 30.257s (30.257s including waiting). 
Image size: 1222075732 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-06 07:01:51.311762966 +0000 UTC m=+273.713151836,LastTimestamp:2025-12-06 07:01:51.311762966 +0000 UTC m=+273.713151836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:03.853151 4895 generic.go:334] "Generic (PLEG): container finished" podID="688a9d65-4700-47fb-a150-723f9c21b054" containerID="b7a78b341c59e943f393b1823b2aecb69f477d7be1f1a4d1dff9a1c8b191f267" exitCode=0 Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:03.853251 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" event={"ID":"688a9d65-4700-47fb-a150-723f9c21b054","Type":"ContainerDied","Data":"b7a78b341c59e943f393b1823b2aecb69f477d7be1f1a4d1dff9a1c8b191f267"} Dec 06 07:02:07 crc kubenswrapper[4895]: E1206 07:02:04.274935 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" interval="7s" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.701383 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.702063 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.702322 4895 status_manager.go:851] "Failed to get status for pod" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" pod="openshift-marketplace/certified-operators-z8gpr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z8gpr\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.702535 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.702723 4895 status_manager.go:851] "Failed to get status for pod" podUID="688a9d65-4700-47fb-a150-723f9c21b054" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ggcv2\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.702920 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.703108 4895 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.703672 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.703946 4895 status_manager.go:851] "Failed to get status for pod" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" pod="openshift-marketplace/community-operators-dlp2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlp2v\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.704107 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.704362 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.796937 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/688a9d65-4700-47fb-a150-723f9c21b054-audit-dir\") pod \"688a9d65-4700-47fb-a150-723f9c21b054\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.797021 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-template-login\") pod \"688a9d65-4700-47fb-a150-723f9c21b054\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.797052 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/688a9d65-4700-47fb-a150-723f9c21b054-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "688a9d65-4700-47fb-a150-723f9c21b054" (UID: "688a9d65-4700-47fb-a150-723f9c21b054"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.797100 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-idp-0-file-data\") pod \"688a9d65-4700-47fb-a150-723f9c21b054\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.797144 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-audit-policies\") pod \"688a9d65-4700-47fb-a150-723f9c21b054\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.797181 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-service-ca\") pod \"688a9d65-4700-47fb-a150-723f9c21b054\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.797218 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-cliconfig\") pod \"688a9d65-4700-47fb-a150-723f9c21b054\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.797275 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-ocp-branding-template\") pod \"688a9d65-4700-47fb-a150-723f9c21b054\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.797320 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-serving-cert\") pod \"688a9d65-4700-47fb-a150-723f9c21b054\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.797354 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-template-provider-selection\") pod \"688a9d65-4700-47fb-a150-723f9c21b054\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.797392 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-session\") pod \"688a9d65-4700-47fb-a150-723f9c21b054\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.797424 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-trusted-ca-bundle\") pod \"688a9d65-4700-47fb-a150-723f9c21b054\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " Dec 06 07:02:07 crc 
kubenswrapper[4895]: I1206 07:02:04.797464 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd5ws\" (UniqueName: \"kubernetes.io/projected/688a9d65-4700-47fb-a150-723f9c21b054-kube-api-access-rd5ws\") pod \"688a9d65-4700-47fb-a150-723f9c21b054\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.797529 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-router-certs\") pod \"688a9d65-4700-47fb-a150-723f9c21b054\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.797564 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-template-error\") pod \"688a9d65-4700-47fb-a150-723f9c21b054\" (UID: \"688a9d65-4700-47fb-a150-723f9c21b054\") " Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.797778 4895 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/688a9d65-4700-47fb-a150-723f9c21b054-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.798148 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "688a9d65-4700-47fb-a150-723f9c21b054" (UID: "688a9d65-4700-47fb-a150-723f9c21b054"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.798698 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "688a9d65-4700-47fb-a150-723f9c21b054" (UID: "688a9d65-4700-47fb-a150-723f9c21b054"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.799190 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "688a9d65-4700-47fb-a150-723f9c21b054" (UID: "688a9d65-4700-47fb-a150-723f9c21b054"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.799346 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "688a9d65-4700-47fb-a150-723f9c21b054" (UID: "688a9d65-4700-47fb-a150-723f9c21b054"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.804343 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "688a9d65-4700-47fb-a150-723f9c21b054" (UID: "688a9d65-4700-47fb-a150-723f9c21b054"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.805728 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/688a9d65-4700-47fb-a150-723f9c21b054-kube-api-access-rd5ws" (OuterVolumeSpecName: "kube-api-access-rd5ws") pod "688a9d65-4700-47fb-a150-723f9c21b054" (UID: "688a9d65-4700-47fb-a150-723f9c21b054"). InnerVolumeSpecName "kube-api-access-rd5ws". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.805734 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "688a9d65-4700-47fb-a150-723f9c21b054" (UID: "688a9d65-4700-47fb-a150-723f9c21b054"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.805876 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "688a9d65-4700-47fb-a150-723f9c21b054" (UID: "688a9d65-4700-47fb-a150-723f9c21b054"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.806090 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "688a9d65-4700-47fb-a150-723f9c21b054" (UID: "688a9d65-4700-47fb-a150-723f9c21b054"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.806323 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "688a9d65-4700-47fb-a150-723f9c21b054" (UID: "688a9d65-4700-47fb-a150-723f9c21b054"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.806736 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "688a9d65-4700-47fb-a150-723f9c21b054" (UID: "688a9d65-4700-47fb-a150-723f9c21b054"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.806917 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "688a9d65-4700-47fb-a150-723f9c21b054" (UID: "688a9d65-4700-47fb-a150-723f9c21b054"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.807089 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "688a9d65-4700-47fb-a150-723f9c21b054" (UID: "688a9d65-4700-47fb-a150-723f9c21b054"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.863352 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.863410 4895 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79" exitCode=1 Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.863507 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79"} Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.864016 4895 scope.go:117] "RemoveContainer" containerID="000e499fcdc100b12984631ee37bafa13e9cf8cd1903ccf5badc6591fd727d79" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.865067 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.865302 4895 status_manager.go:851] "Failed to get status for pod" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" pod="openshift-marketplace/community-operators-dlp2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlp2v\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.865590 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" event={"ID":"688a9d65-4700-47fb-a150-723f9c21b054","Type":"ContainerDied","Data":"d02e5283e94b86c942b8d0517d11a5b965c31a62c9ffcda616d1c90795539cf5"} Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.865612 4895 scope.go:117] "RemoveContainer" containerID="b7a78b341c59e943f393b1823b2aecb69f477d7be1f1a4d1dff9a1c8b191f267" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.865656 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.865689 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.865872 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.866056 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.866435 4895 status_manager.go:851] "Failed to get status for pod" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" pod="openshift-marketplace/certified-operators-z8gpr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z8gpr\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.868977 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.869180 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.869420 4895 status_manager.go:851] "Failed to get status for pod" podUID="688a9d65-4700-47fb-a150-723f9c21b054" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ggcv2\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.869650 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.869853 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial 
tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.870207 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.870444 4895 status_manager.go:851] "Failed to get status for pod" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" pod="openshift-marketplace/community-operators-dlp2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlp2v\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.870676 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.870929 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.871159 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.871393 4895 status_manager.go:851] "Failed to get status for pod" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" pod="openshift-marketplace/certified-operators-z8gpr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z8gpr\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.871709 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.871983 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.872235 4895 status_manager.go:851] "Failed to get status for pod" podUID="688a9d65-4700-47fb-a150-723f9c21b054" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ggcv2\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.872545 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.872905 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.881112 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.881687 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.881944 4895 status_manager.go:851] "Failed to get status for pod" podUID="688a9d65-4700-47fb-a150-723f9c21b054" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ggcv2\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.882205 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.882464 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.882685 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.883167 4895 status_manager.go:851] "Failed to get status for 
pod" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" pod="openshift-marketplace/community-operators-dlp2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlp2v\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.883491 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.883775 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.884000 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.884241 4895 status_manager.go:851] "Failed to get status for pod" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" pod="openshift-marketplace/certified-operators-z8gpr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z8gpr\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.897599 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.898550 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.898571 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.898585 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.898600 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.898615 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-session\") on node 
\"crc\" DevicePath \"\"" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.898629 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.898640 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd5ws\" (UniqueName: \"kubernetes.io/projected/688a9d65-4700-47fb-a150-723f9c21b054-kube-api-access-rd5ws\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.898652 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.898664 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.898675 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.898687 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.898710 4895 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:04.898722 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/688a9d65-4700-47fb-a150-723f9c21b054-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:07 crc kubenswrapper[4895]: E1206 07:02:05.037128 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T07:02:05Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T07:02:05Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T07:02:05Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T07:02:05Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:15adb3b2133604b064893f8009a74145e4c8bb5b134d111346dcccbdd2aa9bc2\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:164fc35a19aa6cc886c8015c8ee3eba4895e76b1152cb9d795e4f3154a8533a3\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1610512706},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:544a01170a4aa6cf8322d5bffa5817113efd696e3c3e9bac6a29d2da9f9451e5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:67f42a86b99b69b357285a6845977f967e6c825de2049c19620a78eaf99cebf3\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1222075732},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:3ed403bbcdfb639292182990019eb8534e3bd23e3e34b9dcfc80787fec74b49f\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:7a807fc4e9e777d39e6248ec7b91e4c24fc8778eeb1bfae25ca162d0302e12bb\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201958734},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1b1026c62413fa239fa4ff6541fe8bda656c1281867ad6ee2c848feccb13c97e\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:2b633ebdc901d19290af4dc2d09e2b59c504c0fc15a3fba410b0ce098e2d5753\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1141987142},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\
\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: E1206 07:02:05.037548 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: E1206 07:02:05.037893 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: E1206 07:02:05.038103 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: E1206 07:02:05.038342 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: E1206 07:02:05.038361 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 07:02:07 crc kubenswrapper[4895]: W1206 07:02:07.075617 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-57e10ed615dc5bc88de9f640e6aac91397f32556604b3bd4bd646b107367d99d WatchSource:0}: Error finding container 57e10ed615dc5bc88de9f640e6aac91397f32556604b3bd4bd646b107367d99d: Status 404 returned error can't find the container with id 57e10ed615dc5bc88de9f640e6aac91397f32556604b3bd4bd646b107367d99d Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.884113 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ddnd" event={"ID":"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75","Type":"ContainerStarted","Data":"b8751671bb6feb9de3320ae03878d3e3f7f7b6fa1cc14d34bfa326389f41ef0e"} Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.885375 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.886808 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.887189 4895 status_manager.go:851] "Failed to get status for pod" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" pod="openshift-marketplace/certified-operators-z8gpr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z8gpr\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.887642 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.887930 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.888232 4895 status_manager.go:851] "Failed to get status for pod" podUID="688a9d65-4700-47fb-a150-723f9c21b054" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ggcv2\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.888443 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.888744 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.889005 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.889194 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.889209 4895 status_manager.go:851] "Failed to get status for pod" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" pod="openshift-marketplace/community-operators-dlp2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlp2v\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.889273 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ba7c19eaf3c02973e77c7ae40021b69216e74a99588400ec4bd6de4fc8f498dc"} Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.889417 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: 
connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.889774 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.890006 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.890254 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.890526 4895 status_manager.go:851] "Failed to get status for pod" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" pod="openshift-marketplace/certified-operators-z8gpr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z8gpr\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.890755 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.890975 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.891196 4895 status_manager.go:851] "Failed to get status for pod" podUID="688a9d65-4700-47fb-a150-723f9c21b054" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ggcv2\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.891394 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.891439 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8gpr" event={"ID":"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8","Type":"ContainerStarted","Data":"54217c4fdb2978c675b6e061f6b19fc0cd44b0f966003e6ac22742fbb8ba6790"} 
Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.891620 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.891865 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.892049 4895 status_manager.go:851] "Failed to get status for pod" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" pod="openshift-marketplace/community-operators-dlp2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlp2v\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.892297 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.892578 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.892806 4895 status_manager.go:851] "Failed to get status for pod" podUID="688a9d65-4700-47fb-a150-723f9c21b054" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ggcv2\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.893027 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.893238 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.893414 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.893609 4895 status_manager.go:851] "Failed to get status for pod" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" pod="openshift-marketplace/community-operators-dlp2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlp2v\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.893664 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zwn6t" event={"ID":"c6ec729e-b8e3-42ad-84a0-ded336274afd","Type":"ContainerStarted","Data":"5757e029ec92541a12ac864d4586539610ed51835f67a429563e5e4a394de6f5"} Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.893816 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.893996 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.894273 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.894511 4895 status_manager.go:851] "Failed to get status for pod" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" pod="openshift-marketplace/certified-operators-z8gpr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z8gpr\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.894799 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.895106 4895 status_manager.go:851] "Failed to get status for pod" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" pod="openshift-marketplace/certified-operators-z8gpr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z8gpr\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.895664 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.895928 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.896128 4895 status_manager.go:851] "Failed to get status for pod" podUID="688a9d65-4700-47fb-a150-723f9c21b054" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ggcv2\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.896330 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.896409 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6wtxh" event={"ID":"34e74e8b-145a-45bc-a163-4fbd502bc155","Type":"ContainerStarted","Data":"8f5666c086a26d6647330804b41012bd7bc5ace2aeec8555193477a6fb6a14c7"} Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.896545 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.896925 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.897940 4895 status_manager.go:851] "Failed to get status for pod" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" pod="openshift-marketplace/community-operators-dlp2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlp2v\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.898191 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.898557 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.898950 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.899292 4895 status_manager.go:851] "Failed to get status for pod" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" pod="openshift-marketplace/certified-operators-z8gpr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z8gpr\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.899641 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.899895 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmdtv" event={"ID":"c57d6deb-02dd-473c-b644-7ef7a1f8e500","Type":"ContainerStarted","Data":"2dcf34f3fe29065065b3fd17dfc5a8bc4ab25c12e16a68dfe8d233c6b8bf2a9a"} Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.899968 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.900231 4895 status_manager.go:851] "Failed to get status for pod" podUID="688a9d65-4700-47fb-a150-723f9c21b054" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ggcv2\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.900526 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.900795 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.901032 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.901275 4895 status_manager.go:851] "Failed to get status for pod" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" pod="openshift-marketplace/community-operators-dlp2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlp2v\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.901761 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.902136 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlp2v" event={"ID":"b5924c97-fec5-45b0-a9a2-8f851c88dfbf","Type":"ContainerStarted","Data":"f7dc6ab5abb5adad9925fd9584b570751b94331f77a29074717c62bdd55ec9cc"} Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.902365 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.902803 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.902999 4895 status_manager.go:851] "Failed to get status for pod" podUID="688a9d65-4700-47fb-a150-723f9c21b054" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ggcv2\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.903170 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.903339 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.903509 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.903740 4895 status_manager.go:851] "Failed to get status for pod" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" pod="openshift-marketplace/community-operators-dlp2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlp2v\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.903901 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.904070 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.904234 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.904375 4895 status_manager.go:851] "Failed to get status for pod" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" pod="openshift-marketplace/certified-operators-z8gpr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z8gpr\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.904561 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.910748 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j9kw" event={"ID":"ad3d9f3b-cc45-4169-9476-b15937334205","Type":"ContainerStarted","Data":"a81df0973b975bc5f3297f7a671fe073a4d78a3fdd67a8d46426b4a5c8a0ea69"} Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.911313 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.911517 4895 status_manager.go:851] "Failed to get status for pod" podUID="688a9d65-4700-47fb-a150-723f9c21b054" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ggcv2\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.911885 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.912067 4895 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="8709a85480e1a4a558a98dc98034f0f1a6a830cdc3e080e55dbbb15f72035996" exitCode=0 Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.912104 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"8709a85480e1a4a558a98dc98034f0f1a6a830cdc3e080e55dbbb15f72035996"} Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.912121 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"57e10ed615dc5bc88de9f640e6aac91397f32556604b3bd4bd646b107367d99d"} Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.912439 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.912493 4895 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d0ed585-fa5f-4661-a7fd-69084df17bd9" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.912531 4895 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d0ed585-fa5f-4661-a7fd-69084df17bd9" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.912803 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.913074 4895 status_manager.go:851] "Failed to get status for pod" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" pod="openshift-marketplace/community-operators-dlp2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlp2v\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: E1206 07:02:07.913303 4895 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.913370 4895 status_manager.go:851] "Failed to get status for pod" 
podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.913639 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.913850 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.914084 4895 status_manager.go:851] "Failed to get status for pod" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" pod="openshift-marketplace/certified-operators-z8gpr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z8gpr\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.914335 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.914717 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.914875 4895 status_manager.go:851] "Failed to get status for pod" podUID="688a9d65-4700-47fb-a150-723f9c21b054" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ggcv2\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.915040 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.915198 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 
07:02:07.915341 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.915511 4895 status_manager.go:851] "Failed to get status for pod" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" pod="openshift-marketplace/community-operators-dlp2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlp2v\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.915757 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.915958 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.916113 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.916271 4895 status_manager.go:851] "Failed to get status for pod" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" pod="openshift-marketplace/certified-operators-z8gpr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z8gpr\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:07 crc kubenswrapper[4895]: I1206 07:02:07.916443 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:08 crc kubenswrapper[4895]: I1206 07:02:08.055758 4895 status_manager.go:851] "Failed to get status for pod" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" pod="openshift-marketplace/community-operators-dlp2v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlp2v\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:08 crc kubenswrapper[4895]: I1206 07:02:08.056413 4895 status_manager.go:851] "Failed to get status for pod" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" pod="openshift-marketplace/redhat-operators-6wtxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6wtxh\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:08 crc 
kubenswrapper[4895]: I1206 07:02:08.056577 4895 status_manager.go:851] "Failed to get status for pod" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" pod="openshift-marketplace/certified-operators-zwn6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-zwn6t\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:08 crc kubenswrapper[4895]: I1206 07:02:08.056728 4895 status_manager.go:851] "Failed to get status for pod" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:08 crc kubenswrapper[4895]: I1206 07:02:08.056870 4895 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:08 crc kubenswrapper[4895]: I1206 07:02:08.057008 4895 status_manager.go:851] "Failed to get status for pod" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" pod="openshift-marketplace/certified-operators-z8gpr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z8gpr\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:08 crc kubenswrapper[4895]: I1206 07:02:08.057147 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:08 crc kubenswrapper[4895]: I1206 07:02:08.057343 4895 status_manager.go:851] "Failed to get status for pod" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" pod="openshift-marketplace/redhat-operators-9ddnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9ddnd\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:08 crc kubenswrapper[4895]: I1206 07:02:08.057565 4895 status_manager.go:851] "Failed to get status for pod" podUID="688a9d65-4700-47fb-a150-723f9c21b054" pod="openshift-authentication/oauth-openshift-558db77b4-ggcv2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ggcv2\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:08 crc kubenswrapper[4895]: I1206 07:02:08.057721 4895 status_manager.go:851] "Failed to get status for pod" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" pod="openshift-marketplace/community-operators-2j9kw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2j9kw\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:08 crc kubenswrapper[4895]: I1206 07:02:08.057871 4895 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 
38.129.56.132:6443: connect: connection refused" Dec 06 07:02:08 crc kubenswrapper[4895]: I1206 07:02:08.058007 4895 status_manager.go:851] "Failed to get status for pod" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" pod="openshift-marketplace/redhat-marketplace-xmdtv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xmdtv\": dial tcp 38.129.56.132:6443: connect: connection refused" Dec 06 07:02:08 crc kubenswrapper[4895]: I1206 07:02:08.889118 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 07:02:08 crc kubenswrapper[4895]: I1206 07:02:08.893868 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 07:02:08 crc kubenswrapper[4895]: I1206 07:02:08.920534 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9e5ef85dfb4b556105f8c516dcd51f52a07790c1824c3bb99c4258f8b142254f"} Dec 06 07:02:08 crc kubenswrapper[4895]: I1206 07:02:08.925027 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 07:02:10 crc kubenswrapper[4895]: I1206 07:02:10.934284 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6a1dc74b8cc653d9a873f9b912f0626976bf0a6123f7c1be24c93e7159e09c88"} Dec 06 07:02:12 crc kubenswrapper[4895]: I1206 07:02:12.949741 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8a173ea5c303dfadda94af656ba4296049f57a6e398534323c257ef0f0228c17"} Dec 06 07:02:13 crc kubenswrapper[4895]: I1206 07:02:13.342596 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zwn6t" Dec 06 07:02:13 crc kubenswrapper[4895]: I1206 07:02:13.343180 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zwn6t" Dec 06 07:02:13 crc kubenswrapper[4895]: I1206 07:02:13.403415 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zwn6t" Dec 06 07:02:13 crc kubenswrapper[4895]: I1206 07:02:13.458183 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2j9kw" Dec 06 07:02:13 crc kubenswrapper[4895]: I1206 07:02:13.458238 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2j9kw" Dec 06 07:02:13 crc kubenswrapper[4895]: I1206 07:02:13.521735 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2j9kw" Dec 06 07:02:13 crc kubenswrapper[4895]: I1206 07:02:13.663210 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z8gpr" Dec 06 07:02:13 crc kubenswrapper[4895]: I1206 07:02:13.663290 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z8gpr" Dec 06 07:02:13 crc 
kubenswrapper[4895]: I1206 07:02:13.730650 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z8gpr" Dec 06 07:02:13 crc kubenswrapper[4895]: I1206 07:02:13.997990 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zwn6t" Dec 06 07:02:14 crc kubenswrapper[4895]: I1206 07:02:14.001599 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2j9kw" Dec 06 07:02:14 crc kubenswrapper[4895]: I1206 07:02:14.003824 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z8gpr" Dec 06 07:02:14 crc kubenswrapper[4895]: I1206 07:02:14.035500 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dlp2v" Dec 06 07:02:14 crc kubenswrapper[4895]: I1206 07:02:14.035572 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dlp2v" Dec 06 07:02:14 crc kubenswrapper[4895]: I1206 07:02:14.079428 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dlp2v" Dec 06 07:02:14 crc kubenswrapper[4895]: I1206 07:02:14.968341 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b5c2f7c0b7917bc091d1ee7c45a99b29512dbf45ec67151d94fcf6fee32e8999"} Dec 06 07:02:15 crc kubenswrapper[4895]: I1206 07:02:15.403167 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dlp2v" Dec 06 07:02:15 crc kubenswrapper[4895]: I1206 07:02:15.590653 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xmdtv" Dec 06 07:02:15 crc kubenswrapper[4895]: I1206 07:02:15.590739 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xmdtv" Dec 06 07:02:15 crc kubenswrapper[4895]: I1206 07:02:15.647860 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xmdtv" Dec 06 07:02:16 crc kubenswrapper[4895]: I1206 07:02:16.015294 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xmdtv" Dec 06 07:02:16 crc kubenswrapper[4895]: I1206 07:02:16.386641 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9ddnd" Dec 06 07:02:16 crc kubenswrapper[4895]: I1206 07:02:16.386811 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9ddnd" Dec 06 07:02:16 crc kubenswrapper[4895]: I1206 07:02:16.428256 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9ddnd" Dec 06 07:02:16 crc kubenswrapper[4895]: I1206 07:02:16.686199 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6wtxh" Dec 06 07:02:16 crc kubenswrapper[4895]: I1206 07:02:16.686255 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6wtxh" Dec 06 07:02:16 crc kubenswrapper[4895]: 
I1206 07:02:16.731528 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6wtxh" Dec 06 07:02:16 crc kubenswrapper[4895]: I1206 07:02:16.983571 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d86ae1778c595f78fb589db8b648e48f1bdcb4a60b69f61585932e1b18ac7ac7"} Dec 06 07:02:16 crc kubenswrapper[4895]: I1206 07:02:16.984257 4895 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d0ed585-fa5f-4661-a7fd-69084df17bd9" Dec 06 07:02:16 crc kubenswrapper[4895]: I1206 07:02:16.984292 4895 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d0ed585-fa5f-4661-a7fd-69084df17bd9" Dec 06 07:02:16 crc kubenswrapper[4895]: I1206 07:02:16.984287 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:02:16 crc kubenswrapper[4895]: I1206 07:02:16.993550 4895 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:02:17 crc kubenswrapper[4895]: I1206 07:02:17.024017 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9ddnd" Dec 06 07:02:17 crc kubenswrapper[4895]: I1206 07:02:17.028687 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6wtxh" Dec 06 07:02:17 crc kubenswrapper[4895]: I1206 07:02:17.863537 4895 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Dec 06 07:02:17 crc kubenswrapper[4895]: I1206 07:02:17.989013 4895 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d0ed585-fa5f-4661-a7fd-69084df17bd9" Dec 06 07:02:17 crc kubenswrapper[4895]: I1206 07:02:17.989051 4895 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d0ed585-fa5f-4661-a7fd-69084df17bd9" Dec 06 07:02:18 crc kubenswrapper[4895]: I1206 07:02:18.136771 4895 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="12305c90-8aca-4a06-813e-de43102cbe56" Dec 06 07:02:24 crc kubenswrapper[4895]: I1206 07:02:24.902340 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 07:02:44 crc kubenswrapper[4895]: I1206 07:02:44.587818 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 06 07:02:44 crc kubenswrapper[4895]: I1206 07:02:44.802442 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 06 07:02:45 crc kubenswrapper[4895]: I1206 07:02:45.378674 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 06 07:02:46 crc kubenswrapper[4895]: I1206 07:02:46.652623 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 06 07:02:47 crc kubenswrapper[4895]: I1206 07:02:47.073334 4895 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 06 07:02:47 crc kubenswrapper[4895]: I1206 07:02:47.294922 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 06 07:02:48 crc kubenswrapper[4895]: I1206 07:02:48.161605 4895 generic.go:334] "Generic (PLEG): container finished" podID="4dc1e914-43fd-450e-922c-6462f78105f9" containerID="bc9df5aef41004da062d982e12bc5f1d5872d255d54499027d180eaf7cf067de" exitCode=0 Dec 06 07:02:48 crc kubenswrapper[4895]: I1206 07:02:48.161673 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" event={"ID":"4dc1e914-43fd-450e-922c-6462f78105f9","Type":"ContainerDied","Data":"bc9df5aef41004da062d982e12bc5f1d5872d255d54499027d180eaf7cf067de"} Dec 06 07:02:48 crc kubenswrapper[4895]: I1206 07:02:48.162369 4895 scope.go:117] "RemoveContainer" containerID="bc9df5aef41004da062d982e12bc5f1d5872d255d54499027d180eaf7cf067de" Dec 06 07:02:48 crc kubenswrapper[4895]: I1206 07:02:48.213715 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 06 07:02:48 crc kubenswrapper[4895]: I1206 07:02:48.349358 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 06 07:02:48 crc kubenswrapper[4895]: I1206 07:02:48.514174 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 06 07:02:49 crc kubenswrapper[4895]: I1206 07:02:49.169090 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-bnn9x_4dc1e914-43fd-450e-922c-6462f78105f9/marketplace-operator/1.log" Dec 06 07:02:49 crc kubenswrapper[4895]: I1206 07:02:49.170825 4895 generic.go:334] "Generic (PLEG): container finished" podID="4dc1e914-43fd-450e-922c-6462f78105f9" containerID="6e86d3c0c43c5b6b339178db0faf8a48c4488aa25fd0105c8cf29084e7cf0b7d" exitCode=1 Dec 06 07:02:49 crc kubenswrapper[4895]: I1206 07:02:49.170864 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" event={"ID":"4dc1e914-43fd-450e-922c-6462f78105f9","Type":"ContainerDied","Data":"6e86d3c0c43c5b6b339178db0faf8a48c4488aa25fd0105c8cf29084e7cf0b7d"} Dec 06 07:02:49 crc kubenswrapper[4895]: I1206 07:02:49.170924 4895 scope.go:117] "RemoveContainer" containerID="bc9df5aef41004da062d982e12bc5f1d5872d255d54499027d180eaf7cf067de" Dec 06 07:02:49 crc kubenswrapper[4895]: I1206 07:02:49.171596 4895 scope.go:117] "RemoveContainer" containerID="6e86d3c0c43c5b6b339178db0faf8a48c4488aa25fd0105c8cf29084e7cf0b7d" Dec 06 07:02:49 crc kubenswrapper[4895]: E1206 07:02:49.171857 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-bnn9x_openshift-marketplace(4dc1e914-43fd-450e-922c-6462f78105f9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" podUID="4dc1e914-43fd-450e-922c-6462f78105f9" Dec 06 07:02:49 crc kubenswrapper[4895]: I1206 07:02:49.494679 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 06 07:02:49 crc kubenswrapper[4895]: I1206 07:02:49.541457 
4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 06 07:02:49 crc kubenswrapper[4895]: I1206 07:02:49.579222 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 06 07:02:49 crc kubenswrapper[4895]: I1206 07:02:49.809491 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 06 07:02:50 crc kubenswrapper[4895]: I1206 07:02:50.029086 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 06 07:02:50 crc kubenswrapper[4895]: I1206 07:02:50.178343 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-bnn9x_4dc1e914-43fd-450e-922c-6462f78105f9/marketplace-operator/1.log" Dec 06 07:02:50 crc kubenswrapper[4895]: I1206 07:02:50.438782 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 06 07:02:50 crc kubenswrapper[4895]: I1206 07:02:50.888171 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 06 07:02:50 crc kubenswrapper[4895]: I1206 07:02:50.911847 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 06 07:02:51 crc kubenswrapper[4895]: I1206 07:02:51.646278 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 06 07:02:51 crc kubenswrapper[4895]: I1206 07:02:51.736320 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 06 07:02:51 crc kubenswrapper[4895]: I1206 07:02:51.882669 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 06 07:02:51 crc kubenswrapper[4895]: I1206 07:02:51.980849 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 06 07:02:52 crc kubenswrapper[4895]: I1206 07:02:52.214621 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 06 07:02:52 crc kubenswrapper[4895]: I1206 07:02:52.458095 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 06 07:02:52 crc kubenswrapper[4895]: I1206 07:02:52.902273 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 06 07:02:53 crc kubenswrapper[4895]: I1206 07:02:53.019903 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 06 07:02:53 crc kubenswrapper[4895]: I1206 07:02:53.160779 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 06 07:02:53 crc kubenswrapper[4895]: I1206 07:02:53.210900 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 06 07:02:53 crc kubenswrapper[4895]: I1206 07:02:53.367700 4895 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 06 07:02:53 crc kubenswrapper[4895]: I1206 07:02:53.518814 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 06 07:02:53 crc kubenswrapper[4895]: I1206 07:02:53.743306 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 06 07:02:53 crc kubenswrapper[4895]: I1206 07:02:53.794733 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 06 07:02:53 crc kubenswrapper[4895]: I1206 07:02:53.825103 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 06 07:02:54 crc kubenswrapper[4895]: I1206 07:02:54.095309 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 06 07:02:54 crc kubenswrapper[4895]: I1206 07:02:54.206458 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" Dec 06 07:02:54 crc kubenswrapper[4895]: I1206 07:02:54.206839 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" Dec 06 07:02:54 crc kubenswrapper[4895]: I1206 07:02:54.207448 4895 scope.go:117] "RemoveContainer" containerID="6e86d3c0c43c5b6b339178db0faf8a48c4488aa25fd0105c8cf29084e7cf0b7d" Dec 06 07:02:54 crc kubenswrapper[4895]: E1206 07:02:54.207781 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-bnn9x_openshift-marketplace(4dc1e914-43fd-450e-922c-6462f78105f9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" podUID="4dc1e914-43fd-450e-922c-6462f78105f9" Dec 06 07:02:54 crc kubenswrapper[4895]: I1206 07:02:54.377855 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 06 07:02:54 crc kubenswrapper[4895]: I1206 07:02:54.488175 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 06 07:02:54 crc kubenswrapper[4895]: I1206 07:02:54.717023 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 06 07:02:54 crc kubenswrapper[4895]: I1206 07:02:54.790341 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 06 07:02:55 crc kubenswrapper[4895]: I1206 07:02:55.004578 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 06 07:02:55 crc kubenswrapper[4895]: I1206 07:02:55.215256 4895 scope.go:117] "RemoveContainer" containerID="6e86d3c0c43c5b6b339178db0faf8a48c4488aa25fd0105c8cf29084e7cf0b7d" Dec 06 07:02:55 crc kubenswrapper[4895]: E1206 07:02:55.215715 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-bnn9x_openshift-marketplace(4dc1e914-43fd-450e-922c-6462f78105f9)\"" 
pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" podUID="4dc1e914-43fd-450e-922c-6462f78105f9" Dec 06 07:02:55 crc kubenswrapper[4895]: I1206 07:02:55.247778 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 06 07:02:55 crc kubenswrapper[4895]: I1206 07:02:55.261268 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 06 07:02:55 crc kubenswrapper[4895]: I1206 07:02:55.364155 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 06 07:02:55 crc kubenswrapper[4895]: I1206 07:02:55.545397 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 06 07:02:55 crc kubenswrapper[4895]: I1206 07:02:55.814730 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 06 07:02:55 crc kubenswrapper[4895]: I1206 07:02:55.902326 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 06 07:02:55 crc kubenswrapper[4895]: I1206 07:02:55.908769 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 06 07:02:55 crc kubenswrapper[4895]: I1206 07:02:55.993986 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 06 07:02:56 crc kubenswrapper[4895]: I1206 07:02:56.161504 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 06 07:02:56 crc kubenswrapper[4895]: I1206 07:02:56.225582 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 06 07:02:56 crc kubenswrapper[4895]: I1206 07:02:56.249066 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 06 07:02:56 crc kubenswrapper[4895]: I1206 07:02:56.324683 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 06 07:02:56 crc kubenswrapper[4895]: I1206 07:02:56.362005 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 06 07:02:56 crc kubenswrapper[4895]: I1206 07:02:56.735954 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 06 07:02:56 crc kubenswrapper[4895]: I1206 07:02:56.778342 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 06 07:02:56 crc kubenswrapper[4895]: I1206 07:02:56.793944 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 06 07:02:56 crc kubenswrapper[4895]: I1206 07:02:56.861771 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 06 07:02:56 crc kubenswrapper[4895]: I1206 07:02:56.887798 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 06 07:02:56 crc kubenswrapper[4895]: I1206 07:02:56.906639 4895 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 06 07:02:56 crc kubenswrapper[4895]: I1206 07:02:56.915775 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 06 07:02:56 crc kubenswrapper[4895]: I1206 07:02:56.915907 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 06 07:02:56 crc kubenswrapper[4895]: I1206 07:02:56.922366 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 06 07:02:57 crc kubenswrapper[4895]: I1206 07:02:57.031647 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 06 07:02:57 crc kubenswrapper[4895]: I1206 07:02:57.040853 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 06 07:02:57 crc kubenswrapper[4895]: I1206 07:02:57.126007 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 06 07:02:57 crc kubenswrapper[4895]: I1206 07:02:57.276573 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 06 07:02:57 crc kubenswrapper[4895]: I1206 07:02:57.333181 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 06 07:02:57 crc kubenswrapper[4895]: I1206 07:02:57.418321 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 06 07:02:57 crc kubenswrapper[4895]: I1206 07:02:57.587625 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 06 07:02:57 crc kubenswrapper[4895]: I1206 07:02:57.596538 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 06 07:02:57 crc kubenswrapper[4895]: I1206 07:02:57.663714 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 06 07:02:57 crc kubenswrapper[4895]: I1206 07:02:57.742091 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 06 07:02:57 crc kubenswrapper[4895]: I1206 07:02:57.774328 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 06 07:02:57 crc kubenswrapper[4895]: I1206 07:02:57.791742 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 06 07:02:57 crc kubenswrapper[4895]: I1206 07:02:57.824830 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 06 07:02:57 crc kubenswrapper[4895]: I1206 07:02:57.829850 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 06 07:02:57 crc kubenswrapper[4895]: I1206 07:02:57.892994 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 06 07:02:58 crc 
kubenswrapper[4895]: I1206 07:02:58.258657 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 06 07:02:58 crc kubenswrapper[4895]: I1206 07:02:58.280699 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 06 07:02:58 crc kubenswrapper[4895]: I1206 07:02:58.565922 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 06 07:02:58 crc kubenswrapper[4895]: I1206 07:02:58.806225 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 06 07:02:58 crc kubenswrapper[4895]: I1206 07:02:58.974327 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 06 07:02:59 crc kubenswrapper[4895]: I1206 07:02:59.139109 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 06 07:02:59 crc kubenswrapper[4895]: I1206 07:02:59.354333 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 06 07:02:59 crc kubenswrapper[4895]: I1206 07:02:59.425949 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 06 07:02:59 crc kubenswrapper[4895]: I1206 07:02:59.461661 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 06 07:02:59 crc kubenswrapper[4895]: I1206 07:02:59.888749 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 06 07:02:59 crc kubenswrapper[4895]: I1206 07:02:59.922330 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 06 07:02:59 crc kubenswrapper[4895]: I1206 07:02:59.928650 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 06 07:03:00 crc kubenswrapper[4895]: I1206 07:03:00.125679 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 06 07:03:00 crc kubenswrapper[4895]: I1206 07:03:00.154296 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 06 07:03:00 crc kubenswrapper[4895]: I1206 07:03:00.291275 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 06 07:03:00 crc kubenswrapper[4895]: I1206 07:03:00.343892 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 06 07:03:00 crc kubenswrapper[4895]: I1206 07:03:00.435898 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 06 07:03:00 crc kubenswrapper[4895]: I1206 07:03:00.604691 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 06 07:03:00 crc kubenswrapper[4895]: I1206 07:03:00.644688 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 06 07:03:00 
crc kubenswrapper[4895]: I1206 07:03:00.710656 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 06 07:03:00 crc kubenswrapper[4895]: I1206 07:03:00.716754 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 06 07:03:00 crc kubenswrapper[4895]: I1206 07:03:00.825327 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 06 07:03:00 crc kubenswrapper[4895]: I1206 07:03:00.894466 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 06 07:03:00 crc kubenswrapper[4895]: I1206 07:03:00.905849 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 06 07:03:00 crc kubenswrapper[4895]: I1206 07:03:00.974892 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 06 07:03:00 crc kubenswrapper[4895]: I1206 07:03:00.982097 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 06 07:03:01 crc kubenswrapper[4895]: I1206 07:03:01.444039 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 06 07:03:01 crc kubenswrapper[4895]: I1206 07:03:01.488680 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 06 07:03:01 crc kubenswrapper[4895]: I1206 07:03:01.499431 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 06 07:03:01 crc kubenswrapper[4895]: I1206 07:03:01.534966 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 06 07:03:01 crc kubenswrapper[4895]: I1206 07:03:01.548308 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 06 07:03:01 crc kubenswrapper[4895]: I1206 07:03:01.683552 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 06 07:03:01 crc kubenswrapper[4895]: I1206 07:03:01.699157 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 06 07:03:01 crc kubenswrapper[4895]: I1206 07:03:01.704760 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 06 07:03:01 crc kubenswrapper[4895]: I1206 07:03:01.758293 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 06 07:03:01 crc kubenswrapper[4895]: I1206 07:03:01.775633 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 06 07:03:01 crc kubenswrapper[4895]: I1206 07:03:01.777586 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 06 07:03:01 crc kubenswrapper[4895]: I1206 07:03:01.834025 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 06 07:03:01 crc 
kubenswrapper[4895]: I1206 07:03:01.868700 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 06 07:03:01 crc kubenswrapper[4895]: I1206 07:03:01.984423 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 06 07:03:01 crc kubenswrapper[4895]: I1206 07:03:01.985652 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 06 07:03:01 crc kubenswrapper[4895]: I1206 07:03:01.987264 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 06 07:03:02 crc kubenswrapper[4895]: I1206 07:03:02.155321 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 06 07:03:02 crc kubenswrapper[4895]: I1206 07:03:02.333286 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 06 07:03:02 crc kubenswrapper[4895]: I1206 07:03:02.339781 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 06 07:03:02 crc kubenswrapper[4895]: I1206 07:03:02.352953 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 06 07:03:02 crc kubenswrapper[4895]: I1206 07:03:02.361496 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 06 07:03:02 crc kubenswrapper[4895]: I1206 07:03:02.461046 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 06 07:03:02 crc kubenswrapper[4895]: I1206 07:03:02.580539 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 06 07:03:02 crc kubenswrapper[4895]: I1206 07:03:02.601419 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 06 07:03:02 crc kubenswrapper[4895]: I1206 07:03:02.767773 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 06 07:03:02 crc kubenswrapper[4895]: I1206 07:03:02.810243 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 06 07:03:03 crc kubenswrapper[4895]: I1206 07:03:03.034040 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 06 07:03:03 crc kubenswrapper[4895]: I1206 07:03:03.096106 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 06 07:03:03 crc kubenswrapper[4895]: I1206 07:03:03.191815 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 06 07:03:03 crc kubenswrapper[4895]: I1206 07:03:03.193451 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 06 07:03:03 crc kubenswrapper[4895]: I1206 07:03:03.223819 4895 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 06 07:03:03 crc kubenswrapper[4895]: I1206 07:03:03.467184 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 06 07:03:03 crc kubenswrapper[4895]: I1206 07:03:03.783936 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 06 07:03:04 crc kubenswrapper[4895]: I1206 07:03:04.004385 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 06 07:03:04 crc kubenswrapper[4895]: I1206 07:03:04.088327 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 06 07:03:04 crc kubenswrapper[4895]: I1206 07:03:04.350263 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 06 07:03:04 crc kubenswrapper[4895]: I1206 07:03:04.476371 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 06 07:03:04 crc kubenswrapper[4895]: I1206 07:03:04.599811 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 06 07:03:04 crc kubenswrapper[4895]: I1206 07:03:04.771045 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 06 07:03:05 crc kubenswrapper[4895]: I1206 07:03:05.072221 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 06 07:03:05 crc kubenswrapper[4895]: I1206 07:03:05.106944 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 06 07:03:05 crc kubenswrapper[4895]: I1206 07:03:05.139222 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 06 07:03:05 crc kubenswrapper[4895]: I1206 07:03:05.155952 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 06 07:03:05 crc kubenswrapper[4895]: I1206 07:03:05.308044 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 06 07:03:05 crc kubenswrapper[4895]: I1206 07:03:05.435523 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 06 07:03:05 crc kubenswrapper[4895]: I1206 07:03:05.569754 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 06 07:03:05 crc kubenswrapper[4895]: I1206 07:03:05.693743 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 06 07:03:05 crc kubenswrapper[4895]: I1206 07:03:05.761288 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 06 07:03:05 crc kubenswrapper[4895]: I1206 07:03:05.771312 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 06 07:03:05 crc kubenswrapper[4895]: 
I1206 07:03:05.902754 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 06 07:03:05 crc kubenswrapper[4895]: I1206 07:03:05.996581 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 06 07:03:06 crc kubenswrapper[4895]: I1206 07:03:06.016816 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 06 07:03:06 crc kubenswrapper[4895]: I1206 07:03:06.025764 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 06 07:03:06 crc kubenswrapper[4895]: I1206 07:03:06.086505 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 06 07:03:06 crc kubenswrapper[4895]: I1206 07:03:06.248621 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 06 07:03:06 crc kubenswrapper[4895]: I1206 07:03:06.501360 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 06 07:03:06 crc kubenswrapper[4895]: I1206 07:03:06.659803 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 06 07:03:06 crc kubenswrapper[4895]: I1206 07:03:06.961685 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 06 07:03:07 crc kubenswrapper[4895]: I1206 07:03:07.457641 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 06 07:03:07 crc kubenswrapper[4895]: I1206 07:03:07.506132 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 06 07:03:07 crc kubenswrapper[4895]: I1206 07:03:07.559910 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 06 07:03:07 crc kubenswrapper[4895]: I1206 07:03:07.577678 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 06 07:03:07 crc kubenswrapper[4895]: I1206 07:03:07.644529 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 06 07:03:07 crc kubenswrapper[4895]: I1206 07:03:07.730962 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 06 07:03:07 crc kubenswrapper[4895]: I1206 07:03:07.754218 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 06 07:03:07 crc kubenswrapper[4895]: I1206 07:03:07.769829 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 06 07:03:07 crc kubenswrapper[4895]: I1206 07:03:07.892442 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 06 07:03:07 crc kubenswrapper[4895]: I1206 07:03:07.980228 4895 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 
06 07:03:08 crc kubenswrapper[4895]: I1206 07:03:08.069290 4895 scope.go:117] "RemoveContainer" containerID="6e86d3c0c43c5b6b339178db0faf8a48c4488aa25fd0105c8cf29084e7cf0b7d" Dec 06 07:03:08 crc kubenswrapper[4895]: I1206 07:03:08.263615 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 06 07:03:08 crc kubenswrapper[4895]: I1206 07:03:08.393708 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 06 07:03:08 crc kubenswrapper[4895]: I1206 07:03:08.706454 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 06 07:03:08 crc kubenswrapper[4895]: I1206 07:03:08.721422 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 06 07:03:08 crc kubenswrapper[4895]: I1206 07:03:08.794198 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 06 07:03:08 crc kubenswrapper[4895]: I1206 07:03:08.820015 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 06 07:03:08 crc kubenswrapper[4895]: I1206 07:03:08.861889 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 06 07:03:08 crc kubenswrapper[4895]: I1206 07:03:08.939146 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 06 07:03:09 crc kubenswrapper[4895]: I1206 07:03:09.026042 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 06 07:03:09 crc kubenswrapper[4895]: I1206 07:03:09.259589 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 06 07:03:09 crc kubenswrapper[4895]: I1206 07:03:09.294331 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-bnn9x_4dc1e914-43fd-450e-922c-6462f78105f9/marketplace-operator/1.log" Dec 06 07:03:09 crc kubenswrapper[4895]: I1206 07:03:09.294411 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" event={"ID":"4dc1e914-43fd-450e-922c-6462f78105f9","Type":"ContainerStarted","Data":"57d15672bc7eaaaf1c9b52978882ff6684430b84d0958c2d13d10b273ad641ac"} Dec 06 07:03:09 crc kubenswrapper[4895]: I1206 07:03:09.294865 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" Dec 06 07:03:09 crc kubenswrapper[4895]: I1206 07:03:09.297520 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" Dec 06 07:03:09 crc kubenswrapper[4895]: I1206 07:03:09.348904 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 06 07:03:09 crc kubenswrapper[4895]: I1206 07:03:09.435689 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 06 07:03:09 crc kubenswrapper[4895]: I1206 07:03:09.550882 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 06 07:03:09 crc 
kubenswrapper[4895]: I1206 07:03:09.587372 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 06 07:03:09 crc kubenswrapper[4895]: I1206 07:03:09.595918 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 06 07:03:09 crc kubenswrapper[4895]: I1206 07:03:09.740233 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 06 07:03:09 crc kubenswrapper[4895]: I1206 07:03:09.751435 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 06 07:03:10 crc kubenswrapper[4895]: I1206 07:03:10.040824 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 06 07:03:10 crc kubenswrapper[4895]: I1206 07:03:10.071708 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 06 07:03:10 crc kubenswrapper[4895]: I1206 07:03:10.289762 4895 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 06 07:03:10 crc kubenswrapper[4895]: I1206 07:03:10.394653 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 06 07:03:10 crc kubenswrapper[4895]: I1206 07:03:10.734732 4895 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 06 07:03:10 crc kubenswrapper[4895]: I1206 07:03:10.776986 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 06 07:03:10 crc kubenswrapper[4895]: I1206 07:03:10.813316 4895 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 06 07:03:11 crc kubenswrapper[4895]: I1206 07:03:11.088831 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 06 07:03:11 crc kubenswrapper[4895]: I1206 07:03:11.176764 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 06 07:03:11 crc kubenswrapper[4895]: I1206 07:03:11.454979 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 06 07:03:11 crc kubenswrapper[4895]: I1206 07:03:11.719648 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 06 07:03:11 crc kubenswrapper[4895]: I1206 07:03:11.786055 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 06 07:03:11 crc kubenswrapper[4895]: I1206 07:03:11.872766 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 06 07:03:11 crc kubenswrapper[4895]: I1206 07:03:11.978020 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 06 07:03:12 crc kubenswrapper[4895]: I1206 07:03:12.117106 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 06 07:03:12 crc kubenswrapper[4895]: I1206 07:03:12.493381 4895 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca"/"signing-key" Dec 06 07:03:12 crc kubenswrapper[4895]: I1206 07:03:12.776217 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 06 07:03:12 crc kubenswrapper[4895]: I1206 07:03:12.862270 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 06 07:03:12 crc kubenswrapper[4895]: I1206 07:03:12.884095 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 06 07:03:13 crc kubenswrapper[4895]: I1206 07:03:13.228323 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 06 07:03:13 crc kubenswrapper[4895]: I1206 07:03:13.362974 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 06 07:03:13 crc kubenswrapper[4895]: I1206 07:03:13.795833 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 06 07:03:13 crc kubenswrapper[4895]: I1206 07:03:13.847933 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 06 07:03:13 crc kubenswrapper[4895]: I1206 07:03:13.939872 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 06 07:03:13 crc kubenswrapper[4895]: I1206 07:03:13.992365 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 06 07:03:13 crc kubenswrapper[4895]: I1206 07:03:13.992895 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 06 07:03:14 crc kubenswrapper[4895]: I1206 07:03:14.288678 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 06 07:03:14 crc kubenswrapper[4895]: I1206 07:03:14.430697 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 06 07:03:14 crc kubenswrapper[4895]: I1206 07:03:14.533380 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 06 07:03:14 crc kubenswrapper[4895]: I1206 07:03:14.550492 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 06 07:03:14 crc kubenswrapper[4895]: I1206 07:03:14.707571 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 06 07:03:14 crc kubenswrapper[4895]: I1206 07:03:14.752634 4895 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 06 07:03:14 crc kubenswrapper[4895]: I1206 07:03:14.804316 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 06 07:03:14 crc kubenswrapper[4895]: I1206 07:03:14.808341 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 06 07:03:14 crc kubenswrapper[4895]: I1206 07:03:14.886584 4895 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 06 07:03:14 crc kubenswrapper[4895]: I1206 07:03:14.891591 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 06 07:03:15 crc kubenswrapper[4895]: I1206 07:03:15.221101 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 06 07:03:15 crc kubenswrapper[4895]: I1206 07:03:15.231976 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 06 07:03:15 crc kubenswrapper[4895]: I1206 07:03:15.724606 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 06 07:03:15 crc kubenswrapper[4895]: I1206 07:03:15.817935 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 06 07:03:16 crc kubenswrapper[4895]: I1206 07:03:16.360124 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 06 07:03:16 crc kubenswrapper[4895]: I1206 07:03:16.849055 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.058798 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.551645 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.667985 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.785218 4895 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.785531 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zwn6t" podStartSLOduration=76.969960202 podStartE2EDuration="3m25.785517857s" podCreationTimestamp="2025-12-06 06:59:52 +0000 UTC" firstStartedPulling="2025-12-06 06:59:58.255168763 +0000 UTC m=+160.656557633" lastFinishedPulling="2025-12-06 07:02:07.070726408 +0000 UTC m=+289.472115288" observedRunningTime="2025-12-06 07:02:18.353894521 +0000 UTC m=+300.755283391" watchObservedRunningTime="2025-12-06 07:03:17.785517857 +0000 UTC m=+360.186906737" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.786113 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=93.786108073 podStartE2EDuration="1m33.786108073s" podCreationTimestamp="2025-12-06 07:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:02:18.255508688 +0000 UTC m=+300.656897558" watchObservedRunningTime="2025-12-06 07:03:17.786108073 +0000 UTC m=+360.187496943" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.786662 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6wtxh" 
podStartSLOduration=73.718350679 podStartE2EDuration="3m21.786655547s" podCreationTimestamp="2025-12-06 06:59:56 +0000 UTC" firstStartedPulling="2025-12-06 06:59:59.288252898 +0000 UTC m=+161.689641768" lastFinishedPulling="2025-12-06 07:02:07.356557766 +0000 UTC m=+289.757946636" observedRunningTime="2025-12-06 07:02:18.331284716 +0000 UTC m=+300.732673596" watchObservedRunningTime="2025-12-06 07:03:17.786655547 +0000 UTC m=+360.188044417" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.786953 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9ddnd" podStartSLOduration=74.775098732 podStartE2EDuration="3m22.786947426s" podCreationTimestamp="2025-12-06 06:59:55 +0000 UTC" firstStartedPulling="2025-12-06 06:59:59.298357758 +0000 UTC m=+161.699746628" lastFinishedPulling="2025-12-06 07:02:07.310206452 +0000 UTC m=+289.711595322" observedRunningTime="2025-12-06 07:02:18.202943931 +0000 UTC m=+300.604332801" watchObservedRunningTime="2025-12-06 07:03:17.786947426 +0000 UTC m=+360.188336296" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.787554 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2j9kw" podStartSLOduration=76.803307624 podStartE2EDuration="3m25.787548762s" podCreationTimestamp="2025-12-06 06:59:52 +0000 UTC" firstStartedPulling="2025-12-06 06:59:58.215373661 +0000 UTC m=+160.616762531" lastFinishedPulling="2025-12-06 07:02:07.199614809 +0000 UTC m=+289.601003669" observedRunningTime="2025-12-06 07:02:18.245366049 +0000 UTC m=+300.646754919" watchObservedRunningTime="2025-12-06 07:03:17.787548762 +0000 UTC m=+360.188937652" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.787793 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dlp2v" podStartSLOduration=75.866452209 podStartE2EDuration="3m24.787788138s" podCreationTimestamp="2025-12-06 06:59:53 +0000 UTC" firstStartedPulling="2025-12-06 06:59:58.230466304 +0000 UTC m=+160.631855174" lastFinishedPulling="2025-12-06 07:02:07.151802233 +0000 UTC m=+289.553191103" observedRunningTime="2025-12-06 07:02:18.297135871 +0000 UTC m=+300.698524741" watchObservedRunningTime="2025-12-06 07:03:17.787788138 +0000 UTC m=+360.189177008" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.787865 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xmdtv" podStartSLOduration=73.74813044 podStartE2EDuration="3m22.78786228s" podCreationTimestamp="2025-12-06 06:59:55 +0000 UTC" firstStartedPulling="2025-12-06 06:59:58.216405212 +0000 UTC m=+160.617794082" lastFinishedPulling="2025-12-06 07:02:07.256137052 +0000 UTC m=+289.657525922" observedRunningTime="2025-12-06 07:02:18.270327193 +0000 UTC m=+300.671716063" watchObservedRunningTime="2025-12-06 07:03:17.78786228 +0000 UTC m=+360.189251150" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.788007 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z8gpr" podStartSLOduration=75.979047008 podStartE2EDuration="3m24.788004914s" podCreationTimestamp="2025-12-06 06:59:53 +0000 UTC" firstStartedPulling="2025-12-06 06:59:58.236950613 +0000 UTC m=+160.638339483" lastFinishedPulling="2025-12-06 07:02:07.045908519 +0000 UTC m=+289.447297389" observedRunningTime="2025-12-06 07:02:18.165210171 +0000 UTC m=+300.566599051" 
watchObservedRunningTime="2025-12-06 07:03:17.788004914 +0000 UTC m=+360.189393784" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.789724 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ggcv2","openshift-kube-apiserver/kube-apiserver-crc"] Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.789769 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-649d76d5b4-sqjb2","openshift-kube-apiserver/kube-apiserver-crc"] Dec 06 07:03:17 crc kubenswrapper[4895]: E1206 07:03:17.789943 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688a9d65-4700-47fb-a150-723f9c21b054" containerName="oauth-openshift" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.789957 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="688a9d65-4700-47fb-a150-723f9c21b054" containerName="oauth-openshift" Dec 06 07:03:17 crc kubenswrapper[4895]: E1206 07:03:17.789967 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" containerName="installer" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.789974 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" containerName="installer" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.790186 4895 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d0ed585-fa5f-4661-a7fd-69084df17bd9" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.790211 4895 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d0ed585-fa5f-4661-a7fd-69084df17bd9" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.790200 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="688a9d65-4700-47fb-a150-723f9c21b054" containerName="oauth-openshift" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.790305 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d916d0cd-06ea-488b-a8aa-cd43f0069b13" containerName="installer" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.790761 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.792975 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.793164 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.793813 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.794135 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.794258 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.794696 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.794893 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.795022 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.796478 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.796919 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.796939 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.797013 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.797091 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.801188 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.808010 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.815359 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.835692 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=61.835670078 podStartE2EDuration="1m1.835670078s" podCreationTimestamp="2025-12-06 07:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-06 07:03:17.830655063 +0000 UTC m=+360.232043963" watchObservedRunningTime="2025-12-06 07:03:17.835670078 +0000 UTC m=+360.237058948" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.844089 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.844137 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-system-serving-cert\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.844156 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.844188 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4f849dec-995f-471f-b2b8-0a05bbfb8d14-audit-policies\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.844207 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.844222 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-system-router-certs\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.844238 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-system-service-ca\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.844273 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-system-session\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.844289 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-system-cliconfig\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.844309 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.844325 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4f849dec-995f-471f-b2b8-0a05bbfb8d14-audit-dir\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.844341 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-user-template-error\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.844358 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7lvf\" (UniqueName: \"kubernetes.io/projected/4f849dec-995f-471f-b2b8-0a05bbfb8d14-kube-api-access-p7lvf\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.844373 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-user-template-login\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.945343 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-system-session\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.945447 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.945533 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4f849dec-995f-471f-b2b8-0a05bbfb8d14-audit-dir\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.945560 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-user-template-error\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.945585 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7lvf\" (UniqueName: \"kubernetes.io/projected/4f849dec-995f-471f-b2b8-0a05bbfb8d14-kube-api-access-p7lvf\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.945607 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-user-template-login\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.945640 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4f849dec-995f-471f-b2b8-0a05bbfb8d14-audit-dir\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.945660 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.945992 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-system-serving-cert\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.946139 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.946234 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4f849dec-995f-471f-b2b8-0a05bbfb8d14-audit-policies\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.946294 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.946316 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-system-router-certs\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.946364 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-system-service-ca\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.947351 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-system-cliconfig\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.947961 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.947999 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4f849dec-995f-471f-b2b8-0a05bbfb8d14-audit-policies\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.948578 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-system-service-ca\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.954260 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-system-router-certs\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.954518 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-system-session\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.954728 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.954774 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.954803 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-system-serving-cert\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.959056 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-user-template-error\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.959119 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"
\"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.959360 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4f849dec-995f-471f-b2b8-0a05bbfb8d14-v4-0-config-user-template-login\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:17 crc kubenswrapper[4895]: I1206 07:03:17.963723 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7lvf\" (UniqueName: \"kubernetes.io/projected/4f849dec-995f-471f-b2b8-0a05bbfb8d14-kube-api-access-p7lvf\") pod \"oauth-openshift-649d76d5b4-sqjb2\" (UID: \"4f849dec-995f-471f-b2b8-0a05bbfb8d14\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:18 crc kubenswrapper[4895]: I1206 07:03:18.058222 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="688a9d65-4700-47fb-a150-723f9c21b054" path="/var/lib/kubelet/pods/688a9d65-4700-47fb-a150-723f9c21b054/volumes" Dec 06 07:03:18 crc kubenswrapper[4895]: I1206 07:03:18.118828 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 06 07:03:18 crc kubenswrapper[4895]: I1206 07:03:18.127086 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:18 crc kubenswrapper[4895]: I1206 07:03:18.207770 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 06 07:03:18 crc kubenswrapper[4895]: I1206 07:03:18.498189 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 06 07:03:18 crc kubenswrapper[4895]: I1206 07:03:18.526830 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-649d76d5b4-sqjb2"] Dec 06 07:03:19 crc kubenswrapper[4895]: I1206 07:03:19.070743 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:03:19 crc kubenswrapper[4895]: I1206 07:03:19.070814 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:03:19 crc kubenswrapper[4895]: I1206 07:03:19.074983 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:03:19 crc kubenswrapper[4895]: I1206 07:03:19.198116 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 06 07:03:19 crc kubenswrapper[4895]: I1206 07:03:19.352098 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" event={"ID":"4f849dec-995f-471f-b2b8-0a05bbfb8d14","Type":"ContainerStarted","Data":"a790372a0af3e9ea0d3857fe45baa0cae1daea9f3b26013a28c71d5bb58eca9b"} Dec 06 07:03:19 crc kubenswrapper[4895]: I1206 07:03:19.352141 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" 
event={"ID":"4f849dec-995f-471f-b2b8-0a05bbfb8d14","Type":"ContainerStarted","Data":"b622932181c2f1f94a46016186c15815cfe52486dad001f136822a51f059ebd8"} Dec 06 07:03:19 crc kubenswrapper[4895]: I1206 07:03:19.352636 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:19 crc kubenswrapper[4895]: I1206 07:03:19.356705 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 07:03:19 crc kubenswrapper[4895]: I1206 07:03:19.358074 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" Dec 06 07:03:19 crc kubenswrapper[4895]: I1206 07:03:19.371917 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-649d76d5b4-sqjb2" podStartSLOduration=105.371900278 podStartE2EDuration="1m45.371900278s" podCreationTimestamp="2025-12-06 07:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:03:19.371220219 +0000 UTC m=+361.772609109" watchObservedRunningTime="2025-12-06 07:03:19.371900278 +0000 UTC m=+361.773289148" Dec 06 07:03:19 crc kubenswrapper[4895]: I1206 07:03:19.388841 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 06 07:03:23 crc kubenswrapper[4895]: I1206 07:03:23.936044 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 06 07:03:24 crc kubenswrapper[4895]: I1206 07:03:24.910716 4895 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 06 07:03:24 crc kubenswrapper[4895]: I1206 07:03:24.911135 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://613e533fa9a0136c7c9aa8069af19fcdc8e37bfc91e3b7fb203dbe0f4b23941f" gracePeriod=5 Dec 06 07:03:29 crc kubenswrapper[4895]: I1206 07:03:29.696104 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:03:29 crc kubenswrapper[4895]: I1206 07:03:29.696614 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:03:30 crc kubenswrapper[4895]: I1206 07:03:30.417362 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 06 07:03:30 crc kubenswrapper[4895]: I1206 07:03:30.417659 4895 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="613e533fa9a0136c7c9aa8069af19fcdc8e37bfc91e3b7fb203dbe0f4b23941f" exitCode=137 Dec 06 07:03:30 crc kubenswrapper[4895]: I1206 07:03:30.485027 4895 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 06 07:03:30 crc kubenswrapper[4895]: I1206 07:03:30.485103 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 07:03:30 crc kubenswrapper[4895]: I1206 07:03:30.612554 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 06 07:03:30 crc kubenswrapper[4895]: I1206 07:03:30.612691 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 06 07:03:30 crc kubenswrapper[4895]: I1206 07:03:30.612721 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 06 07:03:30 crc kubenswrapper[4895]: I1206 07:03:30.612739 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 06 07:03:30 crc kubenswrapper[4895]: I1206 07:03:30.612881 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:03:30 crc kubenswrapper[4895]: I1206 07:03:30.612915 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:03:30 crc kubenswrapper[4895]: I1206 07:03:30.612971 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:03:30 crc kubenswrapper[4895]: I1206 07:03:30.612948 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 06 07:03:30 crc kubenswrapper[4895]: I1206 07:03:30.613027 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:03:30 crc kubenswrapper[4895]: I1206 07:03:30.613799 4895 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:30 crc kubenswrapper[4895]: I1206 07:03:30.613827 4895 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:30 crc kubenswrapper[4895]: I1206 07:03:30.613837 4895 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:30 crc kubenswrapper[4895]: I1206 07:03:30.613848 4895 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:30 crc kubenswrapper[4895]: I1206 07:03:30.623273 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:03:30 crc kubenswrapper[4895]: I1206 07:03:30.715142 4895 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:31 crc kubenswrapper[4895]: I1206 07:03:31.426319 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 06 07:03:31 crc kubenswrapper[4895]: I1206 07:03:31.426422 4895 scope.go:117] "RemoveContainer" containerID="613e533fa9a0136c7c9aa8069af19fcdc8e37bfc91e3b7fb203dbe0f4b23941f" Dec 06 07:03:31 crc kubenswrapper[4895]: I1206 07:03:31.426911 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 07:03:32 crc kubenswrapper[4895]: I1206 07:03:32.058393 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 06 07:03:32 crc kubenswrapper[4895]: I1206 07:03:32.059056 4895 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 06 07:03:32 crc kubenswrapper[4895]: I1206 07:03:32.074182 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 06 07:03:32 crc kubenswrapper[4895]: I1206 07:03:32.074254 4895 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="715ab71e-98f0-4707-a296-6b0e90fd9033" Dec 06 07:03:32 crc kubenswrapper[4895]: I1206 07:03:32.078465 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 06 07:03:32 crc kubenswrapper[4895]: I1206 07:03:32.078573 4895 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="715ab71e-98f0-4707-a296-6b0e90fd9033" Dec 06 07:03:44 crc kubenswrapper[4895]: I1206 07:03:44.610698 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qfxpm"] Dec 06 07:03:44 crc kubenswrapper[4895]: I1206 07:03:44.611514 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" podUID="fbde9406-9da6-43ea-b1e7-b8638e8d0351" containerName="controller-manager" containerID="cri-o://ec6cdaa0d40d2b32b759a3bdc57aa3fb3e9195a072803a4cc914e3ea5d62df4f" gracePeriod=30 Dec 06 07:03:44 crc kubenswrapper[4895]: I1206 07:03:44.725795 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b"] Dec 06 07:03:44 crc kubenswrapper[4895]: I1206 07:03:44.725997 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" podUID="16ca85ad-de88-4503-b27c-cfeaa96ae436" containerName="route-controller-manager" containerID="cri-o://acd9e960d5c9ca1ec8734781cae72578b676a6e295ffb88df297405e7ca40c5c" gracePeriod=30 Dec 06 07:03:44 crc kubenswrapper[4895]: I1206 07:03:44.923000 4895 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-rhs4b container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 06 07:03:44 crc kubenswrapper[4895]: I1206 07:03:44.923571 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" podUID="16ca85ad-de88-4503-b27c-cfeaa96ae436" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.074665 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.242316 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16ca85ad-de88-4503-b27c-cfeaa96ae436-client-ca\") pod \"16ca85ad-de88-4503-b27c-cfeaa96ae436\" (UID: \"16ca85ad-de88-4503-b27c-cfeaa96ae436\") " Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.242628 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16ca85ad-de88-4503-b27c-cfeaa96ae436-config\") pod \"16ca85ad-de88-4503-b27c-cfeaa96ae436\" (UID: \"16ca85ad-de88-4503-b27c-cfeaa96ae436\") " Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.242750 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvqv5\" (UniqueName: \"kubernetes.io/projected/16ca85ad-de88-4503-b27c-cfeaa96ae436-kube-api-access-tvqv5\") pod \"16ca85ad-de88-4503-b27c-cfeaa96ae436\" (UID: \"16ca85ad-de88-4503-b27c-cfeaa96ae436\") " Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.242830 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16ca85ad-de88-4503-b27c-cfeaa96ae436-serving-cert\") pod \"16ca85ad-de88-4503-b27c-cfeaa96ae436\" (UID: \"16ca85ad-de88-4503-b27c-cfeaa96ae436\") " Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.243587 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16ca85ad-de88-4503-b27c-cfeaa96ae436-config" (OuterVolumeSpecName: "config") pod "16ca85ad-de88-4503-b27c-cfeaa96ae436" (UID: "16ca85ad-de88-4503-b27c-cfeaa96ae436"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.243705 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16ca85ad-de88-4503-b27c-cfeaa96ae436-client-ca" (OuterVolumeSpecName: "client-ca") pod "16ca85ad-de88-4503-b27c-cfeaa96ae436" (UID: "16ca85ad-de88-4503-b27c-cfeaa96ae436"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.243951 4895 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16ca85ad-de88-4503-b27c-cfeaa96ae436-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.243965 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16ca85ad-de88-4503-b27c-cfeaa96ae436-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.248936 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ca85ad-de88-4503-b27c-cfeaa96ae436-kube-api-access-tvqv5" (OuterVolumeSpecName: "kube-api-access-tvqv5") pod "16ca85ad-de88-4503-b27c-cfeaa96ae436" (UID: "16ca85ad-de88-4503-b27c-cfeaa96ae436"). InnerVolumeSpecName "kube-api-access-tvqv5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.251898 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ca85ad-de88-4503-b27c-cfeaa96ae436-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16ca85ad-de88-4503-b27c-cfeaa96ae436" (UID: "16ca85ad-de88-4503-b27c-cfeaa96ae436"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.345321 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvqv5\" (UniqueName: \"kubernetes.io/projected/16ca85ad-de88-4503-b27c-cfeaa96ae436-kube-api-access-tvqv5\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.345363 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16ca85ad-de88-4503-b27c-cfeaa96ae436-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.505350 4895 generic.go:334] "Generic (PLEG): container finished" podID="16ca85ad-de88-4503-b27c-cfeaa96ae436" containerID="acd9e960d5c9ca1ec8734781cae72578b676a6e295ffb88df297405e7ca40c5c" exitCode=0 Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.505399 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" event={"ID":"16ca85ad-de88-4503-b27c-cfeaa96ae436","Type":"ContainerDied","Data":"acd9e960d5c9ca1ec8734781cae72578b676a6e295ffb88df297405e7ca40c5c"} Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.505424 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.505447 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b" event={"ID":"16ca85ad-de88-4503-b27c-cfeaa96ae436","Type":"ContainerDied","Data":"a1abc63851e965b11004ea27a8a78eb004c31499444a2fdfc2a847f277047ee6"} Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.505491 4895 scope.go:117] "RemoveContainer" containerID="acd9e960d5c9ca1ec8734781cae72578b676a6e295ffb88df297405e7ca40c5c" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.508016 4895 generic.go:334] "Generic (PLEG): container finished" podID="fbde9406-9da6-43ea-b1e7-b8638e8d0351" containerID="ec6cdaa0d40d2b32b759a3bdc57aa3fb3e9195a072803a4cc914e3ea5d62df4f" exitCode=0 Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.508042 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" event={"ID":"fbde9406-9da6-43ea-b1e7-b8638e8d0351","Type":"ContainerDied","Data":"ec6cdaa0d40d2b32b759a3bdc57aa3fb3e9195a072803a4cc914e3ea5d62df4f"} Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.524693 4895 scope.go:117] "RemoveContainer" containerID="acd9e960d5c9ca1ec8734781cae72578b676a6e295ffb88df297405e7ca40c5c" Dec 06 07:03:45 crc kubenswrapper[4895]: E1206 07:03:45.525528 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acd9e960d5c9ca1ec8734781cae72578b676a6e295ffb88df297405e7ca40c5c\": container with ID starting with acd9e960d5c9ca1ec8734781cae72578b676a6e295ffb88df297405e7ca40c5c not found: ID does not exist" 
containerID="acd9e960d5c9ca1ec8734781cae72578b676a6e295ffb88df297405e7ca40c5c" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.525597 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acd9e960d5c9ca1ec8734781cae72578b676a6e295ffb88df297405e7ca40c5c"} err="failed to get container status \"acd9e960d5c9ca1ec8734781cae72578b676a6e295ffb88df297405e7ca40c5c\": rpc error: code = NotFound desc = could not find container \"acd9e960d5c9ca1ec8734781cae72578b676a6e295ffb88df297405e7ca40c5c\": container with ID starting with acd9e960d5c9ca1ec8734781cae72578b676a6e295ffb88df297405e7ca40c5c not found: ID does not exist" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.540693 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b"] Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.545096 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rhs4b"] Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.668288 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.748693 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbde9406-9da6-43ea-b1e7-b8638e8d0351-config\") pod \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\" (UID: \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\") " Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.748731 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbde9406-9da6-43ea-b1e7-b8638e8d0351-serving-cert\") pod \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\" (UID: \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\") " Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.748763 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbde9406-9da6-43ea-b1e7-b8638e8d0351-client-ca\") pod \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\" (UID: \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\") " Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.748784 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fbde9406-9da6-43ea-b1e7-b8638e8d0351-proxy-ca-bundles\") pod \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\" (UID: \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\") " Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.748825 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdt95\" (UniqueName: \"kubernetes.io/projected/fbde9406-9da6-43ea-b1e7-b8638e8d0351-kube-api-access-jdt95\") pod \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\" (UID: \"fbde9406-9da6-43ea-b1e7-b8638e8d0351\") " Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.750246 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbde9406-9da6-43ea-b1e7-b8638e8d0351-config" (OuterVolumeSpecName: "config") pod "fbde9406-9da6-43ea-b1e7-b8638e8d0351" (UID: "fbde9406-9da6-43ea-b1e7-b8638e8d0351"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.750298 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbde9406-9da6-43ea-b1e7-b8638e8d0351-client-ca" (OuterVolumeSpecName: "client-ca") pod "fbde9406-9da6-43ea-b1e7-b8638e8d0351" (UID: "fbde9406-9da6-43ea-b1e7-b8638e8d0351"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.750414 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbde9406-9da6-43ea-b1e7-b8638e8d0351-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "fbde9406-9da6-43ea-b1e7-b8638e8d0351" (UID: "fbde9406-9da6-43ea-b1e7-b8638e8d0351"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.753720 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbde9406-9da6-43ea-b1e7-b8638e8d0351-kube-api-access-jdt95" (OuterVolumeSpecName: "kube-api-access-jdt95") pod "fbde9406-9da6-43ea-b1e7-b8638e8d0351" (UID: "fbde9406-9da6-43ea-b1e7-b8638e8d0351"). InnerVolumeSpecName "kube-api-access-jdt95". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.753754 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbde9406-9da6-43ea-b1e7-b8638e8d0351-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fbde9406-9da6-43ea-b1e7-b8638e8d0351" (UID: "fbde9406-9da6-43ea-b1e7-b8638e8d0351"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.849666 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbde9406-9da6-43ea-b1e7-b8638e8d0351-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.849733 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbde9406-9da6-43ea-b1e7-b8638e8d0351-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.849743 4895 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbde9406-9da6-43ea-b1e7-b8638e8d0351-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.849750 4895 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fbde9406-9da6-43ea-b1e7-b8638e8d0351-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:45 crc kubenswrapper[4895]: I1206 07:03:45.849760 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdt95\" (UniqueName: \"kubernetes.io/projected/fbde9406-9da6-43ea-b1e7-b8638e8d0351-kube-api-access-jdt95\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.068048 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16ca85ad-de88-4503-b27c-cfeaa96ae436" path="/var/lib/kubelet/pods/16ca85ad-de88-4503-b27c-cfeaa96ae436/volumes" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.517993 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" event={"ID":"fbde9406-9da6-43ea-b1e7-b8638e8d0351","Type":"ContainerDied","Data":"715633f35e9d93dacaebadc0d0bd53a9361204159d17de2a479c78bd7153b608"} Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.518058 4895 scope.go:117] "RemoveContainer" containerID="ec6cdaa0d40d2b32b759a3bdc57aa3fb3e9195a072803a4cc914e3ea5d62df4f" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.518060 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qfxpm" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.537362 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qfxpm"] Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.542013 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qfxpm"] Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.576844 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7"] Dec 06 07:03:46 crc kubenswrapper[4895]: E1206 07:03:46.577527 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ca85ad-de88-4503-b27c-cfeaa96ae436" containerName="route-controller-manager" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.577544 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ca85ad-de88-4503-b27c-cfeaa96ae436" containerName="route-controller-manager" Dec 06 07:03:46 crc kubenswrapper[4895]: E1206 07:03:46.577562 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.577570 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 06 07:03:46 crc kubenswrapper[4895]: E1206 07:03:46.577580 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbde9406-9da6-43ea-b1e7-b8638e8d0351" containerName="controller-manager" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.577587 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbde9406-9da6-43ea-b1e7-b8638e8d0351" containerName="controller-manager" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.577725 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="16ca85ad-de88-4503-b27c-cfeaa96ae436" containerName="route-controller-manager" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.577745 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.577756 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbde9406-9da6-43ea-b1e7-b8638e8d0351" containerName="controller-manager" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.578632 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.580413 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.580646 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.580756 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f65d6556f-bw55w"] Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.581422 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.581464 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.581485 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.581516 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.582808 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.583167 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.583358 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.583557 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.585859 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.586161 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.590120 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.593651 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7"] Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.595514 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.600364 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f65d6556f-bw55w"] Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.759081 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52722\" (UniqueName: 
\"kubernetes.io/projected/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-kube-api-access-52722\") pod \"route-controller-manager-69d8b4dd4d-5ssb7\" (UID: \"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.759147 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10990418-f2b0-492f-9d0e-edf4868d6575-config\") pod \"controller-manager-6f65d6556f-bw55w\" (UID: \"10990418-f2b0-492f-9d0e-edf4868d6575\") " pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.759169 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-config\") pod \"route-controller-manager-69d8b4dd4d-5ssb7\" (UID: \"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.759216 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-client-ca\") pod \"route-controller-manager-69d8b4dd4d-5ssb7\" (UID: \"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.759270 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10990418-f2b0-492f-9d0e-edf4868d6575-proxy-ca-bundles\") pod \"controller-manager-6f65d6556f-bw55w\" (UID: \"10990418-f2b0-492f-9d0e-edf4868d6575\") " pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.759313 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10990418-f2b0-492f-9d0e-edf4868d6575-client-ca\") pod \"controller-manager-6f65d6556f-bw55w\" (UID: \"10990418-f2b0-492f-9d0e-edf4868d6575\") " pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.759337 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10990418-f2b0-492f-9d0e-edf4868d6575-serving-cert\") pod \"controller-manager-6f65d6556f-bw55w\" (UID: \"10990418-f2b0-492f-9d0e-edf4868d6575\") " pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.759370 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv97l\" (UniqueName: \"kubernetes.io/projected/10990418-f2b0-492f-9d0e-edf4868d6575-kube-api-access-qv97l\") pod \"controller-manager-6f65d6556f-bw55w\" (UID: \"10990418-f2b0-492f-9d0e-edf4868d6575\") " pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.759398 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-serving-cert\") pod \"route-controller-manager-69d8b4dd4d-5ssb7\" (UID: \"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.860084 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv97l\" (UniqueName: \"kubernetes.io/projected/10990418-f2b0-492f-9d0e-edf4868d6575-kube-api-access-qv97l\") pod \"controller-manager-6f65d6556f-bw55w\" (UID: \"10990418-f2b0-492f-9d0e-edf4868d6575\") " pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.860135 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-serving-cert\") pod \"route-controller-manager-69d8b4dd4d-5ssb7\" (UID: \"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.860176 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52722\" (UniqueName: \"kubernetes.io/projected/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-kube-api-access-52722\") pod \"route-controller-manager-69d8b4dd4d-5ssb7\" (UID: \"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.860231 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10990418-f2b0-492f-9d0e-edf4868d6575-config\") pod \"controller-manager-6f65d6556f-bw55w\" (UID: \"10990418-f2b0-492f-9d0e-edf4868d6575\") " pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.860253 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-config\") pod \"route-controller-manager-69d8b4dd4d-5ssb7\" (UID: \"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.860308 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-client-ca\") pod \"route-controller-manager-69d8b4dd4d-5ssb7\" (UID: \"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.860329 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10990418-f2b0-492f-9d0e-edf4868d6575-proxy-ca-bundles\") pod \"controller-manager-6f65d6556f-bw55w\" (UID: \"10990418-f2b0-492f-9d0e-edf4868d6575\") " pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.860372 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10990418-f2b0-492f-9d0e-edf4868d6575-client-ca\") pod 
\"controller-manager-6f65d6556f-bw55w\" (UID: \"10990418-f2b0-492f-9d0e-edf4868d6575\") " pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.860397 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10990418-f2b0-492f-9d0e-edf4868d6575-serving-cert\") pod \"controller-manager-6f65d6556f-bw55w\" (UID: \"10990418-f2b0-492f-9d0e-edf4868d6575\") " pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.861740 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10990418-f2b0-492f-9d0e-edf4868d6575-proxy-ca-bundles\") pod \"controller-manager-6f65d6556f-bw55w\" (UID: \"10990418-f2b0-492f-9d0e-edf4868d6575\") " pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.861776 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10990418-f2b0-492f-9d0e-edf4868d6575-client-ca\") pod \"controller-manager-6f65d6556f-bw55w\" (UID: \"10990418-f2b0-492f-9d0e-edf4868d6575\") " pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.861839 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-config\") pod \"route-controller-manager-69d8b4dd4d-5ssb7\" (UID: \"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.862310 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-client-ca\") pod \"route-controller-manager-69d8b4dd4d-5ssb7\" (UID: \"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.863246 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10990418-f2b0-492f-9d0e-edf4868d6575-config\") pod \"controller-manager-6f65d6556f-bw55w\" (UID: \"10990418-f2b0-492f-9d0e-edf4868d6575\") " pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.867441 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-serving-cert\") pod \"route-controller-manager-69d8b4dd4d-5ssb7\" (UID: \"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.867468 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10990418-f2b0-492f-9d0e-edf4868d6575-serving-cert\") pod \"controller-manager-6f65d6556f-bw55w\" (UID: \"10990418-f2b0-492f-9d0e-edf4868d6575\") " pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.905833 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52722\" (UniqueName: \"kubernetes.io/projected/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-kube-api-access-52722\") pod \"route-controller-manager-69d8b4dd4d-5ssb7\" (UID: \"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.908276 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.909952 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv97l\" (UniqueName: \"kubernetes.io/projected/10990418-f2b0-492f-9d0e-edf4868d6575-kube-api-access-qv97l\") pod \"controller-manager-6f65d6556f-bw55w\" (UID: \"10990418-f2b0-492f-9d0e-edf4868d6575\") " pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" Dec 06 07:03:46 crc kubenswrapper[4895]: I1206 07:03:46.919962 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" Dec 06 07:03:47 crc kubenswrapper[4895]: I1206 07:03:47.151452 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7"] Dec 06 07:03:47 crc kubenswrapper[4895]: I1206 07:03:47.210256 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f65d6556f-bw55w"] Dec 06 07:03:47 crc kubenswrapper[4895]: I1206 07:03:47.523980 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" event={"ID":"10990418-f2b0-492f-9d0e-edf4868d6575","Type":"ContainerStarted","Data":"81488e8a2ee9226b953da972b4b1d049740581c841f3f32d54685276f2c02201"} Dec 06 07:03:47 crc kubenswrapper[4895]: I1206 07:03:47.524030 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" event={"ID":"10990418-f2b0-492f-9d0e-edf4868d6575","Type":"ContainerStarted","Data":"488b82f50b12f3b172dac1682855e1701a0248776474af66fb6c46f45c9bacda"} Dec 06 07:03:47 crc kubenswrapper[4895]: I1206 07:03:47.524375 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" Dec 06 07:03:47 crc kubenswrapper[4895]: I1206 07:03:47.525658 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" event={"ID":"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31","Type":"ContainerStarted","Data":"d2a550a7d84137dedd2beefbba8da4d7d08571ccdbef7f1f52fc36cb880e2f71"} Dec 06 07:03:47 crc kubenswrapper[4895]: I1206 07:03:47.526042 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" event={"ID":"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31","Type":"ContainerStarted","Data":"9cdd8e75abb6d4022a863f23d136597973585213666faf86b090a50663368162"} Dec 06 07:03:47 crc kubenswrapper[4895]: I1206 07:03:47.526141 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" Dec 06 07:03:47 crc kubenswrapper[4895]: I1206 07:03:47.529873 4895 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" Dec 06 07:03:47 crc kubenswrapper[4895]: I1206 07:03:47.544271 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f65d6556f-bw55w" podStartSLOduration=3.544252541 podStartE2EDuration="3.544252541s" podCreationTimestamp="2025-12-06 07:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:03:47.541986959 +0000 UTC m=+389.943375839" watchObservedRunningTime="2025-12-06 07:03:47.544252541 +0000 UTC m=+389.945641411" Dec 06 07:03:47 crc kubenswrapper[4895]: I1206 07:03:47.606826 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" podStartSLOduration=3.60680703 podStartE2EDuration="3.60680703s" podCreationTimestamp="2025-12-06 07:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:03:47.603961982 +0000 UTC m=+390.005350852" watchObservedRunningTime="2025-12-06 07:03:47.60680703 +0000 UTC m=+390.008195900" Dec 06 07:03:47 crc kubenswrapper[4895]: I1206 07:03:47.726188 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" Dec 06 07:03:48 crc kubenswrapper[4895]: I1206 07:03:48.058638 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbde9406-9da6-43ea-b1e7-b8638e8d0351" path="/var/lib/kubelet/pods/fbde9406-9da6-43ea-b1e7-b8638e8d0351/volumes" Dec 06 07:03:48 crc kubenswrapper[4895]: I1206 07:03:48.891113 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8c4wc"] Dec 06 07:03:48 crc kubenswrapper[4895]: I1206 07:03:48.891901 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:48 crc kubenswrapper[4895]: I1206 07:03:48.916688 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8c4wc"] Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.092697 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1030b128-626f-4dc2-ac0b-eb9388670067-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.092751 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1030b128-626f-4dc2-ac0b-eb9388670067-trusted-ca\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.092798 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1030b128-626f-4dc2-ac0b-eb9388670067-bound-sa-token\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.092876 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1030b128-626f-4dc2-ac0b-eb9388670067-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.092922 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1030b128-626f-4dc2-ac0b-eb9388670067-registry-tls\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.092965 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1030b128-626f-4dc2-ac0b-eb9388670067-registry-certificates\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.092991 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fptc\" (UniqueName: \"kubernetes.io/projected/1030b128-626f-4dc2-ac0b-eb9388670067-kube-api-access-7fptc\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.093036 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.115611 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.194727 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1030b128-626f-4dc2-ac0b-eb9388670067-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.194772 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1030b128-626f-4dc2-ac0b-eb9388670067-trusted-ca\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.194804 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1030b128-626f-4dc2-ac0b-eb9388670067-bound-sa-token\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.194848 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1030b128-626f-4dc2-ac0b-eb9388670067-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.194867 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1030b128-626f-4dc2-ac0b-eb9388670067-registry-tls\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.194906 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1030b128-626f-4dc2-ac0b-eb9388670067-registry-certificates\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.194932 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fptc\" (UniqueName: \"kubernetes.io/projected/1030b128-626f-4dc2-ac0b-eb9388670067-kube-api-access-7fptc\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.196770 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1030b128-626f-4dc2-ac0b-eb9388670067-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.197262 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1030b128-626f-4dc2-ac0b-eb9388670067-registry-certificates\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.197519 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1030b128-626f-4dc2-ac0b-eb9388670067-trusted-ca\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.203987 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1030b128-626f-4dc2-ac0b-eb9388670067-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.204542 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1030b128-626f-4dc2-ac0b-eb9388670067-registry-tls\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.211885 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fptc\" (UniqueName: \"kubernetes.io/projected/1030b128-626f-4dc2-ac0b-eb9388670067-kube-api-access-7fptc\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.216138 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1030b128-626f-4dc2-ac0b-eb9388670067-bound-sa-token\") pod \"image-registry-66df7c8f76-8c4wc\" (UID: \"1030b128-626f-4dc2-ac0b-eb9388670067\") " pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.513813 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:49 crc kubenswrapper[4895]: I1206 07:03:49.925147 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8c4wc"] Dec 06 07:03:49 crc kubenswrapper[4895]: W1206 07:03:49.930741 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1030b128_626f_4dc2_ac0b_eb9388670067.slice/crio-585991f6972f2dcebae7120521ebdc172fe6aad4dab9cf49044bfbc56c353c7a WatchSource:0}: Error finding container 585991f6972f2dcebae7120521ebdc172fe6aad4dab9cf49044bfbc56c353c7a: Status 404 returned error can't find the container with id 585991f6972f2dcebae7120521ebdc172fe6aad4dab9cf49044bfbc56c353c7a Dec 06 07:03:50 crc kubenswrapper[4895]: I1206 07:03:50.443960 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7"] Dec 06 07:03:50 crc kubenswrapper[4895]: I1206 07:03:50.547004 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" event={"ID":"1030b128-626f-4dc2-ac0b-eb9388670067","Type":"ContainerStarted","Data":"851c95216e20928705b0ffdfa6f8c051b9da8cd5f3a15f110848624cb9d34132"} Dec 06 07:03:50 crc kubenswrapper[4895]: I1206 07:03:50.547050 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" event={"ID":"1030b128-626f-4dc2-ac0b-eb9388670067","Type":"ContainerStarted","Data":"585991f6972f2dcebae7120521ebdc172fe6aad4dab9cf49044bfbc56c353c7a"} Dec 06 07:03:50 crc kubenswrapper[4895]: I1206 07:03:50.547084 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" podUID="29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31" containerName="route-controller-manager" containerID="cri-o://d2a550a7d84137dedd2beefbba8da4d7d08571ccdbef7f1f52fc36cb880e2f71" gracePeriod=30 Dec 06 07:03:50 crc kubenswrapper[4895]: I1206 07:03:50.547372 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:03:50 crc kubenswrapper[4895]: I1206 07:03:50.571030 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" podStartSLOduration=2.571014048 podStartE2EDuration="2.571014048s" podCreationTimestamp="2025-12-06 07:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:03:50.569405825 +0000 UTC m=+392.970794705" watchObservedRunningTime="2025-12-06 07:03:50.571014048 +0000 UTC m=+392.972402918" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.466652 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.503243 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp"] Dec 06 07:03:51 crc kubenswrapper[4895]: E1206 07:03:51.504952 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31" containerName="route-controller-manager" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.504977 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31" containerName="route-controller-manager" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.505144 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31" containerName="route-controller-manager" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.505643 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.511801 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp"] Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.538530 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-config\") pod \"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31\" (UID: \"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31\") " Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.539104 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52722\" (UniqueName: \"kubernetes.io/projected/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-kube-api-access-52722\") pod \"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31\" (UID: \"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31\") " Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.539349 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-serving-cert\") pod \"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31\" (UID: \"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31\") " Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.539495 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-client-ca\") pod \"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31\" (UID: \"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31\") " Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.539637 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-config" (OuterVolumeSpecName: "config") pod "29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31" (UID: "29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.540094 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-client-ca" (OuterVolumeSpecName: "client-ca") pod "29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31" (UID: "29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.540228 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5802056-e992-49ec-aba2-728af99f18b6-serving-cert\") pod \"route-controller-manager-758c4dfd64-2rbgp\" (UID: \"b5802056-e992-49ec-aba2-728af99f18b6\") " pod="openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.540430 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vgxg\" (UniqueName: \"kubernetes.io/projected/b5802056-e992-49ec-aba2-728af99f18b6-kube-api-access-6vgxg\") pod \"route-controller-manager-758c4dfd64-2rbgp\" (UID: \"b5802056-e992-49ec-aba2-728af99f18b6\") " pod="openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.540574 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5802056-e992-49ec-aba2-728af99f18b6-client-ca\") pod \"route-controller-manager-758c4dfd64-2rbgp\" (UID: \"b5802056-e992-49ec-aba2-728af99f18b6\") " pod="openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.540748 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5802056-e992-49ec-aba2-728af99f18b6-config\") pod \"route-controller-manager-758c4dfd64-2rbgp\" (UID: \"b5802056-e992-49ec-aba2-728af99f18b6\") " pod="openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.540883 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.540988 4895 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.545436 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31" (UID: "29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.545653 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-kube-api-access-52722" (OuterVolumeSpecName: "kube-api-access-52722") pod "29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31" (UID: "29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31"). InnerVolumeSpecName "kube-api-access-52722". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.558835 4895 generic.go:334] "Generic (PLEG): container finished" podID="29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31" containerID="d2a550a7d84137dedd2beefbba8da4d7d08571ccdbef7f1f52fc36cb880e2f71" exitCode=0 Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.559167 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.559664 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" event={"ID":"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31","Type":"ContainerDied","Data":"d2a550a7d84137dedd2beefbba8da4d7d08571ccdbef7f1f52fc36cb880e2f71"} Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.559702 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7" event={"ID":"29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31","Type":"ContainerDied","Data":"9cdd8e75abb6d4022a863f23d136597973585213666faf86b090a50663368162"} Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.559722 4895 scope.go:117] "RemoveContainer" containerID="d2a550a7d84137dedd2beefbba8da4d7d08571ccdbef7f1f52fc36cb880e2f71" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.597085 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7"] Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.598373 4895 scope.go:117] "RemoveContainer" containerID="d2a550a7d84137dedd2beefbba8da4d7d08571ccdbef7f1f52fc36cb880e2f71" Dec 06 07:03:51 crc kubenswrapper[4895]: E1206 07:03:51.598888 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2a550a7d84137dedd2beefbba8da4d7d08571ccdbef7f1f52fc36cb880e2f71\": container with ID starting with d2a550a7d84137dedd2beefbba8da4d7d08571ccdbef7f1f52fc36cb880e2f71 not found: ID does not exist" containerID="d2a550a7d84137dedd2beefbba8da4d7d08571ccdbef7f1f52fc36cb880e2f71" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.599078 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2a550a7d84137dedd2beefbba8da4d7d08571ccdbef7f1f52fc36cb880e2f71"} err="failed to get container status \"d2a550a7d84137dedd2beefbba8da4d7d08571ccdbef7f1f52fc36cb880e2f71\": rpc error: code = NotFound desc = could not find container \"d2a550a7d84137dedd2beefbba8da4d7d08571ccdbef7f1f52fc36cb880e2f71\": container with ID starting with d2a550a7d84137dedd2beefbba8da4d7d08571ccdbef7f1f52fc36cb880e2f71 not found: ID does not exist" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.601674 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-5ssb7"] Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.642073 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5802056-e992-49ec-aba2-728af99f18b6-serving-cert\") pod \"route-controller-manager-758c4dfd64-2rbgp\" (UID: \"b5802056-e992-49ec-aba2-728af99f18b6\") " pod="openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 
07:03:51.642148 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vgxg\" (UniqueName: \"kubernetes.io/projected/b5802056-e992-49ec-aba2-728af99f18b6-kube-api-access-6vgxg\") pod \"route-controller-manager-758c4dfd64-2rbgp\" (UID: \"b5802056-e992-49ec-aba2-728af99f18b6\") " pod="openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.642180 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5802056-e992-49ec-aba2-728af99f18b6-client-ca\") pod \"route-controller-manager-758c4dfd64-2rbgp\" (UID: \"b5802056-e992-49ec-aba2-728af99f18b6\") " pod="openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.642369 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5802056-e992-49ec-aba2-728af99f18b6-config\") pod \"route-controller-manager-758c4dfd64-2rbgp\" (UID: \"b5802056-e992-49ec-aba2-728af99f18b6\") " pod="openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.643202 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5802056-e992-49ec-aba2-728af99f18b6-client-ca\") pod \"route-controller-manager-758c4dfd64-2rbgp\" (UID: \"b5802056-e992-49ec-aba2-728af99f18b6\") " pod="openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.644056 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52722\" (UniqueName: \"kubernetes.io/projected/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-kube-api-access-52722\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.644412 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.645961 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5802056-e992-49ec-aba2-728af99f18b6-config\") pod \"route-controller-manager-758c4dfd64-2rbgp\" (UID: \"b5802056-e992-49ec-aba2-728af99f18b6\") " pod="openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.647450 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5802056-e992-49ec-aba2-728af99f18b6-serving-cert\") pod \"route-controller-manager-758c4dfd64-2rbgp\" (UID: \"b5802056-e992-49ec-aba2-728af99f18b6\") " pod="openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp" Dec 06 07:03:51 crc kubenswrapper[4895]: I1206 07:03:51.659373 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vgxg\" (UniqueName: \"kubernetes.io/projected/b5802056-e992-49ec-aba2-728af99f18b6-kube-api-access-6vgxg\") pod \"route-controller-manager-758c4dfd64-2rbgp\" (UID: \"b5802056-e992-49ec-aba2-728af99f18b6\") " pod="openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp" Dec 06 07:03:51 crc kubenswrapper[4895]: 
I1206 07:03:51.824231 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp" Dec 06 07:03:52 crc kubenswrapper[4895]: I1206 07:03:52.057767 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31" path="/var/lib/kubelet/pods/29b5f3ca-a4dd-459f-8bfd-e2cc743b1a31/volumes" Dec 06 07:03:52 crc kubenswrapper[4895]: I1206 07:03:52.224426 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp"] Dec 06 07:03:52 crc kubenswrapper[4895]: W1206 07:03:52.228365 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5802056_e992_49ec_aba2_728af99f18b6.slice/crio-508913ada0e77f2de6d1431c98dc78f64c5f024098f806d31c462ae5aad67548 WatchSource:0}: Error finding container 508913ada0e77f2de6d1431c98dc78f64c5f024098f806d31c462ae5aad67548: Status 404 returned error can't find the container with id 508913ada0e77f2de6d1431c98dc78f64c5f024098f806d31c462ae5aad67548 Dec 06 07:03:52 crc kubenswrapper[4895]: I1206 07:03:52.565440 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp" event={"ID":"b5802056-e992-49ec-aba2-728af99f18b6","Type":"ContainerStarted","Data":"508913ada0e77f2de6d1431c98dc78f64c5f024098f806d31c462ae5aad67548"} Dec 06 07:03:53 crc kubenswrapper[4895]: I1206 07:03:53.573769 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp" event={"ID":"b5802056-e992-49ec-aba2-728af99f18b6","Type":"ContainerStarted","Data":"3da80330ed49981aeac3b695e045fbb7e51cc5e11ce4d2d32ef32c9569f78dc6"} Dec 06 07:03:53 crc kubenswrapper[4895]: I1206 07:03:53.574290 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp" Dec 06 07:03:53 crc kubenswrapper[4895]: I1206 07:03:53.580159 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp" Dec 06 07:03:53 crc kubenswrapper[4895]: I1206 07:03:53.595213 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp" podStartSLOduration=3.595179766 podStartE2EDuration="3.595179766s" podCreationTimestamp="2025-12-06 07:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:03:53.593202982 +0000 UTC m=+395.994591862" watchObservedRunningTime="2025-12-06 07:03:53.595179766 +0000 UTC m=+395.996568626" Dec 06 07:03:56 crc kubenswrapper[4895]: I1206 07:03:56.187818 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z8gpr"] Dec 06 07:03:56 crc kubenswrapper[4895]: I1206 07:03:56.189382 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z8gpr" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" containerName="registry-server" containerID="cri-o://54217c4fdb2978c675b6e061f6b19fc0cd44b0f966003e6ac22742fbb8ba6790" gracePeriod=2 Dec 06 07:03:56 crc kubenswrapper[4895]: I1206 07:03:56.382270 4895 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dlp2v"] Dec 06 07:03:56 crc kubenswrapper[4895]: I1206 07:03:56.382651 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dlp2v" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" containerName="registry-server" containerID="cri-o://f7dc6ab5abb5adad9925fd9584b570751b94331f77a29074717c62bdd55ec9cc" gracePeriod=2 Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.225229 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8gpr" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.327192 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwq76\" (UniqueName: \"kubernetes.io/projected/dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8-kube-api-access-xwq76\") pod \"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8\" (UID: \"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8\") " Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.328121 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8-catalog-content\") pod \"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8\" (UID: \"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8\") " Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.328181 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8-utilities\") pod \"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8\" (UID: \"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8\") " Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.328918 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8-utilities" (OuterVolumeSpecName: "utilities") pod "dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" (UID: "dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.343719 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8-kube-api-access-xwq76" (OuterVolumeSpecName: "kube-api-access-xwq76") pod "dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" (UID: "dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8"). InnerVolumeSpecName "kube-api-access-xwq76". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.373025 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" (UID: "dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.415522 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dlp2v" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.429045 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5924c97-fec5-45b0-a9a2-8f851c88dfbf-catalog-content\") pod \"b5924c97-fec5-45b0-a9a2-8f851c88dfbf\" (UID: \"b5924c97-fec5-45b0-a9a2-8f851c88dfbf\") " Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.429122 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txvpq\" (UniqueName: \"kubernetes.io/projected/b5924c97-fec5-45b0-a9a2-8f851c88dfbf-kube-api-access-txvpq\") pod \"b5924c97-fec5-45b0-a9a2-8f851c88dfbf\" (UID: \"b5924c97-fec5-45b0-a9a2-8f851c88dfbf\") " Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.429158 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5924c97-fec5-45b0-a9a2-8f851c88dfbf-utilities\") pod \"b5924c97-fec5-45b0-a9a2-8f851c88dfbf\" (UID: \"b5924c97-fec5-45b0-a9a2-8f851c88dfbf\") " Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.429367 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.429389 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwq76\" (UniqueName: \"kubernetes.io/projected/dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8-kube-api-access-xwq76\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.429403 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.430054 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5924c97-fec5-45b0-a9a2-8f851c88dfbf-utilities" (OuterVolumeSpecName: "utilities") pod "b5924c97-fec5-45b0-a9a2-8f851c88dfbf" (UID: "b5924c97-fec5-45b0-a9a2-8f851c88dfbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.432915 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5924c97-fec5-45b0-a9a2-8f851c88dfbf-kube-api-access-txvpq" (OuterVolumeSpecName: "kube-api-access-txvpq") pod "b5924c97-fec5-45b0-a9a2-8f851c88dfbf" (UID: "b5924c97-fec5-45b0-a9a2-8f851c88dfbf"). InnerVolumeSpecName "kube-api-access-txvpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.493565 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5924c97-fec5-45b0-a9a2-8f851c88dfbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5924c97-fec5-45b0-a9a2-8f851c88dfbf" (UID: "b5924c97-fec5-45b0-a9a2-8f851c88dfbf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.530751 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txvpq\" (UniqueName: \"kubernetes.io/projected/b5924c97-fec5-45b0-a9a2-8f851c88dfbf-kube-api-access-txvpq\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.530796 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5924c97-fec5-45b0-a9a2-8f851c88dfbf-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.530809 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5924c97-fec5-45b0-a9a2-8f851c88dfbf-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.598363 4895 generic.go:334] "Generic (PLEG): container finished" podID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" containerID="f7dc6ab5abb5adad9925fd9584b570751b94331f77a29074717c62bdd55ec9cc" exitCode=0 Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.598487 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlp2v" event={"ID":"b5924c97-fec5-45b0-a9a2-8f851c88dfbf","Type":"ContainerDied","Data":"f7dc6ab5abb5adad9925fd9584b570751b94331f77a29074717c62bdd55ec9cc"} Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.598461 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlp2v" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.598536 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlp2v" event={"ID":"b5924c97-fec5-45b0-a9a2-8f851c88dfbf","Type":"ContainerDied","Data":"52a99497bb45f0a6124269222f006fa569e8e2d7aff3421baca4d1e950de1c43"} Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.598561 4895 scope.go:117] "RemoveContainer" containerID="f7dc6ab5abb5adad9925fd9584b570751b94331f77a29074717c62bdd55ec9cc" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.602170 4895 generic.go:334] "Generic (PLEG): container finished" podID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" containerID="54217c4fdb2978c675b6e061f6b19fc0cd44b0f966003e6ac22742fbb8ba6790" exitCode=0 Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.602219 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8gpr" event={"ID":"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8","Type":"ContainerDied","Data":"54217c4fdb2978c675b6e061f6b19fc0cd44b0f966003e6ac22742fbb8ba6790"} Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.602254 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8gpr" event={"ID":"dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8","Type":"ContainerDied","Data":"14584348c65f947e99b66213b590032aec5cfbd4ea72d934d83d5aa894a60aab"} Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.602281 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z8gpr" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.617213 4895 scope.go:117] "RemoveContainer" containerID="b09e3f11d600da445f87d06d0b72e252870b78d5b69399f386905f8f5f90a411" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.630020 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dlp2v"] Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.645707 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dlp2v"] Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.652320 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z8gpr"] Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.652797 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z8gpr"] Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.660489 4895 scope.go:117] "RemoveContainer" containerID="d32133dd1a77f11f1730a3c0ba9cd5da8335cde95dbf8009498b1e2a69fbb752" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.686642 4895 scope.go:117] "RemoveContainer" containerID="f7dc6ab5abb5adad9925fd9584b570751b94331f77a29074717c62bdd55ec9cc" Dec 06 07:03:57 crc kubenswrapper[4895]: E1206 07:03:57.691082 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7dc6ab5abb5adad9925fd9584b570751b94331f77a29074717c62bdd55ec9cc\": container with ID starting with f7dc6ab5abb5adad9925fd9584b570751b94331f77a29074717c62bdd55ec9cc not found: ID does not exist" containerID="f7dc6ab5abb5adad9925fd9584b570751b94331f77a29074717c62bdd55ec9cc" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.691133 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7dc6ab5abb5adad9925fd9584b570751b94331f77a29074717c62bdd55ec9cc"} err="failed to get container status \"f7dc6ab5abb5adad9925fd9584b570751b94331f77a29074717c62bdd55ec9cc\": rpc error: code = NotFound desc = could not find container \"f7dc6ab5abb5adad9925fd9584b570751b94331f77a29074717c62bdd55ec9cc\": container with ID starting with f7dc6ab5abb5adad9925fd9584b570751b94331f77a29074717c62bdd55ec9cc not found: ID does not exist" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.691167 4895 scope.go:117] "RemoveContainer" containerID="b09e3f11d600da445f87d06d0b72e252870b78d5b69399f386905f8f5f90a411" Dec 06 07:03:57 crc kubenswrapper[4895]: E1206 07:03:57.691574 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b09e3f11d600da445f87d06d0b72e252870b78d5b69399f386905f8f5f90a411\": container with ID starting with b09e3f11d600da445f87d06d0b72e252870b78d5b69399f386905f8f5f90a411 not found: ID does not exist" containerID="b09e3f11d600da445f87d06d0b72e252870b78d5b69399f386905f8f5f90a411" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.691605 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b09e3f11d600da445f87d06d0b72e252870b78d5b69399f386905f8f5f90a411"} err="failed to get container status \"b09e3f11d600da445f87d06d0b72e252870b78d5b69399f386905f8f5f90a411\": rpc error: code = NotFound desc = could not find container \"b09e3f11d600da445f87d06d0b72e252870b78d5b69399f386905f8f5f90a411\": container with ID starting with 
b09e3f11d600da445f87d06d0b72e252870b78d5b69399f386905f8f5f90a411 not found: ID does not exist" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.691626 4895 scope.go:117] "RemoveContainer" containerID="d32133dd1a77f11f1730a3c0ba9cd5da8335cde95dbf8009498b1e2a69fbb752" Dec 06 07:03:57 crc kubenswrapper[4895]: E1206 07:03:57.692101 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d32133dd1a77f11f1730a3c0ba9cd5da8335cde95dbf8009498b1e2a69fbb752\": container with ID starting with d32133dd1a77f11f1730a3c0ba9cd5da8335cde95dbf8009498b1e2a69fbb752 not found: ID does not exist" containerID="d32133dd1a77f11f1730a3c0ba9cd5da8335cde95dbf8009498b1e2a69fbb752" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.692142 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d32133dd1a77f11f1730a3c0ba9cd5da8335cde95dbf8009498b1e2a69fbb752"} err="failed to get container status \"d32133dd1a77f11f1730a3c0ba9cd5da8335cde95dbf8009498b1e2a69fbb752\": rpc error: code = NotFound desc = could not find container \"d32133dd1a77f11f1730a3c0ba9cd5da8335cde95dbf8009498b1e2a69fbb752\": container with ID starting with d32133dd1a77f11f1730a3c0ba9cd5da8335cde95dbf8009498b1e2a69fbb752 not found: ID does not exist" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.692172 4895 scope.go:117] "RemoveContainer" containerID="54217c4fdb2978c675b6e061f6b19fc0cd44b0f966003e6ac22742fbb8ba6790" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.706157 4895 scope.go:117] "RemoveContainer" containerID="76feb2fc710cab21a8d6ab5c9464544045b719ed478dc97cde4359caaae29c2d" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.724993 4895 scope.go:117] "RemoveContainer" containerID="628c77aaadf66095c7dbea0d176657778931bac1d603afc078a7a62679dde74a" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.741861 4895 scope.go:117] "RemoveContainer" containerID="54217c4fdb2978c675b6e061f6b19fc0cd44b0f966003e6ac22742fbb8ba6790" Dec 06 07:03:57 crc kubenswrapper[4895]: E1206 07:03:57.743628 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54217c4fdb2978c675b6e061f6b19fc0cd44b0f966003e6ac22742fbb8ba6790\": container with ID starting with 54217c4fdb2978c675b6e061f6b19fc0cd44b0f966003e6ac22742fbb8ba6790 not found: ID does not exist" containerID="54217c4fdb2978c675b6e061f6b19fc0cd44b0f966003e6ac22742fbb8ba6790" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.743657 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54217c4fdb2978c675b6e061f6b19fc0cd44b0f966003e6ac22742fbb8ba6790"} err="failed to get container status \"54217c4fdb2978c675b6e061f6b19fc0cd44b0f966003e6ac22742fbb8ba6790\": rpc error: code = NotFound desc = could not find container \"54217c4fdb2978c675b6e061f6b19fc0cd44b0f966003e6ac22742fbb8ba6790\": container with ID starting with 54217c4fdb2978c675b6e061f6b19fc0cd44b0f966003e6ac22742fbb8ba6790 not found: ID does not exist" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.743677 4895 scope.go:117] "RemoveContainer" containerID="76feb2fc710cab21a8d6ab5c9464544045b719ed478dc97cde4359caaae29c2d" Dec 06 07:03:57 crc kubenswrapper[4895]: E1206 07:03:57.744766 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76feb2fc710cab21a8d6ab5c9464544045b719ed478dc97cde4359caaae29c2d\": container 
with ID starting with 76feb2fc710cab21a8d6ab5c9464544045b719ed478dc97cde4359caaae29c2d not found: ID does not exist" containerID="76feb2fc710cab21a8d6ab5c9464544045b719ed478dc97cde4359caaae29c2d" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.744803 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76feb2fc710cab21a8d6ab5c9464544045b719ed478dc97cde4359caaae29c2d"} err="failed to get container status \"76feb2fc710cab21a8d6ab5c9464544045b719ed478dc97cde4359caaae29c2d\": rpc error: code = NotFound desc = could not find container \"76feb2fc710cab21a8d6ab5c9464544045b719ed478dc97cde4359caaae29c2d\": container with ID starting with 76feb2fc710cab21a8d6ab5c9464544045b719ed478dc97cde4359caaae29c2d not found: ID does not exist" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.744817 4895 scope.go:117] "RemoveContainer" containerID="628c77aaadf66095c7dbea0d176657778931bac1d603afc078a7a62679dde74a" Dec 06 07:03:57 crc kubenswrapper[4895]: E1206 07:03:57.745253 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"628c77aaadf66095c7dbea0d176657778931bac1d603afc078a7a62679dde74a\": container with ID starting with 628c77aaadf66095c7dbea0d176657778931bac1d603afc078a7a62679dde74a not found: ID does not exist" containerID="628c77aaadf66095c7dbea0d176657778931bac1d603afc078a7a62679dde74a" Dec 06 07:03:57 crc kubenswrapper[4895]: I1206 07:03:57.745293 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"628c77aaadf66095c7dbea0d176657778931bac1d603afc078a7a62679dde74a"} err="failed to get container status \"628c77aaadf66095c7dbea0d176657778931bac1d603afc078a7a62679dde74a\": rpc error: code = NotFound desc = could not find container \"628c77aaadf66095c7dbea0d176657778931bac1d603afc078a7a62679dde74a\": container with ID starting with 628c77aaadf66095c7dbea0d176657778931bac1d603afc078a7a62679dde74a not found: ID does not exist" Dec 06 07:03:58 crc kubenswrapper[4895]: I1206 07:03:58.058811 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" path="/var/lib/kubelet/pods/b5924c97-fec5-45b0-a9a2-8f851c88dfbf/volumes" Dec 06 07:03:58 crc kubenswrapper[4895]: I1206 07:03:58.059995 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" path="/var/lib/kubelet/pods/dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8/volumes" Dec 06 07:03:58 crc kubenswrapper[4895]: I1206 07:03:58.783046 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6wtxh"] Dec 06 07:03:58 crc kubenswrapper[4895]: I1206 07:03:58.783281 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6wtxh" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" containerName="registry-server" containerID="cri-o://8f5666c086a26d6647330804b41012bd7bc5ace2aeec8555193477a6fb6a14c7" gracePeriod=2 Dec 06 07:03:59 crc kubenswrapper[4895]: I1206 07:03:59.695487 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:03:59 crc kubenswrapper[4895]: I1206 07:03:59.695550 4895 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.400146 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6wtxh" Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.572074 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e74e8b-145a-45bc-a163-4fbd502bc155-catalog-content\") pod \"34e74e8b-145a-45bc-a163-4fbd502bc155\" (UID: \"34e74e8b-145a-45bc-a163-4fbd502bc155\") " Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.572220 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plq9m\" (UniqueName: \"kubernetes.io/projected/34e74e8b-145a-45bc-a163-4fbd502bc155-kube-api-access-plq9m\") pod \"34e74e8b-145a-45bc-a163-4fbd502bc155\" (UID: \"34e74e8b-145a-45bc-a163-4fbd502bc155\") " Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.572263 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e74e8b-145a-45bc-a163-4fbd502bc155-utilities\") pod \"34e74e8b-145a-45bc-a163-4fbd502bc155\" (UID: \"34e74e8b-145a-45bc-a163-4fbd502bc155\") " Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.573341 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34e74e8b-145a-45bc-a163-4fbd502bc155-utilities" (OuterVolumeSpecName: "utilities") pod "34e74e8b-145a-45bc-a163-4fbd502bc155" (UID: "34e74e8b-145a-45bc-a163-4fbd502bc155"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.580672 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e74e8b-145a-45bc-a163-4fbd502bc155-kube-api-access-plq9m" (OuterVolumeSpecName: "kube-api-access-plq9m") pod "34e74e8b-145a-45bc-a163-4fbd502bc155" (UID: "34e74e8b-145a-45bc-a163-4fbd502bc155"). InnerVolumeSpecName "kube-api-access-plq9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.624560 4895 generic.go:334] "Generic (PLEG): container finished" podID="34e74e8b-145a-45bc-a163-4fbd502bc155" containerID="8f5666c086a26d6647330804b41012bd7bc5ace2aeec8555193477a6fb6a14c7" exitCode=0 Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.624614 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6wtxh" Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.624655 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6wtxh" event={"ID":"34e74e8b-145a-45bc-a163-4fbd502bc155","Type":"ContainerDied","Data":"8f5666c086a26d6647330804b41012bd7bc5ace2aeec8555193477a6fb6a14c7"} Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.625110 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6wtxh" event={"ID":"34e74e8b-145a-45bc-a163-4fbd502bc155","Type":"ContainerDied","Data":"bec812505272707747c69bc654415427b1c96b5bdc3bc24be4d8ee8689cf8768"} Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.625135 4895 scope.go:117] "RemoveContainer" containerID="8f5666c086a26d6647330804b41012bd7bc5ace2aeec8555193477a6fb6a14c7" Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.648285 4895 scope.go:117] "RemoveContainer" containerID="4b5b4f848c415cbc3143ca7a36d3fd15a67824eafc891f378747af70f8844bfd" Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.665655 4895 scope.go:117] "RemoveContainer" containerID="e5cc064999aa852e86195a3e63210d2d9da81c48afd49bf3f71d36504ccee831" Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.673944 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plq9m\" (UniqueName: \"kubernetes.io/projected/34e74e8b-145a-45bc-a163-4fbd502bc155-kube-api-access-plq9m\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.673974 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e74e8b-145a-45bc-a163-4fbd502bc155-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.685936 4895 scope.go:117] "RemoveContainer" containerID="8f5666c086a26d6647330804b41012bd7bc5ace2aeec8555193477a6fb6a14c7" Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.685989 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34e74e8b-145a-45bc-a163-4fbd502bc155-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34e74e8b-145a-45bc-a163-4fbd502bc155" (UID: "34e74e8b-145a-45bc-a163-4fbd502bc155"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:04:00 crc kubenswrapper[4895]: E1206 07:04:00.687298 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f5666c086a26d6647330804b41012bd7bc5ace2aeec8555193477a6fb6a14c7\": container with ID starting with 8f5666c086a26d6647330804b41012bd7bc5ace2aeec8555193477a6fb6a14c7 not found: ID does not exist" containerID="8f5666c086a26d6647330804b41012bd7bc5ace2aeec8555193477a6fb6a14c7" Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.687342 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f5666c086a26d6647330804b41012bd7bc5ace2aeec8555193477a6fb6a14c7"} err="failed to get container status \"8f5666c086a26d6647330804b41012bd7bc5ace2aeec8555193477a6fb6a14c7\": rpc error: code = NotFound desc = could not find container \"8f5666c086a26d6647330804b41012bd7bc5ace2aeec8555193477a6fb6a14c7\": container with ID starting with 8f5666c086a26d6647330804b41012bd7bc5ace2aeec8555193477a6fb6a14c7 not found: ID does not exist" Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.687370 4895 scope.go:117] "RemoveContainer" containerID="4b5b4f848c415cbc3143ca7a36d3fd15a67824eafc891f378747af70f8844bfd" Dec 06 07:04:00 crc kubenswrapper[4895]: E1206 07:04:00.688678 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b5b4f848c415cbc3143ca7a36d3fd15a67824eafc891f378747af70f8844bfd\": container with ID starting with 4b5b4f848c415cbc3143ca7a36d3fd15a67824eafc891f378747af70f8844bfd not found: ID does not exist" containerID="4b5b4f848c415cbc3143ca7a36d3fd15a67824eafc891f378747af70f8844bfd" Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.688789 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b5b4f848c415cbc3143ca7a36d3fd15a67824eafc891f378747af70f8844bfd"} err="failed to get container status \"4b5b4f848c415cbc3143ca7a36d3fd15a67824eafc891f378747af70f8844bfd\": rpc error: code = NotFound desc = could not find container \"4b5b4f848c415cbc3143ca7a36d3fd15a67824eafc891f378747af70f8844bfd\": container with ID starting with 4b5b4f848c415cbc3143ca7a36d3fd15a67824eafc891f378747af70f8844bfd not found: ID does not exist" Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.688879 4895 scope.go:117] "RemoveContainer" containerID="e5cc064999aa852e86195a3e63210d2d9da81c48afd49bf3f71d36504ccee831" Dec 06 07:04:00 crc kubenswrapper[4895]: E1206 07:04:00.693996 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5cc064999aa852e86195a3e63210d2d9da81c48afd49bf3f71d36504ccee831\": container with ID starting with e5cc064999aa852e86195a3e63210d2d9da81c48afd49bf3f71d36504ccee831 not found: ID does not exist" containerID="e5cc064999aa852e86195a3e63210d2d9da81c48afd49bf3f71d36504ccee831" Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.694130 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5cc064999aa852e86195a3e63210d2d9da81c48afd49bf3f71d36504ccee831"} err="failed to get container status \"e5cc064999aa852e86195a3e63210d2d9da81c48afd49bf3f71d36504ccee831\": rpc error: code = NotFound desc = could not find container \"e5cc064999aa852e86195a3e63210d2d9da81c48afd49bf3f71d36504ccee831\": container with ID starting with 
e5cc064999aa852e86195a3e63210d2d9da81c48afd49bf3f71d36504ccee831 not found: ID does not exist" Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.775227 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e74e8b-145a-45bc-a163-4fbd502bc155-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.952235 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6wtxh"] Dec 06 07:04:00 crc kubenswrapper[4895]: I1206 07:04:00.956542 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6wtxh"] Dec 06 07:04:02 crc kubenswrapper[4895]: I1206 07:04:02.057991 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" path="/var/lib/kubelet/pods/34e74e8b-145a-45bc-a163-4fbd502bc155/volumes" Dec 06 07:04:09 crc kubenswrapper[4895]: I1206 07:04:09.521282 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-8c4wc" Dec 06 07:04:09 crc kubenswrapper[4895]: I1206 07:04:09.578661 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s5xjc"] Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.577696 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zwn6t"] Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.578779 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zwn6t" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" containerName="registry-server" containerID="cri-o://5757e029ec92541a12ac864d4586539610ed51835f67a429563e5e4a394de6f5" gracePeriod=30 Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.584822 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2j9kw"] Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.585172 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2j9kw" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" containerName="registry-server" containerID="cri-o://a81df0973b975bc5f3297f7a671fe073a4d78a3fdd67a8d46426b4a5c8a0ea69" gracePeriod=30 Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.600906 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bnn9x"] Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.601148 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" podUID="4dc1e914-43fd-450e-922c-6462f78105f9" containerName="marketplace-operator" containerID="cri-o://57d15672bc7eaaaf1c9b52978882ff6684430b84d0958c2d13d10b273ad641ac" gracePeriod=30 Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.605536 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmdtv"] Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.606002 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xmdtv" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" containerName="registry-server" containerID="cri-o://2dcf34f3fe29065065b3fd17dfc5a8bc4ab25c12e16a68dfe8d233c6b8bf2a9a" gracePeriod=30 Dec 06 
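The "Cleaned up orphaned pod volumes dir" entries refer to the per-pod tree /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<name>, which the kubelet removes only after every volume beneath it has been unmounted and torn down, as the UnmountVolume/TearDown pairs above show. A small sketch that enumerates what is still mounted under that tree (paths from the log; needs root on the node; purely illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Layout: /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<volumeName>.
    	root := "/var/lib/kubelet/pods"
    	pods, err := os.ReadDir(root)
    	if err != nil {
    		fmt.Println(err) // expects to run as root on the node itself
    		return
    	}
    	for _, pod := range pods {
    		plugins, _ := os.ReadDir(filepath.Join(root, pod.Name(), "volumes"))
    		for _, plugin := range plugins {
    			// Any entry here blocks the orphan sweep for this pod UID.
    			fmt.Printf("pod %s: volumes remain under %s\n", pod.Name(), plugin.Name())
    		}
    	}
    }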
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.608764 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hn9n9"]
Dec 06 07:04:23 crc kubenswrapper[4895]: E1206 07:04:23.608996 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" containerName="registry-server"
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.609018 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" containerName="registry-server"
Dec 06 07:04:23 crc kubenswrapper[4895]: E1206 07:04:23.609032 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" containerName="extract-utilities"
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.609054 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" containerName="extract-utilities"
Dec 06 07:04:23 crc kubenswrapper[4895]: E1206 07:04:23.609069 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" containerName="registry-server"
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.609077 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" containerName="registry-server"
Dec 06 07:04:23 crc kubenswrapper[4895]: E1206 07:04:23.609086 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" containerName="extract-content"
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.609093 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" containerName="extract-content"
Dec 06 07:04:23 crc kubenswrapper[4895]: E1206 07:04:23.609103 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" containerName="extract-utilities"
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.609109 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" containerName="extract-utilities"
Dec 06 07:04:23 crc kubenswrapper[4895]: E1206 07:04:23.609117 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" containerName="extract-content"
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.609125 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" containerName="extract-content"
Dec 06 07:04:23 crc kubenswrapper[4895]: E1206 07:04:23.609136 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" containerName="extract-content"
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.609144 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" containerName="extract-content"
Dec 06 07:04:23 crc kubenswrapper[4895]: E1206 07:04:23.609158 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" containerName="registry-server"
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.609166 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" containerName="registry-server"
Dec 06 07:04:23 crc kubenswrapper[4895]: E1206 07:04:23.609178 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" containerName="extract-utilities"
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.609185 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" containerName="extract-utilities"
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.609312 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="34e74e8b-145a-45bc-a163-4fbd502bc155" containerName="registry-server"
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.609335 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd0aaa81-6c87-4d94-af3d-30a6e69c5ac8" containerName="registry-server"
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.609345 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5924c97-fec5-45b0-a9a2-8f851c88dfbf" containerName="registry-server"
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.609827 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hn9n9"
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.613281 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9ddnd"]
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.613879 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9ddnd" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" containerName="registry-server" containerID="cri-o://b8751671bb6feb9de3320ae03878d3e3f7f7b6fa1cc14d34bfa326389f41ef0e" gracePeriod=30
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.633131 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hn9n9"]
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.711924 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9b65f0c8-5905-4b80-a9c8-1704be25ec8e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hn9n9\" (UID: \"9b65f0c8-5905-4b80-a9c8-1704be25ec8e\") " pod="openshift-marketplace/marketplace-operator-79b997595-hn9n9"
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.712015 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcm8p\" (UniqueName: \"kubernetes.io/projected/9b65f0c8-5905-4b80-a9c8-1704be25ec8e-kube-api-access-vcm8p\") pod \"marketplace-operator-79b997595-hn9n9\" (UID: \"9b65f0c8-5905-4b80-a9c8-1704be25ec8e\") " pod="openshift-marketplace/marketplace-operator-79b997595-hn9n9"
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.712047 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b65f0c8-5905-4b80-a9c8-1704be25ec8e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hn9n9\" (UID: \"9b65f0c8-5905-4b80-a9c8-1704be25ec8e\") " pod="openshift-marketplace/marketplace-operator-79b997595-hn9n9"
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.767173 4895 generic.go:334] "Generic (PLEG): container finished" podID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" containerID="2dcf34f3fe29065065b3fd17dfc5a8bc4ab25c12e16a68dfe8d233c6b8bf2a9a" exitCode=0
Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.767248 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmdtv" event={"ID":"c57d6deb-02dd-473c-b644-7ef7a1f8e500","Type":"ContainerDied","Data":"2dcf34f3fe29065065b3fd17dfc5a8bc4ab25c12e16a68dfe8d233c6b8bf2a9a"}
event={"ID":"c57d6deb-02dd-473c-b644-7ef7a1f8e500","Type":"ContainerDied","Data":"2dcf34f3fe29065065b3fd17dfc5a8bc4ab25c12e16a68dfe8d233c6b8bf2a9a"} Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.769176 4895 generic.go:334] "Generic (PLEG): container finished" podID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" containerID="b8751671bb6feb9de3320ae03878d3e3f7f7b6fa1cc14d34bfa326389f41ef0e" exitCode=0 Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.769247 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ddnd" event={"ID":"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75","Type":"ContainerDied","Data":"b8751671bb6feb9de3320ae03878d3e3f7f7b6fa1cc14d34bfa326389f41ef0e"} Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.770795 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-bnn9x_4dc1e914-43fd-450e-922c-6462f78105f9/marketplace-operator/1.log" Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.770830 4895 generic.go:334] "Generic (PLEG): container finished" podID="4dc1e914-43fd-450e-922c-6462f78105f9" containerID="57d15672bc7eaaaf1c9b52978882ff6684430b84d0958c2d13d10b273ad641ac" exitCode=0 Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.770883 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" event={"ID":"4dc1e914-43fd-450e-922c-6462f78105f9","Type":"ContainerDied","Data":"57d15672bc7eaaaf1c9b52978882ff6684430b84d0958c2d13d10b273ad641ac"} Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.770921 4895 scope.go:117] "RemoveContainer" containerID="6e86d3c0c43c5b6b339178db0faf8a48c4488aa25fd0105c8cf29084e7cf0b7d" Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.772746 4895 generic.go:334] "Generic (PLEG): container finished" podID="ad3d9f3b-cc45-4169-9476-b15937334205" containerID="a81df0973b975bc5f3297f7a671fe073a4d78a3fdd67a8d46426b4a5c8a0ea69" exitCode=0 Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.772798 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j9kw" event={"ID":"ad3d9f3b-cc45-4169-9476-b15937334205","Type":"ContainerDied","Data":"a81df0973b975bc5f3297f7a671fe073a4d78a3fdd67a8d46426b4a5c8a0ea69"} Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.775600 4895 generic.go:334] "Generic (PLEG): container finished" podID="c6ec729e-b8e3-42ad-84a0-ded336274afd" containerID="5757e029ec92541a12ac864d4586539610ed51835f67a429563e5e4a394de6f5" exitCode=0 Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.775634 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zwn6t" event={"ID":"c6ec729e-b8e3-42ad-84a0-ded336274afd","Type":"ContainerDied","Data":"5757e029ec92541a12ac864d4586539610ed51835f67a429563e5e4a394de6f5"} Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.813938 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcm8p\" (UniqueName: \"kubernetes.io/projected/9b65f0c8-5905-4b80-a9c8-1704be25ec8e-kube-api-access-vcm8p\") pod \"marketplace-operator-79b997595-hn9n9\" (UID: \"9b65f0c8-5905-4b80-a9c8-1704be25ec8e\") " pod="openshift-marketplace/marketplace-operator-79b997595-hn9n9" Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.813993 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9b65f0c8-5905-4b80-a9c8-1704be25ec8e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hn9n9\" (UID: \"9b65f0c8-5905-4b80-a9c8-1704be25ec8e\") " pod="openshift-marketplace/marketplace-operator-79b997595-hn9n9" Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.814072 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9b65f0c8-5905-4b80-a9c8-1704be25ec8e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hn9n9\" (UID: \"9b65f0c8-5905-4b80-a9c8-1704be25ec8e\") " pod="openshift-marketplace/marketplace-operator-79b997595-hn9n9" Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.815609 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b65f0c8-5905-4b80-a9c8-1704be25ec8e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hn9n9\" (UID: \"9b65f0c8-5905-4b80-a9c8-1704be25ec8e\") " pod="openshift-marketplace/marketplace-operator-79b997595-hn9n9" Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.822106 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9b65f0c8-5905-4b80-a9c8-1704be25ec8e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hn9n9\" (UID: \"9b65f0c8-5905-4b80-a9c8-1704be25ec8e\") " pod="openshift-marketplace/marketplace-operator-79b997595-hn9n9" Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.829163 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcm8p\" (UniqueName: \"kubernetes.io/projected/9b65f0c8-5905-4b80-a9c8-1704be25ec8e-kube-api-access-vcm8p\") pod \"marketplace-operator-79b997595-hn9n9\" (UID: \"9b65f0c8-5905-4b80-a9c8-1704be25ec8e\") " pod="openshift-marketplace/marketplace-operator-79b997595-hn9n9" Dec 06 07:04:23 crc kubenswrapper[4895]: I1206 07:04:23.935035 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hn9n9" Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.109384 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xmdtv" Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.141955 4895 util.go:48] "No ready sandbox for pod can be found. 
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.227684 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zcb8\" (UniqueName: \"kubernetes.io/projected/c57d6deb-02dd-473c-b644-7ef7a1f8e500-kube-api-access-4zcb8\") pod \"c57d6deb-02dd-473c-b644-7ef7a1f8e500\" (UID: \"c57d6deb-02dd-473c-b644-7ef7a1f8e500\") "
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.227736 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c57d6deb-02dd-473c-b644-7ef7a1f8e500-utilities\") pod \"c57d6deb-02dd-473c-b644-7ef7a1f8e500\" (UID: \"c57d6deb-02dd-473c-b644-7ef7a1f8e500\") "
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.227817 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c57d6deb-02dd-473c-b644-7ef7a1f8e500-catalog-content\") pod \"c57d6deb-02dd-473c-b644-7ef7a1f8e500\" (UID: \"c57d6deb-02dd-473c-b644-7ef7a1f8e500\") "
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.229704 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c57d6deb-02dd-473c-b644-7ef7a1f8e500-utilities" (OuterVolumeSpecName: "utilities") pod "c57d6deb-02dd-473c-b644-7ef7a1f8e500" (UID: "c57d6deb-02dd-473c-b644-7ef7a1f8e500"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.234383 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c57d6deb-02dd-473c-b644-7ef7a1f8e500-kube-api-access-4zcb8" (OuterVolumeSpecName: "kube-api-access-4zcb8") pod "c57d6deb-02dd-473c-b644-7ef7a1f8e500" (UID: "c57d6deb-02dd-473c-b644-7ef7a1f8e500"). InnerVolumeSpecName "kube-api-access-4zcb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.238905 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hn9n9"]
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.246692 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9ddnd"
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.255282 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c57d6deb-02dd-473c-b644-7ef7a1f8e500-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c57d6deb-02dd-473c-b644-7ef7a1f8e500" (UID: "c57d6deb-02dd-473c-b644-7ef7a1f8e500"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.329167 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc4vj\" (UniqueName: \"kubernetes.io/projected/4dc1e914-43fd-450e-922c-6462f78105f9-kube-api-access-wc4vj\") pod \"4dc1e914-43fd-450e-922c-6462f78105f9\" (UID: \"4dc1e914-43fd-450e-922c-6462f78105f9\") "
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.329273 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4dc1e914-43fd-450e-922c-6462f78105f9-marketplace-operator-metrics\") pod \"4dc1e914-43fd-450e-922c-6462f78105f9\" (UID: \"4dc1e914-43fd-450e-922c-6462f78105f9\") "
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.329313 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4dc1e914-43fd-450e-922c-6462f78105f9-marketplace-trusted-ca\") pod \"4dc1e914-43fd-450e-922c-6462f78105f9\" (UID: \"4dc1e914-43fd-450e-922c-6462f78105f9\") "
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.329562 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c57d6deb-02dd-473c-b644-7ef7a1f8e500-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.329580 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zcb8\" (UniqueName: \"kubernetes.io/projected/c57d6deb-02dd-473c-b644-7ef7a1f8e500-kube-api-access-4zcb8\") on node \"crc\" DevicePath \"\""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.329592 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c57d6deb-02dd-473c-b644-7ef7a1f8e500-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.331125 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dc1e914-43fd-450e-922c-6462f78105f9-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "4dc1e914-43fd-450e-922c-6462f78105f9" (UID: "4dc1e914-43fd-450e-922c-6462f78105f9"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.334013 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dc1e914-43fd-450e-922c-6462f78105f9-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "4dc1e914-43fd-450e-922c-6462f78105f9" (UID: "4dc1e914-43fd-450e-922c-6462f78105f9"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.334134 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dc1e914-43fd-450e-922c-6462f78105f9-kube-api-access-wc4vj" (OuterVolumeSpecName: "kube-api-access-wc4vj") pod "4dc1e914-43fd-450e-922c-6462f78105f9" (UID: "4dc1e914-43fd-450e-922c-6462f78105f9"). InnerVolumeSpecName "kube-api-access-wc4vj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.430981 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7424c4ea-3bfe-4af7-afc3-e2ed98d36e75-catalog-content\") pod \"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75\" (UID: \"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75\") "
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.431069 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f55kb\" (UniqueName: \"kubernetes.io/projected/7424c4ea-3bfe-4af7-afc3-e2ed98d36e75-kube-api-access-f55kb\") pod \"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75\" (UID: \"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75\") "
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.431224 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7424c4ea-3bfe-4af7-afc3-e2ed98d36e75-utilities\") pod \"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75\" (UID: \"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75\") "
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.431585 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc4vj\" (UniqueName: \"kubernetes.io/projected/4dc1e914-43fd-450e-922c-6462f78105f9-kube-api-access-wc4vj\") on node \"crc\" DevicePath \"\""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.431620 4895 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4dc1e914-43fd-450e-922c-6462f78105f9-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.431630 4895 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4dc1e914-43fd-450e-922c-6462f78105f9-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.432233 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7424c4ea-3bfe-4af7-afc3-e2ed98d36e75-utilities" (OuterVolumeSpecName: "utilities") pod "7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" (UID: "7424c4ea-3bfe-4af7-afc3-e2ed98d36e75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.434978 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7424c4ea-3bfe-4af7-afc3-e2ed98d36e75-kube-api-access-f55kb" (OuterVolumeSpecName: "kube-api-access-f55kb") pod "7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" (UID: "7424c4ea-3bfe-4af7-afc3-e2ed98d36e75"). InnerVolumeSpecName "kube-api-access-f55kb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.533684 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7424c4ea-3bfe-4af7-afc3-e2ed98d36e75-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.533762 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f55kb\" (UniqueName: \"kubernetes.io/projected/7424c4ea-3bfe-4af7-afc3-e2ed98d36e75-kube-api-access-f55kb\") on node \"crc\" DevicePath \"\""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.546817 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7424c4ea-3bfe-4af7-afc3-e2ed98d36e75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" (UID: "7424c4ea-3bfe-4af7-afc3-e2ed98d36e75"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.634677 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7424c4ea-3bfe-4af7-afc3-e2ed98d36e75-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.768595 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zwn6t"
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.783517 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x" event={"ID":"4dc1e914-43fd-450e-922c-6462f78105f9","Type":"ContainerDied","Data":"06e98f62be8d971557eaf82d11c83af6366c980308b310cbd2734998858bc8f6"}
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.783580 4895 scope.go:117] "RemoveContainer" containerID="57d15672bc7eaaaf1c9b52978882ff6684430b84d0958c2d13d10b273ad641ac"
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.783688 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bnn9x"
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.792816 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zwn6t" event={"ID":"c6ec729e-b8e3-42ad-84a0-ded336274afd","Type":"ContainerDied","Data":"78e944e8e944db6e735cb6a69ae7cd5a3f89004c1bf7e0264841c11ee4a0c105"}
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.792974 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zwn6t"
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.803262 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ddnd" event={"ID":"7424c4ea-3bfe-4af7-afc3-e2ed98d36e75","Type":"ContainerDied","Data":"d52683215c65cb717282c1e1d421f2ff36cf526709a83062b5c370659941a86b"}
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.803402 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9ddnd"
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.812125 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xmdtv"
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.812140 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmdtv" event={"ID":"c57d6deb-02dd-473c-b644-7ef7a1f8e500","Type":"ContainerDied","Data":"a4d0e2bdef3aea6817f817b37e44e66ff8ca88edb97fa38bd2e6669663544143"}
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.823300 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hn9n9" event={"ID":"9b65f0c8-5905-4b80-a9c8-1704be25ec8e","Type":"ContainerStarted","Data":"523eb2ce1a26abd663194ef1963917d3a003bd9e75baa573a0bb62e6904eeb83"}
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.823345 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hn9n9" event={"ID":"9b65f0c8-5905-4b80-a9c8-1704be25ec8e","Type":"ContainerStarted","Data":"6c56961e16f4c68768f7285733da473634f2139104d602b039b8a22bfb8f8007"}
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.823633 4895 scope.go:117] "RemoveContainer" containerID="5757e029ec92541a12ac864d4586539610ed51835f67a429563e5e4a394de6f5"
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.825268 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bnn9x"]
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.832116 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bnn9x"]
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.860018 4895 scope.go:117] "RemoveContainer" containerID="419f4b441bb996779e15606f1a7cf9543e782abe52f4fb95335d53d92b68b572"
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.887254 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9ddnd"]
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.901238 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9ddnd"]
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.911736 4895 scope.go:117] "RemoveContainer" containerID="2d07d427e3c9fcac5716f08aefab874d4138ff27963b69a4c9e3554a9c9d11c7"
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.915386 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmdtv"]
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.919534 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2j9kw"
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.924208 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmdtv"]
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.932034 4895 scope.go:117] "RemoveContainer" containerID="b8751671bb6feb9de3320ae03878d3e3f7f7b6fa1cc14d34bfa326389f41ef0e"
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.965662 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8552\" (UniqueName: \"kubernetes.io/projected/c6ec729e-b8e3-42ad-84a0-ded336274afd-kube-api-access-n8552\") pod \"c6ec729e-b8e3-42ad-84a0-ded336274afd\" (UID: \"c6ec729e-b8e3-42ad-84a0-ded336274afd\") "
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.966180 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6ec729e-b8e3-42ad-84a0-ded336274afd-utilities\") pod \"c6ec729e-b8e3-42ad-84a0-ded336274afd\" (UID: \"c6ec729e-b8e3-42ad-84a0-ded336274afd\") "
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.968108 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6ec729e-b8e3-42ad-84a0-ded336274afd-catalog-content\") pod \"c6ec729e-b8e3-42ad-84a0-ded336274afd\" (UID: \"c6ec729e-b8e3-42ad-84a0-ded336274afd\") "
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.968022 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6ec729e-b8e3-42ad-84a0-ded336274afd-utilities" (OuterVolumeSpecName: "utilities") pod "c6ec729e-b8e3-42ad-84a0-ded336274afd" (UID: "c6ec729e-b8e3-42ad-84a0-ded336274afd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.969219 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6ec729e-b8e3-42ad-84a0-ded336274afd-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.979350 4895 scope.go:117] "RemoveContainer" containerID="4d1dca11ac49e7119a5d20d66e05dd31597629ff2edfc131a9d5c3622146e54d"
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.979361 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6ec729e-b8e3-42ad-84a0-ded336274afd-kube-api-access-n8552" (OuterVolumeSpecName: "kube-api-access-n8552") pod "c6ec729e-b8e3-42ad-84a0-ded336274afd" (UID: "c6ec729e-b8e3-42ad-84a0-ded336274afd"). InnerVolumeSpecName "kube-api-access-n8552". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:04:24 crc kubenswrapper[4895]: I1206 07:04:24.999784 4895 scope.go:117] "RemoveContainer" containerID="44e47111f8d2d809d4054ec2df48af831ad0fddea70be4156115cc27abb61945"
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.014738 4895 scope.go:117] "RemoveContainer" containerID="2dcf34f3fe29065065b3fd17dfc5a8bc4ab25c12e16a68dfe8d233c6b8bf2a9a"
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.035596 4895 scope.go:117] "RemoveContainer" containerID="42d87fe5ad9f66e22b70125fc95763264186389d20120f67b49bb9f62839c98c"
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.035852 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6ec729e-b8e3-42ad-84a0-ded336274afd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6ec729e-b8e3-42ad-84a0-ded336274afd" (UID: "c6ec729e-b8e3-42ad-84a0-ded336274afd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.057320 4895 scope.go:117] "RemoveContainer" containerID="d85f4525280512d8721506853b36c79e6d2d55e6ce09141df359be413123c443"
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.070917 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs9sf\" (UniqueName: \"kubernetes.io/projected/ad3d9f3b-cc45-4169-9476-b15937334205-kube-api-access-rs9sf\") pod \"ad3d9f3b-cc45-4169-9476-b15937334205\" (UID: \"ad3d9f3b-cc45-4169-9476-b15937334205\") "
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.070979 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad3d9f3b-cc45-4169-9476-b15937334205-utilities\") pod \"ad3d9f3b-cc45-4169-9476-b15937334205\" (UID: \"ad3d9f3b-cc45-4169-9476-b15937334205\") "
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.071007 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad3d9f3b-cc45-4169-9476-b15937334205-catalog-content\") pod \"ad3d9f3b-cc45-4169-9476-b15937334205\" (UID: \"ad3d9f3b-cc45-4169-9476-b15937334205\") "
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.071308 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6ec729e-b8e3-42ad-84a0-ded336274afd-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.071331 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8552\" (UniqueName: \"kubernetes.io/projected/c6ec729e-b8e3-42ad-84a0-ded336274afd-kube-api-access-n8552\") on node \"crc\" DevicePath \"\""
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.072010 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad3d9f3b-cc45-4169-9476-b15937334205-utilities" (OuterVolumeSpecName: "utilities") pod "ad3d9f3b-cc45-4169-9476-b15937334205" (UID: "ad3d9f3b-cc45-4169-9476-b15937334205"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.074531 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad3d9f3b-cc45-4169-9476-b15937334205-kube-api-access-rs9sf" (OuterVolumeSpecName: "kube-api-access-rs9sf") pod "ad3d9f3b-cc45-4169-9476-b15937334205" (UID: "ad3d9f3b-cc45-4169-9476-b15937334205"). InnerVolumeSpecName "kube-api-access-rs9sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.124809 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zwn6t"] Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.126540 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad3d9f3b-cc45-4169-9476-b15937334205-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad3d9f3b-cc45-4169-9476-b15937334205" (UID: "ad3d9f3b-cc45-4169-9476-b15937334205"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.130265 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zwn6t"] Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.171994 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs9sf\" (UniqueName: \"kubernetes.io/projected/ad3d9f3b-cc45-4169-9476-b15937334205-kube-api-access-rs9sf\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.172022 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad3d9f3b-cc45-4169-9476-b15937334205-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.172031 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad3d9f3b-cc45-4169-9476-b15937334205-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790072 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zms68"] Dec 06 07:04:25 crc kubenswrapper[4895]: E1206 07:04:25.790253 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" containerName="extract-utilities" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790265 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" containerName="extract-utilities" Dec 06 07:04:25 crc kubenswrapper[4895]: E1206 07:04:25.790275 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc1e914-43fd-450e-922c-6462f78105f9" containerName="marketplace-operator" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790281 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc1e914-43fd-450e-922c-6462f78105f9" containerName="marketplace-operator" Dec 06 07:04:25 crc kubenswrapper[4895]: E1206 07:04:25.790292 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" containerName="extract-content" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790300 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" containerName="extract-content" Dec 06 07:04:25 crc kubenswrapper[4895]: E1206 
07:04:25.790308 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" containerName="registry-server" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790315 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" containerName="registry-server" Dec 06 07:04:25 crc kubenswrapper[4895]: E1206 07:04:25.790327 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" containerName="extract-content" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790362 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" containerName="extract-content" Dec 06 07:04:25 crc kubenswrapper[4895]: E1206 07:04:25.790374 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc1e914-43fd-450e-922c-6462f78105f9" containerName="marketplace-operator" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790381 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc1e914-43fd-450e-922c-6462f78105f9" containerName="marketplace-operator" Dec 06 07:04:25 crc kubenswrapper[4895]: E1206 07:04:25.790390 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc1e914-43fd-450e-922c-6462f78105f9" containerName="marketplace-operator" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790397 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc1e914-43fd-450e-922c-6462f78105f9" containerName="marketplace-operator" Dec 06 07:04:25 crc kubenswrapper[4895]: E1206 07:04:25.790406 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" containerName="registry-server" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790412 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" containerName="registry-server" Dec 06 07:04:25 crc kubenswrapper[4895]: E1206 07:04:25.790427 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" containerName="extract-utilities" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790435 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" containerName="extract-utilities" Dec 06 07:04:25 crc kubenswrapper[4895]: E1206 07:04:25.790445 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" containerName="extract-content" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790451 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" containerName="extract-content" Dec 06 07:04:25 crc kubenswrapper[4895]: E1206 07:04:25.790463 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" containerName="registry-server" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790488 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" containerName="registry-server" Dec 06 07:04:25 crc kubenswrapper[4895]: E1206 07:04:25.790497 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" containerName="extract-utilities" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790503 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" containerName="extract-utilities" 
Dec 06 07:04:25 crc kubenswrapper[4895]: E1206 07:04:25.790512 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" containerName="registry-server" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790519 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" containerName="registry-server" Dec 06 07:04:25 crc kubenswrapper[4895]: E1206 07:04:25.790529 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" containerName="extract-utilities" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790534 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" containerName="extract-utilities" Dec 06 07:04:25 crc kubenswrapper[4895]: E1206 07:04:25.790544 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" containerName="extract-content" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790549 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" containerName="extract-content" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790637 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" containerName="registry-server" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790645 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dc1e914-43fd-450e-922c-6462f78105f9" containerName="marketplace-operator" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790652 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" containerName="registry-server" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790660 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dc1e914-43fd-450e-922c-6462f78105f9" containerName="marketplace-operator" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790669 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" containerName="registry-server" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790677 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" containerName="registry-server" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.790831 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dc1e914-43fd-450e-922c-6462f78105f9" containerName="marketplace-operator" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.791336 4895 util.go:30] "No sandbox for pod can be found. 
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.793180 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.802495 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zms68"]
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.834886 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j9kw" event={"ID":"ad3d9f3b-cc45-4169-9476-b15937334205","Type":"ContainerDied","Data":"29a7d5c58dcbe5f4b597ed541283960df2bedb16d565eeadfb02c697153cf36f"}
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.834943 4895 scope.go:117] "RemoveContainer" containerID="a81df0973b975bc5f3297f7a671fe073a4d78a3fdd67a8d46426b4a5c8a0ea69"
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.834899 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2j9kw"
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.836192 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hn9n9"
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.843647 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hn9n9"
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.858827 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hn9n9" podStartSLOduration=2.8588059980000002 podStartE2EDuration="2.858805998s" podCreationTimestamp="2025-12-06 07:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:04:25.856848335 +0000 UTC m=+428.258237235" watchObservedRunningTime="2025-12-06 07:04:25.858805998 +0000 UTC m=+428.260194868"
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.862967 4895 scope.go:117] "RemoveContainer" containerID="03b0c7b7187aa2f9342894a9e567a35de43977b2ffd8b9d1c58f469e6f967eb6"
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.885626 4895 scope.go:117] "RemoveContainer" containerID="6364b9b146e8f8b504e14777abe398f4207fde607a976da5bb244b1b1f60ef9b"
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.889000 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2j9kw"]
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.894077 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2j9kw"]
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.981111 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5c2b\" (UniqueName: \"kubernetes.io/projected/6bbad132-928f-4f02-bbfc-a6b66eeec395-kube-api-access-r5c2b\") pod \"redhat-marketplace-zms68\" (UID: \"6bbad132-928f-4f02-bbfc-a6b66eeec395\") " pod="openshift-marketplace/redhat-marketplace-zms68"
Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.981201 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bbad132-928f-4f02-bbfc-a6b66eeec395-catalog-content\") pod \"redhat-marketplace-zms68\" (UID: \"6bbad132-928f-4f02-bbfc-a6b66eeec395\") " pod="openshift-marketplace/redhat-marketplace-zms68"
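In the pod_startup_latency_tracker entry above, podStartE2EDuration is simply watchObservedRunningTime minus podCreationTimestamp: 07:04:25.858805998 - 07:04:23 = 2.858805998s, and the podStartSLOduration float prints the same value with float64 rounding noise (the zeroed firstStartedPulling/lastFinishedPulling mean no image pull was observed). Verifying the arithmetic with the timestamps copied from the log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Layout matches Go's default time.Time formatting, which is how the
    	// kubelet printed these values.
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, _ := time.Parse(layout, "2025-12-06 07:04:23 +0000 UTC")
    	running, _ := time.Parse(layout, "2025-12-06 07:04:25.858805998 +0000 UTC")
    	fmt.Println(running.Sub(created)) // 2.858805998s, matching podStartE2EDuration
    }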
\"redhat-marketplace-zms68\" (UID: \"6bbad132-928f-4f02-bbfc-a6b66eeec395\") " pod="openshift-marketplace/redhat-marketplace-zms68" Dec 06 07:04:25 crc kubenswrapper[4895]: I1206 07:04:25.981253 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bbad132-928f-4f02-bbfc-a6b66eeec395-utilities\") pod \"redhat-marketplace-zms68\" (UID: \"6bbad132-928f-4f02-bbfc-a6b66eeec395\") " pod="openshift-marketplace/redhat-marketplace-zms68" Dec 06 07:04:26 crc kubenswrapper[4895]: I1206 07:04:26.056303 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dc1e914-43fd-450e-922c-6462f78105f9" path="/var/lib/kubelet/pods/4dc1e914-43fd-450e-922c-6462f78105f9/volumes" Dec 06 07:04:26 crc kubenswrapper[4895]: I1206 07:04:26.056836 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7424c4ea-3bfe-4af7-afc3-e2ed98d36e75" path="/var/lib/kubelet/pods/7424c4ea-3bfe-4af7-afc3-e2ed98d36e75/volumes" Dec 06 07:04:26 crc kubenswrapper[4895]: I1206 07:04:26.057388 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad3d9f3b-cc45-4169-9476-b15937334205" path="/var/lib/kubelet/pods/ad3d9f3b-cc45-4169-9476-b15937334205/volumes" Dec 06 07:04:26 crc kubenswrapper[4895]: I1206 07:04:26.058427 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c57d6deb-02dd-473c-b644-7ef7a1f8e500" path="/var/lib/kubelet/pods/c57d6deb-02dd-473c-b644-7ef7a1f8e500/volumes" Dec 06 07:04:26 crc kubenswrapper[4895]: I1206 07:04:26.059062 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6ec729e-b8e3-42ad-84a0-ded336274afd" path="/var/lib/kubelet/pods/c6ec729e-b8e3-42ad-84a0-ded336274afd/volumes" Dec 06 07:04:26 crc kubenswrapper[4895]: I1206 07:04:26.081911 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5c2b\" (UniqueName: \"kubernetes.io/projected/6bbad132-928f-4f02-bbfc-a6b66eeec395-kube-api-access-r5c2b\") pod \"redhat-marketplace-zms68\" (UID: \"6bbad132-928f-4f02-bbfc-a6b66eeec395\") " pod="openshift-marketplace/redhat-marketplace-zms68" Dec 06 07:04:26 crc kubenswrapper[4895]: I1206 07:04:26.081993 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bbad132-928f-4f02-bbfc-a6b66eeec395-catalog-content\") pod \"redhat-marketplace-zms68\" (UID: \"6bbad132-928f-4f02-bbfc-a6b66eeec395\") " pod="openshift-marketplace/redhat-marketplace-zms68" Dec 06 07:04:26 crc kubenswrapper[4895]: I1206 07:04:26.082076 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bbad132-928f-4f02-bbfc-a6b66eeec395-utilities\") pod \"redhat-marketplace-zms68\" (UID: \"6bbad132-928f-4f02-bbfc-a6b66eeec395\") " pod="openshift-marketplace/redhat-marketplace-zms68" Dec 06 07:04:26 crc kubenswrapper[4895]: I1206 07:04:26.082635 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bbad132-928f-4f02-bbfc-a6b66eeec395-utilities\") pod \"redhat-marketplace-zms68\" (UID: \"6bbad132-928f-4f02-bbfc-a6b66eeec395\") " pod="openshift-marketplace/redhat-marketplace-zms68" Dec 06 07:04:26 crc kubenswrapper[4895]: I1206 07:04:26.083672 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6bbad132-928f-4f02-bbfc-a6b66eeec395-catalog-content\") pod \"redhat-marketplace-zms68\" (UID: \"6bbad132-928f-4f02-bbfc-a6b66eeec395\") " pod="openshift-marketplace/redhat-marketplace-zms68" Dec 06 07:04:26 crc kubenswrapper[4895]: I1206 07:04:26.106061 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5c2b\" (UniqueName: \"kubernetes.io/projected/6bbad132-928f-4f02-bbfc-a6b66eeec395-kube-api-access-r5c2b\") pod \"redhat-marketplace-zms68\" (UID: \"6bbad132-928f-4f02-bbfc-a6b66eeec395\") " pod="openshift-marketplace/redhat-marketplace-zms68" Dec 06 07:04:26 crc kubenswrapper[4895]: I1206 07:04:26.123334 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zms68" Dec 06 07:04:26 crc kubenswrapper[4895]: I1206 07:04:26.542045 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zms68"] Dec 06 07:04:26 crc kubenswrapper[4895]: W1206 07:04:26.544835 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bbad132_928f_4f02_bbfc_a6b66eeec395.slice/crio-e7f0a10d48a2f4a57c6cc39fc542d68aa4d4415798eaa485fbe652629c3eda23 WatchSource:0}: Error finding container e7f0a10d48a2f4a57c6cc39fc542d68aa4d4415798eaa485fbe652629c3eda23: Status 404 returned error can't find the container with id e7f0a10d48a2f4a57c6cc39fc542d68aa4d4415798eaa485fbe652629c3eda23 Dec 06 07:04:26 crc kubenswrapper[4895]: I1206 07:04:26.854076 4895 generic.go:334] "Generic (PLEG): container finished" podID="6bbad132-928f-4f02-bbfc-a6b66eeec395" containerID="9cbadc458eb1676382834b7a10e80ca55ffc4ca191713f9a6b2d6d6fc4b55ba2" exitCode=0 Dec 06 07:04:26 crc kubenswrapper[4895]: I1206 07:04:26.855597 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zms68" event={"ID":"6bbad132-928f-4f02-bbfc-a6b66eeec395","Type":"ContainerDied","Data":"9cbadc458eb1676382834b7a10e80ca55ffc4ca191713f9a6b2d6d6fc4b55ba2"} Dec 06 07:04:26 crc kubenswrapper[4895]: I1206 07:04:26.855662 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zms68" event={"ID":"6bbad132-928f-4f02-bbfc-a6b66eeec395","Type":"ContainerStarted","Data":"e7f0a10d48a2f4a57c6cc39fc542d68aa4d4415798eaa485fbe652629c3eda23"} Dec 06 07:04:27 crc kubenswrapper[4895]: I1206 07:04:27.995330 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wkv7v"] Dec 06 07:04:27 crc kubenswrapper[4895]: I1206 07:04:27.996963 4895 util.go:30] "No sandbox for pod can be found. 
Dec 06 07:04:27 crc kubenswrapper[4895]: I1206 07:04:27.999649 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.010698 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wkv7v"]
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.108497 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2611889f-6582-4711-8d3a-c93dd57ba6fc-catalog-content\") pod \"redhat-operators-wkv7v\" (UID: \"2611889f-6582-4711-8d3a-c93dd57ba6fc\") " pod="openshift-marketplace/redhat-operators-wkv7v"
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.108554 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2611889f-6582-4711-8d3a-c93dd57ba6fc-utilities\") pod \"redhat-operators-wkv7v\" (UID: \"2611889f-6582-4711-8d3a-c93dd57ba6fc\") " pod="openshift-marketplace/redhat-operators-wkv7v"
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.108597 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4klc5\" (UniqueName: \"kubernetes.io/projected/2611889f-6582-4711-8d3a-c93dd57ba6fc-kube-api-access-4klc5\") pod \"redhat-operators-wkv7v\" (UID: \"2611889f-6582-4711-8d3a-c93dd57ba6fc\") " pod="openshift-marketplace/redhat-operators-wkv7v"
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.183920 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4d8pr"]
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.184822 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4d8pr"
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.186584 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.198191 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4d8pr"]
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.210256 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2611889f-6582-4711-8d3a-c93dd57ba6fc-catalog-content\") pod \"redhat-operators-wkv7v\" (UID: \"2611889f-6582-4711-8d3a-c93dd57ba6fc\") " pod="openshift-marketplace/redhat-operators-wkv7v"
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.210297 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2611889f-6582-4711-8d3a-c93dd57ba6fc-utilities\") pod \"redhat-operators-wkv7v\" (UID: \"2611889f-6582-4711-8d3a-c93dd57ba6fc\") " pod="openshift-marketplace/redhat-operators-wkv7v"
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.210338 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4klc5\" (UniqueName: \"kubernetes.io/projected/2611889f-6582-4711-8d3a-c93dd57ba6fc-kube-api-access-4klc5\") pod \"redhat-operators-wkv7v\" (UID: \"2611889f-6582-4711-8d3a-c93dd57ba6fc\") " pod="openshift-marketplace/redhat-operators-wkv7v"
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.211076 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2611889f-6582-4711-8d3a-c93dd57ba6fc-catalog-content\") pod \"redhat-operators-wkv7v\" (UID: \"2611889f-6582-4711-8d3a-c93dd57ba6fc\") " pod="openshift-marketplace/redhat-operators-wkv7v"
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.211118 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2611889f-6582-4711-8d3a-c93dd57ba6fc-utilities\") pod \"redhat-operators-wkv7v\" (UID: \"2611889f-6582-4711-8d3a-c93dd57ba6fc\") " pod="openshift-marketplace/redhat-operators-wkv7v"
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.232021 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4klc5\" (UniqueName: \"kubernetes.io/projected/2611889f-6582-4711-8d3a-c93dd57ba6fc-kube-api-access-4klc5\") pod \"redhat-operators-wkv7v\" (UID: \"2611889f-6582-4711-8d3a-c93dd57ba6fc\") " pod="openshift-marketplace/redhat-operators-wkv7v"
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.311023 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ed02d8a-6fba-458b-bab9-e595922d1f1f-utilities\") pod \"certified-operators-4d8pr\" (UID: \"4ed02d8a-6fba-458b-bab9-e595922d1f1f\") " pod="openshift-marketplace/certified-operators-4d8pr"
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.311106 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfqb9\" (UniqueName: \"kubernetes.io/projected/4ed02d8a-6fba-458b-bab9-e595922d1f1f-kube-api-access-nfqb9\") pod \"certified-operators-4d8pr\" (UID: \"4ed02d8a-6fba-458b-bab9-e595922d1f1f\") " pod="openshift-marketplace/certified-operators-4d8pr"
pod="openshift-marketplace/certified-operators-4d8pr" Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.311165 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ed02d8a-6fba-458b-bab9-e595922d1f1f-catalog-content\") pod \"certified-operators-4d8pr\" (UID: \"4ed02d8a-6fba-458b-bab9-e595922d1f1f\") " pod="openshift-marketplace/certified-operators-4d8pr" Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.317060 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wkv7v" Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.430640 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ed02d8a-6fba-458b-bab9-e595922d1f1f-utilities\") pod \"certified-operators-4d8pr\" (UID: \"4ed02d8a-6fba-458b-bab9-e595922d1f1f\") " pod="openshift-marketplace/certified-operators-4d8pr" Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.430924 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfqb9\" (UniqueName: \"kubernetes.io/projected/4ed02d8a-6fba-458b-bab9-e595922d1f1f-kube-api-access-nfqb9\") pod \"certified-operators-4d8pr\" (UID: \"4ed02d8a-6fba-458b-bab9-e595922d1f1f\") " pod="openshift-marketplace/certified-operators-4d8pr" Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.430958 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ed02d8a-6fba-458b-bab9-e595922d1f1f-catalog-content\") pod \"certified-operators-4d8pr\" (UID: \"4ed02d8a-6fba-458b-bab9-e595922d1f1f\") " pod="openshift-marketplace/certified-operators-4d8pr" Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.431284 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ed02d8a-6fba-458b-bab9-e595922d1f1f-utilities\") pod \"certified-operators-4d8pr\" (UID: \"4ed02d8a-6fba-458b-bab9-e595922d1f1f\") " pod="openshift-marketplace/certified-operators-4d8pr" Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.431308 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ed02d8a-6fba-458b-bab9-e595922d1f1f-catalog-content\") pod \"certified-operators-4d8pr\" (UID: \"4ed02d8a-6fba-458b-bab9-e595922d1f1f\") " pod="openshift-marketplace/certified-operators-4d8pr" Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.449436 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfqb9\" (UniqueName: \"kubernetes.io/projected/4ed02d8a-6fba-458b-bab9-e595922d1f1f-kube-api-access-nfqb9\") pod \"certified-operators-4d8pr\" (UID: \"4ed02d8a-6fba-458b-bab9-e595922d1f1f\") " pod="openshift-marketplace/certified-operators-4d8pr" Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.500958 4895 util.go:30] "No sandbox for pod can be found. 
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.540783 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wkv7v"]
Dec 06 07:04:28 crc kubenswrapper[4895]: W1206 07:04:28.550728 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2611889f_6582_4711_8d3a_c93dd57ba6fc.slice/crio-669c348684c73a0cde35b884bd34297923a140a0f581ef63badc8a35bcd33229 WatchSource:0}: Error finding container 669c348684c73a0cde35b884bd34297923a140a0f581ef63badc8a35bcd33229: Status 404 returned error can't find the container with id 669c348684c73a0cde35b884bd34297923a140a0f581ef63badc8a35bcd33229
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.865115 4895 generic.go:334] "Generic (PLEG): container finished" podID="2611889f-6582-4711-8d3a-c93dd57ba6fc" containerID="5e429003aacb014b7312fafc4948894f131ce98182a4ca80b1eaf604d0685b80" exitCode=0
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.865163 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkv7v" event={"ID":"2611889f-6582-4711-8d3a-c93dd57ba6fc","Type":"ContainerDied","Data":"5e429003aacb014b7312fafc4948894f131ce98182a4ca80b1eaf604d0685b80"}
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.865192 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkv7v" event={"ID":"2611889f-6582-4711-8d3a-c93dd57ba6fc","Type":"ContainerStarted","Data":"669c348684c73a0cde35b884bd34297923a140a0f581ef63badc8a35bcd33229"}
Dec 06 07:04:28 crc kubenswrapper[4895]: I1206 07:04:28.900132 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4d8pr"]
Dec 06 07:04:28 crc kubenswrapper[4895]: W1206 07:04:28.923585 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ed02d8a_6fba_458b_bab9_e595922d1f1f.slice/crio-7b6e996d140e94f451bccfb3a24d4251c529c51c387c8893daa01c9d74f81037 WatchSource:0}: Error finding container 7b6e996d140e94f451bccfb3a24d4251c529c51c387c8893daa01c9d74f81037: Status 404 returned error can't find the container with id 7b6e996d140e94f451bccfb3a24d4251c529c51c387c8893daa01c9d74f81037
Dec 06 07:04:29 crc kubenswrapper[4895]: I1206 07:04:29.695335 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 07:04:29 crc kubenswrapper[4895]: I1206 07:04:29.695645 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 07:04:29 crc kubenswrapper[4895]: I1206 07:04:29.695687 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2"
Dec 06 07:04:29 crc kubenswrapper[4895]: I1206 07:04:29.696150 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c18110f799eaeab538c5a5c8ddde83eb9d7b4f8f79d73523c855405598e83192"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 07:04:29 crc kubenswrapper[4895]: I1206 07:04:29.696193 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://c18110f799eaeab538c5a5c8ddde83eb9d7b4f8f79d73523c855405598e83192" gracePeriod=600
Dec 06 07:04:29 crc kubenswrapper[4895]: I1206 07:04:29.870679 4895 generic.go:334] "Generic (PLEG): container finished" podID="4ed02d8a-6fba-458b-bab9-e595922d1f1f" containerID="50dd7c18b982ff47a426424a09fbd7cf93f8344a53c50e08c658b477b9b26cd7" exitCode=0
Dec 06 07:04:29 crc kubenswrapper[4895]: I1206 07:04:29.870779 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d8pr" event={"ID":"4ed02d8a-6fba-458b-bab9-e595922d1f1f","Type":"ContainerDied","Data":"50dd7c18b982ff47a426424a09fbd7cf93f8344a53c50e08c658b477b9b26cd7"}
Dec 06 07:04:29 crc kubenswrapper[4895]: I1206 07:04:29.870826 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d8pr" event={"ID":"4ed02d8a-6fba-458b-bab9-e595922d1f1f","Type":"ContainerStarted","Data":"7b6e996d140e94f451bccfb3a24d4251c529c51c387c8893daa01c9d74f81037"}
Dec 06 07:04:29 crc kubenswrapper[4895]: I1206 07:04:29.873144 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkv7v" event={"ID":"2611889f-6582-4711-8d3a-c93dd57ba6fc","Type":"ContainerStarted","Data":"aaa1661dd5c1b7489fcafd02e4ad2b8dfcb3dc775ce68c412faf2c11a77f6552"}
Dec 06 07:04:29 crc kubenswrapper[4895]: I1206 07:04:29.878980 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="c18110f799eaeab538c5a5c8ddde83eb9d7b4f8f79d73523c855405598e83192" exitCode=0
Dec 06 07:04:29 crc kubenswrapper[4895]: I1206 07:04:29.879048 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"c18110f799eaeab538c5a5c8ddde83eb9d7b4f8f79d73523c855405598e83192"}
Dec 06 07:04:29 crc kubenswrapper[4895]: I1206 07:04:29.879092 4895 scope.go:117] "RemoveContainer" containerID="b4a5ada8b9b7b0b71f20aa7804422775ef7262cbaf3a079b17776696ce428d57"
Dec 06 07:04:29 crc kubenswrapper[4895]: I1206 07:04:29.881284 4895 generic.go:334] "Generic (PLEG): container finished" podID="6bbad132-928f-4f02-bbfc-a6b66eeec395" containerID="b47871b83aacefb2a12037b2b8f9f4635360d59a5040d5fc3bc6d693cf67ffb4" exitCode=0
Dec 06 07:04:29 crc kubenswrapper[4895]: I1206 07:04:29.881307 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zms68" event={"ID":"6bbad132-928f-4f02-bbfc-a6b66eeec395","Type":"ContainerDied","Data":"b47871b83aacefb2a12037b2b8f9f4635360d59a5040d5fc3bc6d693cf67ffb4"}
Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.389951 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jpbcj"]
Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.392590 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jpbcj"
Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.395286 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.395654 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jpbcj"]
Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.457902 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f772ae1-b7e4-4762-ad35-7d9ca598ef8c-catalog-content\") pod \"community-operators-jpbcj\" (UID: \"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c\") " pod="openshift-marketplace/community-operators-jpbcj"
Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.457938 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f772ae1-b7e4-4762-ad35-7d9ca598ef8c-utilities\") pod \"community-operators-jpbcj\" (UID: \"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c\") " pod="openshift-marketplace/community-operators-jpbcj"
Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.457993 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lgk6\" (UniqueName: \"kubernetes.io/projected/0f772ae1-b7e4-4762-ad35-7d9ca598ef8c-kube-api-access-5lgk6\") pod \"community-operators-jpbcj\" (UID: \"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c\") " pod="openshift-marketplace/community-operators-jpbcj"
Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.562105 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f772ae1-b7e4-4762-ad35-7d9ca598ef8c-catalog-content\") pod \"community-operators-jpbcj\" (UID: \"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c\") " pod="openshift-marketplace/community-operators-jpbcj"
Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.562172 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f772ae1-b7e4-4762-ad35-7d9ca598ef8c-utilities\") pod \"community-operators-jpbcj\" (UID: \"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c\") " pod="openshift-marketplace/community-operators-jpbcj"
Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.562244 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lgk6\" (UniqueName: \"kubernetes.io/projected/0f772ae1-b7e4-4762-ad35-7d9ca598ef8c-kube-api-access-5lgk6\") pod \"community-operators-jpbcj\" (UID: \"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c\") " pod="openshift-marketplace/community-operators-jpbcj"
Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.562890 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f772ae1-b7e4-4762-ad35-7d9ca598ef8c-catalog-content\") pod \"community-operators-jpbcj\" (UID: \"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c\") " pod="openshift-marketplace/community-operators-jpbcj"
Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.562992 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f772ae1-b7e4-4762-ad35-7d9ca598ef8c-utilities\") pod \"community-operators-jpbcj\" (UID: \"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c\") " pod="openshift-marketplace/community-operators-jpbcj"
\"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c\") " pod="openshift-marketplace/community-operators-jpbcj" Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.583508 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lgk6\" (UniqueName: \"kubernetes.io/projected/0f772ae1-b7e4-4762-ad35-7d9ca598ef8c-kube-api-access-5lgk6\") pod \"community-operators-jpbcj\" (UID: \"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c\") " pod="openshift-marketplace/community-operators-jpbcj" Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.717636 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jpbcj" Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.887435 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d8pr" event={"ID":"4ed02d8a-6fba-458b-bab9-e595922d1f1f","Type":"ContainerStarted","Data":"c7348ef754bb197b960e2358716dad338259b7564820d2ef31fdc103443ba620"} Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.891696 4895 generic.go:334] "Generic (PLEG): container finished" podID="2611889f-6582-4711-8d3a-c93dd57ba6fc" containerID="aaa1661dd5c1b7489fcafd02e4ad2b8dfcb3dc775ce68c412faf2c11a77f6552" exitCode=0 Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.891820 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkv7v" event={"ID":"2611889f-6582-4711-8d3a-c93dd57ba6fc","Type":"ContainerDied","Data":"aaa1661dd5c1b7489fcafd02e4ad2b8dfcb3dc775ce68c412faf2c11a77f6552"} Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.897863 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"1f271a8fe75022dffff5e89d9b9a496adc91c0889bc010b26e18a8f177f7c74a"} Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.902257 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zms68" event={"ID":"6bbad132-928f-4f02-bbfc-a6b66eeec395","Type":"ContainerStarted","Data":"992b98a4ea49ae878c9e2018d15665a156619a4b816f19f4999c8d92848e2610"} Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.943639 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jpbcj"] Dec 06 07:04:30 crc kubenswrapper[4895]: I1206 07:04:30.987402 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zms68" podStartSLOduration=2.571085399 podStartE2EDuration="5.987385345s" podCreationTimestamp="2025-12-06 07:04:25 +0000 UTC" firstStartedPulling="2025-12-06 07:04:26.85863692 +0000 UTC m=+429.260025800" lastFinishedPulling="2025-12-06 07:04:30.274936876 +0000 UTC m=+432.676325746" observedRunningTime="2025-12-06 07:04:30.985678748 +0000 UTC m=+433.387067628" watchObservedRunningTime="2025-12-06 07:04:30.987385345 +0000 UTC m=+433.388774205" Dec 06 07:04:30 crc kubenswrapper[4895]: W1206 07:04:30.999392 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f772ae1_b7e4_4762_ad35_7d9ca598ef8c.slice/crio-3c50b62053f3d4ce93650ba16df59f0fb1fbc4e1e5f60bedfa86f4850b77d5c7 WatchSource:0}: Error finding container 3c50b62053f3d4ce93650ba16df59f0fb1fbc4e1e5f60bedfa86f4850b77d5c7: Status 404 returned error can't find the container with id 
Dec 06 07:04:31 crc kubenswrapper[4895]: I1206 07:04:31.919092 4895 generic.go:334] "Generic (PLEG): container finished" podID="0f772ae1-b7e4-4762-ad35-7d9ca598ef8c" containerID="e98c307e62a906bb56dd7ecd42415ecd75d3359d629c89824503eca6edb2d087" exitCode=0
Dec 06 07:04:31 crc kubenswrapper[4895]: I1206 07:04:31.919174 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpbcj" event={"ID":"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c","Type":"ContainerDied","Data":"e98c307e62a906bb56dd7ecd42415ecd75d3359d629c89824503eca6edb2d087"}
Dec 06 07:04:31 crc kubenswrapper[4895]: I1206 07:04:31.919219 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpbcj" event={"ID":"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c","Type":"ContainerStarted","Data":"3c50b62053f3d4ce93650ba16df59f0fb1fbc4e1e5f60bedfa86f4850b77d5c7"}
Dec 06 07:04:31 crc kubenswrapper[4895]: I1206 07:04:31.920954 4895 generic.go:334] "Generic (PLEG): container finished" podID="4ed02d8a-6fba-458b-bab9-e595922d1f1f" containerID="c7348ef754bb197b960e2358716dad338259b7564820d2ef31fdc103443ba620" exitCode=0
Dec 06 07:04:31 crc kubenswrapper[4895]: I1206 07:04:31.921010 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d8pr" event={"ID":"4ed02d8a-6fba-458b-bab9-e595922d1f1f","Type":"ContainerDied","Data":"c7348ef754bb197b960e2358716dad338259b7564820d2ef31fdc103443ba620"}
Dec 06 07:04:31 crc kubenswrapper[4895]: I1206 07:04:31.931608 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkv7v" event={"ID":"2611889f-6582-4711-8d3a-c93dd57ba6fc","Type":"ContainerStarted","Data":"595303ccfe347f2a2fca05cf6fdd87d82e33d2cd30670469668bcfb5d08b0156"}
Dec 06 07:04:31 crc kubenswrapper[4895]: I1206 07:04:31.993769 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wkv7v" podStartSLOduration=2.5691292900000002 podStartE2EDuration="4.993748065s" podCreationTimestamp="2025-12-06 07:04:27 +0000 UTC" firstStartedPulling="2025-12-06 07:04:28.881599708 +0000 UTC m=+431.282988578" lastFinishedPulling="2025-12-06 07:04:31.306218483 +0000 UTC m=+433.707607353" observedRunningTime="2025-12-06 07:04:31.988737978 +0000 UTC m=+434.390126868" watchObservedRunningTime="2025-12-06 07:04:31.993748065 +0000 UTC m=+434.395136935"
Dec 06 07:04:32 crc kubenswrapper[4895]: I1206 07:04:32.938583 4895 generic.go:334] "Generic (PLEG): container finished" podID="0f772ae1-b7e4-4762-ad35-7d9ca598ef8c" containerID="b59a43890d51bbb13a79625d85bffa65141a6da54a09d800809c06b591aede29" exitCode=0
Dec 06 07:04:32 crc kubenswrapper[4895]: I1206 07:04:32.938810 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpbcj" event={"ID":"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c","Type":"ContainerDied","Data":"b59a43890d51bbb13a79625d85bffa65141a6da54a09d800809c06b591aede29"}
Dec 06 07:04:32 crc kubenswrapper[4895]: I1206 07:04:32.948201 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d8pr" event={"ID":"4ed02d8a-6fba-458b-bab9-e595922d1f1f","Type":"ContainerStarted","Data":"9863bc8f5da3ea020b8a1f4ea2f32faa6538d617a8724e5621efc621a5b96cfd"}
Dec 06 07:04:32 crc kubenswrapper[4895]: I1206 07:04:32.983784 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4d8pr" podStartSLOduration=2.524021282 podStartE2EDuration="4.98376299s" podCreationTimestamp="2025-12-06 07:04:28 +0000 UTC" firstStartedPulling="2025-12-06 07:04:29.872037076 +0000 UTC m=+432.273425946" lastFinishedPulling="2025-12-06 07:04:32.331778784 +0000 UTC m=+434.733167654" observedRunningTime="2025-12-06 07:04:32.980346058 +0000 UTC m=+435.381734958" watchObservedRunningTime="2025-12-06 07:04:32.98376299 +0000 UTC m=+435.385151860"
pod="openshift-marketplace/certified-operators-4d8pr" podStartSLOduration=2.524021282 podStartE2EDuration="4.98376299s" podCreationTimestamp="2025-12-06 07:04:28 +0000 UTC" firstStartedPulling="2025-12-06 07:04:29.872037076 +0000 UTC m=+432.273425946" lastFinishedPulling="2025-12-06 07:04:32.331778784 +0000 UTC m=+434.733167654" observedRunningTime="2025-12-06 07:04:32.980346058 +0000 UTC m=+435.381734958" watchObservedRunningTime="2025-12-06 07:04:32.98376299 +0000 UTC m=+435.385151860" Dec 06 07:04:33 crc kubenswrapper[4895]: I1206 07:04:33.962337 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpbcj" event={"ID":"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c","Type":"ContainerStarted","Data":"bd0913d7b2c23e7a24fd476c552b70b498420af6268664562aa1c6ad2be4dc4d"} Dec 06 07:04:33 crc kubenswrapper[4895]: I1206 07:04:33.987691 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jpbcj" podStartSLOduration=2.566811426 podStartE2EDuration="3.987673923s" podCreationTimestamp="2025-12-06 07:04:30 +0000 UTC" firstStartedPulling="2025-12-06 07:04:31.929639123 +0000 UTC m=+434.331027993" lastFinishedPulling="2025-12-06 07:04:33.35050162 +0000 UTC m=+435.751890490" observedRunningTime="2025-12-06 07:04:33.984993881 +0000 UTC m=+436.386382751" watchObservedRunningTime="2025-12-06 07:04:33.987673923 +0000 UTC m=+436.389062793" Dec 06 07:04:34 crc kubenswrapper[4895]: I1206 07:04:34.627901 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" podUID="c669ea24-666a-4152-9a1e-43f614bf8e21" containerName="registry" containerID="cri-o://c6c0960cb740b035234175a95c137d4bb7e81f5d9a5b6d05cb4ac874f92860a7" gracePeriod=30 Dec 06 07:04:35 crc kubenswrapper[4895]: I1206 07:04:35.985116 4895 generic.go:334] "Generic (PLEG): container finished" podID="c669ea24-666a-4152-9a1e-43f614bf8e21" containerID="c6c0960cb740b035234175a95c137d4bb7e81f5d9a5b6d05cb4ac874f92860a7" exitCode=0 Dec 06 07:04:35 crc kubenswrapper[4895]: I1206 07:04:35.985224 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" event={"ID":"c669ea24-666a-4152-9a1e-43f614bf8e21","Type":"ContainerDied","Data":"c6c0960cb740b035234175a95c137d4bb7e81f5d9a5b6d05cb4ac874f92860a7"} Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.123609 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zms68" Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.123678 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zms68" Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.125112 4895 util.go:48] "No ready sandbox for pod can be found. 
Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.188111 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zms68"
Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.233420 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c669ea24-666a-4152-9a1e-43f614bf8e21-registry-tls\") pod \"c669ea24-666a-4152-9a1e-43f614bf8e21\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") "
Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.233463 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c669ea24-666a-4152-9a1e-43f614bf8e21-registry-certificates\") pod \"c669ea24-666a-4152-9a1e-43f614bf8e21\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") "
Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.233520 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c669ea24-666a-4152-9a1e-43f614bf8e21-trusted-ca\") pod \"c669ea24-666a-4152-9a1e-43f614bf8e21\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") "
Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.233574 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c669ea24-666a-4152-9a1e-43f614bf8e21-installation-pull-secrets\") pod \"c669ea24-666a-4152-9a1e-43f614bf8e21\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") "
Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.233609 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c669ea24-666a-4152-9a1e-43f614bf8e21-ca-trust-extracted\") pod \"c669ea24-666a-4152-9a1e-43f614bf8e21\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") "
Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.233738 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c669ea24-666a-4152-9a1e-43f614bf8e21\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") "
Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.233772 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6ds5\" (UniqueName: \"kubernetes.io/projected/c669ea24-666a-4152-9a1e-43f614bf8e21-kube-api-access-w6ds5\") pod \"c669ea24-666a-4152-9a1e-43f614bf8e21\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") "
Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.233818 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c669ea24-666a-4152-9a1e-43f614bf8e21-bound-sa-token\") pod \"c669ea24-666a-4152-9a1e-43f614bf8e21\" (UID: \"c669ea24-666a-4152-9a1e-43f614bf8e21\") "
Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.234730 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c669ea24-666a-4152-9a1e-43f614bf8e21-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c669ea24-666a-4152-9a1e-43f614bf8e21" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.234968 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c669ea24-666a-4152-9a1e-43f614bf8e21-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c669ea24-666a-4152-9a1e-43f614bf8e21" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.241461 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c669ea24-666a-4152-9a1e-43f614bf8e21-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c669ea24-666a-4152-9a1e-43f614bf8e21" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.241825 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c669ea24-666a-4152-9a1e-43f614bf8e21-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c669ea24-666a-4152-9a1e-43f614bf8e21" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.243109 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c669ea24-666a-4152-9a1e-43f614bf8e21-kube-api-access-w6ds5" (OuterVolumeSpecName: "kube-api-access-w6ds5") pod "c669ea24-666a-4152-9a1e-43f614bf8e21" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21"). InnerVolumeSpecName "kube-api-access-w6ds5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.244205 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c669ea24-666a-4152-9a1e-43f614bf8e21-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c669ea24-666a-4152-9a1e-43f614bf8e21" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.245253 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c669ea24-666a-4152-9a1e-43f614bf8e21" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.251709 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c669ea24-666a-4152-9a1e-43f614bf8e21-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c669ea24-666a-4152-9a1e-43f614bf8e21" (UID: "c669ea24-666a-4152-9a1e-43f614bf8e21"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.338257 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c669ea24-666a-4152-9a1e-43f614bf8e21-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.338309 4895 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c669ea24-666a-4152-9a1e-43f614bf8e21-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.338325 4895 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c669ea24-666a-4152-9a1e-43f614bf8e21-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.338338 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6ds5\" (UniqueName: \"kubernetes.io/projected/c669ea24-666a-4152-9a1e-43f614bf8e21-kube-api-access-w6ds5\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.338356 4895 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c669ea24-666a-4152-9a1e-43f614bf8e21-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.338370 4895 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c669ea24-666a-4152-9a1e-43f614bf8e21-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:36 crc kubenswrapper[4895]: I1206 07:04:36.338382 4895 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c669ea24-666a-4152-9a1e-43f614bf8e21-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:37 crc kubenswrapper[4895]: I1206 07:04:37.000497 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-s5xjc" event={"ID":"c669ea24-666a-4152-9a1e-43f614bf8e21","Type":"ContainerDied","Data":"66fe34e19b16bfcbebf8935f75fbd1f670417790d3eecd2dd25e44916c8ebc5b"} Dec 06 07:04:37 crc kubenswrapper[4895]: I1206 07:04:37.000551 4895 util.go:48] "No ready sandbox for pod can be found. 
Dec 06 07:04:37 crc kubenswrapper[4895]: I1206 07:04:37.000571 4895 scope.go:117] "RemoveContainer" containerID="c6c0960cb740b035234175a95c137d4bb7e81f5d9a5b6d05cb4ac874f92860a7"
Dec 06 07:04:37 crc kubenswrapper[4895]: I1206 07:04:37.027590 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s5xjc"]
Dec 06 07:04:37 crc kubenswrapper[4895]: I1206 07:04:37.031330 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s5xjc"]
Dec 06 07:04:37 crc kubenswrapper[4895]: I1206 07:04:37.045385 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zms68"
Dec 06 07:04:38 crc kubenswrapper[4895]: I1206 07:04:38.057422 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c669ea24-666a-4152-9a1e-43f614bf8e21" path="/var/lib/kubelet/pods/c669ea24-666a-4152-9a1e-43f614bf8e21/volumes"
Dec 06 07:04:38 crc kubenswrapper[4895]: I1206 07:04:38.317840 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wkv7v"
Dec 06 07:04:38 crc kubenswrapper[4895]: I1206 07:04:38.319522 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wkv7v"
Dec 06 07:04:38 crc kubenswrapper[4895]: I1206 07:04:38.358599 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wkv7v"
Dec 06 07:04:38 crc kubenswrapper[4895]: I1206 07:04:38.503116 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4d8pr"
Dec 06 07:04:38 crc kubenswrapper[4895]: I1206 07:04:38.503185 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4d8pr"
Dec 06 07:04:38 crc kubenswrapper[4895]: I1206 07:04:38.561497 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4d8pr"
Dec 06 07:04:39 crc kubenswrapper[4895]: I1206 07:04:39.053339 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wkv7v"
Dec 06 07:04:39 crc kubenswrapper[4895]: I1206 07:04:39.065610 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4d8pr"
Dec 06 07:04:40 crc kubenswrapper[4895]: I1206 07:04:40.718408 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jpbcj"
Dec 06 07:04:40 crc kubenswrapper[4895]: I1206 07:04:40.718978 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jpbcj"
Dec 06 07:04:40 crc kubenswrapper[4895]: I1206 07:04:40.767572 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jpbcj"
Dec 06 07:04:41 crc kubenswrapper[4895]: I1206 07:04:41.074432 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jpbcj"
Dec 06 07:06:18 crc kubenswrapper[4895]: I1206 07:06:18.329068 4895 scope.go:117] "RemoveContainer" containerID="2c9ca7857cdec08dcb4421458f568c475f4bf791fd65f964c728bc638a73b2d7"
Dec 06 07:06:29 crc kubenswrapper[4895]: I1206 07:06:29.696011 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 07:06:29 crc kubenswrapper[4895]: I1206 07:06:29.696571 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 07:06:59 crc kubenswrapper[4895]: I1206 07:06:59.695943 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 07:06:59 crc kubenswrapper[4895]: I1206 07:06:59.696495 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 07:07:29 crc kubenswrapper[4895]: I1206 07:07:29.696235 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 07:07:29 crc kubenswrapper[4895]: I1206 07:07:29.697875 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 07:07:29 crc kubenswrapper[4895]: I1206 07:07:29.697967 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2"
Dec 06 07:07:29 crc kubenswrapper[4895]: I1206 07:07:29.698742 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f271a8fe75022dffff5e89d9b9a496adc91c0889bc010b26e18a8f177f7c74a"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 07:07:29 crc kubenswrapper[4895]: I1206 07:07:29.698817 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://1f271a8fe75022dffff5e89d9b9a496adc91c0889bc010b26e18a8f177f7c74a" gracePeriod=600
Dec 06 07:07:30 crc kubenswrapper[4895]: I1206 07:07:30.106447 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="1f271a8fe75022dffff5e89d9b9a496adc91c0889bc010b26e18a8f177f7c74a" exitCode=0
Dec 06 07:07:30 crc kubenswrapper[4895]: I1206 07:07:30.106530 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"1f271a8fe75022dffff5e89d9b9a496adc91c0889bc010b26e18a8f177f7c74a"}
Dec 06 07:07:30 crc kubenswrapper[4895]: I1206 07:07:30.106893 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"beb7c75c8613c3c3c8bdbc63b9af6008f8e0ac1c24d0786d912c67af3bdf0af4"}
Dec 06 07:07:30 crc kubenswrapper[4895]: I1206 07:07:30.106916 4895 scope.go:117] "RemoveContainer" containerID="c18110f799eaeab538c5a5c8ddde83eb9d7b4f8f79d73523c855405598e83192"
Dec 06 07:09:29 crc kubenswrapper[4895]: I1206 07:09:29.696071 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 07:09:29 crc kubenswrapper[4895]: I1206 07:09:29.696756 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 07:09:58 crc kubenswrapper[4895]: I1206 07:09:58.163062 4895 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 06 07:09:59 crc kubenswrapper[4895]: I1206 07:09:59.696929 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 07:09:59 crc kubenswrapper[4895]: I1206 07:09:59.697249 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 07:10:29 crc kubenswrapper[4895]: I1206 07:10:29.695971 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 07:10:29 crc kubenswrapper[4895]: I1206 07:10:29.696544 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 07:10:29 crc kubenswrapper[4895]: I1206 07:10:29.696587 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2"
Dec 06 07:10:29 crc kubenswrapper[4895]: I1206 07:10:29.697123 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"beb7c75c8613c3c3c8bdbc63b9af6008f8e0ac1c24d0786d912c67af3bdf0af4"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
"Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"beb7c75c8613c3c3c8bdbc63b9af6008f8e0ac1c24d0786d912c67af3bdf0af4"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:10:29 crc kubenswrapper[4895]: I1206 07:10:29.697172 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://beb7c75c8613c3c3c8bdbc63b9af6008f8e0ac1c24d0786d912c67af3bdf0af4" gracePeriod=600 Dec 06 07:10:30 crc kubenswrapper[4895]: I1206 07:10:30.182346 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="beb7c75c8613c3c3c8bdbc63b9af6008f8e0ac1c24d0786d912c67af3bdf0af4" exitCode=0 Dec 06 07:10:30 crc kubenswrapper[4895]: I1206 07:10:30.182443 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"beb7c75c8613c3c3c8bdbc63b9af6008f8e0ac1c24d0786d912c67af3bdf0af4"} Dec 06 07:10:30 crc kubenswrapper[4895]: I1206 07:10:30.182900 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"0fca2ab370dac1142fa441cec2ee41930eac4b7f4cc0496ffb43ffe8ce0a4b9a"} Dec 06 07:10:30 crc kubenswrapper[4895]: I1206 07:10:30.182923 4895 scope.go:117] "RemoveContainer" containerID="1f271a8fe75022dffff5e89d9b9a496adc91c0889bc010b26e18a8f177f7c74a" Dec 06 07:10:55 crc kubenswrapper[4895]: I1206 07:10:55.147163 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-72xf6"] Dec 06 07:10:55 crc kubenswrapper[4895]: E1206 07:10:55.147989 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c669ea24-666a-4152-9a1e-43f614bf8e21" containerName="registry" Dec 06 07:10:55 crc kubenswrapper[4895]: I1206 07:10:55.148009 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c669ea24-666a-4152-9a1e-43f614bf8e21" containerName="registry" Dec 06 07:10:55 crc kubenswrapper[4895]: I1206 07:10:55.148159 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c669ea24-666a-4152-9a1e-43f614bf8e21" containerName="registry" Dec 06 07:10:55 crc kubenswrapper[4895]: I1206 07:10:55.149355 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-72xf6" Dec 06 07:10:55 crc kubenswrapper[4895]: I1206 07:10:55.162401 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-72xf6"] Dec 06 07:10:55 crc kubenswrapper[4895]: I1206 07:10:55.305770 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66cddc96-df7a-40d6-907c-b77e5a5cae15-utilities\") pod \"redhat-marketplace-72xf6\" (UID: \"66cddc96-df7a-40d6-907c-b77e5a5cae15\") " pod="openshift-marketplace/redhat-marketplace-72xf6" Dec 06 07:10:55 crc kubenswrapper[4895]: I1206 07:10:55.306044 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcsv2\" (UniqueName: \"kubernetes.io/projected/66cddc96-df7a-40d6-907c-b77e5a5cae15-kube-api-access-gcsv2\") pod \"redhat-marketplace-72xf6\" (UID: \"66cddc96-df7a-40d6-907c-b77e5a5cae15\") " pod="openshift-marketplace/redhat-marketplace-72xf6" Dec 06 07:10:55 crc kubenswrapper[4895]: I1206 07:10:55.306389 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66cddc96-df7a-40d6-907c-b77e5a5cae15-catalog-content\") pod \"redhat-marketplace-72xf6\" (UID: \"66cddc96-df7a-40d6-907c-b77e5a5cae15\") " pod="openshift-marketplace/redhat-marketplace-72xf6" Dec 06 07:10:55 crc kubenswrapper[4895]: I1206 07:10:55.407140 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66cddc96-df7a-40d6-907c-b77e5a5cae15-catalog-content\") pod \"redhat-marketplace-72xf6\" (UID: \"66cddc96-df7a-40d6-907c-b77e5a5cae15\") " pod="openshift-marketplace/redhat-marketplace-72xf6" Dec 06 07:10:55 crc kubenswrapper[4895]: I1206 07:10:55.407194 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66cddc96-df7a-40d6-907c-b77e5a5cae15-utilities\") pod \"redhat-marketplace-72xf6\" (UID: \"66cddc96-df7a-40d6-907c-b77e5a5cae15\") " pod="openshift-marketplace/redhat-marketplace-72xf6" Dec 06 07:10:55 crc kubenswrapper[4895]: I1206 07:10:55.407216 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcsv2\" (UniqueName: \"kubernetes.io/projected/66cddc96-df7a-40d6-907c-b77e5a5cae15-kube-api-access-gcsv2\") pod \"redhat-marketplace-72xf6\" (UID: \"66cddc96-df7a-40d6-907c-b77e5a5cae15\") " pod="openshift-marketplace/redhat-marketplace-72xf6" Dec 06 07:10:55 crc kubenswrapper[4895]: I1206 07:10:55.407930 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66cddc96-df7a-40d6-907c-b77e5a5cae15-catalog-content\") pod \"redhat-marketplace-72xf6\" (UID: \"66cddc96-df7a-40d6-907c-b77e5a5cae15\") " pod="openshift-marketplace/redhat-marketplace-72xf6" Dec 06 07:10:55 crc kubenswrapper[4895]: I1206 07:10:55.407968 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66cddc96-df7a-40d6-907c-b77e5a5cae15-utilities\") pod \"redhat-marketplace-72xf6\" (UID: \"66cddc96-df7a-40d6-907c-b77e5a5cae15\") " pod="openshift-marketplace/redhat-marketplace-72xf6" Dec 06 07:10:55 crc kubenswrapper[4895]: I1206 07:10:55.429114 4895 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gcsv2\" (UniqueName: \"kubernetes.io/projected/66cddc96-df7a-40d6-907c-b77e5a5cae15-kube-api-access-gcsv2\") pod \"redhat-marketplace-72xf6\" (UID: \"66cddc96-df7a-40d6-907c-b77e5a5cae15\") " pod="openshift-marketplace/redhat-marketplace-72xf6"
Dec 06 07:10:55 crc kubenswrapper[4895]: I1206 07:10:55.468213 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-72xf6"
Dec 06 07:10:55 crc kubenswrapper[4895]: I1206 07:10:55.662089 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-72xf6"]
Dec 06 07:10:56 crc kubenswrapper[4895]: I1206 07:10:56.331985 4895 generic.go:334] "Generic (PLEG): container finished" podID="66cddc96-df7a-40d6-907c-b77e5a5cae15" containerID="8606d61f45602b403e554c982736718ab588b4b424c858794cf982423411b3bd" exitCode=0
Dec 06 07:10:56 crc kubenswrapper[4895]: I1206 07:10:56.332029 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-72xf6" event={"ID":"66cddc96-df7a-40d6-907c-b77e5a5cae15","Type":"ContainerDied","Data":"8606d61f45602b403e554c982736718ab588b4b424c858794cf982423411b3bd"}
Dec 06 07:10:56 crc kubenswrapper[4895]: I1206 07:10:56.332317 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-72xf6" event={"ID":"66cddc96-df7a-40d6-907c-b77e5a5cae15","Type":"ContainerStarted","Data":"dc26aa983c497c32189a80be4b00c6c6e5021686381c36085287fb2ce0a2e461"}
Dec 06 07:10:56 crc kubenswrapper[4895]: I1206 07:10:56.333370 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 06 07:10:58 crc kubenswrapper[4895]: I1206 07:10:58.346962 4895 generic.go:334] "Generic (PLEG): container finished" podID="66cddc96-df7a-40d6-907c-b77e5a5cae15" containerID="0eab205a4c5e61abb14127aa1d04159d09d374945befaaf174fffb43a2c3e4c0" exitCode=0
Dec 06 07:10:58 crc kubenswrapper[4895]: I1206 07:10:58.347092 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-72xf6" event={"ID":"66cddc96-df7a-40d6-907c-b77e5a5cae15","Type":"ContainerDied","Data":"0eab205a4c5e61abb14127aa1d04159d09d374945befaaf174fffb43a2c3e4c0"}
Dec 06 07:10:59 crc kubenswrapper[4895]: I1206 07:10:59.357141 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-72xf6" event={"ID":"66cddc96-df7a-40d6-907c-b77e5a5cae15","Type":"ContainerStarted","Data":"8d4433bd49654656ebbb71842e7dec6b857e7857dcb971431ad8b201e2390655"}
Dec 06 07:10:59 crc kubenswrapper[4895]: I1206 07:10:59.379510 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-72xf6" podStartSLOduration=1.934566491 podStartE2EDuration="4.379461769s" podCreationTimestamp="2025-12-06 07:10:55 +0000 UTC" firstStartedPulling="2025-12-06 07:10:56.333090099 +0000 UTC m=+818.734478969" lastFinishedPulling="2025-12-06 07:10:58.777985367 +0000 UTC m=+821.179374247" observedRunningTime="2025-12-06 07:10:59.375121272 +0000 UTC m=+821.776510152" watchObservedRunningTime="2025-12-06 07:10:59.379461769 +0000 UTC m=+821.780850659"
Dec 06 07:11:05 crc kubenswrapper[4895]: I1206 07:11:05.469227 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-72xf6"
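The pod_startup_latency_tracker entry above can be checked by hand: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (07:10:59.379461769 - 07:10:55 = 4.379461769s), and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling - firstStartedPulling, about 2.444895s), leaving about 1.934566s. A short sketch reproducing the arithmetic with the wall-clock timestamps copied from the entry; the kubelet itself subtracts the monotonic m=+... readings, which is why the last digits differ slightly:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-12-06 07:10:55 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2025-12-06 07:10:56.333090099 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-12-06 07:10:58.777985367 +0000 UTC")  // lastFinishedPulling
	running := parse("2025-12-06 07:10:59.379461769 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)      // podStartE2EDuration: 4.379461769s
	pull := lastPull.Sub(firstPull)  // image pull window: ~2.444895268s
	fmt.Println(e2e, pull, e2e-pull) // e2e-pull ≈ podStartSLOduration (~1.9345s)
}
```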
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-72xf6" Dec 06 07:11:05 crc kubenswrapper[4895]: I1206 07:11:05.515310 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-72xf6" Dec 06 07:11:06 crc kubenswrapper[4895]: I1206 07:11:06.461958 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-72xf6" Dec 06 07:11:06 crc kubenswrapper[4895]: I1206 07:11:06.509209 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-72xf6"] Dec 06 07:11:08 crc kubenswrapper[4895]: I1206 07:11:08.418452 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-72xf6" podUID="66cddc96-df7a-40d6-907c-b77e5a5cae15" containerName="registry-server" containerID="cri-o://8d4433bd49654656ebbb71842e7dec6b857e7857dcb971431ad8b201e2390655" gracePeriod=2 Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.257879 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-72xf6" Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.372422 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66cddc96-df7a-40d6-907c-b77e5a5cae15-utilities\") pod \"66cddc96-df7a-40d6-907c-b77e5a5cae15\" (UID: \"66cddc96-df7a-40d6-907c-b77e5a5cae15\") " Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.372514 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66cddc96-df7a-40d6-907c-b77e5a5cae15-catalog-content\") pod \"66cddc96-df7a-40d6-907c-b77e5a5cae15\" (UID: \"66cddc96-df7a-40d6-907c-b77e5a5cae15\") " Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.372560 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcsv2\" (UniqueName: \"kubernetes.io/projected/66cddc96-df7a-40d6-907c-b77e5a5cae15-kube-api-access-gcsv2\") pod \"66cddc96-df7a-40d6-907c-b77e5a5cae15\" (UID: \"66cddc96-df7a-40d6-907c-b77e5a5cae15\") " Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.373345 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66cddc96-df7a-40d6-907c-b77e5a5cae15-utilities" (OuterVolumeSpecName: "utilities") pod "66cddc96-df7a-40d6-907c-b77e5a5cae15" (UID: "66cddc96-df7a-40d6-907c-b77e5a5cae15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.379501 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66cddc96-df7a-40d6-907c-b77e5a5cae15-kube-api-access-gcsv2" (OuterVolumeSpecName: "kube-api-access-gcsv2") pod "66cddc96-df7a-40d6-907c-b77e5a5cae15" (UID: "66cddc96-df7a-40d6-907c-b77e5a5cae15"). InnerVolumeSpecName "kube-api-access-gcsv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.389925 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66cddc96-df7a-40d6-907c-b77e5a5cae15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66cddc96-df7a-40d6-907c-b77e5a5cae15" (UID: "66cddc96-df7a-40d6-907c-b77e5a5cae15"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.425495 4895 generic.go:334] "Generic (PLEG): container finished" podID="66cddc96-df7a-40d6-907c-b77e5a5cae15" containerID="8d4433bd49654656ebbb71842e7dec6b857e7857dcb971431ad8b201e2390655" exitCode=0 Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.425539 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-72xf6" event={"ID":"66cddc96-df7a-40d6-907c-b77e5a5cae15","Type":"ContainerDied","Data":"8d4433bd49654656ebbb71842e7dec6b857e7857dcb971431ad8b201e2390655"} Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.425571 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-72xf6" event={"ID":"66cddc96-df7a-40d6-907c-b77e5a5cae15","Type":"ContainerDied","Data":"dc26aa983c497c32189a80be4b00c6c6e5021686381c36085287fb2ce0a2e461"} Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.425590 4895 scope.go:117] "RemoveContainer" containerID="8d4433bd49654656ebbb71842e7dec6b857e7857dcb971431ad8b201e2390655" Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.425612 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-72xf6" Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.449861 4895 scope.go:117] "RemoveContainer" containerID="0eab205a4c5e61abb14127aa1d04159d09d374945befaaf174fffb43a2c3e4c0" Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.455198 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-72xf6"] Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.458643 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-72xf6"] Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.474573 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66cddc96-df7a-40d6-907c-b77e5a5cae15-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.475178 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66cddc96-df7a-40d6-907c-b77e5a5cae15-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.475001 4895 scope.go:117] "RemoveContainer" containerID="8606d61f45602b403e554c982736718ab588b4b424c858794cf982423411b3bd" Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.475194 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcsv2\" (UniqueName: \"kubernetes.io/projected/66cddc96-df7a-40d6-907c-b77e5a5cae15-kube-api-access-gcsv2\") on node \"crc\" DevicePath \"\"" Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.489412 4895 scope.go:117] "RemoveContainer" containerID="8d4433bd49654656ebbb71842e7dec6b857e7857dcb971431ad8b201e2390655" Dec 06 07:11:09 crc kubenswrapper[4895]: E1206 07:11:09.489885 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d4433bd49654656ebbb71842e7dec6b857e7857dcb971431ad8b201e2390655\": container with ID starting with 8d4433bd49654656ebbb71842e7dec6b857e7857dcb971431ad8b201e2390655 not found: ID does not exist" containerID="8d4433bd49654656ebbb71842e7dec6b857e7857dcb971431ad8b201e2390655" Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.489916 4895 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4433bd49654656ebbb71842e7dec6b857e7857dcb971431ad8b201e2390655"} err="failed to get container status \"8d4433bd49654656ebbb71842e7dec6b857e7857dcb971431ad8b201e2390655\": rpc error: code = NotFound desc = could not find container \"8d4433bd49654656ebbb71842e7dec6b857e7857dcb971431ad8b201e2390655\": container with ID starting with 8d4433bd49654656ebbb71842e7dec6b857e7857dcb971431ad8b201e2390655 not found: ID does not exist"
Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.489941 4895 scope.go:117] "RemoveContainer" containerID="0eab205a4c5e61abb14127aa1d04159d09d374945befaaf174fffb43a2c3e4c0"
Dec 06 07:11:09 crc kubenswrapper[4895]: E1206 07:11:09.490425 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eab205a4c5e61abb14127aa1d04159d09d374945befaaf174fffb43a2c3e4c0\": container with ID starting with 0eab205a4c5e61abb14127aa1d04159d09d374945befaaf174fffb43a2c3e4c0 not found: ID does not exist" containerID="0eab205a4c5e61abb14127aa1d04159d09d374945befaaf174fffb43a2c3e4c0"
Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.490541 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eab205a4c5e61abb14127aa1d04159d09d374945befaaf174fffb43a2c3e4c0"} err="failed to get container status \"0eab205a4c5e61abb14127aa1d04159d09d374945befaaf174fffb43a2c3e4c0\": rpc error: code = NotFound desc = could not find container \"0eab205a4c5e61abb14127aa1d04159d09d374945befaaf174fffb43a2c3e4c0\": container with ID starting with 0eab205a4c5e61abb14127aa1d04159d09d374945befaaf174fffb43a2c3e4c0 not found: ID does not exist"
Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.490629 4895 scope.go:117] "RemoveContainer" containerID="8606d61f45602b403e554c982736718ab588b4b424c858794cf982423411b3bd"
Dec 06 07:11:09 crc kubenswrapper[4895]: E1206 07:11:09.491022 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8606d61f45602b403e554c982736718ab588b4b424c858794cf982423411b3bd\": container with ID starting with 8606d61f45602b403e554c982736718ab588b4b424c858794cf982423411b3bd not found: ID does not exist" containerID="8606d61f45602b403e554c982736718ab588b4b424c858794cf982423411b3bd"
Dec 06 07:11:09 crc kubenswrapper[4895]: I1206 07:11:09.491047 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8606d61f45602b403e554c982736718ab588b4b424c858794cf982423411b3bd"} err="failed to get container status \"8606d61f45602b403e554c982736718ab588b4b424c858794cf982423411b3bd\": rpc error: code = NotFound desc = could not find container \"8606d61f45602b403e554c982736718ab588b4b424c858794cf982423411b3bd\": container with ID starting with 8606d61f45602b403e554c982736718ab588b4b424c858794cf982423411b3bd not found: ID does not exist"
Dec 06 07:11:10 crc kubenswrapper[4895]: I1206 07:11:10.056886 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66cddc96-df7a-40d6-907c-b77e5a5cae15" path="/var/lib/kubelet/pods/66cddc96-df7a-40d6-907c-b77e5a5cae15/volumes"
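The E-level "ContainerStatus from runtime service failed ... NotFound" entries above are a benign race rather than a real failure: the containers were just removed via "RemoveContainer", so when the deletion path asks CRI-O about the same IDs again, the runtime answers NotFound, the kubelet logs it, and the pod's volumes directory is cleaned up right after. The usual Go pattern for treating that answer as "already deleted" looks like the sketch below; removeIgnoringNotFound and the remove callback are invented for illustration, only the grpc status and codes packages are real API:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIgnoringNotFound is a hypothetical helper: a NotFound answer from the
// runtime means the container is already gone, so deletion counts as success.
func removeIgnoringNotFound(remove func(id string) error, id string) error {
	err := remove(id)
	if status.Code(err) == codes.NotFound {
		return nil // already removed, e.g. by a concurrent cleanup
	}
	return err
}

func main() {
	// Simulate the runtime answer seen in the log above.
	gone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	fmt.Println(removeIgnoringNotFound(gone, "8d4433bd4965")) // prints <nil>
}
```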
Dec 06 07:12:29 crc kubenswrapper[4895]: I1206 07:12:29.696072 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 07:12:29 crc kubenswrapper[4895]: I1206 07:12:29.696882 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 07:12:59 crc kubenswrapper[4895]: I1206 07:12:59.696437 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 07:12:59 crc kubenswrapper[4895]: I1206 07:12:59.697196 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 07:13:29 crc kubenswrapper[4895]: I1206 07:13:29.696022 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 07:13:29 crc kubenswrapper[4895]: I1206 07:13:29.697681 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 07:13:29 crc kubenswrapper[4895]: I1206 07:13:29.697793 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2"
Dec 06 07:13:29 crc kubenswrapper[4895]: I1206 07:13:29.698396 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0fca2ab370dac1142fa441cec2ee41930eac4b7f4cc0496ffb43ffe8ce0a4b9a"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 07:13:29 crc kubenswrapper[4895]: I1206 07:13:29.698572 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://0fca2ab370dac1142fa441cec2ee41930eac4b7f4cc0496ffb43ffe8ce0a4b9a" gracePeriod=600
Dec 06 07:13:30 crc kubenswrapper[4895]: I1206 07:13:30.334051 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="0fca2ab370dac1142fa441cec2ee41930eac4b7f4cc0496ffb43ffe8ce0a4b9a" exitCode=0
Dec 06 07:13:30 crc kubenswrapper[4895]: I1206 07:13:30.334270 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2"
event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"0fca2ab370dac1142fa441cec2ee41930eac4b7f4cc0496ffb43ffe8ce0a4b9a"} Dec 06 07:13:30 crc kubenswrapper[4895]: I1206 07:13:30.334728 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"663e0971fd031efe73576c4d0575b9ee19ff771a93759108ec40df72da6692c9"} Dec 06 07:13:30 crc kubenswrapper[4895]: I1206 07:13:30.334749 4895 scope.go:117] "RemoveContainer" containerID="beb7c75c8613c3c3c8bdbc63b9af6008f8e0ac1c24d0786d912c67af3bdf0af4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.175371 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mhcxk"] Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.175836 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovn-controller" containerID="cri-o://4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050" gracePeriod=30 Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.176236 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="sbdb" containerID="cri-o://c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83" gracePeriod=30 Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.176274 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="nbdb" containerID="cri-o://2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc" gracePeriod=30 Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.176312 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="northd" containerID="cri-o://1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368" gracePeriod=30 Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.176344 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f" gracePeriod=30 Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.176370 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="kube-rbac-proxy-node" containerID="cri-o://abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975" gracePeriod=30 Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.176400 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovn-acl-logging" containerID="cri-o://3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b" gracePeriod=30 Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.253705 4895 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovnkube-controller" containerID="cri-o://1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0" gracePeriod=30 Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.354413 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovnkube-controller/3.log" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.357243 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovn-acl-logging/0.log" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.357911 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovn-controller/0.log" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.358402 4895 generic.go:334] "Generic (PLEG): container finished" podID="c9690808-de36-4960-8286-7079c78c491b" containerID="34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f" exitCode=0 Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.358426 4895 generic.go:334] "Generic (PLEG): container finished" podID="c9690808-de36-4960-8286-7079c78c491b" containerID="abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975" exitCode=0 Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.358433 4895 generic.go:334] "Generic (PLEG): container finished" podID="c9690808-de36-4960-8286-7079c78c491b" containerID="3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b" exitCode=143 Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.358440 4895 generic.go:334] "Generic (PLEG): container finished" podID="c9690808-de36-4960-8286-7079c78c491b" containerID="4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050" exitCode=143 Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.358488 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerDied","Data":"34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f"} Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.358563 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerDied","Data":"abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975"} Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.358581 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerDied","Data":"3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b"} Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.358594 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerDied","Data":"4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050"} Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.368393 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k86k4_e1f42fc6-54ce-4f49-adbd-545e02a1f322/kube-multus/2.log" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.369042 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-k86k4_e1f42fc6-54ce-4f49-adbd-545e02a1f322/kube-multus/1.log" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.369116 4895 generic.go:334] "Generic (PLEG): container finished" podID="e1f42fc6-54ce-4f49-adbd-545e02a1f322" containerID="5a1fa872656607f4f2f6459bef2c8d3fbd88222220f1eb200e4487d2fcca1c2c" exitCode=2 Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.369155 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k86k4" event={"ID":"e1f42fc6-54ce-4f49-adbd-545e02a1f322","Type":"ContainerDied","Data":"5a1fa872656607f4f2f6459bef2c8d3fbd88222220f1eb200e4487d2fcca1c2c"} Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.369192 4895 scope.go:117] "RemoveContainer" containerID="b39bc82b9c81dd77f354fc01d26f23e263e0bc9145abd83e8b53550b2495c785" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.369631 4895 scope.go:117] "RemoveContainer" containerID="5a1fa872656607f4f2f6459bef2c8d3fbd88222220f1eb200e4487d2fcca1c2c" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.563113 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovnkube-controller/3.log" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.566181 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovn-acl-logging/0.log" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.566763 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovn-controller/0.log" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.567201 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615329 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5rjb4"] Dec 06 07:13:31 crc kubenswrapper[4895]: E1206 07:13:31.615555 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="sbdb" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615569 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="sbdb" Dec 06 07:13:31 crc kubenswrapper[4895]: E1206 07:13:31.615579 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66cddc96-df7a-40d6-907c-b77e5a5cae15" containerName="extract-utilities" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615586 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="66cddc96-df7a-40d6-907c-b77e5a5cae15" containerName="extract-utilities" Dec 06 07:13:31 crc kubenswrapper[4895]: E1206 07:13:31.615596 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66cddc96-df7a-40d6-907c-b77e5a5cae15" containerName="registry-server" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615602 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="66cddc96-df7a-40d6-907c-b77e5a5cae15" containerName="registry-server" Dec 06 07:13:31 crc kubenswrapper[4895]: E1206 07:13:31.615610 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="nbdb" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615616 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="nbdb" Dec 06 07:13:31 crc kubenswrapper[4895]: E1206 07:13:31.615624 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovnkube-controller" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615630 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovnkube-controller" Dec 06 07:13:31 crc kubenswrapper[4895]: E1206 07:13:31.615639 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="kube-rbac-proxy-node" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615645 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="kube-rbac-proxy-node" Dec 06 07:13:31 crc kubenswrapper[4895]: E1206 07:13:31.615651 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="kube-rbac-proxy-ovn-metrics" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615657 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="kube-rbac-proxy-ovn-metrics" Dec 06 07:13:31 crc kubenswrapper[4895]: E1206 07:13:31.615666 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="kubecfg-setup" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615671 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="kubecfg-setup" Dec 06 07:13:31 crc kubenswrapper[4895]: E1206 07:13:31.615678 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="northd" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615684 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="northd" Dec 06 07:13:31 crc kubenswrapper[4895]: E1206 07:13:31.615691 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovnkube-controller" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615697 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovnkube-controller" Dec 06 07:13:31 crc kubenswrapper[4895]: E1206 07:13:31.615706 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovn-controller" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615712 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovn-controller" Dec 06 07:13:31 crc kubenswrapper[4895]: E1206 07:13:31.615718 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovnkube-controller" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615725 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovnkube-controller" Dec 06 07:13:31 crc kubenswrapper[4895]: E1206 07:13:31.615734 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovnkube-controller" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615741 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovnkube-controller" Dec 06 07:13:31 crc kubenswrapper[4895]: E1206 07:13:31.615755 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66cddc96-df7a-40d6-907c-b77e5a5cae15" containerName="extract-content" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615762 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="66cddc96-df7a-40d6-907c-b77e5a5cae15" containerName="extract-content" Dec 06 07:13:31 crc kubenswrapper[4895]: E1206 07:13:31.615771 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovn-acl-logging" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615779 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovn-acl-logging" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615897 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovnkube-controller" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615908 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovn-acl-logging" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615916 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="66cddc96-df7a-40d6-907c-b77e5a5cae15" containerName="registry-server" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615923 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovnkube-controller" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615931 4895 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="kube-rbac-proxy-node"
Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615939 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovnkube-controller"
Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615947 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovn-controller"
Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615957 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovnkube-controller"
Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615965 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="nbdb"
Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615976 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="northd"
Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615984 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="kube-rbac-proxy-ovn-metrics"
Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.615995 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="sbdb"
Dec 06 07:13:31 crc kubenswrapper[4895]: E1206 07:13:31.616077 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovnkube-controller"
Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.616085 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovnkube-controller"
Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.616558 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9690808-de36-4960-8286-7079c78c491b" containerName="ovnkube-controller"
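The cpu_manager, state_mem, and memory_manager lines above are housekeeping: before admitting the new ovnkube-node-5rjb4 pod, the kubelet's resource managers drop per-container CPU and memory accounting left behind by pods that no longer exist (the deleted ovnkube-node-mhcxk and the long-gone marketplace pod), and the E-level severity is cosmetic. Roughly, the bookkeeping is a map keyed by pod UID and container name that gets swept against the set of active pods; a sketch with invented types (the real managers live under pkg/kubelet/cm):

```go
package main

import "fmt"

// containerKey mirrors how the resource managers key their state:
// by pod UID plus container name. The types here are invented for the sketch.
type containerKey struct {
	podUID, container string
}

type staleStateCleaner struct {
	assignments map[containerKey]string // e.g. container -> assigned CPU set
}

// removeStaleState drops entries whose pod is no longer active, which is what
// produces the "RemoveStaleState" / "Deleted CPUSet assignment" lines above.
func (c *staleStateCleaner) removeStaleState(active map[string]bool) {
	for key := range c.assignments {
		if !active[key.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q of pod %q\n",
				key.container, key.podUID)
			delete(c.assignments, key)
		}
	}
}

func main() {
	c := &staleStateCleaner{assignments: map[containerKey]string{
		{podUID: "c9690808-de36-4960-8286-7079c78c491b", container: "northd"}: "0-3",
	}}
	c.removeStaleState(map[string]bool{}) // no active pods: everything is stale
}
```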
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671069 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-cni-bin\") pod \"c9690808-de36-4960-8286-7079c78c491b\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671129 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-run-openvswitch\") pod \"c9690808-de36-4960-8286-7079c78c491b\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671157 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-node-log\") pod \"c9690808-de36-4960-8286-7079c78c491b\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671161 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c9690808-de36-4960-8286-7079c78c491b" (UID: "c9690808-de36-4960-8286-7079c78c491b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671200 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-run-netns\") pod \"c9690808-de36-4960-8286-7079c78c491b\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671205 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c9690808-de36-4960-8286-7079c78c491b" (UID: "c9690808-de36-4960-8286-7079c78c491b"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671232 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-node-log" (OuterVolumeSpecName: "node-log") pod "c9690808-de36-4960-8286-7079c78c491b" (UID: "c9690808-de36-4960-8286-7079c78c491b"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671233 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9690808-de36-4960-8286-7079c78c491b-ovn-node-metrics-cert\") pod \"c9690808-de36-4960-8286-7079c78c491b\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671253 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c9690808-de36-4960-8286-7079c78c491b" (UID: "c9690808-de36-4960-8286-7079c78c491b"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671256 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-etc-openvswitch\") pod \"c9690808-de36-4960-8286-7079c78c491b\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671283 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-log-socket\") pod \"c9690808-de36-4960-8286-7079c78c491b\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671306 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9690808-de36-4960-8286-7079c78c491b-env-overrides\") pod \"c9690808-de36-4960-8286-7079c78c491b\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671332 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-slash\") pod \"c9690808-de36-4960-8286-7079c78c491b\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671345 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-log-socket" (OuterVolumeSpecName: "log-socket") pod "c9690808-de36-4960-8286-7079c78c491b" (UID: "c9690808-de36-4960-8286-7079c78c491b"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671355 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-cni-netd\") pod \"c9690808-de36-4960-8286-7079c78c491b\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671362 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c9690808-de36-4960-8286-7079c78c491b" (UID: "c9690808-de36-4960-8286-7079c78c491b"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671383 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-run-systemd\") pod \"c9690808-de36-4960-8286-7079c78c491b\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671444 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c9690808-de36-4960-8286-7079c78c491b-ovnkube-script-lib\") pod \"c9690808-de36-4960-8286-7079c78c491b\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671474 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-kubelet\") pod \"c9690808-de36-4960-8286-7079c78c491b\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671526 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-var-lib-openvswitch\") pod \"c9690808-de36-4960-8286-7079c78c491b\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671550 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9690808-de36-4960-8286-7079c78c491b-ovnkube-config\") pod \"c9690808-de36-4960-8286-7079c78c491b\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671573 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c9690808-de36-4960-8286-7079c78c491b\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671599 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkc5g\" (UniqueName: \"kubernetes.io/projected/c9690808-de36-4960-8286-7079c78c491b-kube-api-access-lkc5g\") pod \"c9690808-de36-4960-8286-7079c78c491b\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671624 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-run-ovn\") pod \"c9690808-de36-4960-8286-7079c78c491b\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671646 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-run-ovn-kubernetes\") pod \"c9690808-de36-4960-8286-7079c78c491b\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671677 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-systemd-units\") pod \"c9690808-de36-4960-8286-7079c78c491b\" (UID: \"c9690808-de36-4960-8286-7079c78c491b\") " Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671789 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9690808-de36-4960-8286-7079c78c491b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c9690808-de36-4960-8286-7079c78c491b" (UID: "c9690808-de36-4960-8286-7079c78c491b"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671808 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c9690808-de36-4960-8286-7079c78c491b" (UID: "c9690808-de36-4960-8286-7079c78c491b"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671819 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c9690808-de36-4960-8286-7079c78c491b" (UID: "c9690808-de36-4960-8286-7079c78c491b"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671837 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-slash" (OuterVolumeSpecName: "host-slash") pod "c9690808-de36-4960-8286-7079c78c491b" (UID: "c9690808-de36-4960-8286-7079c78c491b"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.671854 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c9690808-de36-4960-8286-7079c78c491b" (UID: "c9690808-de36-4960-8286-7079c78c491b"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.672163 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9690808-de36-4960-8286-7079c78c491b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c9690808-de36-4960-8286-7079c78c491b" (UID: "c9690808-de36-4960-8286-7079c78c491b"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.672192 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c9690808-de36-4960-8286-7079c78c491b" (UID: "c9690808-de36-4960-8286-7079c78c491b"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.672215 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c9690808-de36-4960-8286-7079c78c491b" (UID: "c9690808-de36-4960-8286-7079c78c491b"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.672247 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c9690808-de36-4960-8286-7079c78c491b" (UID: "c9690808-de36-4960-8286-7079c78c491b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.672516 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c9690808-de36-4960-8286-7079c78c491b" (UID: "c9690808-de36-4960-8286-7079c78c491b"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.672618 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9690808-de36-4960-8286-7079c78c491b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c9690808-de36-4960-8286-7079c78c491b" (UID: "c9690808-de36-4960-8286-7079c78c491b"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.672662 4895 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.672678 4895 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.672688 4895 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.672701 4895 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.672713 4895 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.672723 4895 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-node-log\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.672734 4895 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.672744 4895 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.672752 4895 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-log-socket\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.672759 4895 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9690808-de36-4960-8286-7079c78c491b-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.672767 4895 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-slash\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.672774 4895 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.672782 4895 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:31 crc 
kubenswrapper[4895]: I1206 07:13:31.672790 4895 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9690808-de36-4960-8286-7079c78c491b-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.672798 4895 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.676665 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9690808-de36-4960-8286-7079c78c491b-kube-api-access-lkc5g" (OuterVolumeSpecName: "kube-api-access-lkc5g") pod "c9690808-de36-4960-8286-7079c78c491b" (UID: "c9690808-de36-4960-8286-7079c78c491b"). InnerVolumeSpecName "kube-api-access-lkc5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.676924 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9690808-de36-4960-8286-7079c78c491b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c9690808-de36-4960-8286-7079c78c491b" (UID: "c9690808-de36-4960-8286-7079c78c491b"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.684933 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c9690808-de36-4960-8286-7079c78c491b" (UID: "c9690808-de36-4960-8286-7079c78c491b"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774222 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-run-ovn\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774287 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-cni-bin\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774313 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-run-openvswitch\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774345 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-slash\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774377 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-etc-openvswitch\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774407 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-log-socket\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774436 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-var-lib-openvswitch\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774460 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-systemd-units\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774516 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e4c00025-5170-47a8-9f4b-27f69f4ebee1-ovnkube-script-lib\") pod 
\"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774538 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-cni-netd\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774565 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e4c00025-5170-47a8-9f4b-27f69f4ebee1-ovnkube-config\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774592 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-run-systemd\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774617 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk6w6\" (UniqueName: \"kubernetes.io/projected/e4c00025-5170-47a8-9f4b-27f69f4ebee1-kube-api-access-lk6w6\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774636 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-kubelet\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774657 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e4c00025-5170-47a8-9f4b-27f69f4ebee1-env-overrides\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774681 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-run-netns\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774709 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-run-ovn-kubernetes\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774738 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-node-log\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774769 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e4c00025-5170-47a8-9f4b-27f69f4ebee1-ovn-node-metrics-cert\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774805 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774958 4895 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c9690808-de36-4960-8286-7079c78c491b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774972 4895 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774986 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkc5g\" (UniqueName: \"kubernetes.io/projected/c9690808-de36-4960-8286-7079c78c491b-kube-api-access-lkc5g\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.774999 4895 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9690808-de36-4960-8286-7079c78c491b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.775011 4895 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c9690808-de36-4960-8286-7079c78c491b-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.875872 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-run-netns\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.875756 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-run-netns\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.875998 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-run-ovn-kubernetes\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.876095 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-run-ovn-kubernetes\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.876132 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-node-log\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.876158 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e4c00025-5170-47a8-9f4b-27f69f4ebee1-ovn-node-metrics-cert\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.876280 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-node-log\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.877522 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.877315 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.877614 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-run-ovn\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.877645 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-cni-bin\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.877674 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-run-openvswitch\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.877702 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-slash\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.877728 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-etc-openvswitch\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.877748 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-log-socket\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.877751 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-cni-bin\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.877766 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-var-lib-openvswitch\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.877794 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-var-lib-openvswitch\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.877797 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-run-openvswitch\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.877845 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-etc-openvswitch\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.877840 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-run-ovn\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.877846 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-slash\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.877875 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-log-socket\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.877904 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-systemd-units\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.877956 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-systemd-units\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.877983 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e4c00025-5170-47a8-9f4b-27f69f4ebee1-ovnkube-script-lib\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.878019 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-cni-netd\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.878065 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e4c00025-5170-47a8-9f4b-27f69f4ebee1-ovnkube-config\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.878121 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-run-systemd\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.878150 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk6w6\" (UniqueName: \"kubernetes.io/projected/e4c00025-5170-47a8-9f4b-27f69f4ebee1-kube-api-access-lk6w6\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.878170 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-kubelet\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.878197 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e4c00025-5170-47a8-9f4b-27f69f4ebee1-env-overrides\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.878533 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-run-systemd\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.878556 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-cni-netd\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.878656 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e4c00025-5170-47a8-9f4b-27f69f4ebee1-host-kubelet\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.878730 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e4c00025-5170-47a8-9f4b-27f69f4ebee1-env-overrides\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.878798 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e4c00025-5170-47a8-9f4b-27f69f4ebee1-ovnkube-script-lib\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.879173 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e4c00025-5170-47a8-9f4b-27f69f4ebee1-ovnkube-config\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.881138 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e4c00025-5170-47a8-9f4b-27f69f4ebee1-ovn-node-metrics-cert\") pod \"ovnkube-node-5rjb4\" (UID: \"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.896345 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk6w6\" (UniqueName: \"kubernetes.io/projected/e4c00025-5170-47a8-9f4b-27f69f4ebee1-kube-api-access-lk6w6\") pod \"ovnkube-node-5rjb4\" (UID: 
\"e4c00025-5170-47a8-9f4b-27f69f4ebee1\") " pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: I1206 07:13:31.936496 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:31 crc kubenswrapper[4895]: W1206 07:13:31.957833 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4c00025_5170_47a8_9f4b_27f69f4ebee1.slice/crio-44fbfb843d00fce5a94101d64e29668d5cad00830a4ee5a80a91f1e3365850f9 WatchSource:0}: Error finding container 44fbfb843d00fce5a94101d64e29668d5cad00830a4ee5a80a91f1e3365850f9: Status 404 returned error can't find the container with id 44fbfb843d00fce5a94101d64e29668d5cad00830a4ee5a80a91f1e3365850f9 Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.376702 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovnkube-controller/3.log" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.378985 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovn-acl-logging/0.log" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.379438 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhcxk_c9690808-de36-4960-8286-7079c78c491b/ovn-controller/0.log" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.379823 4895 generic.go:334] "Generic (PLEG): container finished" podID="c9690808-de36-4960-8286-7079c78c491b" containerID="1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0" exitCode=0 Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.379865 4895 generic.go:334] "Generic (PLEG): container finished" podID="c9690808-de36-4960-8286-7079c78c491b" containerID="c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83" exitCode=0 Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.379880 4895 generic.go:334] "Generic (PLEG): container finished" podID="c9690808-de36-4960-8286-7079c78c491b" containerID="2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc" exitCode=0 Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.379897 4895 generic.go:334] "Generic (PLEG): container finished" podID="c9690808-de36-4960-8286-7079c78c491b" containerID="1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368" exitCode=0 Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.379920 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerDied","Data":"1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0"} Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.380027 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerDied","Data":"c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83"} Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.379871 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.380077 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerDied","Data":"2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc"} Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.380096 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerDied","Data":"1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368"} Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.380144 4895 scope.go:117] "RemoveContainer" containerID="1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.380231 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhcxk" event={"ID":"c9690808-de36-4960-8286-7079c78c491b","Type":"ContainerDied","Data":"c5a7c33f5aec2194e7988992c181ac9907aad630ffc997958c4ca923a372d11b"} Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.382124 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k86k4_e1f42fc6-54ce-4f49-adbd-545e02a1f322/kube-multus/2.log" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.382197 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k86k4" event={"ID":"e1f42fc6-54ce-4f49-adbd-545e02a1f322","Type":"ContainerStarted","Data":"370665f87c74d404c1cd0b48fd8801e5a395094222dbc780365dc6500ff1eeff"} Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.386517 4895 generic.go:334] "Generic (PLEG): container finished" podID="e4c00025-5170-47a8-9f4b-27f69f4ebee1" containerID="4da95d783825cb0e4773d2949d45a0865e7dcf125c3acb0910da1d9da41ae62c" exitCode=0 Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.386577 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" event={"ID":"e4c00025-5170-47a8-9f4b-27f69f4ebee1","Type":"ContainerDied","Data":"4da95d783825cb0e4773d2949d45a0865e7dcf125c3acb0910da1d9da41ae62c"} Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.386611 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" event={"ID":"e4c00025-5170-47a8-9f4b-27f69f4ebee1","Type":"ContainerStarted","Data":"44fbfb843d00fce5a94101d64e29668d5cad00830a4ee5a80a91f1e3365850f9"} Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.398419 4895 scope.go:117] "RemoveContainer" containerID="adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.440338 4895 scope.go:117] "RemoveContainer" containerID="c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.452407 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mhcxk"] Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.457356 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mhcxk"] Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.481191 4895 scope.go:117] "RemoveContainer" containerID="2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.500115 
4895 scope.go:117] "RemoveContainer" containerID="1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.527515 4895 scope.go:117] "RemoveContainer" containerID="34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.542915 4895 scope.go:117] "RemoveContainer" containerID="abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.565323 4895 scope.go:117] "RemoveContainer" containerID="3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.584558 4895 scope.go:117] "RemoveContainer" containerID="4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.601015 4895 scope.go:117] "RemoveContainer" containerID="9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.635710 4895 scope.go:117] "RemoveContainer" containerID="1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0" Dec 06 07:13:32 crc kubenswrapper[4895]: E1206 07:13:32.638995 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0\": container with ID starting with 1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0 not found: ID does not exist" containerID="1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.639043 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0"} err="failed to get container status \"1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0\": rpc error: code = NotFound desc = could not find container \"1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0\": container with ID starting with 1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.639076 4895 scope.go:117] "RemoveContainer" containerID="adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75" Dec 06 07:13:32 crc kubenswrapper[4895]: E1206 07:13:32.639418 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75\": container with ID starting with adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75 not found: ID does not exist" containerID="adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.639458 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75"} err="failed to get container status \"adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75\": rpc error: code = NotFound desc = could not find container \"adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75\": container with ID starting with adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.639519 4895 
scope.go:117] "RemoveContainer" containerID="c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83" Dec 06 07:13:32 crc kubenswrapper[4895]: E1206 07:13:32.639802 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\": container with ID starting with c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83 not found: ID does not exist" containerID="c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.639829 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83"} err="failed to get container status \"c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\": rpc error: code = NotFound desc = could not find container \"c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\": container with ID starting with c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.639844 4895 scope.go:117] "RemoveContainer" containerID="2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc" Dec 06 07:13:32 crc kubenswrapper[4895]: E1206 07:13:32.640073 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\": container with ID starting with 2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc not found: ID does not exist" containerID="2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.640103 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc"} err="failed to get container status \"2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\": rpc error: code = NotFound desc = could not find container \"2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\": container with ID starting with 2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.640120 4895 scope.go:117] "RemoveContainer" containerID="1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368" Dec 06 07:13:32 crc kubenswrapper[4895]: E1206 07:13:32.640360 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\": container with ID starting with 1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368 not found: ID does not exist" containerID="1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.640382 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368"} err="failed to get container status \"1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\": rpc error: code = NotFound desc = could not find container \"1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\": container with ID starting with 
1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.640395 4895 scope.go:117] "RemoveContainer" containerID="34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f" Dec 06 07:13:32 crc kubenswrapper[4895]: E1206 07:13:32.641294 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\": container with ID starting with 34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f not found: ID does not exist" containerID="34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.641346 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f"} err="failed to get container status \"34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\": rpc error: code = NotFound desc = could not find container \"34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\": container with ID starting with 34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.641366 4895 scope.go:117] "RemoveContainer" containerID="abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975" Dec 06 07:13:32 crc kubenswrapper[4895]: E1206 07:13:32.641877 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\": container with ID starting with abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975 not found: ID does not exist" containerID="abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.641971 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975"} err="failed to get container status \"abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\": rpc error: code = NotFound desc = could not find container \"abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\": container with ID starting with abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.641996 4895 scope.go:117] "RemoveContainer" containerID="3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b" Dec 06 07:13:32 crc kubenswrapper[4895]: E1206 07:13:32.642250 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\": container with ID starting with 3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b not found: ID does not exist" containerID="3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.642284 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b"} err="failed to get container status \"3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\": rpc 
error: code = NotFound desc = could not find container \"3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\": container with ID starting with 3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.642307 4895 scope.go:117] "RemoveContainer" containerID="4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050" Dec 06 07:13:32 crc kubenswrapper[4895]: E1206 07:13:32.642575 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\": container with ID starting with 4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050 not found: ID does not exist" containerID="4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.642601 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050"} err="failed to get container status \"4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\": rpc error: code = NotFound desc = could not find container \"4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\": container with ID starting with 4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.642615 4895 scope.go:117] "RemoveContainer" containerID="9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56" Dec 06 07:13:32 crc kubenswrapper[4895]: E1206 07:13:32.642847 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\": container with ID starting with 9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56 not found: ID does not exist" containerID="9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.642878 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56"} err="failed to get container status \"9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\": rpc error: code = NotFound desc = could not find container \"9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\": container with ID starting with 9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.642898 4895 scope.go:117] "RemoveContainer" containerID="1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.643186 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0"} err="failed to get container status \"1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0\": rpc error: code = NotFound desc = could not find container \"1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0\": container with ID starting with 1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 
07:13:32.643211 4895 scope.go:117] "RemoveContainer" containerID="adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.643779 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75"} err="failed to get container status \"adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75\": rpc error: code = NotFound desc = could not find container \"adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75\": container with ID starting with adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.643797 4895 scope.go:117] "RemoveContainer" containerID="c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.644073 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83"} err="failed to get container status \"c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\": rpc error: code = NotFound desc = could not find container \"c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\": container with ID starting with c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.644095 4895 scope.go:117] "RemoveContainer" containerID="2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.644351 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc"} err="failed to get container status \"2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\": rpc error: code = NotFound desc = could not find container \"2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\": container with ID starting with 2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.644375 4895 scope.go:117] "RemoveContainer" containerID="1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.645305 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368"} err="failed to get container status \"1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\": rpc error: code = NotFound desc = could not find container \"1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\": container with ID starting with 1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.645356 4895 scope.go:117] "RemoveContainer" containerID="34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.645638 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f"} err="failed to get container status 
\"34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\": rpc error: code = NotFound desc = could not find container \"34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\": container with ID starting with 34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.645662 4895 scope.go:117] "RemoveContainer" containerID="abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.645894 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975"} err="failed to get container status \"abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\": rpc error: code = NotFound desc = could not find container \"abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\": container with ID starting with abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.645923 4895 scope.go:117] "RemoveContainer" containerID="3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.646256 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b"} err="failed to get container status \"3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\": rpc error: code = NotFound desc = could not find container \"3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\": container with ID starting with 3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.646285 4895 scope.go:117] "RemoveContainer" containerID="4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.646690 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050"} err="failed to get container status \"4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\": rpc error: code = NotFound desc = could not find container \"4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\": container with ID starting with 4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.646718 4895 scope.go:117] "RemoveContainer" containerID="9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.646987 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56"} err="failed to get container status \"9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\": rpc error: code = NotFound desc = could not find container \"9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\": container with ID starting with 9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.647010 4895 scope.go:117] "RemoveContainer" 
containerID="1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.647265 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0"} err="failed to get container status \"1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0\": rpc error: code = NotFound desc = could not find container \"1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0\": container with ID starting with 1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.647287 4895 scope.go:117] "RemoveContainer" containerID="adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.648010 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75"} err="failed to get container status \"adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75\": rpc error: code = NotFound desc = could not find container \"adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75\": container with ID starting with adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.648034 4895 scope.go:117] "RemoveContainer" containerID="c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.648846 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83"} err="failed to get container status \"c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\": rpc error: code = NotFound desc = could not find container \"c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\": container with ID starting with c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.649061 4895 scope.go:117] "RemoveContainer" containerID="2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.649421 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc"} err="failed to get container status \"2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\": rpc error: code = NotFound desc = could not find container \"2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\": container with ID starting with 2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.649446 4895 scope.go:117] "RemoveContainer" containerID="1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.649843 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368"} err="failed to get container status \"1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\": rpc error: code = NotFound desc = could not find 
container \"1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\": container with ID starting with 1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.649870 4895 scope.go:117] "RemoveContainer" containerID="34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.650347 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f"} err="failed to get container status \"34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\": rpc error: code = NotFound desc = could not find container \"34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\": container with ID starting with 34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.650374 4895 scope.go:117] "RemoveContainer" containerID="abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.650888 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975"} err="failed to get container status \"abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\": rpc error: code = NotFound desc = could not find container \"abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\": container with ID starting with abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.650908 4895 scope.go:117] "RemoveContainer" containerID="3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.651235 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b"} err="failed to get container status \"3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\": rpc error: code = NotFound desc = could not find container \"3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\": container with ID starting with 3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.651287 4895 scope.go:117] "RemoveContainer" containerID="4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.651717 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050"} err="failed to get container status \"4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\": rpc error: code = NotFound desc = could not find container \"4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\": container with ID starting with 4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.651742 4895 scope.go:117] "RemoveContainer" containerID="9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.652019 4895 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56"} err="failed to get container status \"9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\": rpc error: code = NotFound desc = could not find container \"9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\": container with ID starting with 9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.652047 4895 scope.go:117] "RemoveContainer" containerID="1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.652304 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0"} err="failed to get container status \"1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0\": rpc error: code = NotFound desc = could not find container \"1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0\": container with ID starting with 1d01660ca2a2ddeea4dddbba375ee865a784cca0fda87e98ed2045a1e6d3fce0 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.652326 4895 scope.go:117] "RemoveContainer" containerID="adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.652635 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75"} err="failed to get container status \"adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75\": rpc error: code = NotFound desc = could not find container \"adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75\": container with ID starting with adb77e8768175534bab85528f93c114460a03f195695f6acb556c02e01981e75 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.652662 4895 scope.go:117] "RemoveContainer" containerID="c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.653004 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83"} err="failed to get container status \"c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\": rpc error: code = NotFound desc = could not find container \"c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83\": container with ID starting with c708de144c112e6bd7cac6638d1073656fb82425ae2e05f9c9e316c848e5fd83 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.653058 4895 scope.go:117] "RemoveContainer" containerID="2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.653750 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc"} err="failed to get container status \"2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\": rpc error: code = NotFound desc = could not find container \"2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc\": container with ID starting with 
2d9535e58a6bcaf10c85c505f24affacf451c4043e51e00c0487fe0352ebcafc not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.653774 4895 scope.go:117] "RemoveContainer" containerID="1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.654354 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368"} err="failed to get container status \"1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\": rpc error: code = NotFound desc = could not find container \"1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368\": container with ID starting with 1db13390fedb8bc4c74140a8a5ac935c53fc7c9d3186c1a3a3ff66c8a8266368 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.654386 4895 scope.go:117] "RemoveContainer" containerID="34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.654670 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f"} err="failed to get container status \"34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\": rpc error: code = NotFound desc = could not find container \"34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f\": container with ID starting with 34e9ba9aa150cb110792359981df2242706bf1c143f7f53fe69b976033748e0f not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.654697 4895 scope.go:117] "RemoveContainer" containerID="abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.654966 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975"} err="failed to get container status \"abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\": rpc error: code = NotFound desc = could not find container \"abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975\": container with ID starting with abe66bef14ef75743b7b3e9cdba972c99706448c23efe3ec1b064cba0b96b975 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.655033 4895 scope.go:117] "RemoveContainer" containerID="3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.655284 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b"} err="failed to get container status \"3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\": rpc error: code = NotFound desc = could not find container \"3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b\": container with ID starting with 3f4d288edc9723440ea8ad011e7e4a5cc4ea1d5b1b137f4d7d5ceb7e607b396b not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.655311 4895 scope.go:117] "RemoveContainer" containerID="4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.655653 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050"} err="failed to get container status \"4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\": rpc error: code = NotFound desc = could not find container \"4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050\": container with ID starting with 4b3ebeec4167b3eb375cc2f02b51e7ac283d68d0b04d65f2509d20a5aa255050 not found: ID does not exist" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.655678 4895 scope.go:117] "RemoveContainer" containerID="9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56" Dec 06 07:13:32 crc kubenswrapper[4895]: I1206 07:13:32.655885 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56"} err="failed to get container status \"9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\": rpc error: code = NotFound desc = could not find container \"9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56\": container with ID starting with 9c40f64f3c46555525d80545545285f0d673f4755f6b9982d364e2e2b10fdb56 not found: ID does not exist" Dec 06 07:13:33 crc kubenswrapper[4895]: I1206 07:13:33.399143 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" event={"ID":"e4c00025-5170-47a8-9f4b-27f69f4ebee1","Type":"ContainerStarted","Data":"508833a8bc50ba5457aba2e73c00906d31cf3b3ed71d40e315e78f0cff042d40"} Dec 06 07:13:33 crc kubenswrapper[4895]: I1206 07:13:33.399648 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" event={"ID":"e4c00025-5170-47a8-9f4b-27f69f4ebee1","Type":"ContainerStarted","Data":"3b55b97c77985daf50e89e5fc21a9fea1029aad9706be8091e0c19fbd9412026"} Dec 06 07:13:33 crc kubenswrapper[4895]: I1206 07:13:33.399676 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" event={"ID":"e4c00025-5170-47a8-9f4b-27f69f4ebee1","Type":"ContainerStarted","Data":"68af51004574ec0ccdfc1fff0d972413773a741f78b2c47d8777ce4d629e2abc"} Dec 06 07:13:33 crc kubenswrapper[4895]: I1206 07:13:33.399690 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" event={"ID":"e4c00025-5170-47a8-9f4b-27f69f4ebee1","Type":"ContainerStarted","Data":"fe060e15abfc1cc013cf1bc8dc81a63a8a41661982438e03f7f0f2618ce1a48e"} Dec 06 07:13:33 crc kubenswrapper[4895]: I1206 07:13:33.399701 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" event={"ID":"e4c00025-5170-47a8-9f4b-27f69f4ebee1","Type":"ContainerStarted","Data":"a27b606ab7aca44920ec6baf38a04cfa5abfe7bc88f86760643225434cec7556"} Dec 06 07:13:33 crc kubenswrapper[4895]: I1206 07:13:33.399712 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" event={"ID":"e4c00025-5170-47a8-9f4b-27f69f4ebee1","Type":"ContainerStarted","Data":"817dbfe9edadf17b74a10aa5d798b2223a05cc95367fe36e56dea9cb8a57ab22"} Dec 06 07:13:34 crc kubenswrapper[4895]: I1206 07:13:34.061987 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9690808-de36-4960-8286-7079c78c491b" path="/var/lib/kubelet/pods/c9690808-de36-4960-8286-7079c78c491b/volumes" Dec 06 07:13:35 crc kubenswrapper[4895]: I1206 07:13:35.414697 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" event={"ID":"e4c00025-5170-47a8-9f4b-27f69f4ebee1","Type":"ContainerStarted","Data":"9974d9fa77a8190f4c8f9d4ecd8cb44ad3cf85ba97045fcac2ab7d10226d7cd7"} Dec 06 07:13:39 crc kubenswrapper[4895]: I1206 07:13:39.440897 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" event={"ID":"e4c00025-5170-47a8-9f4b-27f69f4ebee1","Type":"ContainerStarted","Data":"33be3784cd4d85c8ae26129ea3e376794f065efc5f3f8b7eb3a91b511666f50c"} Dec 06 07:13:40 crc kubenswrapper[4895]: I1206 07:13:40.445695 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:40 crc kubenswrapper[4895]: I1206 07:13:40.471892 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:40 crc kubenswrapper[4895]: I1206 07:13:40.477372 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" podStartSLOduration=9.477352787 podStartE2EDuration="9.477352787s" podCreationTimestamp="2025-12-06 07:13:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:13:40.471863279 +0000 UTC m=+982.873252159" watchObservedRunningTime="2025-12-06 07:13:40.477352787 +0000 UTC m=+982.878741657" Dec 06 07:13:41 crc kubenswrapper[4895]: I1206 07:13:41.451068 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:41 crc kubenswrapper[4895]: I1206 07:13:41.451397 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:41 crc kubenswrapper[4895]: I1206 07:13:41.477043 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:41 crc kubenswrapper[4895]: I1206 07:13:41.633739 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-ph9wt"] Dec 06 07:13:41 crc kubenswrapper[4895]: I1206 07:13:41.634390 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-ph9wt" Dec 06 07:13:41 crc kubenswrapper[4895]: I1206 07:13:41.636520 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Dec 06 07:13:41 crc kubenswrapper[4895]: I1206 07:13:41.637420 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Dec 06 07:13:41 crc kubenswrapper[4895]: I1206 07:13:41.637546 4895 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-nsgz7" Dec 06 07:13:41 crc kubenswrapper[4895]: I1206 07:13:41.637553 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Dec 06 07:13:41 crc kubenswrapper[4895]: I1206 07:13:41.644156 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-ph9wt"] Dec 06 07:13:41 crc kubenswrapper[4895]: I1206 07:13:41.737743 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/710d7eb5-2fea-47f4-9ed1-954731bda21a-crc-storage\") pod \"crc-storage-crc-ph9wt\" (UID: \"710d7eb5-2fea-47f4-9ed1-954731bda21a\") " pod="crc-storage/crc-storage-crc-ph9wt" Dec 06 07:13:41 crc kubenswrapper[4895]: I1206 07:13:41.737818 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngvz2\" (UniqueName: \"kubernetes.io/projected/710d7eb5-2fea-47f4-9ed1-954731bda21a-kube-api-access-ngvz2\") pod \"crc-storage-crc-ph9wt\" (UID: \"710d7eb5-2fea-47f4-9ed1-954731bda21a\") " pod="crc-storage/crc-storage-crc-ph9wt" Dec 06 07:13:41 crc kubenswrapper[4895]: I1206 07:13:41.737924 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/710d7eb5-2fea-47f4-9ed1-954731bda21a-node-mnt\") pod \"crc-storage-crc-ph9wt\" (UID: \"710d7eb5-2fea-47f4-9ed1-954731bda21a\") " pod="crc-storage/crc-storage-crc-ph9wt" Dec 06 07:13:41 crc kubenswrapper[4895]: I1206 07:13:41.839305 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/710d7eb5-2fea-47f4-9ed1-954731bda21a-crc-storage\") pod \"crc-storage-crc-ph9wt\" (UID: \"710d7eb5-2fea-47f4-9ed1-954731bda21a\") " pod="crc-storage/crc-storage-crc-ph9wt" Dec 06 07:13:41 crc kubenswrapper[4895]: I1206 07:13:41.839388 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngvz2\" (UniqueName: \"kubernetes.io/projected/710d7eb5-2fea-47f4-9ed1-954731bda21a-kube-api-access-ngvz2\") pod \"crc-storage-crc-ph9wt\" (UID: \"710d7eb5-2fea-47f4-9ed1-954731bda21a\") " pod="crc-storage/crc-storage-crc-ph9wt" Dec 06 07:13:41 crc kubenswrapper[4895]: I1206 07:13:41.839420 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/710d7eb5-2fea-47f4-9ed1-954731bda21a-node-mnt\") pod \"crc-storage-crc-ph9wt\" (UID: \"710d7eb5-2fea-47f4-9ed1-954731bda21a\") " pod="crc-storage/crc-storage-crc-ph9wt" Dec 06 07:13:41 crc kubenswrapper[4895]: I1206 07:13:41.839679 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/710d7eb5-2fea-47f4-9ed1-954731bda21a-node-mnt\") pod \"crc-storage-crc-ph9wt\" (UID: \"710d7eb5-2fea-47f4-9ed1-954731bda21a\") " 
pod="crc-storage/crc-storage-crc-ph9wt" Dec 06 07:13:41 crc kubenswrapper[4895]: I1206 07:13:41.840078 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/710d7eb5-2fea-47f4-9ed1-954731bda21a-crc-storage\") pod \"crc-storage-crc-ph9wt\" (UID: \"710d7eb5-2fea-47f4-9ed1-954731bda21a\") " pod="crc-storage/crc-storage-crc-ph9wt" Dec 06 07:13:41 crc kubenswrapper[4895]: I1206 07:13:41.859849 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngvz2\" (UniqueName: \"kubernetes.io/projected/710d7eb5-2fea-47f4-9ed1-954731bda21a-kube-api-access-ngvz2\") pod \"crc-storage-crc-ph9wt\" (UID: \"710d7eb5-2fea-47f4-9ed1-954731bda21a\") " pod="crc-storage/crc-storage-crc-ph9wt" Dec 06 07:13:41 crc kubenswrapper[4895]: I1206 07:13:41.949728 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-ph9wt" Dec 06 07:13:41 crc kubenswrapper[4895]: E1206 07:13:41.982588 4895 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-ph9wt_crc-storage_710d7eb5-2fea-47f4-9ed1-954731bda21a_0(4265d9f1a42e66f3e8746cb36d3b2156ecb7cb0bbf928ad3655506eff1e9afae): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 06 07:13:41 crc kubenswrapper[4895]: E1206 07:13:41.982917 4895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-ph9wt_crc-storage_710d7eb5-2fea-47f4-9ed1-954731bda21a_0(4265d9f1a42e66f3e8746cb36d3b2156ecb7cb0bbf928ad3655506eff1e9afae): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-ph9wt" Dec 06 07:13:41 crc kubenswrapper[4895]: E1206 07:13:41.982939 4895 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-ph9wt_crc-storage_710d7eb5-2fea-47f4-9ed1-954731bda21a_0(4265d9f1a42e66f3e8746cb36d3b2156ecb7cb0bbf928ad3655506eff1e9afae): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-ph9wt" Dec 06 07:13:41 crc kubenswrapper[4895]: E1206 07:13:41.982984 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-ph9wt_crc-storage(710d7eb5-2fea-47f4-9ed1-954731bda21a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-ph9wt_crc-storage(710d7eb5-2fea-47f4-9ed1-954731bda21a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-ph9wt_crc-storage_710d7eb5-2fea-47f4-9ed1-954731bda21a_0(4265d9f1a42e66f3e8746cb36d3b2156ecb7cb0bbf928ad3655506eff1e9afae): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-ph9wt" podUID="710d7eb5-2fea-47f4-9ed1-954731bda21a" Dec 06 07:13:42 crc kubenswrapper[4895]: I1206 07:13:42.455464 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-ph9wt" Dec 06 07:13:42 crc kubenswrapper[4895]: I1206 07:13:42.455899 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-ph9wt" Dec 06 07:13:42 crc kubenswrapper[4895]: I1206 07:13:42.670189 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-ph9wt"] Dec 06 07:13:42 crc kubenswrapper[4895]: W1206 07:13:42.675102 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod710d7eb5_2fea_47f4_9ed1_954731bda21a.slice/crio-e6a763bfbc0251877f820c491f9d1703ba1c61b98cbf8dc7f7a434855bd6965d WatchSource:0}: Error finding container e6a763bfbc0251877f820c491f9d1703ba1c61b98cbf8dc7f7a434855bd6965d: Status 404 returned error can't find the container with id e6a763bfbc0251877f820c491f9d1703ba1c61b98cbf8dc7f7a434855bd6965d Dec 06 07:13:43 crc kubenswrapper[4895]: I1206 07:13:43.461550 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-ph9wt" event={"ID":"710d7eb5-2fea-47f4-9ed1-954731bda21a","Type":"ContainerStarted","Data":"e6a763bfbc0251877f820c491f9d1703ba1c61b98cbf8dc7f7a434855bd6965d"} Dec 06 07:13:43 crc kubenswrapper[4895]: I1206 07:13:43.479920 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5rjb4" Dec 06 07:13:45 crc kubenswrapper[4895]: I1206 07:13:45.473005 4895 generic.go:334] "Generic (PLEG): container finished" podID="710d7eb5-2fea-47f4-9ed1-954731bda21a" containerID="ef561be9e9d7bd59c84adeee5c6906541c78ecc9cce4b63ee4f5c7fbf36f5ad4" exitCode=0 Dec 06 07:13:45 crc kubenswrapper[4895]: I1206 07:13:45.473071 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-ph9wt" event={"ID":"710d7eb5-2fea-47f4-9ed1-954731bda21a","Type":"ContainerDied","Data":"ef561be9e9d7bd59c84adeee5c6906541c78ecc9cce4b63ee4f5c7fbf36f5ad4"} Dec 06 07:13:46 crc kubenswrapper[4895]: I1206 07:13:46.687651 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-ph9wt" Dec 06 07:13:46 crc kubenswrapper[4895]: I1206 07:13:46.800175 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/710d7eb5-2fea-47f4-9ed1-954731bda21a-crc-storage\") pod \"710d7eb5-2fea-47f4-9ed1-954731bda21a\" (UID: \"710d7eb5-2fea-47f4-9ed1-954731bda21a\") " Dec 06 07:13:46 crc kubenswrapper[4895]: I1206 07:13:46.800307 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/710d7eb5-2fea-47f4-9ed1-954731bda21a-node-mnt\") pod \"710d7eb5-2fea-47f4-9ed1-954731bda21a\" (UID: \"710d7eb5-2fea-47f4-9ed1-954731bda21a\") " Dec 06 07:13:46 crc kubenswrapper[4895]: I1206 07:13:46.800368 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvz2\" (UniqueName: \"kubernetes.io/projected/710d7eb5-2fea-47f4-9ed1-954731bda21a-kube-api-access-ngvz2\") pod \"710d7eb5-2fea-47f4-9ed1-954731bda21a\" (UID: \"710d7eb5-2fea-47f4-9ed1-954731bda21a\") " Dec 06 07:13:46 crc kubenswrapper[4895]: I1206 07:13:46.800368 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/710d7eb5-2fea-47f4-9ed1-954731bda21a-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "710d7eb5-2fea-47f4-9ed1-954731bda21a" (UID: "710d7eb5-2fea-47f4-9ed1-954731bda21a"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:13:46 crc kubenswrapper[4895]: I1206 07:13:46.800636 4895 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/710d7eb5-2fea-47f4-9ed1-954731bda21a-node-mnt\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:46 crc kubenswrapper[4895]: I1206 07:13:46.804814 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/710d7eb5-2fea-47f4-9ed1-954731bda21a-kube-api-access-ngvz2" (OuterVolumeSpecName: "kube-api-access-ngvz2") pod "710d7eb5-2fea-47f4-9ed1-954731bda21a" (UID: "710d7eb5-2fea-47f4-9ed1-954731bda21a"). InnerVolumeSpecName "kube-api-access-ngvz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:13:46 crc kubenswrapper[4895]: I1206 07:13:46.812514 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/710d7eb5-2fea-47f4-9ed1-954731bda21a-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "710d7eb5-2fea-47f4-9ed1-954731bda21a" (UID: "710d7eb5-2fea-47f4-9ed1-954731bda21a"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:13:46 crc kubenswrapper[4895]: I1206 07:13:46.902119 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvz2\" (UniqueName: \"kubernetes.io/projected/710d7eb5-2fea-47f4-9ed1-954731bda21a-kube-api-access-ngvz2\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:46 crc kubenswrapper[4895]: I1206 07:13:46.902159 4895 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/710d7eb5-2fea-47f4-9ed1-954731bda21a-crc-storage\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:47 crc kubenswrapper[4895]: I1206 07:13:47.486430 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-ph9wt" event={"ID":"710d7eb5-2fea-47f4-9ed1-954731bda21a","Type":"ContainerDied","Data":"e6a763bfbc0251877f820c491f9d1703ba1c61b98cbf8dc7f7a434855bd6965d"} Dec 06 07:13:47 crc kubenswrapper[4895]: I1206 07:13:47.486506 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6a763bfbc0251877f820c491f9d1703ba1c61b98cbf8dc7f7a434855bd6965d" Dec 06 07:13:47 crc kubenswrapper[4895]: I1206 07:13:47.486506 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-ph9wt" Dec 06 07:13:54 crc kubenswrapper[4895]: I1206 07:13:54.015796 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs"] Dec 06 07:13:54 crc kubenswrapper[4895]: E1206 07:13:54.016549 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="710d7eb5-2fea-47f4-9ed1-954731bda21a" containerName="storage" Dec 06 07:13:54 crc kubenswrapper[4895]: I1206 07:13:54.016564 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="710d7eb5-2fea-47f4-9ed1-954731bda21a" containerName="storage" Dec 06 07:13:54 crc kubenswrapper[4895]: I1206 07:13:54.016682 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="710d7eb5-2fea-47f4-9ed1-954731bda21a" containerName="storage" Dec 06 07:13:54 crc kubenswrapper[4895]: I1206 07:13:54.017598 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs" Dec 06 07:13:54 crc kubenswrapper[4895]: I1206 07:13:54.025258 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs"] Dec 06 07:13:54 crc kubenswrapper[4895]: I1206 07:13:54.025420 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 06 07:13:54 crc kubenswrapper[4895]: I1206 07:13:54.102642 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0dd66b27-5979-489a-8356-cb6d42b23c3a-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs\" (UID: \"0dd66b27-5979-489a-8356-cb6d42b23c3a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs" Dec 06 07:13:54 crc kubenswrapper[4895]: I1206 07:13:54.102741 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h6dt\" (UniqueName: \"kubernetes.io/projected/0dd66b27-5979-489a-8356-cb6d42b23c3a-kube-api-access-2h6dt\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs\" (UID: \"0dd66b27-5979-489a-8356-cb6d42b23c3a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs" Dec 06 07:13:54 crc kubenswrapper[4895]: I1206 07:13:54.102800 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0dd66b27-5979-489a-8356-cb6d42b23c3a-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs\" (UID: \"0dd66b27-5979-489a-8356-cb6d42b23c3a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs" Dec 06 07:13:54 crc kubenswrapper[4895]: I1206 07:13:54.203988 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0dd66b27-5979-489a-8356-cb6d42b23c3a-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs\" (UID: \"0dd66b27-5979-489a-8356-cb6d42b23c3a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs" Dec 06 07:13:54 crc kubenswrapper[4895]: I1206 07:13:54.204063 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h6dt\" (UniqueName: \"kubernetes.io/projected/0dd66b27-5979-489a-8356-cb6d42b23c3a-kube-api-access-2h6dt\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs\" (UID: \"0dd66b27-5979-489a-8356-cb6d42b23c3a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs" Dec 06 07:13:54 crc kubenswrapper[4895]: I1206 07:13:54.204100 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0dd66b27-5979-489a-8356-cb6d42b23c3a-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs\" (UID: \"0dd66b27-5979-489a-8356-cb6d42b23c3a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs" Dec 06 07:13:54 crc kubenswrapper[4895]: I1206 07:13:54.205044 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/0dd66b27-5979-489a-8356-cb6d42b23c3a-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs\" (UID: \"0dd66b27-5979-489a-8356-cb6d42b23c3a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs" Dec 06 07:13:54 crc kubenswrapper[4895]: I1206 07:13:54.205139 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0dd66b27-5979-489a-8356-cb6d42b23c3a-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs\" (UID: \"0dd66b27-5979-489a-8356-cb6d42b23c3a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs" Dec 06 07:13:54 crc kubenswrapper[4895]: I1206 07:13:54.232557 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h6dt\" (UniqueName: \"kubernetes.io/projected/0dd66b27-5979-489a-8356-cb6d42b23c3a-kube-api-access-2h6dt\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs\" (UID: \"0dd66b27-5979-489a-8356-cb6d42b23c3a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs" Dec 06 07:13:54 crc kubenswrapper[4895]: I1206 07:13:54.333111 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs" Dec 06 07:13:54 crc kubenswrapper[4895]: I1206 07:13:54.761269 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs"] Dec 06 07:13:54 crc kubenswrapper[4895]: W1206 07:13:54.777633 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dd66b27_5979_489a_8356_cb6d42b23c3a.slice/crio-7ac441a02fb6bf3ddd762c7a857a1e169a922a79c19f1016bf0cc3db103661f3 WatchSource:0}: Error finding container 7ac441a02fb6bf3ddd762c7a857a1e169a922a79c19f1016bf0cc3db103661f3: Status 404 returned error can't find the container with id 7ac441a02fb6bf3ddd762c7a857a1e169a922a79c19f1016bf0cc3db103661f3 Dec 06 07:13:55 crc kubenswrapper[4895]: I1206 07:13:55.528611 4895 generic.go:334] "Generic (PLEG): container finished" podID="0dd66b27-5979-489a-8356-cb6d42b23c3a" containerID="46d9ecd1a080287984340189cb1b10dff807b0096947b86868b40b91f37dc6d6" exitCode=0 Dec 06 07:13:55 crc kubenswrapper[4895]: I1206 07:13:55.529010 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs" event={"ID":"0dd66b27-5979-489a-8356-cb6d42b23c3a","Type":"ContainerDied","Data":"46d9ecd1a080287984340189cb1b10dff807b0096947b86868b40b91f37dc6d6"} Dec 06 07:13:55 crc kubenswrapper[4895]: I1206 07:13:55.529040 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs" event={"ID":"0dd66b27-5979-489a-8356-cb6d42b23c3a","Type":"ContainerStarted","Data":"7ac441a02fb6bf3ddd762c7a857a1e169a922a79c19f1016bf0cc3db103661f3"} Dec 06 07:13:56 crc kubenswrapper[4895]: I1206 07:13:56.804505 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rvb25"] Dec 06 07:13:56 crc kubenswrapper[4895]: I1206 07:13:56.809211 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvb25" Dec 06 07:13:56 crc kubenswrapper[4895]: I1206 07:13:56.822980 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvb25"] Dec 06 07:13:56 crc kubenswrapper[4895]: I1206 07:13:56.835635 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zgrz\" (UniqueName: \"kubernetes.io/projected/12b751e8-0620-43e3-b2fb-f38e7eaa0ffc-kube-api-access-2zgrz\") pod \"redhat-operators-rvb25\" (UID: \"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc\") " pod="openshift-marketplace/redhat-operators-rvb25" Dec 06 07:13:56 crc kubenswrapper[4895]: I1206 07:13:56.835699 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12b751e8-0620-43e3-b2fb-f38e7eaa0ffc-catalog-content\") pod \"redhat-operators-rvb25\" (UID: \"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc\") " pod="openshift-marketplace/redhat-operators-rvb25" Dec 06 07:13:56 crc kubenswrapper[4895]: I1206 07:13:56.835734 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12b751e8-0620-43e3-b2fb-f38e7eaa0ffc-utilities\") pod \"redhat-operators-rvb25\" (UID: \"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc\") " pod="openshift-marketplace/redhat-operators-rvb25" Dec 06 07:13:56 crc kubenswrapper[4895]: I1206 07:13:56.936546 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12b751e8-0620-43e3-b2fb-f38e7eaa0ffc-catalog-content\") pod \"redhat-operators-rvb25\" (UID: \"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc\") " pod="openshift-marketplace/redhat-operators-rvb25" Dec 06 07:13:56 crc kubenswrapper[4895]: I1206 07:13:56.936908 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12b751e8-0620-43e3-b2fb-f38e7eaa0ffc-utilities\") pod \"redhat-operators-rvb25\" (UID: \"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc\") " pod="openshift-marketplace/redhat-operators-rvb25" Dec 06 07:13:56 crc kubenswrapper[4895]: I1206 07:13:56.937052 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zgrz\" (UniqueName: \"kubernetes.io/projected/12b751e8-0620-43e3-b2fb-f38e7eaa0ffc-kube-api-access-2zgrz\") pod \"redhat-operators-rvb25\" (UID: \"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc\") " pod="openshift-marketplace/redhat-operators-rvb25" Dec 06 07:13:56 crc kubenswrapper[4895]: I1206 07:13:56.937096 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12b751e8-0620-43e3-b2fb-f38e7eaa0ffc-catalog-content\") pod \"redhat-operators-rvb25\" (UID: \"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc\") " pod="openshift-marketplace/redhat-operators-rvb25" Dec 06 07:13:56 crc kubenswrapper[4895]: I1206 07:13:56.937325 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12b751e8-0620-43e3-b2fb-f38e7eaa0ffc-utilities\") pod \"redhat-operators-rvb25\" (UID: \"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc\") " pod="openshift-marketplace/redhat-operators-rvb25" Dec 06 07:13:56 crc kubenswrapper[4895]: I1206 07:13:56.957323 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2zgrz\" (UniqueName: \"kubernetes.io/projected/12b751e8-0620-43e3-b2fb-f38e7eaa0ffc-kube-api-access-2zgrz\") pod \"redhat-operators-rvb25\" (UID: \"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc\") " pod="openshift-marketplace/redhat-operators-rvb25" Dec 06 07:13:57 crc kubenswrapper[4895]: I1206 07:13:57.147878 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvb25" Dec 06 07:13:57 crc kubenswrapper[4895]: I1206 07:13:57.342080 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvb25"] Dec 06 07:13:57 crc kubenswrapper[4895]: W1206 07:13:57.362712 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12b751e8_0620_43e3_b2fb_f38e7eaa0ffc.slice/crio-af1bf57593a46b6605b897945afdd584ca073a6226bc4200c5bccdabf3e129bd WatchSource:0}: Error finding container af1bf57593a46b6605b897945afdd584ca073a6226bc4200c5bccdabf3e129bd: Status 404 returned error can't find the container with id af1bf57593a46b6605b897945afdd584ca073a6226bc4200c5bccdabf3e129bd Dec 06 07:13:57 crc kubenswrapper[4895]: I1206 07:13:57.540711 4895 generic.go:334] "Generic (PLEG): container finished" podID="12b751e8-0620-43e3-b2fb-f38e7eaa0ffc" containerID="6e0bf32e3ed4363033b3a7cc47ff919d682bafc81bdb946d676843a1fdacfef6" exitCode=0 Dec 06 07:13:57 crc kubenswrapper[4895]: I1206 07:13:57.540797 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvb25" event={"ID":"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc","Type":"ContainerDied","Data":"6e0bf32e3ed4363033b3a7cc47ff919d682bafc81bdb946d676843a1fdacfef6"} Dec 06 07:13:57 crc kubenswrapper[4895]: I1206 07:13:57.540830 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvb25" event={"ID":"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc","Type":"ContainerStarted","Data":"af1bf57593a46b6605b897945afdd584ca073a6226bc4200c5bccdabf3e129bd"} Dec 06 07:13:57 crc kubenswrapper[4895]: I1206 07:13:57.542675 4895 generic.go:334] "Generic (PLEG): container finished" podID="0dd66b27-5979-489a-8356-cb6d42b23c3a" containerID="e1f2dc020550bb697a914bba50fab6bfa1f2efd3294ed170e463b1b10a7fd060" exitCode=0 Dec 06 07:13:57 crc kubenswrapper[4895]: I1206 07:13:57.542706 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs" event={"ID":"0dd66b27-5979-489a-8356-cb6d42b23c3a","Type":"ContainerDied","Data":"e1f2dc020550bb697a914bba50fab6bfa1f2efd3294ed170e463b1b10a7fd060"} Dec 06 07:13:58 crc kubenswrapper[4895]: I1206 07:13:58.404121 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mkdbw"] Dec 06 07:13:58 crc kubenswrapper[4895]: I1206 07:13:58.405405 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mkdbw" Dec 06 07:13:58 crc kubenswrapper[4895]: I1206 07:13:58.422146 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mkdbw"] Dec 06 07:13:58 crc kubenswrapper[4895]: I1206 07:13:58.455969 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39444e8-27f7-4798-909f-c22ed5bc9384-catalog-content\") pod \"certified-operators-mkdbw\" (UID: \"e39444e8-27f7-4798-909f-c22ed5bc9384\") " pod="openshift-marketplace/certified-operators-mkdbw" Dec 06 07:13:58 crc kubenswrapper[4895]: I1206 07:13:58.456037 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87b64\" (UniqueName: \"kubernetes.io/projected/e39444e8-27f7-4798-909f-c22ed5bc9384-kube-api-access-87b64\") pod \"certified-operators-mkdbw\" (UID: \"e39444e8-27f7-4798-909f-c22ed5bc9384\") " pod="openshift-marketplace/certified-operators-mkdbw" Dec 06 07:13:58 crc kubenswrapper[4895]: I1206 07:13:58.456172 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39444e8-27f7-4798-909f-c22ed5bc9384-utilities\") pod \"certified-operators-mkdbw\" (UID: \"e39444e8-27f7-4798-909f-c22ed5bc9384\") " pod="openshift-marketplace/certified-operators-mkdbw" Dec 06 07:13:58 crc kubenswrapper[4895]: I1206 07:13:58.550024 4895 generic.go:334] "Generic (PLEG): container finished" podID="0dd66b27-5979-489a-8356-cb6d42b23c3a" containerID="cf16104ab97bee8eb5100bc18bbdd08851abd7d7fdaa2d08dc40b4740e63f7e2" exitCode=0 Dec 06 07:13:58 crc kubenswrapper[4895]: I1206 07:13:58.550092 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs" event={"ID":"0dd66b27-5979-489a-8356-cb6d42b23c3a","Type":"ContainerDied","Data":"cf16104ab97bee8eb5100bc18bbdd08851abd7d7fdaa2d08dc40b4740e63f7e2"} Dec 06 07:13:58 crc kubenswrapper[4895]: I1206 07:13:58.551714 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvb25" event={"ID":"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc","Type":"ContainerStarted","Data":"7e9b32886768a2d4ac437897209d0e86eaf3c8002112e6cd99e5b6e1d8ec4f21"} Dec 06 07:13:58 crc kubenswrapper[4895]: I1206 07:13:58.557744 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39444e8-27f7-4798-909f-c22ed5bc9384-catalog-content\") pod \"certified-operators-mkdbw\" (UID: \"e39444e8-27f7-4798-909f-c22ed5bc9384\") " pod="openshift-marketplace/certified-operators-mkdbw" Dec 06 07:13:58 crc kubenswrapper[4895]: I1206 07:13:58.557829 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87b64\" (UniqueName: \"kubernetes.io/projected/e39444e8-27f7-4798-909f-c22ed5bc9384-kube-api-access-87b64\") pod \"certified-operators-mkdbw\" (UID: \"e39444e8-27f7-4798-909f-c22ed5bc9384\") " pod="openshift-marketplace/certified-operators-mkdbw" Dec 06 07:13:58 crc kubenswrapper[4895]: I1206 07:13:58.557868 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39444e8-27f7-4798-909f-c22ed5bc9384-utilities\") pod \"certified-operators-mkdbw\" (UID: 
\"e39444e8-27f7-4798-909f-c22ed5bc9384\") " pod="openshift-marketplace/certified-operators-mkdbw" Dec 06 07:13:58 crc kubenswrapper[4895]: I1206 07:13:58.558200 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39444e8-27f7-4798-909f-c22ed5bc9384-catalog-content\") pod \"certified-operators-mkdbw\" (UID: \"e39444e8-27f7-4798-909f-c22ed5bc9384\") " pod="openshift-marketplace/certified-operators-mkdbw" Dec 06 07:13:58 crc kubenswrapper[4895]: I1206 07:13:58.558223 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39444e8-27f7-4798-909f-c22ed5bc9384-utilities\") pod \"certified-operators-mkdbw\" (UID: \"e39444e8-27f7-4798-909f-c22ed5bc9384\") " pod="openshift-marketplace/certified-operators-mkdbw" Dec 06 07:13:58 crc kubenswrapper[4895]: I1206 07:13:58.585387 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87b64\" (UniqueName: \"kubernetes.io/projected/e39444e8-27f7-4798-909f-c22ed5bc9384-kube-api-access-87b64\") pod \"certified-operators-mkdbw\" (UID: \"e39444e8-27f7-4798-909f-c22ed5bc9384\") " pod="openshift-marketplace/certified-operators-mkdbw" Dec 06 07:13:58 crc kubenswrapper[4895]: I1206 07:13:58.717813 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkdbw" Dec 06 07:13:58 crc kubenswrapper[4895]: I1206 07:13:58.973847 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mkdbw"] Dec 06 07:13:59 crc kubenswrapper[4895]: I1206 07:13:59.557449 4895 generic.go:334] "Generic (PLEG): container finished" podID="e39444e8-27f7-4798-909f-c22ed5bc9384" containerID="cf9b6c395f826cfb9081be59bcd1b94dd59585b9d1b12a8b49259e4e5f4a579b" exitCode=0 Dec 06 07:13:59 crc kubenswrapper[4895]: I1206 07:13:59.557521 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkdbw" event={"ID":"e39444e8-27f7-4798-909f-c22ed5bc9384","Type":"ContainerDied","Data":"cf9b6c395f826cfb9081be59bcd1b94dd59585b9d1b12a8b49259e4e5f4a579b"} Dec 06 07:13:59 crc kubenswrapper[4895]: I1206 07:13:59.557855 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkdbw" event={"ID":"e39444e8-27f7-4798-909f-c22ed5bc9384","Type":"ContainerStarted","Data":"3c0fbeddcc4d2be894952336befb0e09e46c6112ab8b03c51067538464f17735"} Dec 06 07:13:59 crc kubenswrapper[4895]: I1206 07:13:59.559645 4895 generic.go:334] "Generic (PLEG): container finished" podID="12b751e8-0620-43e3-b2fb-f38e7eaa0ffc" containerID="7e9b32886768a2d4ac437897209d0e86eaf3c8002112e6cd99e5b6e1d8ec4f21" exitCode=0 Dec 06 07:13:59 crc kubenswrapper[4895]: I1206 07:13:59.559696 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvb25" event={"ID":"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc","Type":"ContainerDied","Data":"7e9b32886768a2d4ac437897209d0e86eaf3c8002112e6cd99e5b6e1d8ec4f21"} Dec 06 07:13:59 crc kubenswrapper[4895]: I1206 07:13:59.775992 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs" Dec 06 07:13:59 crc kubenswrapper[4895]: I1206 07:13:59.873822 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0dd66b27-5979-489a-8356-cb6d42b23c3a-bundle\") pod \"0dd66b27-5979-489a-8356-cb6d42b23c3a\" (UID: \"0dd66b27-5979-489a-8356-cb6d42b23c3a\") " Dec 06 07:13:59 crc kubenswrapper[4895]: I1206 07:13:59.873977 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h6dt\" (UniqueName: \"kubernetes.io/projected/0dd66b27-5979-489a-8356-cb6d42b23c3a-kube-api-access-2h6dt\") pod \"0dd66b27-5979-489a-8356-cb6d42b23c3a\" (UID: \"0dd66b27-5979-489a-8356-cb6d42b23c3a\") " Dec 06 07:13:59 crc kubenswrapper[4895]: I1206 07:13:59.874018 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0dd66b27-5979-489a-8356-cb6d42b23c3a-util\") pod \"0dd66b27-5979-489a-8356-cb6d42b23c3a\" (UID: \"0dd66b27-5979-489a-8356-cb6d42b23c3a\") " Dec 06 07:13:59 crc kubenswrapper[4895]: I1206 07:13:59.875114 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dd66b27-5979-489a-8356-cb6d42b23c3a-bundle" (OuterVolumeSpecName: "bundle") pod "0dd66b27-5979-489a-8356-cb6d42b23c3a" (UID: "0dd66b27-5979-489a-8356-cb6d42b23c3a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:13:59 crc kubenswrapper[4895]: I1206 07:13:59.882840 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd66b27-5979-489a-8356-cb6d42b23c3a-kube-api-access-2h6dt" (OuterVolumeSpecName: "kube-api-access-2h6dt") pod "0dd66b27-5979-489a-8356-cb6d42b23c3a" (UID: "0dd66b27-5979-489a-8356-cb6d42b23c3a"). InnerVolumeSpecName "kube-api-access-2h6dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:13:59 crc kubenswrapper[4895]: I1206 07:13:59.889663 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dd66b27-5979-489a-8356-cb6d42b23c3a-util" (OuterVolumeSpecName: "util") pod "0dd66b27-5979-489a-8356-cb6d42b23c3a" (UID: "0dd66b27-5979-489a-8356-cb6d42b23c3a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:13:59 crc kubenswrapper[4895]: I1206 07:13:59.975358 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h6dt\" (UniqueName: \"kubernetes.io/projected/0dd66b27-5979-489a-8356-cb6d42b23c3a-kube-api-access-2h6dt\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:59 crc kubenswrapper[4895]: I1206 07:13:59.975390 4895 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0dd66b27-5979-489a-8356-cb6d42b23c3a-util\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:59 crc kubenswrapper[4895]: I1206 07:13:59.975399 4895 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0dd66b27-5979-489a-8356-cb6d42b23c3a-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:14:00 crc kubenswrapper[4895]: I1206 07:14:00.569182 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkdbw" event={"ID":"e39444e8-27f7-4798-909f-c22ed5bc9384","Type":"ContainerStarted","Data":"2322f7e4fbf61b5087d76e72994d7f84d1c3fb5c76963a013686da4eeb8d5002"} Dec 06 07:14:00 crc kubenswrapper[4895]: I1206 07:14:00.574464 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvb25" event={"ID":"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc","Type":"ContainerStarted","Data":"b72e6f16d920cc5f2dab79dfa96b006d9b1ba340055e80ffcaf6b28c37a3a93c"} Dec 06 07:14:00 crc kubenswrapper[4895]: I1206 07:14:00.578071 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs" Dec 06 07:14:00 crc kubenswrapper[4895]: I1206 07:14:00.581896 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs" event={"ID":"0dd66b27-5979-489a-8356-cb6d42b23c3a","Type":"ContainerDied","Data":"7ac441a02fb6bf3ddd762c7a857a1e169a922a79c19f1016bf0cc3db103661f3"} Dec 06 07:14:00 crc kubenswrapper[4895]: I1206 07:14:00.581980 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ac441a02fb6bf3ddd762c7a857a1e169a922a79c19f1016bf0cc3db103661f3" Dec 06 07:14:00 crc kubenswrapper[4895]: I1206 07:14:00.617688 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rvb25" podStartSLOduration=2.024861518 podStartE2EDuration="4.61765931s" podCreationTimestamp="2025-12-06 07:13:56 +0000 UTC" firstStartedPulling="2025-12-06 07:13:57.542108993 +0000 UTC m=+999.943497863" lastFinishedPulling="2025-12-06 07:14:00.134906785 +0000 UTC m=+1002.536295655" observedRunningTime="2025-12-06 07:14:00.61539142 +0000 UTC m=+1003.016780310" watchObservedRunningTime="2025-12-06 07:14:00.61765931 +0000 UTC m=+1003.019048180" Dec 06 07:14:01 crc kubenswrapper[4895]: I1206 07:14:01.587504 4895 generic.go:334] "Generic (PLEG): container finished" podID="e39444e8-27f7-4798-909f-c22ed5bc9384" containerID="2322f7e4fbf61b5087d76e72994d7f84d1c3fb5c76963a013686da4eeb8d5002" exitCode=0 Dec 06 07:14:01 crc kubenswrapper[4895]: I1206 07:14:01.587628 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkdbw" event={"ID":"e39444e8-27f7-4798-909f-c22ed5bc9384","Type":"ContainerDied","Data":"2322f7e4fbf61b5087d76e72994d7f84d1c3fb5c76963a013686da4eeb8d5002"} Dec 06 07:14:02 crc kubenswrapper[4895]: I1206 
07:14:02.006246 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zvzj8"]
Dec 06 07:14:02 crc kubenswrapper[4895]: E1206 07:14:02.006897 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dd66b27-5979-489a-8356-cb6d42b23c3a" containerName="extract"
Dec 06 07:14:02 crc kubenswrapper[4895]: I1206 07:14:02.006922 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dd66b27-5979-489a-8356-cb6d42b23c3a" containerName="extract"
Dec 06 07:14:02 crc kubenswrapper[4895]: E1206 07:14:02.006945 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dd66b27-5979-489a-8356-cb6d42b23c3a" containerName="util"
Dec 06 07:14:02 crc kubenswrapper[4895]: I1206 07:14:02.006953 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dd66b27-5979-489a-8356-cb6d42b23c3a" containerName="util"
Dec 06 07:14:02 crc kubenswrapper[4895]: E1206 07:14:02.006968 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dd66b27-5979-489a-8356-cb6d42b23c3a" containerName="pull"
Dec 06 07:14:02 crc kubenswrapper[4895]: I1206 07:14:02.006976 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dd66b27-5979-489a-8356-cb6d42b23c3a" containerName="pull"
Dec 06 07:14:02 crc kubenswrapper[4895]: I1206 07:14:02.007132 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dd66b27-5979-489a-8356-cb6d42b23c3a" containerName="extract"
Dec 06 07:14:02 crc kubenswrapper[4895]: I1206 07:14:02.008004 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zvzj8"
Dec 06 07:14:02 crc kubenswrapper[4895]: I1206 07:14:02.028569 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zvzj8"]
Dec 06 07:14:02 crc kubenswrapper[4895]: I1206 07:14:02.103069 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f53437b3-a833-4d54-a668-c47e74c73551-utilities\") pod \"community-operators-zvzj8\" (UID: \"f53437b3-a833-4d54-a668-c47e74c73551\") " pod="openshift-marketplace/community-operators-zvzj8"
Dec 06 07:14:02 crc kubenswrapper[4895]: I1206 07:14:02.103265 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f53437b3-a833-4d54-a668-c47e74c73551-catalog-content\") pod \"community-operators-zvzj8\" (UID: \"f53437b3-a833-4d54-a668-c47e74c73551\") " pod="openshift-marketplace/community-operators-zvzj8"
Dec 06 07:14:02 crc kubenswrapper[4895]: I1206 07:14:02.103285 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjdns\" (UniqueName: \"kubernetes.io/projected/f53437b3-a833-4d54-a668-c47e74c73551-kube-api-access-kjdns\") pod \"community-operators-zvzj8\" (UID: \"f53437b3-a833-4d54-a668-c47e74c73551\") " pod="openshift-marketplace/community-operators-zvzj8"
Dec 06 07:14:02 crc kubenswrapper[4895]: I1206 07:14:02.204027 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f53437b3-a833-4d54-a668-c47e74c73551-catalog-content\") pod \"community-operators-zvzj8\" (UID: \"f53437b3-a833-4d54-a668-c47e74c73551\") " pod="openshift-marketplace/community-operators-zvzj8"
Dec 06 07:14:02 crc kubenswrapper[4895]: I1206 07:14:02.204072 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjdns\" (UniqueName: \"kubernetes.io/projected/f53437b3-a833-4d54-a668-c47e74c73551-kube-api-access-kjdns\") pod \"community-operators-zvzj8\" (UID: \"f53437b3-a833-4d54-a668-c47e74c73551\") " pod="openshift-marketplace/community-operators-zvzj8"
Dec 06 07:14:02 crc kubenswrapper[4895]: I1206 07:14:02.204112 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f53437b3-a833-4d54-a668-c47e74c73551-utilities\") pod \"community-operators-zvzj8\" (UID: \"f53437b3-a833-4d54-a668-c47e74c73551\") " pod="openshift-marketplace/community-operators-zvzj8"
Dec 06 07:14:02 crc kubenswrapper[4895]: I1206 07:14:02.204711 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f53437b3-a833-4d54-a668-c47e74c73551-utilities\") pod \"community-operators-zvzj8\" (UID: \"f53437b3-a833-4d54-a668-c47e74c73551\") " pod="openshift-marketplace/community-operators-zvzj8"
Dec 06 07:14:02 crc kubenswrapper[4895]: I1206 07:14:02.204761 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f53437b3-a833-4d54-a668-c47e74c73551-catalog-content\") pod \"community-operators-zvzj8\" (UID: \"f53437b3-a833-4d54-a668-c47e74c73551\") " pod="openshift-marketplace/community-operators-zvzj8"
Dec 06 07:14:02 crc kubenswrapper[4895]: I1206 07:14:02.238520 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjdns\" (UniqueName: \"kubernetes.io/projected/f53437b3-a833-4d54-a668-c47e74c73551-kube-api-access-kjdns\") pod \"community-operators-zvzj8\" (UID: \"f53437b3-a833-4d54-a668-c47e74c73551\") " pod="openshift-marketplace/community-operators-zvzj8"
Dec 06 07:14:02 crc kubenswrapper[4895]: I1206 07:14:02.328939 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zvzj8"
Need to start a new one" pod="openshift-marketplace/community-operators-zvzj8" Dec 06 07:14:02 crc kubenswrapper[4895]: I1206 07:14:02.594573 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkdbw" event={"ID":"e39444e8-27f7-4798-909f-c22ed5bc9384","Type":"ContainerStarted","Data":"9a49b91f8139acb101dfffb432545997da4a80d28e26f2958ef5c167fe037c4c"} Dec 06 07:14:02 crc kubenswrapper[4895]: I1206 07:14:02.624497 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mkdbw" podStartSLOduration=2.172688846 podStartE2EDuration="4.62445947s" podCreationTimestamp="2025-12-06 07:13:58 +0000 UTC" firstStartedPulling="2025-12-06 07:13:59.559845477 +0000 UTC m=+1001.961234347" lastFinishedPulling="2025-12-06 07:14:02.011616101 +0000 UTC m=+1004.413004971" observedRunningTime="2025-12-06 07:14:02.620826962 +0000 UTC m=+1005.022215832" watchObservedRunningTime="2025-12-06 07:14:02.62445947 +0000 UTC m=+1005.025848350" Dec 06 07:14:02 crc kubenswrapper[4895]: I1206 07:14:02.778499 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zvzj8"] Dec 06 07:14:03 crc kubenswrapper[4895]: I1206 07:14:03.602509 4895 generic.go:334] "Generic (PLEG): container finished" podID="f53437b3-a833-4d54-a668-c47e74c73551" containerID="fd962e84027ea2debc9a0bffa8ce9d9fbb459673bdd3886305752277d02c5c83" exitCode=0 Dec 06 07:14:03 crc kubenswrapper[4895]: I1206 07:14:03.602633 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvzj8" event={"ID":"f53437b3-a833-4d54-a668-c47e74c73551","Type":"ContainerDied","Data":"fd962e84027ea2debc9a0bffa8ce9d9fbb459673bdd3886305752277d02c5c83"} Dec 06 07:14:03 crc kubenswrapper[4895]: I1206 07:14:03.602910 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvzj8" event={"ID":"f53437b3-a833-4d54-a668-c47e74c73551","Type":"ContainerStarted","Data":"37a3c021029d64b02807a9bb7be5b1834897172244add7387aeb00a046f0597a"} Dec 06 07:14:04 crc kubenswrapper[4895]: I1206 07:14:04.573005 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-btgzr"] Dec 06 07:14:04 crc kubenswrapper[4895]: I1206 07:14:04.573788 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-btgzr" Dec 06 07:14:04 crc kubenswrapper[4895]: I1206 07:14:04.575886 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 06 07:14:04 crc kubenswrapper[4895]: I1206 07:14:04.576523 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-qgftt" Dec 06 07:14:04 crc kubenswrapper[4895]: I1206 07:14:04.577266 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 06 07:14:04 crc kubenswrapper[4895]: I1206 07:14:04.584500 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-btgzr"] Dec 06 07:14:04 crc kubenswrapper[4895]: I1206 07:14:04.638431 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv5wv\" (UniqueName: \"kubernetes.io/projected/2b98ee67-9fd9-4fad-93a1-93d46ba12549-kube-api-access-nv5wv\") pod \"nmstate-operator-5b5b58f5c8-btgzr\" (UID: \"2b98ee67-9fd9-4fad-93a1-93d46ba12549\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-btgzr" Dec 06 07:14:04 crc kubenswrapper[4895]: I1206 07:14:04.740453 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv5wv\" (UniqueName: \"kubernetes.io/projected/2b98ee67-9fd9-4fad-93a1-93d46ba12549-kube-api-access-nv5wv\") pod \"nmstate-operator-5b5b58f5c8-btgzr\" (UID: \"2b98ee67-9fd9-4fad-93a1-93d46ba12549\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-btgzr" Dec 06 07:14:04 crc kubenswrapper[4895]: I1206 07:14:04.759032 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv5wv\" (UniqueName: \"kubernetes.io/projected/2b98ee67-9fd9-4fad-93a1-93d46ba12549-kube-api-access-nv5wv\") pod \"nmstate-operator-5b5b58f5c8-btgzr\" (UID: \"2b98ee67-9fd9-4fad-93a1-93d46ba12549\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-btgzr" Dec 06 07:14:04 crc kubenswrapper[4895]: I1206 07:14:04.889786 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-btgzr" Dec 06 07:14:05 crc kubenswrapper[4895]: I1206 07:14:05.091470 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-btgzr"] Dec 06 07:14:05 crc kubenswrapper[4895]: W1206 07:14:05.093589 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b98ee67_9fd9_4fad_93a1_93d46ba12549.slice/crio-567a74924ab085e7001ed988799574ded94841bf343c82aaca309a9f2dbd5c5c WatchSource:0}: Error finding container 567a74924ab085e7001ed988799574ded94841bf343c82aaca309a9f2dbd5c5c: Status 404 returned error can't find the container with id 567a74924ab085e7001ed988799574ded94841bf343c82aaca309a9f2dbd5c5c Dec 06 07:14:05 crc kubenswrapper[4895]: I1206 07:14:05.613347 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-btgzr" event={"ID":"2b98ee67-9fd9-4fad-93a1-93d46ba12549","Type":"ContainerStarted","Data":"567a74924ab085e7001ed988799574ded94841bf343c82aaca309a9f2dbd5c5c"} Dec 06 07:14:06 crc kubenswrapper[4895]: I1206 07:14:06.155649 4895 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mvldw container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 07:14:06 crc kubenswrapper[4895]: I1206 07:14:06.155722 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw" podUID="486a83f8-907b-441c-aae7-428a6e22d689" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 06 07:14:06 crc kubenswrapper[4895]: I1206 07:14:06.155670 4895 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mvldw container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 07:14:06 crc kubenswrapper[4895]: I1206 07:14:06.155792 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mvldw" podUID="486a83f8-907b-441c-aae7-428a6e22d689" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 06 07:14:07 crc kubenswrapper[4895]: I1206 07:14:07.149175 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rvb25" Dec 06 07:14:07 crc kubenswrapper[4895]: I1206 07:14:07.149359 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rvb25" Dec 06 07:14:07 crc kubenswrapper[4895]: I1206 07:14:07.199350 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rvb25" Dec 06 07:14:07 crc kubenswrapper[4895]: I1206 07:14:07.663813 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-rvb25" Dec 06 07:14:08 crc kubenswrapper[4895]: I1206 07:14:08.718893 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mkdbw" Dec 06 07:14:08 crc kubenswrapper[4895]: I1206 07:14:08.718946 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mkdbw" Dec 06 07:14:08 crc kubenswrapper[4895]: I1206 07:14:08.787732 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mkdbw" Dec 06 07:14:09 crc kubenswrapper[4895]: I1206 07:14:09.701050 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mkdbw" Dec 06 07:14:11 crc kubenswrapper[4895]: I1206 07:14:11.598623 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvb25"] Dec 06 07:14:11 crc kubenswrapper[4895]: I1206 07:14:11.599023 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rvb25" podUID="12b751e8-0620-43e3-b2fb-f38e7eaa0ffc" containerName="registry-server" containerID="cri-o://b72e6f16d920cc5f2dab79dfa96b006d9b1ba340055e80ffcaf6b28c37a3a93c" gracePeriod=2 Dec 06 07:14:12 crc kubenswrapper[4895]: I1206 07:14:12.397907 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mkdbw"] Dec 06 07:14:12 crc kubenswrapper[4895]: I1206 07:14:12.398542 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mkdbw" podUID="e39444e8-27f7-4798-909f-c22ed5bc9384" containerName="registry-server" containerID="cri-o://9a49b91f8139acb101dfffb432545997da4a80d28e26f2958ef5c167fe037c4c" gracePeriod=2 Dec 06 07:14:17 crc kubenswrapper[4895]: E1206 07:14:17.149360 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b72e6f16d920cc5f2dab79dfa96b006d9b1ba340055e80ffcaf6b28c37a3a93c is running failed: container process not found" containerID="b72e6f16d920cc5f2dab79dfa96b006d9b1ba340055e80ffcaf6b28c37a3a93c" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 07:14:17 crc kubenswrapper[4895]: E1206 07:14:17.150034 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b72e6f16d920cc5f2dab79dfa96b006d9b1ba340055e80ffcaf6b28c37a3a93c is running failed: container process not found" containerID="b72e6f16d920cc5f2dab79dfa96b006d9b1ba340055e80ffcaf6b28c37a3a93c" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 07:14:17 crc kubenswrapper[4895]: E1206 07:14:17.150337 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b72e6f16d920cc5f2dab79dfa96b006d9b1ba340055e80ffcaf6b28c37a3a93c is running failed: container process not found" containerID="b72e6f16d920cc5f2dab79dfa96b006d9b1ba340055e80ffcaf6b28c37a3a93c" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 07:14:17 crc kubenswrapper[4895]: E1206 07:14:17.150376 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b72e6f16d920cc5f2dab79dfa96b006d9b1ba340055e80ffcaf6b28c37a3a93c is running failed: container process not found" 
probeType="Readiness" pod="openshift-marketplace/redhat-operators-rvb25" podUID="12b751e8-0620-43e3-b2fb-f38e7eaa0ffc" containerName="registry-server" Dec 06 07:14:17 crc kubenswrapper[4895]: I1206 07:14:17.690953 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mkdbw_e39444e8-27f7-4798-909f-c22ed5bc9384/registry-server/0.log" Dec 06 07:14:17 crc kubenswrapper[4895]: I1206 07:14:17.691761 4895 generic.go:334] "Generic (PLEG): container finished" podID="e39444e8-27f7-4798-909f-c22ed5bc9384" containerID="9a49b91f8139acb101dfffb432545997da4a80d28e26f2958ef5c167fe037c4c" exitCode=137 Dec 06 07:14:17 crc kubenswrapper[4895]: I1206 07:14:17.691797 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkdbw" event={"ID":"e39444e8-27f7-4798-909f-c22ed5bc9384","Type":"ContainerDied","Data":"9a49b91f8139acb101dfffb432545997da4a80d28e26f2958ef5c167fe037c4c"} Dec 06 07:14:18 crc kubenswrapper[4895]: I1206 07:14:18.697851 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rvb25_12b751e8-0620-43e3-b2fb-f38e7eaa0ffc/registry-server/0.log" Dec 06 07:14:18 crc kubenswrapper[4895]: I1206 07:14:18.698999 4895 generic.go:334] "Generic (PLEG): container finished" podID="12b751e8-0620-43e3-b2fb-f38e7eaa0ffc" containerID="b72e6f16d920cc5f2dab79dfa96b006d9b1ba340055e80ffcaf6b28c37a3a93c" exitCode=137 Dec 06 07:14:18 crc kubenswrapper[4895]: I1206 07:14:18.699034 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvb25" event={"ID":"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc","Type":"ContainerDied","Data":"b72e6f16d920cc5f2dab79dfa96b006d9b1ba340055e80ffcaf6b28c37a3a93c"} Dec 06 07:14:18 crc kubenswrapper[4895]: E1206 07:14:18.719277 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a49b91f8139acb101dfffb432545997da4a80d28e26f2958ef5c167fe037c4c is running failed: container process not found" containerID="9a49b91f8139acb101dfffb432545997da4a80d28e26f2958ef5c167fe037c4c" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 07:14:18 crc kubenswrapper[4895]: E1206 07:14:18.720314 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a49b91f8139acb101dfffb432545997da4a80d28e26f2958ef5c167fe037c4c is running failed: container process not found" containerID="9a49b91f8139acb101dfffb432545997da4a80d28e26f2958ef5c167fe037c4c" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 07:14:18 crc kubenswrapper[4895]: E1206 07:14:18.720765 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a49b91f8139acb101dfffb432545997da4a80d28e26f2958ef5c167fe037c4c is running failed: container process not found" containerID="9a49b91f8139acb101dfffb432545997da4a80d28e26f2958ef5c167fe037c4c" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 07:14:18 crc kubenswrapper[4895]: E1206 07:14:18.720828 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a49b91f8139acb101dfffb432545997da4a80d28e26f2958ef5c167fe037c4c is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-mkdbw" podUID="e39444e8-27f7-4798-909f-c22ed5bc9384" 
containerName="registry-server" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.445637 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mkdbw_e39444e8-27f7-4798-909f-c22ed5bc9384/registry-server/0.log" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.447682 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkdbw" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.532646 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39444e8-27f7-4798-909f-c22ed5bc9384-utilities\") pod \"e39444e8-27f7-4798-909f-c22ed5bc9384\" (UID: \"e39444e8-27f7-4798-909f-c22ed5bc9384\") " Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.533000 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87b64\" (UniqueName: \"kubernetes.io/projected/e39444e8-27f7-4798-909f-c22ed5bc9384-kube-api-access-87b64\") pod \"e39444e8-27f7-4798-909f-c22ed5bc9384\" (UID: \"e39444e8-27f7-4798-909f-c22ed5bc9384\") " Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.533032 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39444e8-27f7-4798-909f-c22ed5bc9384-catalog-content\") pod \"e39444e8-27f7-4798-909f-c22ed5bc9384\" (UID: \"e39444e8-27f7-4798-909f-c22ed5bc9384\") " Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.533578 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e39444e8-27f7-4798-909f-c22ed5bc9384-utilities" (OuterVolumeSpecName: "utilities") pod "e39444e8-27f7-4798-909f-c22ed5bc9384" (UID: "e39444e8-27f7-4798-909f-c22ed5bc9384"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.541726 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e39444e8-27f7-4798-909f-c22ed5bc9384-kube-api-access-87b64" (OuterVolumeSpecName: "kube-api-access-87b64") pod "e39444e8-27f7-4798-909f-c22ed5bc9384" (UID: "e39444e8-27f7-4798-909f-c22ed5bc9384"). InnerVolumeSpecName "kube-api-access-87b64". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.616638 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rvb25_12b751e8-0620-43e3-b2fb-f38e7eaa0ffc/registry-server/0.log" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.617558 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvb25" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.634900 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39444e8-27f7-4798-909f-c22ed5bc9384-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.634980 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87b64\" (UniqueName: \"kubernetes.io/projected/e39444e8-27f7-4798-909f-c22ed5bc9384-kube-api-access-87b64\") on node \"crc\" DevicePath \"\"" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.731250 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mkdbw_e39444e8-27f7-4798-909f-c22ed5bc9384/registry-server/0.log" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.732408 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkdbw" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.732797 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkdbw" event={"ID":"e39444e8-27f7-4798-909f-c22ed5bc9384","Type":"ContainerDied","Data":"3c0fbeddcc4d2be894952336befb0e09e46c6112ab8b03c51067538464f17735"} Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.732909 4895 scope.go:117] "RemoveContainer" containerID="9a49b91f8139acb101dfffb432545997da4a80d28e26f2958ef5c167fe037c4c" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.735718 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rvb25_12b751e8-0620-43e3-b2fb-f38e7eaa0ffc/registry-server/0.log" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.736019 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12b751e8-0620-43e3-b2fb-f38e7eaa0ffc-catalog-content\") pod \"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc\" (UID: \"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc\") " Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.736094 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12b751e8-0620-43e3-b2fb-f38e7eaa0ffc-utilities\") pod \"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc\" (UID: \"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc\") " Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.736178 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zgrz\" (UniqueName: \"kubernetes.io/projected/12b751e8-0620-43e3-b2fb-f38e7eaa0ffc-kube-api-access-2zgrz\") pod \"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc\" (UID: \"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc\") " Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.737068 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12b751e8-0620-43e3-b2fb-f38e7eaa0ffc-utilities" (OuterVolumeSpecName: "utilities") pod "12b751e8-0620-43e3-b2fb-f38e7eaa0ffc" (UID: "12b751e8-0620-43e3-b2fb-f38e7eaa0ffc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.737103 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvb25" event={"ID":"12b751e8-0620-43e3-b2fb-f38e7eaa0ffc","Type":"ContainerDied","Data":"af1bf57593a46b6605b897945afdd584ca073a6226bc4200c5bccdabf3e129bd"} Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.737221 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvb25" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.740720 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12b751e8-0620-43e3-b2fb-f38e7eaa0ffc-kube-api-access-2zgrz" (OuterVolumeSpecName: "kube-api-access-2zgrz") pod "12b751e8-0620-43e3-b2fb-f38e7eaa0ffc" (UID: "12b751e8-0620-43e3-b2fb-f38e7eaa0ffc"). InnerVolumeSpecName "kube-api-access-2zgrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.821745 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e39444e8-27f7-4798-909f-c22ed5bc9384-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e39444e8-27f7-4798-909f-c22ed5bc9384" (UID: "e39444e8-27f7-4798-909f-c22ed5bc9384"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.840131 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12b751e8-0620-43e3-b2fb-f38e7eaa0ffc-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.840171 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39444e8-27f7-4798-909f-c22ed5bc9384-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.840182 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zgrz\" (UniqueName: \"kubernetes.io/projected/12b751e8-0620-43e3-b2fb-f38e7eaa0ffc-kube-api-access-2zgrz\") on node \"crc\" DevicePath \"\"" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.848516 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12b751e8-0620-43e3-b2fb-f38e7eaa0ffc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12b751e8-0620-43e3-b2fb-f38e7eaa0ffc" (UID: "12b751e8-0620-43e3-b2fb-f38e7eaa0ffc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:14:22 crc kubenswrapper[4895]: I1206 07:14:22.941033 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12b751e8-0620-43e3-b2fb-f38e7eaa0ffc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:14:23 crc kubenswrapper[4895]: I1206 07:14:23.083157 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvb25"] Dec 06 07:14:23 crc kubenswrapper[4895]: I1206 07:14:23.088881 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rvb25"] Dec 06 07:14:23 crc kubenswrapper[4895]: I1206 07:14:23.092889 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mkdbw"] Dec 06 07:14:23 crc kubenswrapper[4895]: I1206 07:14:23.097636 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mkdbw"] Dec 06 07:14:23 crc kubenswrapper[4895]: I1206 07:14:23.682493 4895 scope.go:117] "RemoveContainer" containerID="2322f7e4fbf61b5087d76e72994d7f84d1c3fb5c76963a013686da4eeb8d5002" Dec 06 07:14:23 crc kubenswrapper[4895]: I1206 07:14:23.753379 4895 scope.go:117] "RemoveContainer" containerID="cf9b6c395f826cfb9081be59bcd1b94dd59585b9d1b12a8b49259e4e5f4a579b" Dec 06 07:14:23 crc kubenswrapper[4895]: I1206 07:14:23.774423 4895 scope.go:117] "RemoveContainer" containerID="b72e6f16d920cc5f2dab79dfa96b006d9b1ba340055e80ffcaf6b28c37a3a93c" Dec 06 07:14:23 crc kubenswrapper[4895]: I1206 07:14:23.795566 4895 scope.go:117] "RemoveContainer" containerID="7e9b32886768a2d4ac437897209d0e86eaf3c8002112e6cd99e5b6e1d8ec4f21" Dec 06 07:14:23 crc kubenswrapper[4895]: I1206 07:14:23.830746 4895 scope.go:117] "RemoveContainer" containerID="6e0bf32e3ed4363033b3a7cc47ff919d682bafc81bdb946d676843a1fdacfef6" Dec 06 07:14:24 crc kubenswrapper[4895]: I1206 07:14:24.060687 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12b751e8-0620-43e3-b2fb-f38e7eaa0ffc" path="/var/lib/kubelet/pods/12b751e8-0620-43e3-b2fb-f38e7eaa0ffc/volumes" Dec 06 07:14:24 crc kubenswrapper[4895]: I1206 07:14:24.061613 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e39444e8-27f7-4798-909f-c22ed5bc9384" path="/var/lib/kubelet/pods/e39444e8-27f7-4798-909f-c22ed5bc9384/volumes" Dec 06 07:14:24 crc kubenswrapper[4895]: I1206 07:14:24.752034 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-btgzr" event={"ID":"2b98ee67-9fd9-4fad-93a1-93d46ba12549","Type":"ContainerStarted","Data":"8a612e0be8ee6e30b4f0ef9d714036f86334c43cd35bba30db3779d1b8a6f351"} Dec 06 07:14:24 crc kubenswrapper[4895]: I1206 07:14:24.755636 4895 generic.go:334] "Generic (PLEG): container finished" podID="f53437b3-a833-4d54-a668-c47e74c73551" containerID="740f2d476e5431a9f3980b97e729b3d19f93e8c0fa43ab1a9e80a0455464a371" exitCode=0 Dec 06 07:14:24 crc kubenswrapper[4895]: I1206 07:14:24.755696 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvzj8" event={"ID":"f53437b3-a833-4d54-a668-c47e74c73551","Type":"ContainerDied","Data":"740f2d476e5431a9f3980b97e729b3d19f93e8c0fa43ab1a9e80a0455464a371"} Dec 06 07:14:24 crc kubenswrapper[4895]: I1206 07:14:24.771284 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-btgzr" podStartSLOduration=2.092421204 
podStartE2EDuration="20.77125356s" podCreationTimestamp="2025-12-06 07:14:04 +0000 UTC" firstStartedPulling="2025-12-06 07:14:05.095805633 +0000 UTC m=+1007.497194503" lastFinishedPulling="2025-12-06 07:14:23.774637989 +0000 UTC m=+1026.176026859" observedRunningTime="2025-12-06 07:14:24.769791221 +0000 UTC m=+1027.171180131" watchObservedRunningTime="2025-12-06 07:14:24.77125356 +0000 UTC m=+1027.172642430" Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.764308 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvzj8" event={"ID":"f53437b3-a833-4d54-a668-c47e74c73551","Type":"ContainerStarted","Data":"565af32da35177d0ab8c82709d6f58a9cba1de97dcf8ddb084f305e65e52cffd"} Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.780443 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-ddvxz"] Dec 06 07:14:25 crc kubenswrapper[4895]: E1206 07:14:25.780793 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b751e8-0620-43e3-b2fb-f38e7eaa0ffc" containerName="extract-utilities" Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.780813 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b751e8-0620-43e3-b2fb-f38e7eaa0ffc" containerName="extract-utilities" Dec 06 07:14:25 crc kubenswrapper[4895]: E1206 07:14:25.780825 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b751e8-0620-43e3-b2fb-f38e7eaa0ffc" containerName="extract-content" Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.780834 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b751e8-0620-43e3-b2fb-f38e7eaa0ffc" containerName="extract-content" Dec 06 07:14:25 crc kubenswrapper[4895]: E1206 07:14:25.780844 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39444e8-27f7-4798-909f-c22ed5bc9384" containerName="extract-utilities" Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.780850 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39444e8-27f7-4798-909f-c22ed5bc9384" containerName="extract-utilities" Dec 06 07:14:25 crc kubenswrapper[4895]: E1206 07:14:25.780863 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b751e8-0620-43e3-b2fb-f38e7eaa0ffc" containerName="registry-server" Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.780869 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b751e8-0620-43e3-b2fb-f38e7eaa0ffc" containerName="registry-server" Dec 06 07:14:25 crc kubenswrapper[4895]: E1206 07:14:25.780879 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39444e8-27f7-4798-909f-c22ed5bc9384" containerName="extract-content" Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.780885 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39444e8-27f7-4798-909f-c22ed5bc9384" containerName="extract-content" Dec 06 07:14:25 crc kubenswrapper[4895]: E1206 07:14:25.780894 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39444e8-27f7-4798-909f-c22ed5bc9384" containerName="registry-server" Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.780901 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39444e8-27f7-4798-909f-c22ed5bc9384" containerName="registry-server" Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.781016 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39444e8-27f7-4798-909f-c22ed5bc9384" containerName="registry-server" Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.781814 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ddvxz"
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.784797 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-jcdhz"
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.791585 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-ddvxz"]
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.792351 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zvzj8" podStartSLOduration=3.077207537 podStartE2EDuration="24.792339432s" podCreationTimestamp="2025-12-06 07:14:01 +0000 UTC" firstStartedPulling="2025-12-06 07:14:03.604321009 +0000 UTC m=+1006.005709879" lastFinishedPulling="2025-12-06 07:14:25.319452914 +0000 UTC m=+1027.720841774" observedRunningTime="2025-12-06 07:14:25.78853087 +0000 UTC m=+1028.189919740" watchObservedRunningTime="2025-12-06 07:14:25.792339432 +0000 UTC m=+1028.193728302"
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.831631 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-56vwb"]
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.832537 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-56vwb"
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.836264 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-vsvgr"]
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.836869 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.838550 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-vsvgr"
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.846438 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-56vwb"]
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.880116 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nvlw\" (UniqueName: \"kubernetes.io/projected/a15dc415-ffeb-45f0-b298-9ac866573b57-kube-api-access-5nvlw\") pod \"nmstate-metrics-7f946cbc9-ddvxz\" (UID: \"a15dc415-ffeb-45f0-b298-9ac866573b57\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ddvxz"
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.979661 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xnpxm"]
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.980578 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xnpxm"
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.985131 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c397bd3c-149a-4d07-94ee-053ad003b83c-ovs-socket\") pod \"nmstate-handler-vsvgr\" (UID: \"c397bd3c-149a-4d07-94ee-053ad003b83c\") " pod="openshift-nmstate/nmstate-handler-vsvgr"
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.985228 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltc4r\" (UniqueName: \"kubernetes.io/projected/c920fe00-1363-44b7-8830-15e9df2f685a-kube-api-access-ltc4r\") pod \"nmstate-webhook-5f6d4c5ccb-56vwb\" (UID: \"c920fe00-1363-44b7-8830-15e9df2f685a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-56vwb"
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.985260 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c397bd3c-149a-4d07-94ee-053ad003b83c-dbus-socket\") pod \"nmstate-handler-vsvgr\" (UID: \"c397bd3c-149a-4d07-94ee-053ad003b83c\") " pod="openshift-nmstate/nmstate-handler-vsvgr"
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.985297 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nvlw\" (UniqueName: \"kubernetes.io/projected/a15dc415-ffeb-45f0-b298-9ac866573b57-kube-api-access-5nvlw\") pod \"nmstate-metrics-7f946cbc9-ddvxz\" (UID: \"a15dc415-ffeb-45f0-b298-9ac866573b57\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ddvxz"
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.985330 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c397bd3c-149a-4d07-94ee-053ad003b83c-nmstate-lock\") pod \"nmstate-handler-vsvgr\" (UID: \"c397bd3c-149a-4d07-94ee-053ad003b83c\") " pod="openshift-nmstate/nmstate-handler-vsvgr"
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.985382 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvrx2\" (UniqueName: \"kubernetes.io/projected/c397bd3c-149a-4d07-94ee-053ad003b83c-kube-api-access-kvrx2\") pod \"nmstate-handler-vsvgr\" (UID: \"c397bd3c-149a-4d07-94ee-053ad003b83c\") " pod="openshift-nmstate/nmstate-handler-vsvgr"
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.985448 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c920fe00-1363-44b7-8830-15e9df2f685a-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-56vwb\" (UID: \"c920fe00-1363-44b7-8830-15e9df2f685a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-56vwb"
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.985883 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.985923 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.985963 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-5gnh6"
Dec 06 07:14:25 crc kubenswrapper[4895]: I1206 07:14:25.997885 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xnpxm"]
Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.020216 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nvlw\" (UniqueName: \"kubernetes.io/projected/a15dc415-ffeb-45f0-b298-9ac866573b57-kube-api-access-5nvlw\") pod \"nmstate-metrics-7f946cbc9-ddvxz\" (UID: \"a15dc415-ffeb-45f0-b298-9ac866573b57\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ddvxz"
Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.086303 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c920fe00-1363-44b7-8830-15e9df2f685a-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-56vwb\" (UID: \"c920fe00-1363-44b7-8830-15e9df2f685a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-56vwb"
Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.086349 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c397bd3c-149a-4d07-94ee-053ad003b83c-ovs-socket\") pod \"nmstate-handler-vsvgr\" (UID: \"c397bd3c-149a-4d07-94ee-053ad003b83c\") " pod="openshift-nmstate/nmstate-handler-vsvgr"
Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.086375 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f333f961-aeed-4f1c-9e25-cc50d6ace30f-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-xnpxm\" (UID: \"f333f961-aeed-4f1c-9e25-cc50d6ace30f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xnpxm"
Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.086402 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltc4r\" (UniqueName: \"kubernetes.io/projected/c920fe00-1363-44b7-8830-15e9df2f685a-kube-api-access-ltc4r\") pod \"nmstate-webhook-5f6d4c5ccb-56vwb\" (UID: \"c920fe00-1363-44b7-8830-15e9df2f685a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-56vwb"
Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.086424 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c397bd3c-149a-4d07-94ee-053ad003b83c-dbus-socket\") pod \"nmstate-handler-vsvgr\" (UID: \"c397bd3c-149a-4d07-94ee-053ad003b83c\") " pod="openshift-nmstate/nmstate-handler-vsvgr"
Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.086451 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f333f961-aeed-4f1c-9e25-cc50d6ace30f-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-xnpxm\" (UID: \"f333f961-aeed-4f1c-9e25-cc50d6ace30f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xnpxm"
Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.086483 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c397bd3c-149a-4d07-94ee-053ad003b83c-nmstate-lock\") pod \"nmstate-handler-vsvgr\" (UID: \"c397bd3c-149a-4d07-94ee-053ad003b83c\") " pod="openshift-nmstate/nmstate-handler-vsvgr"
Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.086522 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c397bd3c-149a-4d07-94ee-053ad003b83c-nmstate-lock\") pod \"nmstate-handler-vsvgr\" (UID: \"c397bd3c-149a-4d07-94ee-053ad003b83c\") " pod="openshift-nmstate/nmstate-handler-vsvgr"
Dec 06 07:14:26 crc kubenswrapper[4895]: E1206 07:14:26.086607 4895 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.086652 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c397bd3c-149a-4d07-94ee-053ad003b83c-ovs-socket\") pod \"nmstate-handler-vsvgr\" (UID: \"c397bd3c-149a-4d07-94ee-053ad003b83c\") " pod="openshift-nmstate/nmstate-handler-vsvgr"
Dec 06 07:14:26 crc kubenswrapper[4895]: E1206 07:14:26.086723 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c920fe00-1363-44b7-8830-15e9df2f685a-tls-key-pair podName:c920fe00-1363-44b7-8830-15e9df2f685a nodeName:}" failed. No retries permitted until 2025-12-06 07:14:26.586693381 +0000 UTC m=+1028.988082251 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/c920fe00-1363-44b7-8830-15e9df2f685a-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-56vwb" (UID: "c920fe00-1363-44b7-8830-15e9df2f685a") : secret "openshift-nmstate-webhook" not found
Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.086775 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c397bd3c-149a-4d07-94ee-053ad003b83c-dbus-socket\") pod \"nmstate-handler-vsvgr\" (UID: \"c397bd3c-149a-4d07-94ee-053ad003b83c\") " pod="openshift-nmstate/nmstate-handler-vsvgr"
Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.086784 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvrx2\" (UniqueName: \"kubernetes.io/projected/c397bd3c-149a-4d07-94ee-053ad003b83c-kube-api-access-kvrx2\") pod \"nmstate-handler-vsvgr\" (UID: \"c397bd3c-149a-4d07-94ee-053ad003b83c\") " pod="openshift-nmstate/nmstate-handler-vsvgr"
Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.087723 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89v24\" (UniqueName: \"kubernetes.io/projected/f333f961-aeed-4f1c-9e25-cc50d6ace30f-kube-api-access-89v24\") pod \"nmstate-console-plugin-7fbb5f6569-xnpxm\" (UID: \"f333f961-aeed-4f1c-9e25-cc50d6ace30f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xnpxm"
Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.097784 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ddvxz"
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ddvxz" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.109159 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltc4r\" (UniqueName: \"kubernetes.io/projected/c920fe00-1363-44b7-8830-15e9df2f685a-kube-api-access-ltc4r\") pod \"nmstate-webhook-5f6d4c5ccb-56vwb\" (UID: \"c920fe00-1363-44b7-8830-15e9df2f685a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-56vwb" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.115287 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvrx2\" (UniqueName: \"kubernetes.io/projected/c397bd3c-149a-4d07-94ee-053ad003b83c-kube-api-access-kvrx2\") pod \"nmstate-handler-vsvgr\" (UID: \"c397bd3c-149a-4d07-94ee-053ad003b83c\") " pod="openshift-nmstate/nmstate-handler-vsvgr" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.163847 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-vsvgr" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.188974 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89v24\" (UniqueName: \"kubernetes.io/projected/f333f961-aeed-4f1c-9e25-cc50d6ace30f-kube-api-access-89v24\") pod \"nmstate-console-plugin-7fbb5f6569-xnpxm\" (UID: \"f333f961-aeed-4f1c-9e25-cc50d6ace30f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xnpxm" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.189551 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f333f961-aeed-4f1c-9e25-cc50d6ace30f-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-xnpxm\" (UID: \"f333f961-aeed-4f1c-9e25-cc50d6ace30f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xnpxm" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.189614 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f333f961-aeed-4f1c-9e25-cc50d6ace30f-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-xnpxm\" (UID: \"f333f961-aeed-4f1c-9e25-cc50d6ace30f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xnpxm" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.191671 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f333f961-aeed-4f1c-9e25-cc50d6ace30f-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-xnpxm\" (UID: \"f333f961-aeed-4f1c-9e25-cc50d6ace30f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xnpxm" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.202401 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f333f961-aeed-4f1c-9e25-cc50d6ace30f-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-xnpxm\" (UID: \"f333f961-aeed-4f1c-9e25-cc50d6ace30f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xnpxm" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.222225 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89v24\" (UniqueName: \"kubernetes.io/projected/f333f961-aeed-4f1c-9e25-cc50d6ace30f-kube-api-access-89v24\") pod \"nmstate-console-plugin-7fbb5f6569-xnpxm\" (UID: \"f333f961-aeed-4f1c-9e25-cc50d6ace30f\") " 
pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xnpxm" Dec 06 07:14:26 crc kubenswrapper[4895]: W1206 07:14:26.228905 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc397bd3c_149a_4d07_94ee_053ad003b83c.slice/crio-05ebf6743b6c60d3b6e9bc2506a7d22a1cc78a379ca7b51f42d8026fb281bb5f WatchSource:0}: Error finding container 05ebf6743b6c60d3b6e9bc2506a7d22a1cc78a379ca7b51f42d8026fb281bb5f: Status 404 returned error can't find the container with id 05ebf6743b6c60d3b6e9bc2506a7d22a1cc78a379ca7b51f42d8026fb281bb5f Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.280992 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6fc5c8f49-5l4l2"] Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.286267 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.295339 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fc5c8f49-5l4l2"] Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.309619 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xnpxm" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.394841 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bd555a10-3ac0-4fa8-9a31-1cf084acc803-console-config\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.394916 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bd555a10-3ac0-4fa8-9a31-1cf084acc803-oauth-serving-cert\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.394945 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsh4g\" (UniqueName: \"kubernetes.io/projected/bd555a10-3ac0-4fa8-9a31-1cf084acc803-kube-api-access-zsh4g\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.394981 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd555a10-3ac0-4fa8-9a31-1cf084acc803-trusted-ca-bundle\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.395021 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bd555a10-3ac0-4fa8-9a31-1cf084acc803-service-ca\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.395047 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd555a10-3ac0-4fa8-9a31-1cf084acc803-console-serving-cert\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.395086 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bd555a10-3ac0-4fa8-9a31-1cf084acc803-console-oauth-config\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.474388 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-ddvxz"] Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.496627 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bd555a10-3ac0-4fa8-9a31-1cf084acc803-console-oauth-config\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.496691 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bd555a10-3ac0-4fa8-9a31-1cf084acc803-console-config\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.496753 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bd555a10-3ac0-4fa8-9a31-1cf084acc803-oauth-serving-cert\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.496780 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsh4g\" (UniqueName: \"kubernetes.io/projected/bd555a10-3ac0-4fa8-9a31-1cf084acc803-kube-api-access-zsh4g\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.496829 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd555a10-3ac0-4fa8-9a31-1cf084acc803-trusted-ca-bundle\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.496861 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bd555a10-3ac0-4fa8-9a31-1cf084acc803-service-ca\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.496906 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd555a10-3ac0-4fa8-9a31-1cf084acc803-console-serving-cert\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " 
pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.497822 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bd555a10-3ac0-4fa8-9a31-1cf084acc803-service-ca\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.497838 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bd555a10-3ac0-4fa8-9a31-1cf084acc803-console-config\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.497838 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bd555a10-3ac0-4fa8-9a31-1cf084acc803-oauth-serving-cert\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.498170 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd555a10-3ac0-4fa8-9a31-1cf084acc803-trusted-ca-bundle\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.501690 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bd555a10-3ac0-4fa8-9a31-1cf084acc803-console-oauth-config\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.502190 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd555a10-3ac0-4fa8-9a31-1cf084acc803-console-serving-cert\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.515491 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsh4g\" (UniqueName: \"kubernetes.io/projected/bd555a10-3ac0-4fa8-9a31-1cf084acc803-kube-api-access-zsh4g\") pod \"console-6fc5c8f49-5l4l2\" (UID: \"bd555a10-3ac0-4fa8-9a31-1cf084acc803\") " pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.570684 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xnpxm"] Dec 06 07:14:26 crc kubenswrapper[4895]: W1206 07:14:26.579219 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf333f961_aeed_4f1c_9e25_cc50d6ace30f.slice/crio-d4f4093c8fe740cde609c12fe77002f3ddfeb9d421ad45c8a68aceb2d54f69a9 WatchSource:0}: Error finding container d4f4093c8fe740cde609c12fe77002f3ddfeb9d421ad45c8a68aceb2d54f69a9: Status 404 returned error can't find the container with id d4f4093c8fe740cde609c12fe77002f3ddfeb9d421ad45c8a68aceb2d54f69a9 Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.598174 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c920fe00-1363-44b7-8830-15e9df2f685a-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-56vwb\" (UID: \"c920fe00-1363-44b7-8830-15e9df2f685a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-56vwb" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.601267 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c920fe00-1363-44b7-8830-15e9df2f685a-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-56vwb\" (UID: \"c920fe00-1363-44b7-8830-15e9df2f685a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-56vwb" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.617294 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fc5c8f49-5l4l2" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.753997 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-56vwb" Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.773663 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ddvxz" event={"ID":"a15dc415-ffeb-45f0-b298-9ac866573b57","Type":"ContainerStarted","Data":"06896f5ada9e068ca5dd20392c1b69ce7562f0314b81262b831903e76337e79e"} Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.774867 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vsvgr" event={"ID":"c397bd3c-149a-4d07-94ee-053ad003b83c","Type":"ContainerStarted","Data":"05ebf6743b6c60d3b6e9bc2506a7d22a1cc78a379ca7b51f42d8026fb281bb5f"} Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.775640 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xnpxm" event={"ID":"f333f961-aeed-4f1c-9e25-cc50d6ace30f","Type":"ContainerStarted","Data":"d4f4093c8fe740cde609c12fe77002f3ddfeb9d421ad45c8a68aceb2d54f69a9"} Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.796905 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fc5c8f49-5l4l2"] Dec 06 07:14:26 crc kubenswrapper[4895]: W1206 07:14:26.807252 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd555a10_3ac0_4fa8_9a31_1cf084acc803.slice/crio-c117cbd0bc135c4dc9a9d16173cb68b7f0c2af43fcf9528800a9888a7e648b22 WatchSource:0}: Error finding container c117cbd0bc135c4dc9a9d16173cb68b7f0c2af43fcf9528800a9888a7e648b22: Status 404 returned error can't find the container with id c117cbd0bc135c4dc9a9d16173cb68b7f0c2af43fcf9528800a9888a7e648b22 Dec 06 07:14:26 crc kubenswrapper[4895]: I1206 07:14:26.962563 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-56vwb"] Dec 06 07:14:26 crc kubenswrapper[4895]: W1206 07:14:26.968684 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc920fe00_1363_44b7_8830_15e9df2f685a.slice/crio-43bb31f5bfc4ad0992d4ff88cfecba853f00761b47a2aafb6fc2adfd9821f4d5 WatchSource:0}: Error finding container 43bb31f5bfc4ad0992d4ff88cfecba853f00761b47a2aafb6fc2adfd9821f4d5: Status 404 returned error can't find the container with id 43bb31f5bfc4ad0992d4ff88cfecba853f00761b47a2aafb6fc2adfd9821f4d5 Dec 06 07:14:27 crc kubenswrapper[4895]: I1206 07:14:27.784637 4895 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fc5c8f49-5l4l2" event={"ID":"bd555a10-3ac0-4fa8-9a31-1cf084acc803","Type":"ContainerStarted","Data":"fb4d1a925837219c73f0f091bbb8fb3e754b38822ec619a2fb7f92df0b75f8b8"} Dec 06 07:14:27 crc kubenswrapper[4895]: I1206 07:14:27.784695 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fc5c8f49-5l4l2" event={"ID":"bd555a10-3ac0-4fa8-9a31-1cf084acc803","Type":"ContainerStarted","Data":"c117cbd0bc135c4dc9a9d16173cb68b7f0c2af43fcf9528800a9888a7e648b22"} Dec 06 07:14:27 crc kubenswrapper[4895]: I1206 07:14:27.786342 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-56vwb" event={"ID":"c920fe00-1363-44b7-8830-15e9df2f685a","Type":"ContainerStarted","Data":"43bb31f5bfc4ad0992d4ff88cfecba853f00761b47a2aafb6fc2adfd9821f4d5"} Dec 06 07:14:27 crc kubenswrapper[4895]: I1206 07:14:27.802290 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6fc5c8f49-5l4l2" podStartSLOduration=1.802271246 podStartE2EDuration="1.802271246s" podCreationTimestamp="2025-12-06 07:14:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:14:27.799700876 +0000 UTC m=+1030.201089756" watchObservedRunningTime="2025-12-06 07:14:27.802271246 +0000 UTC m=+1030.203660106" Dec 06 07:14:30 crc kubenswrapper[4895]: I1206 07:14:30.808297 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xnpxm" event={"ID":"f333f961-aeed-4f1c-9e25-cc50d6ace30f","Type":"ContainerStarted","Data":"21b426cf00fb785afc72fd32c9633ccb525c14c211dca5a414393323e5590840"} Dec 06 07:14:30 crc kubenswrapper[4895]: I1206 07:14:30.809732 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ddvxz" event={"ID":"a15dc415-ffeb-45f0-b298-9ac866573b57","Type":"ContainerStarted","Data":"0ea3bcfef8baa6c80e4b7535c317c82b05d00d3781ab68940afa68f89d3af4c6"} Dec 06 07:14:30 crc kubenswrapper[4895]: I1206 07:14:30.812289 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vsvgr" event={"ID":"c397bd3c-149a-4d07-94ee-053ad003b83c","Type":"ContainerStarted","Data":"83136563551e2ba723b03e6b65044b82215a127cd8e91d8f5688de960a5daeb3"} Dec 06 07:14:30 crc kubenswrapper[4895]: I1206 07:14:30.812482 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-vsvgr" Dec 06 07:14:30 crc kubenswrapper[4895]: I1206 07:14:30.814259 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-56vwb" event={"ID":"c920fe00-1363-44b7-8830-15e9df2f685a","Type":"ContainerStarted","Data":"72f5285f783ee77bcc4a98735ff244c74f2f057c34a2dc415c6660196e9a4b5d"} Dec 06 07:14:30 crc kubenswrapper[4895]: I1206 07:14:30.814639 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-56vwb" Dec 06 07:14:30 crc kubenswrapper[4895]: I1206 07:14:30.828390 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xnpxm" podStartSLOduration=2.156975891 podStartE2EDuration="5.828369698s" podCreationTimestamp="2025-12-06 07:14:25 +0000 UTC" firstStartedPulling="2025-12-06 07:14:26.587309859 +0000 UTC m=+1028.988698739" 
lastFinishedPulling="2025-12-06 07:14:30.258703676 +0000 UTC m=+1032.660092546" observedRunningTime="2025-12-06 07:14:30.819933631 +0000 UTC m=+1033.221322501" watchObservedRunningTime="2025-12-06 07:14:30.828369698 +0000 UTC m=+1033.229758568" Dec 06 07:14:30 crc kubenswrapper[4895]: I1206 07:14:30.842273 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-vsvgr" podStartSLOduration=1.8218048 podStartE2EDuration="5.842254033s" podCreationTimestamp="2025-12-06 07:14:25 +0000 UTC" firstStartedPulling="2025-12-06 07:14:26.238371006 +0000 UTC m=+1028.639759876" lastFinishedPulling="2025-12-06 07:14:30.258820229 +0000 UTC m=+1032.660209109" observedRunningTime="2025-12-06 07:14:30.841519174 +0000 UTC m=+1033.242908044" watchObservedRunningTime="2025-12-06 07:14:30.842254033 +0000 UTC m=+1033.243642893" Dec 06 07:14:30 crc kubenswrapper[4895]: I1206 07:14:30.865438 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-56vwb" podStartSLOduration=2.570810936 podStartE2EDuration="5.865413589s" podCreationTimestamp="2025-12-06 07:14:25 +0000 UTC" firstStartedPulling="2025-12-06 07:14:26.971587706 +0000 UTC m=+1029.372976576" lastFinishedPulling="2025-12-06 07:14:30.266190359 +0000 UTC m=+1032.667579229" observedRunningTime="2025-12-06 07:14:30.861887994 +0000 UTC m=+1033.263276864" watchObservedRunningTime="2025-12-06 07:14:30.865413589 +0000 UTC m=+1033.266802459" Dec 06 07:14:32 crc kubenswrapper[4895]: I1206 07:14:32.329387 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zvzj8" Dec 06 07:14:32 crc kubenswrapper[4895]: I1206 07:14:32.330264 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zvzj8" Dec 06 07:14:32 crc kubenswrapper[4895]: I1206 07:14:32.372748 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zvzj8" Dec 06 07:14:32 crc kubenswrapper[4895]: I1206 07:14:32.873963 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zvzj8" Dec 06 07:14:33 crc kubenswrapper[4895]: I1206 07:14:33.208524 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zvzj8"] Dec 06 07:14:34 crc kubenswrapper[4895]: I1206 07:14:34.837526 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zvzj8" podUID="f53437b3-a833-4d54-a668-c47e74c73551" containerName="registry-server" containerID="cri-o://565af32da35177d0ab8c82709d6f58a9cba1de97dcf8ddb084f305e65e52cffd" gracePeriod=2 Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.674934 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zvzj8" Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.820133 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f53437b3-a833-4d54-a668-c47e74c73551-catalog-content\") pod \"f53437b3-a833-4d54-a668-c47e74c73551\" (UID: \"f53437b3-a833-4d54-a668-c47e74c73551\") " Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.820229 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f53437b3-a833-4d54-a668-c47e74c73551-utilities\") pod \"f53437b3-a833-4d54-a668-c47e74c73551\" (UID: \"f53437b3-a833-4d54-a668-c47e74c73551\") " Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.820336 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjdns\" (UniqueName: \"kubernetes.io/projected/f53437b3-a833-4d54-a668-c47e74c73551-kube-api-access-kjdns\") pod \"f53437b3-a833-4d54-a668-c47e74c73551\" (UID: \"f53437b3-a833-4d54-a668-c47e74c73551\") " Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.821404 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f53437b3-a833-4d54-a668-c47e74c73551-utilities" (OuterVolumeSpecName: "utilities") pod "f53437b3-a833-4d54-a668-c47e74c73551" (UID: "f53437b3-a833-4d54-a668-c47e74c73551"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.827207 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f53437b3-a833-4d54-a668-c47e74c73551-kube-api-access-kjdns" (OuterVolumeSpecName: "kube-api-access-kjdns") pod "f53437b3-a833-4d54-a668-c47e74c73551" (UID: "f53437b3-a833-4d54-a668-c47e74c73551"). InnerVolumeSpecName "kube-api-access-kjdns". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.844911 4895 generic.go:334] "Generic (PLEG): container finished" podID="f53437b3-a833-4d54-a668-c47e74c73551" containerID="565af32da35177d0ab8c82709d6f58a9cba1de97dcf8ddb084f305e65e52cffd" exitCode=0 Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.844958 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvzj8" event={"ID":"f53437b3-a833-4d54-a668-c47e74c73551","Type":"ContainerDied","Data":"565af32da35177d0ab8c82709d6f58a9cba1de97dcf8ddb084f305e65e52cffd"} Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.844987 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvzj8" event={"ID":"f53437b3-a833-4d54-a668-c47e74c73551","Type":"ContainerDied","Data":"37a3c021029d64b02807a9bb7be5b1834897172244add7387aeb00a046f0597a"} Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.845007 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zvzj8" Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.845009 4895 scope.go:117] "RemoveContainer" containerID="565af32da35177d0ab8c82709d6f58a9cba1de97dcf8ddb084f305e65e52cffd" Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.863712 4895 scope.go:117] "RemoveContainer" containerID="740f2d476e5431a9f3980b97e729b3d19f93e8c0fa43ab1a9e80a0455464a371" Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.879559 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f53437b3-a833-4d54-a668-c47e74c73551-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f53437b3-a833-4d54-a668-c47e74c73551" (UID: "f53437b3-a833-4d54-a668-c47e74c73551"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.880611 4895 scope.go:117] "RemoveContainer" containerID="fd962e84027ea2debc9a0bffa8ce9d9fbb459673bdd3886305752277d02c5c83" Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.896876 4895 scope.go:117] "RemoveContainer" containerID="565af32da35177d0ab8c82709d6f58a9cba1de97dcf8ddb084f305e65e52cffd" Dec 06 07:14:35 crc kubenswrapper[4895]: E1206 07:14:35.897328 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"565af32da35177d0ab8c82709d6f58a9cba1de97dcf8ddb084f305e65e52cffd\": container with ID starting with 565af32da35177d0ab8c82709d6f58a9cba1de97dcf8ddb084f305e65e52cffd not found: ID does not exist" containerID="565af32da35177d0ab8c82709d6f58a9cba1de97dcf8ddb084f305e65e52cffd" Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.897358 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"565af32da35177d0ab8c82709d6f58a9cba1de97dcf8ddb084f305e65e52cffd"} err="failed to get container status \"565af32da35177d0ab8c82709d6f58a9cba1de97dcf8ddb084f305e65e52cffd\": rpc error: code = NotFound desc = could not find container \"565af32da35177d0ab8c82709d6f58a9cba1de97dcf8ddb084f305e65e52cffd\": container with ID starting with 565af32da35177d0ab8c82709d6f58a9cba1de97dcf8ddb084f305e65e52cffd not found: ID does not exist" Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.897379 4895 scope.go:117] "RemoveContainer" containerID="740f2d476e5431a9f3980b97e729b3d19f93e8c0fa43ab1a9e80a0455464a371" Dec 06 07:14:35 crc kubenswrapper[4895]: E1206 07:14:35.897800 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"740f2d476e5431a9f3980b97e729b3d19f93e8c0fa43ab1a9e80a0455464a371\": container with ID starting with 740f2d476e5431a9f3980b97e729b3d19f93e8c0fa43ab1a9e80a0455464a371 not found: ID does not exist" containerID="740f2d476e5431a9f3980b97e729b3d19f93e8c0fa43ab1a9e80a0455464a371" Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.897830 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740f2d476e5431a9f3980b97e729b3d19f93e8c0fa43ab1a9e80a0455464a371"} err="failed to get container status \"740f2d476e5431a9f3980b97e729b3d19f93e8c0fa43ab1a9e80a0455464a371\": rpc error: code = NotFound desc = could not find container \"740f2d476e5431a9f3980b97e729b3d19f93e8c0fa43ab1a9e80a0455464a371\": container with ID starting with 740f2d476e5431a9f3980b97e729b3d19f93e8c0fa43ab1a9e80a0455464a371 not found: ID does not exist" Dec 06 07:14:35 crc 
Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.897848 4895 scope.go:117] "RemoveContainer" containerID="fd962e84027ea2debc9a0bffa8ce9d9fbb459673bdd3886305752277d02c5c83"
Dec 06 07:14:35 crc kubenswrapper[4895]: E1206 07:14:35.898133 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd962e84027ea2debc9a0bffa8ce9d9fbb459673bdd3886305752277d02c5c83\": container with ID starting with fd962e84027ea2debc9a0bffa8ce9d9fbb459673bdd3886305752277d02c5c83 not found: ID does not exist" containerID="fd962e84027ea2debc9a0bffa8ce9d9fbb459673bdd3886305752277d02c5c83"
Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.898161 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd962e84027ea2debc9a0bffa8ce9d9fbb459673bdd3886305752277d02c5c83"} err="failed to get container status \"fd962e84027ea2debc9a0bffa8ce9d9fbb459673bdd3886305752277d02c5c83\": rpc error: code = NotFound desc = could not find container \"fd962e84027ea2debc9a0bffa8ce9d9fbb459673bdd3886305752277d02c5c83\": container with ID starting with fd962e84027ea2debc9a0bffa8ce9d9fbb459673bdd3886305752277d02c5c83 not found: ID does not exist"
Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.922226 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjdns\" (UniqueName: \"kubernetes.io/projected/f53437b3-a833-4d54-a668-c47e74c73551-kube-api-access-kjdns\") on node \"crc\" DevicePath \"\""
Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.922283 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f53437b3-a833-4d54-a668-c47e74c73551-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 07:14:35 crc kubenswrapper[4895]: I1206 07:14:35.922297 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f53437b3-a833-4d54-a668-c47e74c73551-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 07:14:36 crc kubenswrapper[4895]: I1206 07:14:36.170625 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zvzj8"]
Dec 06 07:14:36 crc kubenswrapper[4895]: I1206 07:14:36.177703 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zvzj8"]
Dec 06 07:14:36 crc kubenswrapper[4895]: I1206 07:14:36.185366 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-vsvgr"
Dec 06 07:14:36 crc kubenswrapper[4895]: I1206 07:14:36.617705 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6fc5c8f49-5l4l2"
Dec 06 07:14:36 crc kubenswrapper[4895]: I1206 07:14:36.617750 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6fc5c8f49-5l4l2"
Dec 06 07:14:36 crc kubenswrapper[4895]: I1206 07:14:36.623010 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6fc5c8f49-5l4l2"
Dec 06 07:14:36 crc kubenswrapper[4895]: I1206 07:14:36.853099 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6fc5c8f49-5l4l2"
Dec 06 07:14:36 crc kubenswrapper[4895]: I1206 07:14:36.909938 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-52v65"]
"Cleaned up orphaned pod volumes dir" podUID="f53437b3-a833-4d54-a668-c47e74c73551" path="/var/lib/kubelet/pods/f53437b3-a833-4d54-a668-c47e74c73551/volumes" Dec 06 07:14:38 crc kubenswrapper[4895]: I1206 07:14:38.861807 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ddvxz" event={"ID":"a15dc415-ffeb-45f0-b298-9ac866573b57","Type":"ContainerStarted","Data":"d2a38f2632721b42f6e253d3c8b43d788d5d639e7acb3888b73658b1caf58429"} Dec 06 07:14:38 crc kubenswrapper[4895]: I1206 07:14:38.879746 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ddvxz" podStartSLOduration=2.502150179 podStartE2EDuration="13.879726569s" podCreationTimestamp="2025-12-06 07:14:25 +0000 UTC" firstStartedPulling="2025-12-06 07:14:26.491023178 +0000 UTC m=+1028.892412048" lastFinishedPulling="2025-12-06 07:14:37.868599568 +0000 UTC m=+1040.269988438" observedRunningTime="2025-12-06 07:14:38.876984904 +0000 UTC m=+1041.278373784" watchObservedRunningTime="2025-12-06 07:14:38.879726569 +0000 UTC m=+1041.281115439" Dec 06 07:14:46 crc kubenswrapper[4895]: I1206 07:14:46.760547 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-56vwb" Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.167631 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b"] Dec 06 07:15:00 crc kubenswrapper[4895]: E1206 07:15:00.168402 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f53437b3-a833-4d54-a668-c47e74c73551" containerName="extract-content" Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.168416 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f53437b3-a833-4d54-a668-c47e74c73551" containerName="extract-content" Dec 06 07:15:00 crc kubenswrapper[4895]: E1206 07:15:00.168433 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f53437b3-a833-4d54-a668-c47e74c73551" containerName="extract-utilities" Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.168440 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f53437b3-a833-4d54-a668-c47e74c73551" containerName="extract-utilities" Dec 06 07:15:00 crc kubenswrapper[4895]: E1206 07:15:00.168456 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f53437b3-a833-4d54-a668-c47e74c73551" containerName="registry-server" Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.168463 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f53437b3-a833-4d54-a668-c47e74c73551" containerName="registry-server" Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.168600 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f53437b3-a833-4d54-a668-c47e74c73551" containerName="registry-server" Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.169040 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b" Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.170943 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.178917 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.183410 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b"] Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.264141 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk87v\" (UniqueName: \"kubernetes.io/projected/42c8dd3d-28ae-4572-b7a0-60f26e47a4a3-kube-api-access-bk87v\") pod \"collect-profiles-29416755-f298b\" (UID: \"42c8dd3d-28ae-4572-b7a0-60f26e47a4a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b" Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.264217 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42c8dd3d-28ae-4572-b7a0-60f26e47a4a3-config-volume\") pod \"collect-profiles-29416755-f298b\" (UID: \"42c8dd3d-28ae-4572-b7a0-60f26e47a4a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b" Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.264274 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42c8dd3d-28ae-4572-b7a0-60f26e47a4a3-secret-volume\") pod \"collect-profiles-29416755-f298b\" (UID: \"42c8dd3d-28ae-4572-b7a0-60f26e47a4a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b" Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.365549 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42c8dd3d-28ae-4572-b7a0-60f26e47a4a3-secret-volume\") pod \"collect-profiles-29416755-f298b\" (UID: \"42c8dd3d-28ae-4572-b7a0-60f26e47a4a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b" Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.365659 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk87v\" (UniqueName: \"kubernetes.io/projected/42c8dd3d-28ae-4572-b7a0-60f26e47a4a3-kube-api-access-bk87v\") pod \"collect-profiles-29416755-f298b\" (UID: \"42c8dd3d-28ae-4572-b7a0-60f26e47a4a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b" Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.365707 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42c8dd3d-28ae-4572-b7a0-60f26e47a4a3-config-volume\") pod \"collect-profiles-29416755-f298b\" (UID: \"42c8dd3d-28ae-4572-b7a0-60f26e47a4a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b" Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.367198 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42c8dd3d-28ae-4572-b7a0-60f26e47a4a3-config-volume\") pod 
\"collect-profiles-29416755-f298b\" (UID: \"42c8dd3d-28ae-4572-b7a0-60f26e47a4a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b" Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.371646 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42c8dd3d-28ae-4572-b7a0-60f26e47a4a3-secret-volume\") pod \"collect-profiles-29416755-f298b\" (UID: \"42c8dd3d-28ae-4572-b7a0-60f26e47a4a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b" Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.381560 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk87v\" (UniqueName: \"kubernetes.io/projected/42c8dd3d-28ae-4572-b7a0-60f26e47a4a3-kube-api-access-bk87v\") pod \"collect-profiles-29416755-f298b\" (UID: \"42c8dd3d-28ae-4572-b7a0-60f26e47a4a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b" Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.492729 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b" Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.683484 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b"] Dec 06 07:15:00 crc kubenswrapper[4895]: I1206 07:15:00.985853 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b" event={"ID":"42c8dd3d-28ae-4572-b7a0-60f26e47a4a3","Type":"ContainerStarted","Data":"86b9b671a2ca920cb8bbffa8b79216da8955b15cfb1caacb1b2d29169c350dca"} Dec 06 07:15:01 crc kubenswrapper[4895]: I1206 07:15:01.960340 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-52v65" podUID="3864596e-56f8-46a1-95e6-3558c161cd02" containerName="console" containerID="cri-o://af1e2e6a2640257b5d7d91bb0b4f26c9a46a5de698f31f49a56f60a102785c59" gracePeriod=15 Dec 06 07:15:01 crc kubenswrapper[4895]: I1206 07:15:01.993260 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b" event={"ID":"42c8dd3d-28ae-4572-b7a0-60f26e47a4a3","Type":"ContainerStarted","Data":"a2027e2aff7e7b2ff5e79a18fc4f638269db35b081fbaf4eae568ec715ca475b"} Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.013430 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b" podStartSLOduration=2.013409659 podStartE2EDuration="2.013409659s" podCreationTimestamp="2025-12-06 07:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:15:02.007349475 +0000 UTC m=+1064.408738345" watchObservedRunningTime="2025-12-06 07:15:02.013409659 +0000 UTC m=+1064.414798529" Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.490884 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-52v65_3864596e-56f8-46a1-95e6-3558c161cd02/console/0.log" Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.491171 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-52v65" Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.596107 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-oauth-serving-cert\") pod \"3864596e-56f8-46a1-95e6-3558c161cd02\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.596219 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4ntp\" (UniqueName: \"kubernetes.io/projected/3864596e-56f8-46a1-95e6-3558c161cd02-kube-api-access-q4ntp\") pod \"3864596e-56f8-46a1-95e6-3558c161cd02\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.596278 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-service-ca\") pod \"3864596e-56f8-46a1-95e6-3558c161cd02\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.596348 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-trusted-ca-bundle\") pod \"3864596e-56f8-46a1-95e6-3558c161cd02\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.596393 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3864596e-56f8-46a1-95e6-3558c161cd02-console-oauth-config\") pod \"3864596e-56f8-46a1-95e6-3558c161cd02\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.596436 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3864596e-56f8-46a1-95e6-3558c161cd02-console-serving-cert\") pod \"3864596e-56f8-46a1-95e6-3558c161cd02\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.596603 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-console-config\") pod \"3864596e-56f8-46a1-95e6-3558c161cd02\" (UID: \"3864596e-56f8-46a1-95e6-3558c161cd02\") " Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.597128 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3864596e-56f8-46a1-95e6-3558c161cd02" (UID: "3864596e-56f8-46a1-95e6-3558c161cd02"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.597139 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-service-ca" (OuterVolumeSpecName: "service-ca") pod "3864596e-56f8-46a1-95e6-3558c161cd02" (UID: "3864596e-56f8-46a1-95e6-3558c161cd02"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.597223 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-console-config" (OuterVolumeSpecName: "console-config") pod "3864596e-56f8-46a1-95e6-3558c161cd02" (UID: "3864596e-56f8-46a1-95e6-3558c161cd02"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.597407 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3864596e-56f8-46a1-95e6-3558c161cd02" (UID: "3864596e-56f8-46a1-95e6-3558c161cd02"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.602217 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3864596e-56f8-46a1-95e6-3558c161cd02-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3864596e-56f8-46a1-95e6-3558c161cd02" (UID: "3864596e-56f8-46a1-95e6-3558c161cd02"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.602583 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3864596e-56f8-46a1-95e6-3558c161cd02-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3864596e-56f8-46a1-95e6-3558c161cd02" (UID: "3864596e-56f8-46a1-95e6-3558c161cd02"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.602655 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3864596e-56f8-46a1-95e6-3558c161cd02-kube-api-access-q4ntp" (OuterVolumeSpecName: "kube-api-access-q4ntp") pod "3864596e-56f8-46a1-95e6-3558c161cd02" (UID: "3864596e-56f8-46a1-95e6-3558c161cd02"). InnerVolumeSpecName "kube-api-access-q4ntp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.698397 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.698437 4895 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3864596e-56f8-46a1-95e6-3558c161cd02-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.698527 4895 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3864596e-56f8-46a1-95e6-3558c161cd02-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.698538 4895 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-console-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.698549 4895 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.698558 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4ntp\" (UniqueName: \"kubernetes.io/projected/3864596e-56f8-46a1-95e6-3558c161cd02-kube-api-access-q4ntp\") on node \"crc\" DevicePath \"\"" Dec 06 07:15:02 crc kubenswrapper[4895]: I1206 07:15:02.698569 4895 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3864596e-56f8-46a1-95e6-3558c161cd02-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 07:15:03 crc kubenswrapper[4895]: I1206 07:15:03.001893 4895 generic.go:334] "Generic (PLEG): container finished" podID="42c8dd3d-28ae-4572-b7a0-60f26e47a4a3" containerID="a2027e2aff7e7b2ff5e79a18fc4f638269db35b081fbaf4eae568ec715ca475b" exitCode=0 Dec 06 07:15:03 crc kubenswrapper[4895]: I1206 07:15:03.001982 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b" event={"ID":"42c8dd3d-28ae-4572-b7a0-60f26e47a4a3","Type":"ContainerDied","Data":"a2027e2aff7e7b2ff5e79a18fc4f638269db35b081fbaf4eae568ec715ca475b"} Dec 06 07:15:03 crc kubenswrapper[4895]: I1206 07:15:03.004401 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-52v65_3864596e-56f8-46a1-95e6-3558c161cd02/console/0.log" Dec 06 07:15:03 crc kubenswrapper[4895]: I1206 07:15:03.004466 4895 generic.go:334] "Generic (PLEG): container finished" podID="3864596e-56f8-46a1-95e6-3558c161cd02" containerID="af1e2e6a2640257b5d7d91bb0b4f26c9a46a5de698f31f49a56f60a102785c59" exitCode=2 Dec 06 07:15:03 crc kubenswrapper[4895]: I1206 07:15:03.004534 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-52v65" event={"ID":"3864596e-56f8-46a1-95e6-3558c161cd02","Type":"ContainerDied","Data":"af1e2e6a2640257b5d7d91bb0b4f26c9a46a5de698f31f49a56f60a102785c59"} Dec 06 07:15:03 crc kubenswrapper[4895]: I1206 07:15:03.004572 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-52v65" 
event={"ID":"3864596e-56f8-46a1-95e6-3558c161cd02","Type":"ContainerDied","Data":"cf098386792cbcf5172ec12d6de2bfc9397cf076fd6d9c62a238811591570614"} Dec 06 07:15:03 crc kubenswrapper[4895]: I1206 07:15:03.004592 4895 scope.go:117] "RemoveContainer" containerID="af1e2e6a2640257b5d7d91bb0b4f26c9a46a5de698f31f49a56f60a102785c59" Dec 06 07:15:03 crc kubenswrapper[4895]: I1206 07:15:03.004594 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-52v65" Dec 06 07:15:03 crc kubenswrapper[4895]: I1206 07:15:03.026717 4895 scope.go:117] "RemoveContainer" containerID="af1e2e6a2640257b5d7d91bb0b4f26c9a46a5de698f31f49a56f60a102785c59" Dec 06 07:15:03 crc kubenswrapper[4895]: E1206 07:15:03.027273 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af1e2e6a2640257b5d7d91bb0b4f26c9a46a5de698f31f49a56f60a102785c59\": container with ID starting with af1e2e6a2640257b5d7d91bb0b4f26c9a46a5de698f31f49a56f60a102785c59 not found: ID does not exist" containerID="af1e2e6a2640257b5d7d91bb0b4f26c9a46a5de698f31f49a56f60a102785c59" Dec 06 07:15:03 crc kubenswrapper[4895]: I1206 07:15:03.027331 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af1e2e6a2640257b5d7d91bb0b4f26c9a46a5de698f31f49a56f60a102785c59"} err="failed to get container status \"af1e2e6a2640257b5d7d91bb0b4f26c9a46a5de698f31f49a56f60a102785c59\": rpc error: code = NotFound desc = could not find container \"af1e2e6a2640257b5d7d91bb0b4f26c9a46a5de698f31f49a56f60a102785c59\": container with ID starting with af1e2e6a2640257b5d7d91bb0b4f26c9a46a5de698f31f49a56f60a102785c59 not found: ID does not exist" Dec 06 07:15:03 crc kubenswrapper[4895]: I1206 07:15:03.039334 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-52v65"] Dec 06 07:15:03 crc kubenswrapper[4895]: I1206 07:15:03.042786 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-52v65"] Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.064890 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3864596e-56f8-46a1-95e6-3558c161cd02" path="/var/lib/kubelet/pods/3864596e-56f8-46a1-95e6-3558c161cd02/volumes" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.071704 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch"] Dec 06 07:15:04 crc kubenswrapper[4895]: E1206 07:15:04.071939 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3864596e-56f8-46a1-95e6-3558c161cd02" containerName="console" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.071954 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3864596e-56f8-46a1-95e6-3558c161cd02" containerName="console" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.072069 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="3864596e-56f8-46a1-95e6-3558c161cd02" containerName="console" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.072891 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.075098 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.080980 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch"] Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.219309 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86d20cc9-915a-4f28-802c-c2d656de5763-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch\" (UID: \"86d20cc9-915a-4f28-802c-c2d656de5763\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.219392 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fglfs\" (UniqueName: \"kubernetes.io/projected/86d20cc9-915a-4f28-802c-c2d656de5763-kube-api-access-fglfs\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch\" (UID: \"86d20cc9-915a-4f28-802c-c2d656de5763\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.219686 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86d20cc9-915a-4f28-802c-c2d656de5763-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch\" (UID: \"86d20cc9-915a-4f28-802c-c2d656de5763\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.288201 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.320980 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86d20cc9-915a-4f28-802c-c2d656de5763-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch\" (UID: \"86d20cc9-915a-4f28-802c-c2d656de5763\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.321056 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86d20cc9-915a-4f28-802c-c2d656de5763-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch\" (UID: \"86d20cc9-915a-4f28-802c-c2d656de5763\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.321164 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fglfs\" (UniqueName: \"kubernetes.io/projected/86d20cc9-915a-4f28-802c-c2d656de5763-kube-api-access-fglfs\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch\" (UID: \"86d20cc9-915a-4f28-802c-c2d656de5763\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.321998 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86d20cc9-915a-4f28-802c-c2d656de5763-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch\" (UID: \"86d20cc9-915a-4f28-802c-c2d656de5763\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.322110 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86d20cc9-915a-4f28-802c-c2d656de5763-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch\" (UID: \"86d20cc9-915a-4f28-802c-c2d656de5763\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.342357 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fglfs\" (UniqueName: \"kubernetes.io/projected/86d20cc9-915a-4f28-802c-c2d656de5763-kube-api-access-fglfs\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch\" (UID: \"86d20cc9-915a-4f28-802c-c2d656de5763\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.390233 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.423013 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42c8dd3d-28ae-4572-b7a0-60f26e47a4a3-config-volume\") pod \"42c8dd3d-28ae-4572-b7a0-60f26e47a4a3\" (UID: \"42c8dd3d-28ae-4572-b7a0-60f26e47a4a3\") " Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.423709 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk87v\" (UniqueName: \"kubernetes.io/projected/42c8dd3d-28ae-4572-b7a0-60f26e47a4a3-kube-api-access-bk87v\") pod \"42c8dd3d-28ae-4572-b7a0-60f26e47a4a3\" (UID: \"42c8dd3d-28ae-4572-b7a0-60f26e47a4a3\") " Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.423806 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42c8dd3d-28ae-4572-b7a0-60f26e47a4a3-secret-volume\") pod \"42c8dd3d-28ae-4572-b7a0-60f26e47a4a3\" (UID: \"42c8dd3d-28ae-4572-b7a0-60f26e47a4a3\") " Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.424081 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42c8dd3d-28ae-4572-b7a0-60f26e47a4a3-config-volume" (OuterVolumeSpecName: "config-volume") pod "42c8dd3d-28ae-4572-b7a0-60f26e47a4a3" (UID: "42c8dd3d-28ae-4572-b7a0-60f26e47a4a3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.427229 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c8dd3d-28ae-4572-b7a0-60f26e47a4a3-kube-api-access-bk87v" (OuterVolumeSpecName: "kube-api-access-bk87v") pod "42c8dd3d-28ae-4572-b7a0-60f26e47a4a3" (UID: "42c8dd3d-28ae-4572-b7a0-60f26e47a4a3"). InnerVolumeSpecName "kube-api-access-bk87v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.427707 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c8dd3d-28ae-4572-b7a0-60f26e47a4a3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "42c8dd3d-28ae-4572-b7a0-60f26e47a4a3" (UID: "42c8dd3d-28ae-4572-b7a0-60f26e47a4a3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.525616 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42c8dd3d-28ae-4572-b7a0-60f26e47a4a3-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.525669 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk87v\" (UniqueName: \"kubernetes.io/projected/42c8dd3d-28ae-4572-b7a0-60f26e47a4a3-kube-api-access-bk87v\") on node \"crc\" DevicePath \"\"" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.525693 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42c8dd3d-28ae-4572-b7a0-60f26e47a4a3-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 07:15:04 crc kubenswrapper[4895]: I1206 07:15:04.578139 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch"] Dec 06 07:15:05 crc kubenswrapper[4895]: I1206 07:15:05.020539 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b" Dec 06 07:15:05 crc kubenswrapper[4895]: I1206 07:15:05.020542 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b" event={"ID":"42c8dd3d-28ae-4572-b7a0-60f26e47a4a3","Type":"ContainerDied","Data":"86b9b671a2ca920cb8bbffa8b79216da8955b15cfb1caacb1b2d29169c350dca"} Dec 06 07:15:05 crc kubenswrapper[4895]: I1206 07:15:05.020675 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86b9b671a2ca920cb8bbffa8b79216da8955b15cfb1caacb1b2d29169c350dca" Dec 06 07:15:05 crc kubenswrapper[4895]: I1206 07:15:05.022969 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch" event={"ID":"86d20cc9-915a-4f28-802c-c2d656de5763","Type":"ContainerStarted","Data":"6806ab254b28f4f630d190bc74c0e7ad7288489ec9b30f1d541597bc85aa059b"} Dec 06 07:15:06 crc kubenswrapper[4895]: I1206 07:15:06.030283 4895 generic.go:334] "Generic (PLEG): container finished" podID="86d20cc9-915a-4f28-802c-c2d656de5763" containerID="b99febae98800956023ffb340c379d062725418a317c61ef7417da98abf1d407" exitCode=0 Dec 06 07:15:06 crc kubenswrapper[4895]: I1206 07:15:06.030335 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch" event={"ID":"86d20cc9-915a-4f28-802c-c2d656de5763","Type":"ContainerDied","Data":"b99febae98800956023ffb340c379d062725418a317c61ef7417da98abf1d407"} Dec 06 07:15:23 crc kubenswrapper[4895]: I1206 07:15:23.178261 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch" event={"ID":"86d20cc9-915a-4f28-802c-c2d656de5763","Type":"ContainerStarted","Data":"2640af6da9860121881a62fc7f1478d95fb861deaf6a0ae9d6beb68b6474ae5a"} Dec 06 07:15:24 crc kubenswrapper[4895]: I1206 07:15:24.185520 4895 generic.go:334] "Generic (PLEG): container finished" podID="86d20cc9-915a-4f28-802c-c2d656de5763" containerID="2640af6da9860121881a62fc7f1478d95fb861deaf6a0ae9d6beb68b6474ae5a" exitCode=0 Dec 06 07:15:24 crc kubenswrapper[4895]: I1206 07:15:24.185822 4895 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch" event={"ID":"86d20cc9-915a-4f28-802c-c2d656de5763","Type":"ContainerDied","Data":"2640af6da9860121881a62fc7f1478d95fb861deaf6a0ae9d6beb68b6474ae5a"} Dec 06 07:15:25 crc kubenswrapper[4895]: I1206 07:15:25.194094 4895 generic.go:334] "Generic (PLEG): container finished" podID="86d20cc9-915a-4f28-802c-c2d656de5763" containerID="df4e2f464c3a78fe4a98f6d7365dedc7d53eda622a3150159d01c16d939f1bdb" exitCode=0 Dec 06 07:15:25 crc kubenswrapper[4895]: I1206 07:15:25.194152 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch" event={"ID":"86d20cc9-915a-4f28-802c-c2d656de5763","Type":"ContainerDied","Data":"df4e2f464c3a78fe4a98f6d7365dedc7d53eda622a3150159d01c16d939f1bdb"} Dec 06 07:15:26 crc kubenswrapper[4895]: I1206 07:15:26.407552 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch" Dec 06 07:15:26 crc kubenswrapper[4895]: I1206 07:15:26.534368 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86d20cc9-915a-4f28-802c-c2d656de5763-bundle\") pod \"86d20cc9-915a-4f28-802c-c2d656de5763\" (UID: \"86d20cc9-915a-4f28-802c-c2d656de5763\") " Dec 06 07:15:26 crc kubenswrapper[4895]: I1206 07:15:26.534464 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fglfs\" (UniqueName: \"kubernetes.io/projected/86d20cc9-915a-4f28-802c-c2d656de5763-kube-api-access-fglfs\") pod \"86d20cc9-915a-4f28-802c-c2d656de5763\" (UID: \"86d20cc9-915a-4f28-802c-c2d656de5763\") " Dec 06 07:15:26 crc kubenswrapper[4895]: I1206 07:15:26.534510 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86d20cc9-915a-4f28-802c-c2d656de5763-util\") pod \"86d20cc9-915a-4f28-802c-c2d656de5763\" (UID: \"86d20cc9-915a-4f28-802c-c2d656de5763\") " Dec 06 07:15:26 crc kubenswrapper[4895]: I1206 07:15:26.536054 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86d20cc9-915a-4f28-802c-c2d656de5763-bundle" (OuterVolumeSpecName: "bundle") pod "86d20cc9-915a-4f28-802c-c2d656de5763" (UID: "86d20cc9-915a-4f28-802c-c2d656de5763"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:15:26 crc kubenswrapper[4895]: I1206 07:15:26.539033 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86d20cc9-915a-4f28-802c-c2d656de5763-kube-api-access-fglfs" (OuterVolumeSpecName: "kube-api-access-fglfs") pod "86d20cc9-915a-4f28-802c-c2d656de5763" (UID: "86d20cc9-915a-4f28-802c-c2d656de5763"). InnerVolumeSpecName "kube-api-access-fglfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:15:26 crc kubenswrapper[4895]: I1206 07:15:26.546064 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86d20cc9-915a-4f28-802c-c2d656de5763-util" (OuterVolumeSpecName: "util") pod "86d20cc9-915a-4f28-802c-c2d656de5763" (UID: "86d20cc9-915a-4f28-802c-c2d656de5763"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:15:26 crc kubenswrapper[4895]: I1206 07:15:26.635650 4895 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86d20cc9-915a-4f28-802c-c2d656de5763-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:15:26 crc kubenswrapper[4895]: I1206 07:15:26.635695 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fglfs\" (UniqueName: \"kubernetes.io/projected/86d20cc9-915a-4f28-802c-c2d656de5763-kube-api-access-fglfs\") on node \"crc\" DevicePath \"\"" Dec 06 07:15:26 crc kubenswrapper[4895]: I1206 07:15:26.635747 4895 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86d20cc9-915a-4f28-802c-c2d656de5763-util\") on node \"crc\" DevicePath \"\"" Dec 06 07:15:27 crc kubenswrapper[4895]: I1206 07:15:27.207068 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch" event={"ID":"86d20cc9-915a-4f28-802c-c2d656de5763","Type":"ContainerDied","Data":"6806ab254b28f4f630d190bc74c0e7ad7288489ec9b30f1d541597bc85aa059b"} Dec 06 07:15:27 crc kubenswrapper[4895]: I1206 07:15:27.207104 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6806ab254b28f4f630d190bc74c0e7ad7288489ec9b30f1d541597bc85aa059b" Dec 06 07:15:27 crc kubenswrapper[4895]: I1206 07:15:27.207679 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch" Dec 06 07:15:29 crc kubenswrapper[4895]: I1206 07:15:29.695723 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:15:29 crc kubenswrapper[4895]: I1206 07:15:29.696004 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.242093 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-78ffd896db-f79hf"] Dec 06 07:15:37 crc kubenswrapper[4895]: E1206 07:15:37.242927 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d20cc9-915a-4f28-802c-c2d656de5763" containerName="pull" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.242944 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d20cc9-915a-4f28-802c-c2d656de5763" containerName="pull" Dec 06 07:15:37 crc kubenswrapper[4895]: E1206 07:15:37.242959 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d20cc9-915a-4f28-802c-c2d656de5763" containerName="extract" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.242967 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d20cc9-915a-4f28-802c-c2d656de5763" containerName="extract" Dec 06 07:15:37 crc kubenswrapper[4895]: E1206 07:15:37.242979 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c8dd3d-28ae-4572-b7a0-60f26e47a4a3" containerName="collect-profiles" Dec 
06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.242988 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c8dd3d-28ae-4572-b7a0-60f26e47a4a3" containerName="collect-profiles" Dec 06 07:15:37 crc kubenswrapper[4895]: E1206 07:15:37.243000 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d20cc9-915a-4f28-802c-c2d656de5763" containerName="util" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.243007 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d20cc9-915a-4f28-802c-c2d656de5763" containerName="util" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.243115 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c8dd3d-28ae-4572-b7a0-60f26e47a4a3" containerName="collect-profiles" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.243134 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d20cc9-915a-4f28-802c-c2d656de5763" containerName="extract" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.243611 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-78ffd896db-f79hf" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.246298 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.247059 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.247153 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.248153 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-mzq54" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.253927 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-78ffd896db-f79hf"] Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.254263 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.374314 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6fab2ce9-306f-4230-ab6a-be99e37aaeea-apiservice-cert\") pod \"metallb-operator-controller-manager-78ffd896db-f79hf\" (UID: \"6fab2ce9-306f-4230-ab6a-be99e37aaeea\") " pod="metallb-system/metallb-operator-controller-manager-78ffd896db-f79hf" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.374718 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6fab2ce9-306f-4230-ab6a-be99e37aaeea-webhook-cert\") pod \"metallb-operator-controller-manager-78ffd896db-f79hf\" (UID: \"6fab2ce9-306f-4230-ab6a-be99e37aaeea\") " pod="metallb-system/metallb-operator-controller-manager-78ffd896db-f79hf" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.374877 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvrlz\" (UniqueName: \"kubernetes.io/projected/6fab2ce9-306f-4230-ab6a-be99e37aaeea-kube-api-access-gvrlz\") pod \"metallb-operator-controller-manager-78ffd896db-f79hf\" (UID: 
\"6fab2ce9-306f-4230-ab6a-be99e37aaeea\") " pod="metallb-system/metallb-operator-controller-manager-78ffd896db-f79hf" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.475935 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6fab2ce9-306f-4230-ab6a-be99e37aaeea-apiservice-cert\") pod \"metallb-operator-controller-manager-78ffd896db-f79hf\" (UID: \"6fab2ce9-306f-4230-ab6a-be99e37aaeea\") " pod="metallb-system/metallb-operator-controller-manager-78ffd896db-f79hf" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.476195 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6fab2ce9-306f-4230-ab6a-be99e37aaeea-webhook-cert\") pod \"metallb-operator-controller-manager-78ffd896db-f79hf\" (UID: \"6fab2ce9-306f-4230-ab6a-be99e37aaeea\") " pod="metallb-system/metallb-operator-controller-manager-78ffd896db-f79hf" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.476307 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvrlz\" (UniqueName: \"kubernetes.io/projected/6fab2ce9-306f-4230-ab6a-be99e37aaeea-kube-api-access-gvrlz\") pod \"metallb-operator-controller-manager-78ffd896db-f79hf\" (UID: \"6fab2ce9-306f-4230-ab6a-be99e37aaeea\") " pod="metallb-system/metallb-operator-controller-manager-78ffd896db-f79hf" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.476909 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-55976579dc-68gpl"] Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.477625 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-55976579dc-68gpl" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.479102 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-7jn2n" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.479646 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.479831 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.485123 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6fab2ce9-306f-4230-ab6a-be99e37aaeea-apiservice-cert\") pod \"metallb-operator-controller-manager-78ffd896db-f79hf\" (UID: \"6fab2ce9-306f-4230-ab6a-be99e37aaeea\") " pod="metallb-system/metallb-operator-controller-manager-78ffd896db-f79hf" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.485197 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6fab2ce9-306f-4230-ab6a-be99e37aaeea-webhook-cert\") pod \"metallb-operator-controller-manager-78ffd896db-f79hf\" (UID: \"6fab2ce9-306f-4230-ab6a-be99e37aaeea\") " pod="metallb-system/metallb-operator-controller-manager-78ffd896db-f79hf" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.497457 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvrlz\" (UniqueName: \"kubernetes.io/projected/6fab2ce9-306f-4230-ab6a-be99e37aaeea-kube-api-access-gvrlz\") pod 
\"metallb-operator-controller-manager-78ffd896db-f79hf\" (UID: \"6fab2ce9-306f-4230-ab6a-be99e37aaeea\") " pod="metallb-system/metallb-operator-controller-manager-78ffd896db-f79hf" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.501400 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-55976579dc-68gpl"] Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.565143 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-78ffd896db-f79hf" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.578307 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5xz9\" (UniqueName: \"kubernetes.io/projected/20e10bde-64c1-402d-914e-2bfeef28267e-kube-api-access-h5xz9\") pod \"metallb-operator-webhook-server-55976579dc-68gpl\" (UID: \"20e10bde-64c1-402d-914e-2bfeef28267e\") " pod="metallb-system/metallb-operator-webhook-server-55976579dc-68gpl" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.578364 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/20e10bde-64c1-402d-914e-2bfeef28267e-webhook-cert\") pod \"metallb-operator-webhook-server-55976579dc-68gpl\" (UID: \"20e10bde-64c1-402d-914e-2bfeef28267e\") " pod="metallb-system/metallb-operator-webhook-server-55976579dc-68gpl" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.578468 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/20e10bde-64c1-402d-914e-2bfeef28267e-apiservice-cert\") pod \"metallb-operator-webhook-server-55976579dc-68gpl\" (UID: \"20e10bde-64c1-402d-914e-2bfeef28267e\") " pod="metallb-system/metallb-operator-webhook-server-55976579dc-68gpl" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.682944 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/20e10bde-64c1-402d-914e-2bfeef28267e-apiservice-cert\") pod \"metallb-operator-webhook-server-55976579dc-68gpl\" (UID: \"20e10bde-64c1-402d-914e-2bfeef28267e\") " pod="metallb-system/metallb-operator-webhook-server-55976579dc-68gpl" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.683198 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5xz9\" (UniqueName: \"kubernetes.io/projected/20e10bde-64c1-402d-914e-2bfeef28267e-kube-api-access-h5xz9\") pod \"metallb-operator-webhook-server-55976579dc-68gpl\" (UID: \"20e10bde-64c1-402d-914e-2bfeef28267e\") " pod="metallb-system/metallb-operator-webhook-server-55976579dc-68gpl" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.683222 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/20e10bde-64c1-402d-914e-2bfeef28267e-webhook-cert\") pod \"metallb-operator-webhook-server-55976579dc-68gpl\" (UID: \"20e10bde-64c1-402d-914e-2bfeef28267e\") " pod="metallb-system/metallb-operator-webhook-server-55976579dc-68gpl" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.687733 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/20e10bde-64c1-402d-914e-2bfeef28267e-apiservice-cert\") pod 
\"metallb-operator-webhook-server-55976579dc-68gpl\" (UID: \"20e10bde-64c1-402d-914e-2bfeef28267e\") " pod="metallb-system/metallb-operator-webhook-server-55976579dc-68gpl" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.693093 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/20e10bde-64c1-402d-914e-2bfeef28267e-webhook-cert\") pod \"metallb-operator-webhook-server-55976579dc-68gpl\" (UID: \"20e10bde-64c1-402d-914e-2bfeef28267e\") " pod="metallb-system/metallb-operator-webhook-server-55976579dc-68gpl" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.708462 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5xz9\" (UniqueName: \"kubernetes.io/projected/20e10bde-64c1-402d-914e-2bfeef28267e-kube-api-access-h5xz9\") pod \"metallb-operator-webhook-server-55976579dc-68gpl\" (UID: \"20e10bde-64c1-402d-914e-2bfeef28267e\") " pod="metallb-system/metallb-operator-webhook-server-55976579dc-68gpl" Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.836721 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-78ffd896db-f79hf"] Dec 06 07:15:37 crc kubenswrapper[4895]: I1206 07:15:37.842513 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-55976579dc-68gpl" Dec 06 07:15:37 crc kubenswrapper[4895]: W1206 07:15:37.853571 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fab2ce9_306f_4230_ab6a_be99e37aaeea.slice/crio-833515eaf77b7a9d2b0d1c6f1c048165455e67444ed45b6b08f38ed031f9c9a8 WatchSource:0}: Error finding container 833515eaf77b7a9d2b0d1c6f1c048165455e67444ed45b6b08f38ed031f9c9a8: Status 404 returned error can't find the container with id 833515eaf77b7a9d2b0d1c6f1c048165455e67444ed45b6b08f38ed031f9c9a8 Dec 06 07:15:38 crc kubenswrapper[4895]: I1206 07:15:38.071785 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-55976579dc-68gpl"] Dec 06 07:15:38 crc kubenswrapper[4895]: W1206 07:15:38.076591 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20e10bde_64c1_402d_914e_2bfeef28267e.slice/crio-c850f0702b8ca22d9f4a53f9f58ddbb0c2b8f1d27f1325e7659d5b94f8784e29 WatchSource:0}: Error finding container c850f0702b8ca22d9f4a53f9f58ddbb0c2b8f1d27f1325e7659d5b94f8784e29: Status 404 returned error can't find the container with id c850f0702b8ca22d9f4a53f9f58ddbb0c2b8f1d27f1325e7659d5b94f8784e29 Dec 06 07:15:38 crc kubenswrapper[4895]: I1206 07:15:38.269377 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-78ffd896db-f79hf" event={"ID":"6fab2ce9-306f-4230-ab6a-be99e37aaeea","Type":"ContainerStarted","Data":"833515eaf77b7a9d2b0d1c6f1c048165455e67444ed45b6b08f38ed031f9c9a8"} Dec 06 07:15:38 crc kubenswrapper[4895]: I1206 07:15:38.270198 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-55976579dc-68gpl" event={"ID":"20e10bde-64c1-402d-914e-2bfeef28267e","Type":"ContainerStarted","Data":"c850f0702b8ca22d9f4a53f9f58ddbb0c2b8f1d27f1325e7659d5b94f8784e29"} Dec 06 07:15:46 crc kubenswrapper[4895]: I1206 07:15:46.323960 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-78ffd896db-f79hf" event={"ID":"6fab2ce9-306f-4230-ab6a-be99e37aaeea","Type":"ContainerStarted","Data":"b02c49b95b8594da3c41cd96c1485559e0d65f4551938d2926beaaad8e42722b"} Dec 06 07:15:46 crc kubenswrapper[4895]: I1206 07:15:46.324530 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-78ffd896db-f79hf" Dec 06 07:15:46 crc kubenswrapper[4895]: I1206 07:15:46.345261 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-78ffd896db-f79hf" podStartSLOduration=1.325536252 podStartE2EDuration="9.345239267s" podCreationTimestamp="2025-12-06 07:15:37 +0000 UTC" firstStartedPulling="2025-12-06 07:15:37.860610828 +0000 UTC m=+1100.261999698" lastFinishedPulling="2025-12-06 07:15:45.880313843 +0000 UTC m=+1108.281702713" observedRunningTime="2025-12-06 07:15:46.33984239 +0000 UTC m=+1108.741231260" watchObservedRunningTime="2025-12-06 07:15:46.345239267 +0000 UTC m=+1108.746628147" Dec 06 07:15:47 crc kubenswrapper[4895]: I1206 07:15:47.331148 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-55976579dc-68gpl" event={"ID":"20e10bde-64c1-402d-914e-2bfeef28267e","Type":"ContainerStarted","Data":"4488fe5545d07a71026e9bc06a11c0a76a37cc9187ffbb181e947791920dcf75"} Dec 06 07:15:47 crc kubenswrapper[4895]: I1206 07:15:47.356735 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-55976579dc-68gpl" podStartSLOduration=2.123492204 podStartE2EDuration="10.356702637s" podCreationTimestamp="2025-12-06 07:15:37 +0000 UTC" firstStartedPulling="2025-12-06 07:15:38.08386045 +0000 UTC m=+1100.485249320" lastFinishedPulling="2025-12-06 07:15:46.317070883 +0000 UTC m=+1108.718459753" observedRunningTime="2025-12-06 07:15:47.356372357 +0000 UTC m=+1109.757761227" watchObservedRunningTime="2025-12-06 07:15:47.356702637 +0000 UTC m=+1109.758091507" Dec 06 07:15:47 crc kubenswrapper[4895]: I1206 07:15:47.843315 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-55976579dc-68gpl" Dec 06 07:15:57 crc kubenswrapper[4895]: I1206 07:15:57.846631 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-55976579dc-68gpl" Dec 06 07:15:59 crc kubenswrapper[4895]: I1206 07:15:59.696523 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:15:59 crc kubenswrapper[4895]: I1206 07:15:59.697302 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:16:17 crc kubenswrapper[4895]: I1206 07:16:17.569195 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-78ffd896db-f79hf" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.352813 4895 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-t5mgh"] Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.355520 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t5mgh" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.363878 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.366401 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-cxbxf" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.371998 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-zfwsz"] Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.374960 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-t5mgh"] Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.375103 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.378260 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.378567 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.441189 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-xmz5r"] Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.442654 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-xmz5r" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.444932 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.444961 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.445502 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-sjmfc" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.445811 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.460532 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26238b40-5288-4a03-80b8-a3400c9f5365-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-t5mgh\" (UID: \"26238b40-5288-4a03-80b8-a3400c9f5365\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t5mgh" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.460594 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx5q9\" (UniqueName: \"kubernetes.io/projected/26238b40-5288-4a03-80b8-a3400c9f5365-kube-api-access-zx5q9\") pod \"frr-k8s-webhook-server-7fcb986d4-t5mgh\" (UID: \"26238b40-5288-4a03-80b8-a3400c9f5365\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t5mgh" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.460690 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/eafcf625-ed1f-442a-8bf3-b1b6c231d811-memberlist\") pod \"speaker-xmz5r\" (UID: \"eafcf625-ed1f-442a-8bf3-b1b6c231d811\") " pod="metallb-system/speaker-xmz5r" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.460710 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/69aac7da-152a-4314-92fd-1f4aea0140be-frr-conf\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.460756 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eafcf625-ed1f-442a-8bf3-b1b6c231d811-metrics-certs\") pod \"speaker-xmz5r\" (UID: \"eafcf625-ed1f-442a-8bf3-b1b6c231d811\") " pod="metallb-system/speaker-xmz5r" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.460861 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/69aac7da-152a-4314-92fd-1f4aea0140be-frr-startup\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.460938 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/69aac7da-152a-4314-92fd-1f4aea0140be-metrics\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.460996 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/eafcf625-ed1f-442a-8bf3-b1b6c231d811-metallb-excludel2\") pod \"speaker-xmz5r\" (UID: \"eafcf625-ed1f-442a-8bf3-b1b6c231d811\") " pod="metallb-system/speaker-xmz5r" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.461022 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69aac7da-152a-4314-92fd-1f4aea0140be-metrics-certs\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.461037 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnndz\" (UniqueName: \"kubernetes.io/projected/69aac7da-152a-4314-92fd-1f4aea0140be-kube-api-access-lnndz\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.461102 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/69aac7da-152a-4314-92fd-1f4aea0140be-reloader\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.461261 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/69aac7da-152a-4314-92fd-1f4aea0140be-frr-sockets\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " 
pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.461340 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8krf\" (UniqueName: \"kubernetes.io/projected/eafcf625-ed1f-442a-8bf3-b1b6c231d811-kube-api-access-s8krf\") pod \"speaker-xmz5r\" (UID: \"eafcf625-ed1f-442a-8bf3-b1b6c231d811\") " pod="metallb-system/speaker-xmz5r" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.463575 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-cz9xl"] Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.464574 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-cz9xl" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.468851 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.476310 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-cz9xl"] Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.563141 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e0bbbc9-5a2e-4e78-917b-e0ac820395fb-cert\") pod \"controller-f8648f98b-cz9xl\" (UID: \"5e0bbbc9-5a2e-4e78-917b-e0ac820395fb\") " pod="metallb-system/controller-f8648f98b-cz9xl" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.563190 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/69aac7da-152a-4314-92fd-1f4aea0140be-metrics\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.563219 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/eafcf625-ed1f-442a-8bf3-b1b6c231d811-metallb-excludel2\") pod \"speaker-xmz5r\" (UID: \"eafcf625-ed1f-442a-8bf3-b1b6c231d811\") " pod="metallb-system/speaker-xmz5r" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.563238 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnndz\" (UniqueName: \"kubernetes.io/projected/69aac7da-152a-4314-92fd-1f4aea0140be-kube-api-access-lnndz\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.563253 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69aac7da-152a-4314-92fd-1f4aea0140be-metrics-certs\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.563268 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2brt\" (UniqueName: \"kubernetes.io/projected/5e0bbbc9-5a2e-4e78-917b-e0ac820395fb-kube-api-access-k2brt\") pod \"controller-f8648f98b-cz9xl\" (UID: \"5e0bbbc9-5a2e-4e78-917b-e0ac820395fb\") " pod="metallb-system/controller-f8648f98b-cz9xl" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.563644 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/69aac7da-152a-4314-92fd-1f4aea0140be-metrics\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.563668 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/69aac7da-152a-4314-92fd-1f4aea0140be-reloader\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.563752 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/69aac7da-152a-4314-92fd-1f4aea0140be-frr-sockets\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.563790 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8krf\" (UniqueName: \"kubernetes.io/projected/eafcf625-ed1f-442a-8bf3-b1b6c231d811-kube-api-access-s8krf\") pod \"speaker-xmz5r\" (UID: \"eafcf625-ed1f-442a-8bf3-b1b6c231d811\") " pod="metallb-system/speaker-xmz5r" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.563819 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26238b40-5288-4a03-80b8-a3400c9f5365-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-t5mgh\" (UID: \"26238b40-5288-4a03-80b8-a3400c9f5365\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t5mgh" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.563839 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx5q9\" (UniqueName: \"kubernetes.io/projected/26238b40-5288-4a03-80b8-a3400c9f5365-kube-api-access-zx5q9\") pod \"frr-k8s-webhook-server-7fcb986d4-t5mgh\" (UID: \"26238b40-5288-4a03-80b8-a3400c9f5365\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t5mgh" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.563882 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e0bbbc9-5a2e-4e78-917b-e0ac820395fb-metrics-certs\") pod \"controller-f8648f98b-cz9xl\" (UID: \"5e0bbbc9-5a2e-4e78-917b-e0ac820395fb\") " pod="metallb-system/controller-f8648f98b-cz9xl" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.563915 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/eafcf625-ed1f-442a-8bf3-b1b6c231d811-memberlist\") pod \"speaker-xmz5r\" (UID: \"eafcf625-ed1f-442a-8bf3-b1b6c231d811\") " pod="metallb-system/speaker-xmz5r" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.563929 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/69aac7da-152a-4314-92fd-1f4aea0140be-frr-conf\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.563971 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eafcf625-ed1f-442a-8bf3-b1b6c231d811-metrics-certs\") pod \"speaker-xmz5r\" (UID: \"eafcf625-ed1f-442a-8bf3-b1b6c231d811\") " 
pod="metallb-system/speaker-xmz5r" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.564000 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/69aac7da-152a-4314-92fd-1f4aea0140be-frr-startup\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.564334 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/eafcf625-ed1f-442a-8bf3-b1b6c231d811-metallb-excludel2\") pod \"speaker-xmz5r\" (UID: \"eafcf625-ed1f-442a-8bf3-b1b6c231d811\") " pod="metallb-system/speaker-xmz5r" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.564976 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/69aac7da-152a-4314-92fd-1f4aea0140be-frr-startup\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: E1206 07:16:18.565506 4895 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 06 07:16:18 crc kubenswrapper[4895]: E1206 07:16:18.565565 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eafcf625-ed1f-442a-8bf3-b1b6c231d811-memberlist podName:eafcf625-ed1f-442a-8bf3-b1b6c231d811 nodeName:}" failed. No retries permitted until 2025-12-06 07:16:19.065547225 +0000 UTC m=+1141.466936095 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/eafcf625-ed1f-442a-8bf3-b1b6c231d811-memberlist") pod "speaker-xmz5r" (UID: "eafcf625-ed1f-442a-8bf3-b1b6c231d811") : secret "metallb-memberlist" not found Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.565696 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/69aac7da-152a-4314-92fd-1f4aea0140be-reloader\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.566382 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/69aac7da-152a-4314-92fd-1f4aea0140be-frr-sockets\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.567774 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/69aac7da-152a-4314-92fd-1f4aea0140be-frr-conf\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.570420 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69aac7da-152a-4314-92fd-1f4aea0140be-metrics-certs\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.570440 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26238b40-5288-4a03-80b8-a3400c9f5365-cert\") pod 
\"frr-k8s-webhook-server-7fcb986d4-t5mgh\" (UID: \"26238b40-5288-4a03-80b8-a3400c9f5365\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t5mgh" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.574016 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eafcf625-ed1f-442a-8bf3-b1b6c231d811-metrics-certs\") pod \"speaker-xmz5r\" (UID: \"eafcf625-ed1f-442a-8bf3-b1b6c231d811\") " pod="metallb-system/speaker-xmz5r" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.582276 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8krf\" (UniqueName: \"kubernetes.io/projected/eafcf625-ed1f-442a-8bf3-b1b6c231d811-kube-api-access-s8krf\") pod \"speaker-xmz5r\" (UID: \"eafcf625-ed1f-442a-8bf3-b1b6c231d811\") " pod="metallb-system/speaker-xmz5r" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.584509 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnndz\" (UniqueName: \"kubernetes.io/projected/69aac7da-152a-4314-92fd-1f4aea0140be-kube-api-access-lnndz\") pod \"frr-k8s-zfwsz\" (UID: \"69aac7da-152a-4314-92fd-1f4aea0140be\") " pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.584514 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx5q9\" (UniqueName: \"kubernetes.io/projected/26238b40-5288-4a03-80b8-a3400c9f5365-kube-api-access-zx5q9\") pod \"frr-k8s-webhook-server-7fcb986d4-t5mgh\" (UID: \"26238b40-5288-4a03-80b8-a3400c9f5365\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t5mgh" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.664681 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e0bbbc9-5a2e-4e78-917b-e0ac820395fb-cert\") pod \"controller-f8648f98b-cz9xl\" (UID: \"5e0bbbc9-5a2e-4e78-917b-e0ac820395fb\") " pod="metallb-system/controller-f8648f98b-cz9xl" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.664740 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2brt\" (UniqueName: \"kubernetes.io/projected/5e0bbbc9-5a2e-4e78-917b-e0ac820395fb-kube-api-access-k2brt\") pod \"controller-f8648f98b-cz9xl\" (UID: \"5e0bbbc9-5a2e-4e78-917b-e0ac820395fb\") " pod="metallb-system/controller-f8648f98b-cz9xl" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.664806 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e0bbbc9-5a2e-4e78-917b-e0ac820395fb-metrics-certs\") pod \"controller-f8648f98b-cz9xl\" (UID: \"5e0bbbc9-5a2e-4e78-917b-e0ac820395fb\") " pod="metallb-system/controller-f8648f98b-cz9xl" Dec 06 07:16:18 crc kubenswrapper[4895]: E1206 07:16:18.664996 4895 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Dec 06 07:16:18 crc kubenswrapper[4895]: E1206 07:16:18.665059 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e0bbbc9-5a2e-4e78-917b-e0ac820395fb-metrics-certs podName:5e0bbbc9-5a2e-4e78-917b-e0ac820395fb nodeName:}" failed. No retries permitted until 2025-12-06 07:16:19.165039382 +0000 UTC m=+1141.566428252 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e0bbbc9-5a2e-4e78-917b-e0ac820395fb-metrics-certs") pod "controller-f8648f98b-cz9xl" (UID: "5e0bbbc9-5a2e-4e78-917b-e0ac820395fb") : secret "controller-certs-secret" not found Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.667219 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.673106 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t5mgh" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.680372 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e0bbbc9-5a2e-4e78-917b-e0ac820395fb-cert\") pod \"controller-f8648f98b-cz9xl\" (UID: \"5e0bbbc9-5a2e-4e78-917b-e0ac820395fb\") " pod="metallb-system/controller-f8648f98b-cz9xl" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.681100 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2brt\" (UniqueName: \"kubernetes.io/projected/5e0bbbc9-5a2e-4e78-917b-e0ac820395fb-kube-api-access-k2brt\") pod \"controller-f8648f98b-cz9xl\" (UID: \"5e0bbbc9-5a2e-4e78-917b-e0ac820395fb\") " pod="metallb-system/controller-f8648f98b-cz9xl" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.690809 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.960434 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-t5mgh"] Dec 06 07:16:18 crc kubenswrapper[4895]: I1206 07:16:18.972033 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 07:16:19 crc kubenswrapper[4895]: I1206 07:16:19.079780 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/eafcf625-ed1f-442a-8bf3-b1b6c231d811-memberlist\") pod \"speaker-xmz5r\" (UID: \"eafcf625-ed1f-442a-8bf3-b1b6c231d811\") " pod="metallb-system/speaker-xmz5r" Dec 06 07:16:19 crc kubenswrapper[4895]: E1206 07:16:19.080028 4895 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 06 07:16:19 crc kubenswrapper[4895]: E1206 07:16:19.080082 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eafcf625-ed1f-442a-8bf3-b1b6c231d811-memberlist podName:eafcf625-ed1f-442a-8bf3-b1b6c231d811 nodeName:}" failed. No retries permitted until 2025-12-06 07:16:20.080065894 +0000 UTC m=+1142.481454764 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/eafcf625-ed1f-442a-8bf3-b1b6c231d811-memberlist") pod "speaker-xmz5r" (UID: "eafcf625-ed1f-442a-8bf3-b1b6c231d811") : secret "metallb-memberlist" not found Dec 06 07:16:19 crc kubenswrapper[4895]: I1206 07:16:19.181031 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e0bbbc9-5a2e-4e78-917b-e0ac820395fb-metrics-certs\") pod \"controller-f8648f98b-cz9xl\" (UID: \"5e0bbbc9-5a2e-4e78-917b-e0ac820395fb\") " pod="metallb-system/controller-f8648f98b-cz9xl" Dec 06 07:16:19 crc kubenswrapper[4895]: I1206 07:16:19.188458 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e0bbbc9-5a2e-4e78-917b-e0ac820395fb-metrics-certs\") pod \"controller-f8648f98b-cz9xl\" (UID: \"5e0bbbc9-5a2e-4e78-917b-e0ac820395fb\") " pod="metallb-system/controller-f8648f98b-cz9xl" Dec 06 07:16:19 crc kubenswrapper[4895]: I1206 07:16:19.387328 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-cz9xl" Dec 06 07:16:19 crc kubenswrapper[4895]: I1206 07:16:19.552282 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t5mgh" event={"ID":"26238b40-5288-4a03-80b8-a3400c9f5365","Type":"ContainerStarted","Data":"0285cb4771b46b89629ca1e29db74ec397c9e600bd44cccc9a8ca5499d60f226"} Dec 06 07:16:19 crc kubenswrapper[4895]: I1206 07:16:19.617033 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-cz9xl"] Dec 06 07:16:19 crc kubenswrapper[4895]: W1206 07:16:19.629249 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e0bbbc9_5a2e_4e78_917b_e0ac820395fb.slice/crio-30a66e1c339cb82a927c4dc0de046a8819a501be936cbc685076a75217b18d43 WatchSource:0}: Error finding container 30a66e1c339cb82a927c4dc0de046a8819a501be936cbc685076a75217b18d43: Status 404 returned error can't find the container with id 30a66e1c339cb82a927c4dc0de046a8819a501be936cbc685076a75217b18d43 Dec 06 07:16:20 crc kubenswrapper[4895]: I1206 07:16:20.102391 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/eafcf625-ed1f-442a-8bf3-b1b6c231d811-memberlist\") pod \"speaker-xmz5r\" (UID: \"eafcf625-ed1f-442a-8bf3-b1b6c231d811\") " pod="metallb-system/speaker-xmz5r" Dec 06 07:16:20 crc kubenswrapper[4895]: E1206 07:16:20.102593 4895 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 06 07:16:20 crc kubenswrapper[4895]: E1206 07:16:20.102653 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eafcf625-ed1f-442a-8bf3-b1b6c231d811-memberlist podName:eafcf625-ed1f-442a-8bf3-b1b6c231d811 nodeName:}" failed. No retries permitted until 2025-12-06 07:16:22.102638875 +0000 UTC m=+1144.504027745 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/eafcf625-ed1f-442a-8bf3-b1b6c231d811-memberlist") pod "speaker-xmz5r" (UID: "eafcf625-ed1f-442a-8bf3-b1b6c231d811") : secret "metallb-memberlist" not found Dec 06 07:16:20 crc kubenswrapper[4895]: I1206 07:16:20.578017 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-cz9xl" event={"ID":"5e0bbbc9-5a2e-4e78-917b-e0ac820395fb","Type":"ContainerStarted","Data":"1410ad8f5a568e6d76227178992dd70c41827f366e0035a7a816ca472bfe5f1c"} Dec 06 07:16:20 crc kubenswrapper[4895]: I1206 07:16:20.578288 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-cz9xl" event={"ID":"5e0bbbc9-5a2e-4e78-917b-e0ac820395fb","Type":"ContainerStarted","Data":"9a43a15a81e519c534367c29f797a94cd4b0ce27361394a45efa1546277bacc8"} Dec 06 07:16:20 crc kubenswrapper[4895]: I1206 07:16:20.578308 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-cz9xl" Dec 06 07:16:20 crc kubenswrapper[4895]: I1206 07:16:20.578318 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-cz9xl" event={"ID":"5e0bbbc9-5a2e-4e78-917b-e0ac820395fb","Type":"ContainerStarted","Data":"30a66e1c339cb82a927c4dc0de046a8819a501be936cbc685076a75217b18d43"} Dec 06 07:16:20 crc kubenswrapper[4895]: I1206 07:16:20.587507 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zfwsz" event={"ID":"69aac7da-152a-4314-92fd-1f4aea0140be","Type":"ContainerStarted","Data":"8f7e5ff48866dc3f4edcf81016ea3a0a7f5c2d5034e9fbc666a6565931c4141e"} Dec 06 07:16:20 crc kubenswrapper[4895]: I1206 07:16:20.616128 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-cz9xl" podStartSLOduration=2.6160963539999997 podStartE2EDuration="2.616096354s" podCreationTimestamp="2025-12-06 07:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:16:20.611544141 +0000 UTC m=+1143.012933011" watchObservedRunningTime="2025-12-06 07:16:20.616096354 +0000 UTC m=+1143.017485224" Dec 06 07:16:22 crc kubenswrapper[4895]: I1206 07:16:22.131565 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/eafcf625-ed1f-442a-8bf3-b1b6c231d811-memberlist\") pod \"speaker-xmz5r\" (UID: \"eafcf625-ed1f-442a-8bf3-b1b6c231d811\") " pod="metallb-system/speaker-xmz5r" Dec 06 07:16:22 crc kubenswrapper[4895]: I1206 07:16:22.140071 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/eafcf625-ed1f-442a-8bf3-b1b6c231d811-memberlist\") pod \"speaker-xmz5r\" (UID: \"eafcf625-ed1f-442a-8bf3-b1b6c231d811\") " pod="metallb-system/speaker-xmz5r" Dec 06 07:16:22 crc kubenswrapper[4895]: I1206 07:16:22.365330 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-xmz5r" Dec 06 07:16:22 crc kubenswrapper[4895]: W1206 07:16:22.406685 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeafcf625_ed1f_442a_8bf3_b1b6c231d811.slice/crio-632859cea22435c5165d3915a7aeff30a52e11a426aaa15aa08b3099de23c5d9 WatchSource:0}: Error finding container 632859cea22435c5165d3915a7aeff30a52e11a426aaa15aa08b3099de23c5d9: Status 404 returned error can't find the container with id 632859cea22435c5165d3915a7aeff30a52e11a426aaa15aa08b3099de23c5d9 Dec 06 07:16:22 crc kubenswrapper[4895]: I1206 07:16:22.604753 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xmz5r" event={"ID":"eafcf625-ed1f-442a-8bf3-b1b6c231d811","Type":"ContainerStarted","Data":"632859cea22435c5165d3915a7aeff30a52e11a426aaa15aa08b3099de23c5d9"} Dec 06 07:16:23 crc kubenswrapper[4895]: I1206 07:16:23.629239 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xmz5r" event={"ID":"eafcf625-ed1f-442a-8bf3-b1b6c231d811","Type":"ContainerStarted","Data":"1ce642d469c610728f21ffc3ac4f2fe600ce523eadbe7ff7a4cb991f4cc38992"} Dec 06 07:16:23 crc kubenswrapper[4895]: I1206 07:16:23.661200 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xmz5r" event={"ID":"eafcf625-ed1f-442a-8bf3-b1b6c231d811","Type":"ContainerStarted","Data":"6af42d4ab4c9af035c669334aff467a622d67a49f029414ca47f8dd1492f6ae4"} Dec 06 07:16:23 crc kubenswrapper[4895]: I1206 07:16:23.661290 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-xmz5r" Dec 06 07:16:23 crc kubenswrapper[4895]: I1206 07:16:23.702116 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-xmz5r" podStartSLOduration=5.702089502 podStartE2EDuration="5.702089502s" podCreationTimestamp="2025-12-06 07:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:16:23.688871734 +0000 UTC m=+1146.090260604" watchObservedRunningTime="2025-12-06 07:16:23.702089502 +0000 UTC m=+1146.103478372" Dec 06 07:16:27 crc kubenswrapper[4895]: I1206 07:16:27.665791 4895 generic.go:334] "Generic (PLEG): container finished" podID="69aac7da-152a-4314-92fd-1f4aea0140be" containerID="7c3e520499df37dcf39b20884c8dd8ec9c40679ee7deb28a188a59beda5e5357" exitCode=0 Dec 06 07:16:27 crc kubenswrapper[4895]: I1206 07:16:27.665862 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zfwsz" event={"ID":"69aac7da-152a-4314-92fd-1f4aea0140be","Type":"ContainerDied","Data":"7c3e520499df37dcf39b20884c8dd8ec9c40679ee7deb28a188a59beda5e5357"} Dec 06 07:16:27 crc kubenswrapper[4895]: I1206 07:16:27.668598 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t5mgh" event={"ID":"26238b40-5288-4a03-80b8-a3400c9f5365","Type":"ContainerStarted","Data":"2ba324e5b748e64cbbcaa4d081a0d65d7dc8342cc816a349de7ed08c3cb3ca10"} Dec 06 07:16:27 crc kubenswrapper[4895]: I1206 07:16:27.668783 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t5mgh" Dec 06 07:16:27 crc kubenswrapper[4895]: I1206 07:16:27.747319 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t5mgh" podStartSLOduration=1.357475033 
podStartE2EDuration="9.747296352s" podCreationTimestamp="2025-12-06 07:16:18 +0000 UTC" firstStartedPulling="2025-12-06 07:16:18.97184242 +0000 UTC m=+1141.373231290" lastFinishedPulling="2025-12-06 07:16:27.361663739 +0000 UTC m=+1149.763052609" observedRunningTime="2025-12-06 07:16:27.743388026 +0000 UTC m=+1150.144776906" watchObservedRunningTime="2025-12-06 07:16:27.747296352 +0000 UTC m=+1150.148685222" Dec 06 07:16:28 crc kubenswrapper[4895]: I1206 07:16:28.681632 4895 generic.go:334] "Generic (PLEG): container finished" podID="69aac7da-152a-4314-92fd-1f4aea0140be" containerID="f3943273349708ea594d442c96b6632513cab2e4138af77dbb7400606facaa86" exitCode=0 Dec 06 07:16:28 crc kubenswrapper[4895]: I1206 07:16:28.681750 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zfwsz" event={"ID":"69aac7da-152a-4314-92fd-1f4aea0140be","Type":"ContainerDied","Data":"f3943273349708ea594d442c96b6632513cab2e4138af77dbb7400606facaa86"} Dec 06 07:16:29 crc kubenswrapper[4895]: I1206 07:16:29.693908 4895 generic.go:334] "Generic (PLEG): container finished" podID="69aac7da-152a-4314-92fd-1f4aea0140be" containerID="8f6e7cbf5c95ca3c8230f8cf11de9512588a068f42688ff8f33500f14ec56faa" exitCode=0 Dec 06 07:16:29 crc kubenswrapper[4895]: I1206 07:16:29.693966 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zfwsz" event={"ID":"69aac7da-152a-4314-92fd-1f4aea0140be","Type":"ContainerDied","Data":"8f6e7cbf5c95ca3c8230f8cf11de9512588a068f42688ff8f33500f14ec56faa"} Dec 06 07:16:29 crc kubenswrapper[4895]: I1206 07:16:29.695414 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:16:29 crc kubenswrapper[4895]: I1206 07:16:29.695540 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:16:29 crc kubenswrapper[4895]: I1206 07:16:29.695609 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 07:16:29 crc kubenswrapper[4895]: I1206 07:16:29.696737 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"663e0971fd031efe73576c4d0575b9ee19ff771a93759108ec40df72da6692c9"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:16:29 crc kubenswrapper[4895]: I1206 07:16:29.696877 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://663e0971fd031efe73576c4d0575b9ee19ff771a93759108ec40df72da6692c9" gracePeriod=600 Dec 06 07:16:30 crc kubenswrapper[4895]: I1206 07:16:30.701392 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="663e0971fd031efe73576c4d0575b9ee19ff771a93759108ec40df72da6692c9" 
exitCode=0 Dec 06 07:16:30 crc kubenswrapper[4895]: I1206 07:16:30.701465 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"663e0971fd031efe73576c4d0575b9ee19ff771a93759108ec40df72da6692c9"} Dec 06 07:16:30 crc kubenswrapper[4895]: I1206 07:16:30.702144 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"6ccc9113d0ff0776606793bc5166b14a5fc6157da50c2a82c90bf46796771601"} Dec 06 07:16:30 crc kubenswrapper[4895]: I1206 07:16:30.702169 4895 scope.go:117] "RemoveContainer" containerID="0fca2ab370dac1142fa441cec2ee41930eac4b7f4cc0496ffb43ffe8ce0a4b9a" Dec 06 07:16:30 crc kubenswrapper[4895]: I1206 07:16:30.712007 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zfwsz" event={"ID":"69aac7da-152a-4314-92fd-1f4aea0140be","Type":"ContainerStarted","Data":"3f13502ec28899c4c89bc348a3d9fdf997ee49ad3982ab75b06eff6c183b683c"} Dec 06 07:16:30 crc kubenswrapper[4895]: I1206 07:16:30.712060 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zfwsz" event={"ID":"69aac7da-152a-4314-92fd-1f4aea0140be","Type":"ContainerStarted","Data":"90728ff4bc81ecbbbb9ad4d69d560a3e5cbdd444f8d8e7e37d4452d03a061609"} Dec 06 07:16:30 crc kubenswrapper[4895]: I1206 07:16:30.712071 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zfwsz" event={"ID":"69aac7da-152a-4314-92fd-1f4aea0140be","Type":"ContainerStarted","Data":"cb50b3116f7dd6db9cb8d2554e067f1259e0965b66a467271a02ab032fb8c3f4"} Dec 06 07:16:30 crc kubenswrapper[4895]: I1206 07:16:30.712080 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zfwsz" event={"ID":"69aac7da-152a-4314-92fd-1f4aea0140be","Type":"ContainerStarted","Data":"aea365e98ed1a3f49416995771e66e7a7d631ddda90c73a484f48589aa77d6c2"} Dec 06 07:16:30 crc kubenswrapper[4895]: I1206 07:16:30.712090 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zfwsz" event={"ID":"69aac7da-152a-4314-92fd-1f4aea0140be","Type":"ContainerStarted","Data":"3bd09d380d19898989e923356578fa9bdcd3268ea4866ab4eb13ca96ac76ee7a"} Dec 06 07:16:31 crc kubenswrapper[4895]: I1206 07:16:31.724335 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zfwsz" event={"ID":"69aac7da-152a-4314-92fd-1f4aea0140be","Type":"ContainerStarted","Data":"6002ab79f7ee5e55c348f991485963a81bb6947cef9d8e2c08afcc9008d1c7df"} Dec 06 07:16:31 crc kubenswrapper[4895]: I1206 07:16:31.724984 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:31 crc kubenswrapper[4895]: I1206 07:16:31.746984 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-zfwsz" podStartSLOduration=6.660138993 podStartE2EDuration="13.746967879s" podCreationTimestamp="2025-12-06 07:16:18 +0000 UTC" firstStartedPulling="2025-12-06 07:16:20.307179539 +0000 UTC m=+1142.708568419" lastFinishedPulling="2025-12-06 07:16:27.394008435 +0000 UTC m=+1149.795397305" observedRunningTime="2025-12-06 07:16:31.745439348 +0000 UTC m=+1154.146828218" watchObservedRunningTime="2025-12-06 07:16:31.746967879 +0000 UTC m=+1154.148356749" Dec 06 07:16:32 crc kubenswrapper[4895]: I1206 07:16:32.369104 4895 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-xmz5r" Dec 06 07:16:33 crc kubenswrapper[4895]: I1206 07:16:33.691406 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:33 crc kubenswrapper[4895]: I1206 07:16:33.741272 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:34 crc kubenswrapper[4895]: I1206 07:16:34.020964 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n"] Dec 06 07:16:34 crc kubenswrapper[4895]: I1206 07:16:34.022016 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n" Dec 06 07:16:34 crc kubenswrapper[4895]: I1206 07:16:34.024158 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 06 07:16:34 crc kubenswrapper[4895]: I1206 07:16:34.033018 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n"] Dec 06 07:16:34 crc kubenswrapper[4895]: I1206 07:16:34.161247 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ncch\" (UniqueName: \"kubernetes.io/projected/149dfbf8-e301-4bee-b295-7fd74dfd4df1-kube-api-access-6ncch\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n\" (UID: \"149dfbf8-e301-4bee-b295-7fd74dfd4df1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n" Dec 06 07:16:34 crc kubenswrapper[4895]: I1206 07:16:34.161665 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/149dfbf8-e301-4bee-b295-7fd74dfd4df1-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n\" (UID: \"149dfbf8-e301-4bee-b295-7fd74dfd4df1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n" Dec 06 07:16:34 crc kubenswrapper[4895]: I1206 07:16:34.161699 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/149dfbf8-e301-4bee-b295-7fd74dfd4df1-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n\" (UID: \"149dfbf8-e301-4bee-b295-7fd74dfd4df1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n" Dec 06 07:16:34 crc kubenswrapper[4895]: I1206 07:16:34.263142 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ncch\" (UniqueName: \"kubernetes.io/projected/149dfbf8-e301-4bee-b295-7fd74dfd4df1-kube-api-access-6ncch\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n\" (UID: \"149dfbf8-e301-4bee-b295-7fd74dfd4df1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n" Dec 06 07:16:34 crc kubenswrapper[4895]: I1206 07:16:34.263208 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/149dfbf8-e301-4bee-b295-7fd74dfd4df1-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n\" (UID: 
\"149dfbf8-e301-4bee-b295-7fd74dfd4df1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n" Dec 06 07:16:34 crc kubenswrapper[4895]: I1206 07:16:34.263228 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/149dfbf8-e301-4bee-b295-7fd74dfd4df1-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n\" (UID: \"149dfbf8-e301-4bee-b295-7fd74dfd4df1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n" Dec 06 07:16:34 crc kubenswrapper[4895]: I1206 07:16:34.264062 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/149dfbf8-e301-4bee-b295-7fd74dfd4df1-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n\" (UID: \"149dfbf8-e301-4bee-b295-7fd74dfd4df1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n" Dec 06 07:16:34 crc kubenswrapper[4895]: I1206 07:16:34.264343 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/149dfbf8-e301-4bee-b295-7fd74dfd4df1-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n\" (UID: \"149dfbf8-e301-4bee-b295-7fd74dfd4df1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n" Dec 06 07:16:34 crc kubenswrapper[4895]: I1206 07:16:34.286252 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ncch\" (UniqueName: \"kubernetes.io/projected/149dfbf8-e301-4bee-b295-7fd74dfd4df1-kube-api-access-6ncch\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n\" (UID: \"149dfbf8-e301-4bee-b295-7fd74dfd4df1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n" Dec 06 07:16:34 crc kubenswrapper[4895]: I1206 07:16:34.336686 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n" Dec 06 07:16:34 crc kubenswrapper[4895]: I1206 07:16:34.606292 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n"] Dec 06 07:16:34 crc kubenswrapper[4895]: I1206 07:16:34.744459 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n" event={"ID":"149dfbf8-e301-4bee-b295-7fd74dfd4df1","Type":"ContainerStarted","Data":"05a13c524152518b8f0a793e9cca56f9a276df22a34e44797bccebdba61d172e"} Dec 06 07:16:35 crc kubenswrapper[4895]: I1206 07:16:35.762197 4895 generic.go:334] "Generic (PLEG): container finished" podID="149dfbf8-e301-4bee-b295-7fd74dfd4df1" containerID="9a7a6fdfd24fc9fd606a77a36a19742d34eb4563868979d6c6b26303769fc985" exitCode=0 Dec 06 07:16:35 crc kubenswrapper[4895]: I1206 07:16:35.762610 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n" event={"ID":"149dfbf8-e301-4bee-b295-7fd74dfd4df1","Type":"ContainerDied","Data":"9a7a6fdfd24fc9fd606a77a36a19742d34eb4563868979d6c6b26303769fc985"} Dec 06 07:16:38 crc kubenswrapper[4895]: I1206 07:16:38.677379 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t5mgh" Dec 06 07:16:39 crc kubenswrapper[4895]: I1206 07:16:39.392268 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-cz9xl" Dec 06 07:16:39 crc kubenswrapper[4895]: I1206 07:16:39.796807 4895 generic.go:334] "Generic (PLEG): container finished" podID="149dfbf8-e301-4bee-b295-7fd74dfd4df1" containerID="201147e7095aca60e5a19b87199fbcfd91a88db6b05c45cb937e94fb9b0f1734" exitCode=0 Dec 06 07:16:39 crc kubenswrapper[4895]: I1206 07:16:39.796919 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n" event={"ID":"149dfbf8-e301-4bee-b295-7fd74dfd4df1","Type":"ContainerDied","Data":"201147e7095aca60e5a19b87199fbcfd91a88db6b05c45cb937e94fb9b0f1734"} Dec 06 07:16:40 crc kubenswrapper[4895]: I1206 07:16:40.806569 4895 generic.go:334] "Generic (PLEG): container finished" podID="149dfbf8-e301-4bee-b295-7fd74dfd4df1" containerID="1ed66fb55cced4dc2a119cd59b5229e20a9617e28b9e28cca2afffc93f465946" exitCode=0 Dec 06 07:16:40 crc kubenswrapper[4895]: I1206 07:16:40.806658 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n" event={"ID":"149dfbf8-e301-4bee-b295-7fd74dfd4df1","Type":"ContainerDied","Data":"1ed66fb55cced4dc2a119cd59b5229e20a9617e28b9e28cca2afffc93f465946"} Dec 06 07:16:42 crc kubenswrapper[4895]: I1206 07:16:42.076561 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n" Dec 06 07:16:42 crc kubenswrapper[4895]: I1206 07:16:42.180369 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/149dfbf8-e301-4bee-b295-7fd74dfd4df1-bundle\") pod \"149dfbf8-e301-4bee-b295-7fd74dfd4df1\" (UID: \"149dfbf8-e301-4bee-b295-7fd74dfd4df1\") " Dec 06 07:16:42 crc kubenswrapper[4895]: I1206 07:16:42.180434 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ncch\" (UniqueName: \"kubernetes.io/projected/149dfbf8-e301-4bee-b295-7fd74dfd4df1-kube-api-access-6ncch\") pod \"149dfbf8-e301-4bee-b295-7fd74dfd4df1\" (UID: \"149dfbf8-e301-4bee-b295-7fd74dfd4df1\") " Dec 06 07:16:42 crc kubenswrapper[4895]: I1206 07:16:42.180528 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/149dfbf8-e301-4bee-b295-7fd74dfd4df1-util\") pod \"149dfbf8-e301-4bee-b295-7fd74dfd4df1\" (UID: \"149dfbf8-e301-4bee-b295-7fd74dfd4df1\") " Dec 06 07:16:42 crc kubenswrapper[4895]: I1206 07:16:42.181889 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149dfbf8-e301-4bee-b295-7fd74dfd4df1-bundle" (OuterVolumeSpecName: "bundle") pod "149dfbf8-e301-4bee-b295-7fd74dfd4df1" (UID: "149dfbf8-e301-4bee-b295-7fd74dfd4df1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:16:42 crc kubenswrapper[4895]: I1206 07:16:42.187246 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149dfbf8-e301-4bee-b295-7fd74dfd4df1-kube-api-access-6ncch" (OuterVolumeSpecName: "kube-api-access-6ncch") pod "149dfbf8-e301-4bee-b295-7fd74dfd4df1" (UID: "149dfbf8-e301-4bee-b295-7fd74dfd4df1"). InnerVolumeSpecName "kube-api-access-6ncch". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:16:42 crc kubenswrapper[4895]: I1206 07:16:42.191041 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149dfbf8-e301-4bee-b295-7fd74dfd4df1-util" (OuterVolumeSpecName: "util") pod "149dfbf8-e301-4bee-b295-7fd74dfd4df1" (UID: "149dfbf8-e301-4bee-b295-7fd74dfd4df1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:16:42 crc kubenswrapper[4895]: I1206 07:16:42.282556 4895 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/149dfbf8-e301-4bee-b295-7fd74dfd4df1-util\") on node \"crc\" DevicePath \"\"" Dec 06 07:16:42 crc kubenswrapper[4895]: I1206 07:16:42.282611 4895 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/149dfbf8-e301-4bee-b295-7fd74dfd4df1-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:16:42 crc kubenswrapper[4895]: I1206 07:16:42.282625 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ncch\" (UniqueName: \"kubernetes.io/projected/149dfbf8-e301-4bee-b295-7fd74dfd4df1-kube-api-access-6ncch\") on node \"crc\" DevicePath \"\"" Dec 06 07:16:42 crc kubenswrapper[4895]: I1206 07:16:42.823501 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n" Dec 06 07:16:42 crc kubenswrapper[4895]: I1206 07:16:42.823501 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n" event={"ID":"149dfbf8-e301-4bee-b295-7fd74dfd4df1","Type":"ContainerDied","Data":"05a13c524152518b8f0a793e9cca56f9a276df22a34e44797bccebdba61d172e"} Dec 06 07:16:42 crc kubenswrapper[4895]: I1206 07:16:42.823675 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05a13c524152518b8f0a793e9cca56f9a276df22a34e44797bccebdba61d172e" Dec 06 07:16:46 crc kubenswrapper[4895]: I1206 07:16:46.720736 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5qkr9"] Dec 06 07:16:46 crc kubenswrapper[4895]: E1206 07:16:46.723150 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149dfbf8-e301-4bee-b295-7fd74dfd4df1" containerName="extract" Dec 06 07:16:46 crc kubenswrapper[4895]: I1206 07:16:46.723245 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="149dfbf8-e301-4bee-b295-7fd74dfd4df1" containerName="extract" Dec 06 07:16:46 crc kubenswrapper[4895]: E1206 07:16:46.723342 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149dfbf8-e301-4bee-b295-7fd74dfd4df1" containerName="pull" Dec 06 07:16:46 crc kubenswrapper[4895]: I1206 07:16:46.723428 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="149dfbf8-e301-4bee-b295-7fd74dfd4df1" containerName="pull" Dec 06 07:16:46 crc kubenswrapper[4895]: E1206 07:16:46.723524 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149dfbf8-e301-4bee-b295-7fd74dfd4df1" containerName="util" Dec 06 07:16:46 crc kubenswrapper[4895]: I1206 07:16:46.723591 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="149dfbf8-e301-4bee-b295-7fd74dfd4df1" containerName="util" Dec 06 07:16:46 crc kubenswrapper[4895]: I1206 07:16:46.723772 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="149dfbf8-e301-4bee-b295-7fd74dfd4df1" containerName="extract" Dec 06 07:16:46 crc kubenswrapper[4895]: I1206 07:16:46.724450 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5qkr9" Dec 06 07:16:46 crc kubenswrapper[4895]: I1206 07:16:46.728026 4895 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-j4z6g" Dec 06 07:16:46 crc kubenswrapper[4895]: I1206 07:16:46.729000 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Dec 06 07:16:46 crc kubenswrapper[4895]: I1206 07:16:46.729438 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Dec 06 07:16:46 crc kubenswrapper[4895]: I1206 07:16:46.740167 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5qkr9"] Dec 06 07:16:46 crc kubenswrapper[4895]: I1206 07:16:46.789604 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np9dd\" (UniqueName: \"kubernetes.io/projected/c4197820-de3f-4c3e-90cb-fb8b4829905d-kube-api-access-np9dd\") pod \"cert-manager-operator-controller-manager-64cf6dff88-5qkr9\" (UID: \"c4197820-de3f-4c3e-90cb-fb8b4829905d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5qkr9" Dec 06 07:16:46 crc kubenswrapper[4895]: I1206 07:16:46.789665 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c4197820-de3f-4c3e-90cb-fb8b4829905d-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-5qkr9\" (UID: \"c4197820-de3f-4c3e-90cb-fb8b4829905d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5qkr9" Dec 06 07:16:46 crc kubenswrapper[4895]: I1206 07:16:46.891229 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np9dd\" (UniqueName: \"kubernetes.io/projected/c4197820-de3f-4c3e-90cb-fb8b4829905d-kube-api-access-np9dd\") pod \"cert-manager-operator-controller-manager-64cf6dff88-5qkr9\" (UID: \"c4197820-de3f-4c3e-90cb-fb8b4829905d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5qkr9" Dec 06 07:16:46 crc kubenswrapper[4895]: I1206 07:16:46.891273 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c4197820-de3f-4c3e-90cb-fb8b4829905d-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-5qkr9\" (UID: \"c4197820-de3f-4c3e-90cb-fb8b4829905d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5qkr9" Dec 06 07:16:46 crc kubenswrapper[4895]: I1206 07:16:46.891773 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c4197820-de3f-4c3e-90cb-fb8b4829905d-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-5qkr9\" (UID: \"c4197820-de3f-4c3e-90cb-fb8b4829905d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5qkr9" Dec 06 07:16:46 crc kubenswrapper[4895]: I1206 07:16:46.910812 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np9dd\" (UniqueName: \"kubernetes.io/projected/c4197820-de3f-4c3e-90cb-fb8b4829905d-kube-api-access-np9dd\") pod \"cert-manager-operator-controller-manager-64cf6dff88-5qkr9\" (UID: \"c4197820-de3f-4c3e-90cb-fb8b4829905d\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5qkr9" Dec 06 07:16:47 crc kubenswrapper[4895]: I1206 07:16:47.042458 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5qkr9" Dec 06 07:16:47 crc kubenswrapper[4895]: I1206 07:16:47.682122 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5qkr9"] Dec 06 07:16:47 crc kubenswrapper[4895]: W1206 07:16:47.707933 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4197820_de3f_4c3e_90cb_fb8b4829905d.slice/crio-6a333565679aafc48b7205c152a813a3c64c7ee7daf22209bdac02decef3d53d WatchSource:0}: Error finding container 6a333565679aafc48b7205c152a813a3c64c7ee7daf22209bdac02decef3d53d: Status 404 returned error can't find the container with id 6a333565679aafc48b7205c152a813a3c64c7ee7daf22209bdac02decef3d53d Dec 06 07:16:47 crc kubenswrapper[4895]: I1206 07:16:47.958157 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5qkr9" event={"ID":"c4197820-de3f-4c3e-90cb-fb8b4829905d","Type":"ContainerStarted","Data":"6a333565679aafc48b7205c152a813a3c64c7ee7daf22209bdac02decef3d53d"} Dec 06 07:16:48 crc kubenswrapper[4895]: I1206 07:16:48.693848 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-zfwsz" Dec 06 07:16:58 crc kubenswrapper[4895]: I1206 07:16:58.027334 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5qkr9" event={"ID":"c4197820-de3f-4c3e-90cb-fb8b4829905d","Type":"ContainerStarted","Data":"628ca8e253f9e7d9b134f574fdcbe4acc6de9809b6591986f2bd7a199e72e165"} Dec 06 07:16:58 crc kubenswrapper[4895]: I1206 07:16:58.048241 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5qkr9" podStartSLOduration=1.952544654 podStartE2EDuration="12.048217037s" podCreationTimestamp="2025-12-06 07:16:46 +0000 UTC" firstStartedPulling="2025-12-06 07:16:47.715709734 +0000 UTC m=+1170.117098614" lastFinishedPulling="2025-12-06 07:16:57.811382127 +0000 UTC m=+1180.212770997" observedRunningTime="2025-12-06 07:16:58.042568494 +0000 UTC m=+1180.443957374" watchObservedRunningTime="2025-12-06 07:16:58.048217037 +0000 UTC m=+1180.449605907" Dec 06 07:17:00 crc kubenswrapper[4895]: I1206 07:17:00.806097 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-lbntl"] Dec 06 07:17:00 crc kubenswrapper[4895]: I1206 07:17:00.807323 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-lbntl" Dec 06 07:17:00 crc kubenswrapper[4895]: I1206 07:17:00.809104 4895 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-t8ltr" Dec 06 07:17:00 crc kubenswrapper[4895]: I1206 07:17:00.809427 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 06 07:17:00 crc kubenswrapper[4895]: I1206 07:17:00.810456 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 06 07:17:00 crc kubenswrapper[4895]: I1206 07:17:00.819430 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-lbntl"] Dec 06 07:17:00 crc kubenswrapper[4895]: I1206 07:17:00.906208 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b68c3ddb-a16d-4c76-bd50-b8117170b7a7-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-lbntl\" (UID: \"b68c3ddb-a16d-4c76-bd50-b8117170b7a7\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-lbntl" Dec 06 07:17:00 crc kubenswrapper[4895]: I1206 07:17:00.906266 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c4zr\" (UniqueName: \"kubernetes.io/projected/b68c3ddb-a16d-4c76-bd50-b8117170b7a7-kube-api-access-7c4zr\") pod \"cert-manager-webhook-f4fb5df64-lbntl\" (UID: \"b68c3ddb-a16d-4c76-bd50-b8117170b7a7\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-lbntl" Dec 06 07:17:01 crc kubenswrapper[4895]: I1206 07:17:01.007928 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b68c3ddb-a16d-4c76-bd50-b8117170b7a7-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-lbntl\" (UID: \"b68c3ddb-a16d-4c76-bd50-b8117170b7a7\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-lbntl" Dec 06 07:17:01 crc kubenswrapper[4895]: I1206 07:17:01.008025 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c4zr\" (UniqueName: \"kubernetes.io/projected/b68c3ddb-a16d-4c76-bd50-b8117170b7a7-kube-api-access-7c4zr\") pod \"cert-manager-webhook-f4fb5df64-lbntl\" (UID: \"b68c3ddb-a16d-4c76-bd50-b8117170b7a7\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-lbntl" Dec 06 07:17:01 crc kubenswrapper[4895]: I1206 07:17:01.029427 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c4zr\" (UniqueName: \"kubernetes.io/projected/b68c3ddb-a16d-4c76-bd50-b8117170b7a7-kube-api-access-7c4zr\") pod \"cert-manager-webhook-f4fb5df64-lbntl\" (UID: \"b68c3ddb-a16d-4c76-bd50-b8117170b7a7\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-lbntl" Dec 06 07:17:01 crc kubenswrapper[4895]: I1206 07:17:01.030221 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b68c3ddb-a16d-4c76-bd50-b8117170b7a7-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-lbntl\" (UID: \"b68c3ddb-a16d-4c76-bd50-b8117170b7a7\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-lbntl" Dec 06 07:17:01 crc kubenswrapper[4895]: I1206 07:17:01.128543 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-lbntl" Dec 06 07:17:01 crc kubenswrapper[4895]: I1206 07:17:01.636166 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-lbntl"] Dec 06 07:17:02 crc kubenswrapper[4895]: I1206 07:17:02.109177 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-lbntl" event={"ID":"b68c3ddb-a16d-4c76-bd50-b8117170b7a7","Type":"ContainerStarted","Data":"0832b8d8fac070c9b26080b4c948c3da078bb5f8ec107578f12a2a013acc05ad"} Dec 06 07:17:04 crc kubenswrapper[4895]: I1206 07:17:04.513424 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-mf96z"] Dec 06 07:17:04 crc kubenswrapper[4895]: I1206 07:17:04.517726 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-mf96z" Dec 06 07:17:04 crc kubenswrapper[4895]: I1206 07:17:04.520935 4895 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-hw4tq" Dec 06 07:17:04 crc kubenswrapper[4895]: I1206 07:17:04.525420 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-mf96z"] Dec 06 07:17:04 crc kubenswrapper[4895]: I1206 07:17:04.675801 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgvbb\" (UniqueName: \"kubernetes.io/projected/f600c442-f2c6-4b75-9945-def8b809dcb4-kube-api-access-fgvbb\") pod \"cert-manager-cainjector-855d9ccff4-mf96z\" (UID: \"f600c442-f2c6-4b75-9945-def8b809dcb4\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-mf96z" Dec 06 07:17:04 crc kubenswrapper[4895]: I1206 07:17:04.676014 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f600c442-f2c6-4b75-9945-def8b809dcb4-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-mf96z\" (UID: \"f600c442-f2c6-4b75-9945-def8b809dcb4\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-mf96z" Dec 06 07:17:04 crc kubenswrapper[4895]: I1206 07:17:04.778163 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgvbb\" (UniqueName: \"kubernetes.io/projected/f600c442-f2c6-4b75-9945-def8b809dcb4-kube-api-access-fgvbb\") pod \"cert-manager-cainjector-855d9ccff4-mf96z\" (UID: \"f600c442-f2c6-4b75-9945-def8b809dcb4\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-mf96z" Dec 06 07:17:04 crc kubenswrapper[4895]: I1206 07:17:04.778229 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f600c442-f2c6-4b75-9945-def8b809dcb4-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-mf96z\" (UID: \"f600c442-f2c6-4b75-9945-def8b809dcb4\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-mf96z" Dec 06 07:17:04 crc kubenswrapper[4895]: I1206 07:17:04.805544 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgvbb\" (UniqueName: \"kubernetes.io/projected/f600c442-f2c6-4b75-9945-def8b809dcb4-kube-api-access-fgvbb\") pod \"cert-manager-cainjector-855d9ccff4-mf96z\" (UID: \"f600c442-f2c6-4b75-9945-def8b809dcb4\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-mf96z" Dec 06 07:17:04 crc kubenswrapper[4895]: I1206 07:17:04.810450 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f600c442-f2c6-4b75-9945-def8b809dcb4-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-mf96z\" (UID: \"f600c442-f2c6-4b75-9945-def8b809dcb4\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-mf96z" Dec 06 07:17:04 crc kubenswrapper[4895]: I1206 07:17:04.856632 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-mf96z" Dec 06 07:17:06 crc kubenswrapper[4895]: I1206 07:17:06.082078 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-mf96z"] Dec 06 07:17:11 crc kubenswrapper[4895]: I1206 07:17:11.908310 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-hk759"] Dec 06 07:17:11 crc kubenswrapper[4895]: I1206 07:17:11.909679 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-hk759" Dec 06 07:17:11 crc kubenswrapper[4895]: I1206 07:17:11.914070 4895 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-z9w7d" Dec 06 07:17:11 crc kubenswrapper[4895]: I1206 07:17:11.926340 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-hk759"] Dec 06 07:17:11 crc kubenswrapper[4895]: I1206 07:17:11.942567 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb13f97d-48b1-4846-854c-d0af6ae35951-bound-sa-token\") pod \"cert-manager-86cb77c54b-hk759\" (UID: \"fb13f97d-48b1-4846-854c-d0af6ae35951\") " pod="cert-manager/cert-manager-86cb77c54b-hk759" Dec 06 07:17:11 crc kubenswrapper[4895]: I1206 07:17:11.942621 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n728t\" (UniqueName: \"kubernetes.io/projected/fb13f97d-48b1-4846-854c-d0af6ae35951-kube-api-access-n728t\") pod \"cert-manager-86cb77c54b-hk759\" (UID: \"fb13f97d-48b1-4846-854c-d0af6ae35951\") " pod="cert-manager/cert-manager-86cb77c54b-hk759" Dec 06 07:17:12 crc kubenswrapper[4895]: I1206 07:17:12.043936 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb13f97d-48b1-4846-854c-d0af6ae35951-bound-sa-token\") pod \"cert-manager-86cb77c54b-hk759\" (UID: \"fb13f97d-48b1-4846-854c-d0af6ae35951\") " pod="cert-manager/cert-manager-86cb77c54b-hk759" Dec 06 07:17:12 crc kubenswrapper[4895]: I1206 07:17:12.044286 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n728t\" (UniqueName: \"kubernetes.io/projected/fb13f97d-48b1-4846-854c-d0af6ae35951-kube-api-access-n728t\") pod \"cert-manager-86cb77c54b-hk759\" (UID: \"fb13f97d-48b1-4846-854c-d0af6ae35951\") " pod="cert-manager/cert-manager-86cb77c54b-hk759" Dec 06 07:17:12 crc kubenswrapper[4895]: I1206 07:17:12.062673 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n728t\" (UniqueName: \"kubernetes.io/projected/fb13f97d-48b1-4846-854c-d0af6ae35951-kube-api-access-n728t\") pod \"cert-manager-86cb77c54b-hk759\" (UID: \"fb13f97d-48b1-4846-854c-d0af6ae35951\") " pod="cert-manager/cert-manager-86cb77c54b-hk759" Dec 06 07:17:12 crc kubenswrapper[4895]: I1206 07:17:12.065981 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb13f97d-48b1-4846-854c-d0af6ae35951-bound-sa-token\") pod \"cert-manager-86cb77c54b-hk759\" (UID: \"fb13f97d-48b1-4846-854c-d0af6ae35951\") " pod="cert-manager/cert-manager-86cb77c54b-hk759" Dec 06 07:17:12 crc kubenswrapper[4895]: I1206 07:17:12.229154 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-hk759" Dec 06 07:17:17 crc kubenswrapper[4895]: W1206 07:17:17.814648 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf600c442_f2c6_4b75_9945_def8b809dcb4.slice/crio-af3d9baa10c835b4895da96304deb24b0fca9928b205e87a9dcceccd058de168 WatchSource:0}: Error finding container af3d9baa10c835b4895da96304deb24b0fca9928b205e87a9dcceccd058de168: Status 404 returned error can't find the container with id af3d9baa10c835b4895da96304deb24b0fca9928b205e87a9dcceccd058de168 Dec 06 07:17:17 crc kubenswrapper[4895]: E1206 07:17:17.862502 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" Dec 06 07:17:17 crc kubenswrapper[4895]: E1206 07:17:17.863076 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-webhook,Image:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,Command:[/app/cmd/webhook/webhook],Args:[--dynamic-serving-ca-secret-name=cert-manager-webhook-ca --dynamic-serving-ca-secret-namespace=$(POD_NAMESPACE) --dynamic-serving-dns-names=cert-manager-webhook,cert-manager-webhook.$(POD_NAMESPACE),cert-manager-webhook.$(POD_NAMESPACE).svc --secure-port=10250 --v=2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:10250,Protocol:TCP,HostIP:,},ContainerPort{Name:healthcheck,HostPort:0,ContainerPort:6080,Protocol:TCP,HostIP:,},ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7c4zr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 
healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-webhook-f4fb5df64-lbntl_cert-manager(b68c3ddb-a16d-4c76-bd50-b8117170b7a7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 07:17:17 crc kubenswrapper[4895]: E1206 07:17:17.864632 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-webhook-f4fb5df64-lbntl" podUID="b68c3ddb-a16d-4c76-bd50-b8117170b7a7" Dec 06 07:17:18 crc kubenswrapper[4895]: I1206 07:17:18.298602 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-mf96z" event={"ID":"f600c442-f2c6-4b75-9945-def8b809dcb4","Type":"ContainerStarted","Data":"af3d9baa10c835b4895da96304deb24b0fca9928b205e87a9dcceccd058de168"} Dec 06 07:17:18 crc kubenswrapper[4895]: E1206 07:17:18.299600 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df\\\"\"" pod="cert-manager/cert-manager-webhook-f4fb5df64-lbntl" podUID="b68c3ddb-a16d-4c76-bd50-b8117170b7a7" Dec 06 07:17:18 crc kubenswrapper[4895]: I1206 07:17:18.306115 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-hk759"] Dec 06 07:17:19 crc kubenswrapper[4895]: I1206 07:17:19.305140 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-hk759" event={"ID":"fb13f97d-48b1-4846-854c-d0af6ae35951","Type":"ContainerStarted","Data":"0d53e5be45799ffd670f40499be4f93f43f4b7ebd34fd73af152d64156b8f8ad"} Dec 06 07:17:30 crc kubenswrapper[4895]: I1206 07:17:30.371042 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-mf96z" event={"ID":"f600c442-f2c6-4b75-9945-def8b809dcb4","Type":"ContainerStarted","Data":"d6918465eba2f4648cdd3872bc5d4cc19dcbec0f2239070c846ff797358a6b56"} Dec 06 07:17:30 crc kubenswrapper[4895]: I1206 07:17:30.372899 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-hk759" event={"ID":"fb13f97d-48b1-4846-854c-d0af6ae35951","Type":"ContainerStarted","Data":"41b63adfecae4d54810f7f569039f951449651cdff74225b30c3a5e521d57225"} Dec 06 07:17:30 crc kubenswrapper[4895]: I1206 07:17:30.409837 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager/cert-manager-cainjector-855d9ccff4-mf96z" podStartSLOduration=14.625186186 podStartE2EDuration="26.409787086s" podCreationTimestamp="2025-12-06 07:17:04 +0000 UTC" firstStartedPulling="2025-12-06 07:17:17.821707276 +0000 UTC m=+1200.223096146" lastFinishedPulling="2025-12-06 07:17:29.606308176 +0000 UTC m=+1212.007697046" observedRunningTime="2025-12-06 07:17:30.392368055 +0000 UTC m=+1212.793756925" watchObservedRunningTime="2025-12-06 07:17:30.409787086 +0000 UTC m=+1212.811175956" Dec 06 07:17:31 crc kubenswrapper[4895]: I1206 07:17:31.380814 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-lbntl" event={"ID":"b68c3ddb-a16d-4c76-bd50-b8117170b7a7","Type":"ContainerStarted","Data":"fbb17d5f0195caaaa92598c16ba0332108908101627e1ccd75ddd7e9f62127bc"} Dec 06 07:17:31 crc kubenswrapper[4895]: I1206 07:17:31.400434 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-hk759" podStartSLOduration=9.104028957 podStartE2EDuration="20.400408771s" podCreationTimestamp="2025-12-06 07:17:11 +0000 UTC" firstStartedPulling="2025-12-06 07:17:18.310669161 +0000 UTC m=+1200.712058031" lastFinishedPulling="2025-12-06 07:17:29.607048975 +0000 UTC m=+1212.008437845" observedRunningTime="2025-12-06 07:17:30.416144399 +0000 UTC m=+1212.817533259" watchObservedRunningTime="2025-12-06 07:17:31.400408771 +0000 UTC m=+1213.801797641" Dec 06 07:17:31 crc kubenswrapper[4895]: I1206 07:17:31.402165 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-lbntl" podStartSLOduration=-9223372005.452625 podStartE2EDuration="31.402151478s" podCreationTimestamp="2025-12-06 07:17:00 +0000 UTC" firstStartedPulling="2025-12-06 07:17:01.648102926 +0000 UTC m=+1184.049491796" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:17:31.396957237 +0000 UTC m=+1213.798346107" watchObservedRunningTime="2025-12-06 07:17:31.402151478 +0000 UTC m=+1213.803540348" Dec 06 07:17:36 crc kubenswrapper[4895]: I1206 07:17:36.128828 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-lbntl" Dec 06 07:17:36 crc kubenswrapper[4895]: I1206 07:17:36.131381 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-lbntl" Dec 06 07:17:39 crc kubenswrapper[4895]: I1206 07:17:39.628878 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rgp22"] Dec 06 07:17:39 crc kubenswrapper[4895]: I1206 07:17:39.630413 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rgp22" Dec 06 07:17:39 crc kubenswrapper[4895]: I1206 07:17:39.640759 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-f7gj5" Dec 06 07:17:39 crc kubenswrapper[4895]: I1206 07:17:39.640859 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 06 07:17:39 crc kubenswrapper[4895]: I1206 07:17:39.643809 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 06 07:17:39 crc kubenswrapper[4895]: I1206 07:17:39.651950 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rgp22"] Dec 06 07:17:39 crc kubenswrapper[4895]: I1206 07:17:39.772843 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x7n9\" (UniqueName: \"kubernetes.io/projected/541e80b1-619d-4cab-93a7-e46dfe21a70e-kube-api-access-4x7n9\") pod \"openstack-operator-index-rgp22\" (UID: \"541e80b1-619d-4cab-93a7-e46dfe21a70e\") " pod="openstack-operators/openstack-operator-index-rgp22" Dec 06 07:17:39 crc kubenswrapper[4895]: I1206 07:17:39.874395 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x7n9\" (UniqueName: \"kubernetes.io/projected/541e80b1-619d-4cab-93a7-e46dfe21a70e-kube-api-access-4x7n9\") pod \"openstack-operator-index-rgp22\" (UID: \"541e80b1-619d-4cab-93a7-e46dfe21a70e\") " pod="openstack-operators/openstack-operator-index-rgp22" Dec 06 07:17:39 crc kubenswrapper[4895]: I1206 07:17:39.895901 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x7n9\" (UniqueName: \"kubernetes.io/projected/541e80b1-619d-4cab-93a7-e46dfe21a70e-kube-api-access-4x7n9\") pod \"openstack-operator-index-rgp22\" (UID: \"541e80b1-619d-4cab-93a7-e46dfe21a70e\") " pod="openstack-operators/openstack-operator-index-rgp22" Dec 06 07:17:39 crc kubenswrapper[4895]: I1206 07:17:39.948014 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rgp22" Dec 06 07:17:40 crc kubenswrapper[4895]: I1206 07:17:40.202043 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rgp22"] Dec 06 07:17:40 crc kubenswrapper[4895]: I1206 07:17:40.440173 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rgp22" event={"ID":"541e80b1-619d-4cab-93a7-e46dfe21a70e","Type":"ContainerStarted","Data":"fd399496a4fcc0449358dc093c2d8231639cdf442e1bc5206977ac7d1332ddfb"} Dec 06 07:17:42 crc kubenswrapper[4895]: I1206 07:17:42.455447 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rgp22" event={"ID":"541e80b1-619d-4cab-93a7-e46dfe21a70e","Type":"ContainerStarted","Data":"931400c119acd9305d0aeb5d23c57a973b98a7ca4b9b4638f6ae4d4c01039318"} Dec 06 07:17:42 crc kubenswrapper[4895]: I1206 07:17:42.479542 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rgp22" podStartSLOduration=1.965553702 podStartE2EDuration="3.479519174s" podCreationTimestamp="2025-12-06 07:17:39 +0000 UTC" firstStartedPulling="2025-12-06 07:17:40.209428874 +0000 UTC m=+1222.610817744" lastFinishedPulling="2025-12-06 07:17:41.723394346 +0000 UTC m=+1224.124783216" observedRunningTime="2025-12-06 07:17:42.474709023 +0000 UTC m=+1224.876097903" watchObservedRunningTime="2025-12-06 07:17:42.479519174 +0000 UTC m=+1224.880908054" Dec 06 07:17:42 crc kubenswrapper[4895]: I1206 07:17:42.808548 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rgp22"] Dec 06 07:17:43 crc kubenswrapper[4895]: I1206 07:17:43.615954 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6bvww"] Dec 06 07:17:43 crc kubenswrapper[4895]: I1206 07:17:43.616908 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6bvww" Dec 06 07:17:43 crc kubenswrapper[4895]: I1206 07:17:43.674906 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6bvww"] Dec 06 07:17:43 crc kubenswrapper[4895]: I1206 07:17:43.744176 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjcqr\" (UniqueName: \"kubernetes.io/projected/c9cd8e78-2424-4824-a38d-8bf32c3c1fb3-kube-api-access-cjcqr\") pod \"openstack-operator-index-6bvww\" (UID: \"c9cd8e78-2424-4824-a38d-8bf32c3c1fb3\") " pod="openstack-operators/openstack-operator-index-6bvww" Dec 06 07:17:43 crc kubenswrapper[4895]: I1206 07:17:43.846162 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjcqr\" (UniqueName: \"kubernetes.io/projected/c9cd8e78-2424-4824-a38d-8bf32c3c1fb3-kube-api-access-cjcqr\") pod \"openstack-operator-index-6bvww\" (UID: \"c9cd8e78-2424-4824-a38d-8bf32c3c1fb3\") " pod="openstack-operators/openstack-operator-index-6bvww" Dec 06 07:17:43 crc kubenswrapper[4895]: I1206 07:17:43.912857 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjcqr\" (UniqueName: \"kubernetes.io/projected/c9cd8e78-2424-4824-a38d-8bf32c3c1fb3-kube-api-access-cjcqr\") pod \"openstack-operator-index-6bvww\" (UID: \"c9cd8e78-2424-4824-a38d-8bf32c3c1fb3\") " pod="openstack-operators/openstack-operator-index-6bvww" Dec 06 07:17:43 crc kubenswrapper[4895]: I1206 07:17:43.953884 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6bvww" Dec 06 07:17:44 crc kubenswrapper[4895]: I1206 07:17:44.188341 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6bvww"] Dec 06 07:17:44 crc kubenswrapper[4895]: I1206 07:17:44.468104 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6bvww" event={"ID":"c9cd8e78-2424-4824-a38d-8bf32c3c1fb3","Type":"ContainerStarted","Data":"c153c9b152d912d6c73f303b54d448c8e8c6c972e7fb204e04509d3a6e7d5666"} Dec 06 07:17:44 crc kubenswrapper[4895]: I1206 07:17:44.468153 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-rgp22" podUID="541e80b1-619d-4cab-93a7-e46dfe21a70e" containerName="registry-server" containerID="cri-o://931400c119acd9305d0aeb5d23c57a973b98a7ca4b9b4638f6ae4d4c01039318" gracePeriod=2 Dec 06 07:17:44 crc kubenswrapper[4895]: I1206 07:17:44.841306 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rgp22" Dec 06 07:17:44 crc kubenswrapper[4895]: I1206 07:17:44.866003 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x7n9\" (UniqueName: \"kubernetes.io/projected/541e80b1-619d-4cab-93a7-e46dfe21a70e-kube-api-access-4x7n9\") pod \"541e80b1-619d-4cab-93a7-e46dfe21a70e\" (UID: \"541e80b1-619d-4cab-93a7-e46dfe21a70e\") " Dec 06 07:17:44 crc kubenswrapper[4895]: I1206 07:17:44.873025 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/541e80b1-619d-4cab-93a7-e46dfe21a70e-kube-api-access-4x7n9" (OuterVolumeSpecName: "kube-api-access-4x7n9") pod "541e80b1-619d-4cab-93a7-e46dfe21a70e" (UID: "541e80b1-619d-4cab-93a7-e46dfe21a70e"). 
InnerVolumeSpecName "kube-api-access-4x7n9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:17:44 crc kubenswrapper[4895]: I1206 07:17:44.966849 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x7n9\" (UniqueName: \"kubernetes.io/projected/541e80b1-619d-4cab-93a7-e46dfe21a70e-kube-api-access-4x7n9\") on node \"crc\" DevicePath \"\"" Dec 06 07:17:45 crc kubenswrapper[4895]: I1206 07:17:45.478807 4895 generic.go:334] "Generic (PLEG): container finished" podID="541e80b1-619d-4cab-93a7-e46dfe21a70e" containerID="931400c119acd9305d0aeb5d23c57a973b98a7ca4b9b4638f6ae4d4c01039318" exitCode=0 Dec 06 07:17:45 crc kubenswrapper[4895]: I1206 07:17:45.478910 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rgp22" Dec 06 07:17:45 crc kubenswrapper[4895]: I1206 07:17:45.478915 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rgp22" event={"ID":"541e80b1-619d-4cab-93a7-e46dfe21a70e","Type":"ContainerDied","Data":"931400c119acd9305d0aeb5d23c57a973b98a7ca4b9b4638f6ae4d4c01039318"} Dec 06 07:17:45 crc kubenswrapper[4895]: I1206 07:17:45.479085 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rgp22" event={"ID":"541e80b1-619d-4cab-93a7-e46dfe21a70e","Type":"ContainerDied","Data":"fd399496a4fcc0449358dc093c2d8231639cdf442e1bc5206977ac7d1332ddfb"} Dec 06 07:17:45 crc kubenswrapper[4895]: I1206 07:17:45.479144 4895 scope.go:117] "RemoveContainer" containerID="931400c119acd9305d0aeb5d23c57a973b98a7ca4b9b4638f6ae4d4c01039318" Dec 06 07:17:45 crc kubenswrapper[4895]: I1206 07:17:45.481656 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6bvww" event={"ID":"c9cd8e78-2424-4824-a38d-8bf32c3c1fb3","Type":"ContainerStarted","Data":"f5604ef4ea89aac4532c2adf4c042352e2566dd6767d0f4923ba7ab59fce22ca"} Dec 06 07:17:45 crc kubenswrapper[4895]: I1206 07:17:45.502901 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6bvww" podStartSLOduration=1.8364108959999998 podStartE2EDuration="2.502874923s" podCreationTimestamp="2025-12-06 07:17:43 +0000 UTC" firstStartedPulling="2025-12-06 07:17:44.202712588 +0000 UTC m=+1226.604101458" lastFinishedPulling="2025-12-06 07:17:44.869176605 +0000 UTC m=+1227.270565485" observedRunningTime="2025-12-06 07:17:45.497890039 +0000 UTC m=+1227.899278929" watchObservedRunningTime="2025-12-06 07:17:45.502874923 +0000 UTC m=+1227.904263793" Dec 06 07:17:45 crc kubenswrapper[4895]: I1206 07:17:45.524257 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rgp22"] Dec 06 07:17:45 crc kubenswrapper[4895]: I1206 07:17:45.524988 4895 scope.go:117] "RemoveContainer" containerID="931400c119acd9305d0aeb5d23c57a973b98a7ca4b9b4638f6ae4d4c01039318" Dec 06 07:17:45 crc kubenswrapper[4895]: E1206 07:17:45.525685 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"931400c119acd9305d0aeb5d23c57a973b98a7ca4b9b4638f6ae4d4c01039318\": container with ID starting with 931400c119acd9305d0aeb5d23c57a973b98a7ca4b9b4638f6ae4d4c01039318 not found: ID does not exist" containerID="931400c119acd9305d0aeb5d23c57a973b98a7ca4b9b4638f6ae4d4c01039318" Dec 06 07:17:45 crc kubenswrapper[4895]: I1206 07:17:45.525733 4895 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"931400c119acd9305d0aeb5d23c57a973b98a7ca4b9b4638f6ae4d4c01039318"} err="failed to get container status \"931400c119acd9305d0aeb5d23c57a973b98a7ca4b9b4638f6ae4d4c01039318\": rpc error: code = NotFound desc = could not find container \"931400c119acd9305d0aeb5d23c57a973b98a7ca4b9b4638f6ae4d4c01039318\": container with ID starting with 931400c119acd9305d0aeb5d23c57a973b98a7ca4b9b4638f6ae4d4c01039318 not found: ID does not exist" Dec 06 07:17:45 crc kubenswrapper[4895]: I1206 07:17:45.528649 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-rgp22"] Dec 06 07:17:46 crc kubenswrapper[4895]: I1206 07:17:46.067802 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="541e80b1-619d-4cab-93a7-e46dfe21a70e" path="/var/lib/kubelet/pods/541e80b1-619d-4cab-93a7-e46dfe21a70e/volumes" Dec 06 07:17:53 crc kubenswrapper[4895]: I1206 07:17:53.954325 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-6bvww" Dec 06 07:17:53 crc kubenswrapper[4895]: I1206 07:17:53.954925 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-6bvww" Dec 06 07:17:54 crc kubenswrapper[4895]: I1206 07:17:54.010882 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-6bvww" Dec 06 07:17:54 crc kubenswrapper[4895]: I1206 07:17:54.568091 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-6bvww" Dec 06 07:17:55 crc kubenswrapper[4895]: I1206 07:17:55.872460 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls"] Dec 06 07:17:55 crc kubenswrapper[4895]: E1206 07:17:55.883136 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="541e80b1-619d-4cab-93a7-e46dfe21a70e" containerName="registry-server" Dec 06 07:17:55 crc kubenswrapper[4895]: I1206 07:17:55.883187 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="541e80b1-619d-4cab-93a7-e46dfe21a70e" containerName="registry-server" Dec 06 07:17:55 crc kubenswrapper[4895]: I1206 07:17:55.883905 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="541e80b1-619d-4cab-93a7-e46dfe21a70e" containerName="registry-server" Dec 06 07:17:55 crc kubenswrapper[4895]: I1206 07:17:55.886583 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls"] Dec 06 07:17:55 crc kubenswrapper[4895]: I1206 07:17:55.886720 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls" Dec 06 07:17:55 crc kubenswrapper[4895]: I1206 07:17:55.890246 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-lcwf7" Dec 06 07:17:55 crc kubenswrapper[4895]: I1206 07:17:55.915674 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frc78\" (UniqueName: \"kubernetes.io/projected/8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13-kube-api-access-frc78\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls\" (UID: \"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls" Dec 06 07:17:55 crc kubenswrapper[4895]: I1206 07:17:55.915731 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13-util\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls\" (UID: \"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls" Dec 06 07:17:55 crc kubenswrapper[4895]: I1206 07:17:55.915871 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13-bundle\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls\" (UID: \"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls" Dec 06 07:17:56 crc kubenswrapper[4895]: I1206 07:17:56.017089 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frc78\" (UniqueName: \"kubernetes.io/projected/8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13-kube-api-access-frc78\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls\" (UID: \"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls" Dec 06 07:17:56 crc kubenswrapper[4895]: I1206 07:17:56.017175 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13-util\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls\" (UID: \"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls" Dec 06 07:17:56 crc kubenswrapper[4895]: I1206 07:17:56.017208 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13-bundle\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls\" (UID: \"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls" Dec 06 07:17:56 crc kubenswrapper[4895]: I1206 07:17:56.018014 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13-bundle\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls\" (UID: \"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13\") " 
pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls" Dec 06 07:17:56 crc kubenswrapper[4895]: I1206 07:17:56.018218 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13-util\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls\" (UID: \"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls" Dec 06 07:17:56 crc kubenswrapper[4895]: I1206 07:17:56.037842 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frc78\" (UniqueName: \"kubernetes.io/projected/8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13-kube-api-access-frc78\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls\" (UID: \"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls" Dec 06 07:17:56 crc kubenswrapper[4895]: I1206 07:17:56.209466 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls" Dec 06 07:17:56 crc kubenswrapper[4895]: I1206 07:17:56.680843 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls"] Dec 06 07:17:56 crc kubenswrapper[4895]: W1206 07:17:56.685670 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dbcdcb9_6ac1_4996_9a4c_d03743ee0c13.slice/crio-b62e7d07f1d07709ebe6ab46dd9545ba43894e0a5053d23dce7b66326b92c4b7 WatchSource:0}: Error finding container b62e7d07f1d07709ebe6ab46dd9545ba43894e0a5053d23dce7b66326b92c4b7: Status 404 returned error can't find the container with id b62e7d07f1d07709ebe6ab46dd9545ba43894e0a5053d23dce7b66326b92c4b7 Dec 06 07:17:57 crc kubenswrapper[4895]: I1206 07:17:57.560701 4895 generic.go:334] "Generic (PLEG): container finished" podID="8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13" containerID="4ba3772f4c5711af195742b6ebf28595bf698c30be8cf1892745029c4c078b81" exitCode=0 Dec 06 07:17:57 crc kubenswrapper[4895]: I1206 07:17:57.560754 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls" event={"ID":"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13","Type":"ContainerDied","Data":"4ba3772f4c5711af195742b6ebf28595bf698c30be8cf1892745029c4c078b81"} Dec 06 07:17:57 crc kubenswrapper[4895]: I1206 07:17:57.560784 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls" event={"ID":"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13","Type":"ContainerStarted","Data":"b62e7d07f1d07709ebe6ab46dd9545ba43894e0a5053d23dce7b66326b92c4b7"} Dec 06 07:17:58 crc kubenswrapper[4895]: I1206 07:17:58.568074 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls" event={"ID":"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13","Type":"ContainerStarted","Data":"692aff11060de4aefd6fbd034a9256798040237ba7a593279647d165165fc472"} Dec 06 07:17:59 crc kubenswrapper[4895]: I1206 07:17:59.575828 4895 generic.go:334] "Generic (PLEG): container finished" podID="8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13" 
containerID="692aff11060de4aefd6fbd034a9256798040237ba7a593279647d165165fc472" exitCode=0 Dec 06 07:17:59 crc kubenswrapper[4895]: I1206 07:17:59.575864 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls" event={"ID":"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13","Type":"ContainerDied","Data":"692aff11060de4aefd6fbd034a9256798040237ba7a593279647d165165fc472"} Dec 06 07:18:00 crc kubenswrapper[4895]: I1206 07:18:00.587749 4895 generic.go:334] "Generic (PLEG): container finished" podID="8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13" containerID="1c6d869f621acd8e62c7168682b25b2515d87a2b9653144c4ce5e08fc3854f06" exitCode=0 Dec 06 07:18:00 crc kubenswrapper[4895]: I1206 07:18:00.587817 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls" event={"ID":"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13","Type":"ContainerDied","Data":"1c6d869f621acd8e62c7168682b25b2515d87a2b9653144c4ce5e08fc3854f06"} Dec 06 07:18:02 crc kubenswrapper[4895]: I1206 07:18:02.169247 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls" Dec 06 07:18:02 crc kubenswrapper[4895]: I1206 07:18:02.277222 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frc78\" (UniqueName: \"kubernetes.io/projected/8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13-kube-api-access-frc78\") pod \"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13\" (UID: \"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13\") " Dec 06 07:18:02 crc kubenswrapper[4895]: I1206 07:18:02.277330 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13-bundle\") pod \"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13\" (UID: \"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13\") " Dec 06 07:18:02 crc kubenswrapper[4895]: I1206 07:18:02.277356 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13-util\") pod \"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13\" (UID: \"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13\") " Dec 06 07:18:02 crc kubenswrapper[4895]: I1206 07:18:02.278274 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13-bundle" (OuterVolumeSpecName: "bundle") pod "8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13" (UID: "8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:18:02 crc kubenswrapper[4895]: I1206 07:18:02.285745 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13-kube-api-access-frc78" (OuterVolumeSpecName: "kube-api-access-frc78") pod "8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13" (UID: "8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13"). InnerVolumeSpecName "kube-api-access-frc78". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:18:02 crc kubenswrapper[4895]: I1206 07:18:02.338777 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13-util" (OuterVolumeSpecName: "util") pod "8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13" (UID: "8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13"). 
InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:18:02 crc kubenswrapper[4895]: I1206 07:18:02.378598 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frc78\" (UniqueName: \"kubernetes.io/projected/8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13-kube-api-access-frc78\") on node \"crc\" DevicePath \"\"" Dec 06 07:18:02 crc kubenswrapper[4895]: I1206 07:18:02.378639 4895 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:18:02 crc kubenswrapper[4895]: I1206 07:18:02.378648 4895 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13-util\") on node \"crc\" DevicePath \"\"" Dec 06 07:18:02 crc kubenswrapper[4895]: I1206 07:18:02.607248 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls" event={"ID":"8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13","Type":"ContainerDied","Data":"b62e7d07f1d07709ebe6ab46dd9545ba43894e0a5053d23dce7b66326b92c4b7"} Dec 06 07:18:02 crc kubenswrapper[4895]: I1206 07:18:02.607296 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b62e7d07f1d07709ebe6ab46dd9545ba43894e0a5053d23dce7b66326b92c4b7" Dec 06 07:18:02 crc kubenswrapper[4895]: I1206 07:18:02.607317 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls" Dec 06 07:18:07 crc kubenswrapper[4895]: I1206 07:18:07.033332 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-55b6fb9447-lclnz"] Dec 06 07:18:07 crc kubenswrapper[4895]: E1206 07:18:07.033790 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13" containerName="util" Dec 06 07:18:07 crc kubenswrapper[4895]: I1206 07:18:07.033811 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13" containerName="util" Dec 06 07:18:07 crc kubenswrapper[4895]: E1206 07:18:07.033825 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13" containerName="pull" Dec 06 07:18:07 crc kubenswrapper[4895]: I1206 07:18:07.033833 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13" containerName="pull" Dec 06 07:18:07 crc kubenswrapper[4895]: E1206 07:18:07.033868 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13" containerName="extract" Dec 06 07:18:07 crc kubenswrapper[4895]: I1206 07:18:07.033877 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13" containerName="extract" Dec 06 07:18:07 crc kubenswrapper[4895]: I1206 07:18:07.034005 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13" containerName="extract" Dec 06 07:18:07 crc kubenswrapper[4895]: I1206 07:18:07.034708 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-lclnz" Dec 06 07:18:07 crc kubenswrapper[4895]: I1206 07:18:07.036773 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-545z6" Dec 06 07:18:07 crc kubenswrapper[4895]: I1206 07:18:07.062066 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-55b6fb9447-lclnz"] Dec 06 07:18:07 crc kubenswrapper[4895]: I1206 07:18:07.141963 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw65f\" (UniqueName: \"kubernetes.io/projected/e996e45e-f1d8-41c8-9133-b0189b0025fc-kube-api-access-rw65f\") pod \"openstack-operator-controller-operator-55b6fb9447-lclnz\" (UID: \"e996e45e-f1d8-41c8-9133-b0189b0025fc\") " pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-lclnz" Dec 06 07:18:07 crc kubenswrapper[4895]: I1206 07:18:07.244220 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw65f\" (UniqueName: \"kubernetes.io/projected/e996e45e-f1d8-41c8-9133-b0189b0025fc-kube-api-access-rw65f\") pod \"openstack-operator-controller-operator-55b6fb9447-lclnz\" (UID: \"e996e45e-f1d8-41c8-9133-b0189b0025fc\") " pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-lclnz" Dec 06 07:18:07 crc kubenswrapper[4895]: I1206 07:18:07.261897 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw65f\" (UniqueName: \"kubernetes.io/projected/e996e45e-f1d8-41c8-9133-b0189b0025fc-kube-api-access-rw65f\") pod \"openstack-operator-controller-operator-55b6fb9447-lclnz\" (UID: \"e996e45e-f1d8-41c8-9133-b0189b0025fc\") " pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-lclnz" Dec 06 07:18:07 crc kubenswrapper[4895]: I1206 07:18:07.360362 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-lclnz" Dec 06 07:18:08 crc kubenswrapper[4895]: I1206 07:18:08.122300 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-55b6fb9447-lclnz"] Dec 06 07:18:08 crc kubenswrapper[4895]: I1206 07:18:08.871600 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-lclnz" event={"ID":"e996e45e-f1d8-41c8-9133-b0189b0025fc","Type":"ContainerStarted","Data":"4a315198ba7287d0a4ee7c97d69a1e7502410a9a54ad89625ed2f30a532d6069"} Dec 06 07:18:16 crc kubenswrapper[4895]: I1206 07:18:16.923383 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-lclnz" event={"ID":"e996e45e-f1d8-41c8-9133-b0189b0025fc","Type":"ContainerStarted","Data":"54424d1c94b2a29d4f7722048e8dbf027e569f1be57595d244003b563f130761"} Dec 06 07:18:16 crc kubenswrapper[4895]: I1206 07:18:16.924034 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-lclnz" Dec 06 07:18:16 crc kubenswrapper[4895]: I1206 07:18:16.953763 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-lclnz" podStartSLOduration=3.544729751 podStartE2EDuration="10.953743891s" podCreationTimestamp="2025-12-06 07:18:06 +0000 UTC" firstStartedPulling="2025-12-06 07:18:08.417077981 +0000 UTC m=+1250.818466851" lastFinishedPulling="2025-12-06 07:18:15.826092121 +0000 UTC m=+1258.227480991" observedRunningTime="2025-12-06 07:18:16.949051903 +0000 UTC m=+1259.350440783" watchObservedRunningTime="2025-12-06 07:18:16.953743891 +0000 UTC m=+1259.355132761" Dec 06 07:18:27 crc kubenswrapper[4895]: I1206 07:18:27.363059 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-lclnz" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.534747 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-bc6fp"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.536666 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bc6fp" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.538659 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-9t5zx" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.542630 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-qk4xg"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.543863 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qk4xg" Dec 06 07:18:45 crc kubenswrapper[4895]: W1206 07:18:45.546203 4895 reflector.go:561] object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-wbjv2": failed to list *v1.Secret: secrets "cinder-operator-controller-manager-dockercfg-wbjv2" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack-operators": no relationship found between node 'crc' and this object Dec 06 07:18:45 crc kubenswrapper[4895]: E1206 07:18:45.546249 4895 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"cinder-operator-controller-manager-dockercfg-wbjv2\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cinder-operator-controller-manager-dockercfg-wbjv2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.566386 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-5d9nk"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.567841 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-5d9nk" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.569974 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-bc6fp"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.578562 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-jqqpc" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.588624 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-qk4xg"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.597411 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-5d9nk"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.619863 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-4p4x6"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.621197 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4p4x6" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.623544 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-cjj8z" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.632918 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-4p4x6"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.649374 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8m9qs"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.651050 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8m9qs" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.653875 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-9k8kn" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.661254 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmzmt\" (UniqueName: \"kubernetes.io/projected/e8a69b24-8304-4447-b76f-e98e93cb7715-kube-api-access-xmzmt\") pod \"glance-operator-controller-manager-77987cd8cd-4p4x6\" (UID: \"e8a69b24-8304-4447-b76f-e98e93cb7715\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4p4x6" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.661314 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f4qm\" (UniqueName: \"kubernetes.io/projected/5417e33f-dead-459e-933b-58ad3ae2da48-kube-api-access-9f4qm\") pod \"cinder-operator-controller-manager-859b6ccc6-qk4xg\" (UID: \"5417e33f-dead-459e-933b-58ad3ae2da48\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qk4xg" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.661396 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njbgr\" (UniqueName: \"kubernetes.io/projected/f509e9a0-673f-45ba-a4f5-f3f5834ac86a-kube-api-access-njbgr\") pod \"designate-operator-controller-manager-78b4bc895b-5d9nk\" (UID: \"f509e9a0-673f-45ba-a4f5-f3f5834ac86a\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-5d9nk" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.661453 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhw28\" (UniqueName: \"kubernetes.io/projected/109a952b-18eb-4217-884d-f40b3be18878-kube-api-access-xhw28\") pod \"barbican-operator-controller-manager-7d9dfd778-bc6fp\" (UID: \"109a952b-18eb-4217-884d-f40b3be18878\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bc6fp" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.661632 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rt4b2"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.668675 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8m9qs"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.668841 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rt4b2" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.672309 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-rb6t6" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.714549 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rt4b2"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.746259 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-f6j2r"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.751052 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f6j2r" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.755359 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-rgbzn" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.758545 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-6cv59"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.761201 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-6cv59" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.762987 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njbgr\" (UniqueName: \"kubernetes.io/projected/f509e9a0-673f-45ba-a4f5-f3f5834ac86a-kube-api-access-njbgr\") pod \"designate-operator-controller-manager-78b4bc895b-5d9nk\" (UID: \"f509e9a0-673f-45ba-a4f5-f3f5834ac86a\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-5d9nk" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.763212 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhw28\" (UniqueName: \"kubernetes.io/projected/109a952b-18eb-4217-884d-f40b3be18878-kube-api-access-xhw28\") pod \"barbican-operator-controller-manager-7d9dfd778-bc6fp\" (UID: \"109a952b-18eb-4217-884d-f40b3be18878\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bc6fp" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.763416 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2wqx\" (UniqueName: \"kubernetes.io/projected/e46a2036-66cd-420c-9920-a3e8ef0e17df-kube-api-access-h2wqx\") pod \"horizon-operator-controller-manager-68c6d99b8f-rt4b2\" (UID: \"e46a2036-66cd-420c-9920-a3e8ef0e17df\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rt4b2" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.763527 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmzmt\" (UniqueName: \"kubernetes.io/projected/e8a69b24-8304-4447-b76f-e98e93cb7715-kube-api-access-xmzmt\") pod \"glance-operator-controller-manager-77987cd8cd-4p4x6\" (UID: \"e8a69b24-8304-4447-b76f-e98e93cb7715\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4p4x6" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.763628 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f4qm\" (UniqueName: \"kubernetes.io/projected/5417e33f-dead-459e-933b-58ad3ae2da48-kube-api-access-9f4qm\") pod \"cinder-operator-controller-manager-859b6ccc6-qk4xg\" (UID: \"5417e33f-dead-459e-933b-58ad3ae2da48\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qk4xg" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.763722 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks5pf\" (UniqueName: \"kubernetes.io/projected/abcee2d9-1cac-4e62-88a6-79b249832e9b-kube-api-access-ks5pf\") pod \"heat-operator-controller-manager-5f64f6f8bb-8m9qs\" (UID: \"abcee2d9-1cac-4e62-88a6-79b249832e9b\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8m9qs" Dec 06 07:18:45 crc kubenswrapper[4895]: 
I1206 07:18:45.773148 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-c5bkz" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.773364 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.776800 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-d65l6"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.781234 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d65l6" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.783859 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-qnxd4" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.803043 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-f6j2r"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.804581 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmzmt\" (UniqueName: \"kubernetes.io/projected/e8a69b24-8304-4447-b76f-e98e93cb7715-kube-api-access-xmzmt\") pod \"glance-operator-controller-manager-77987cd8cd-4p4x6\" (UID: \"e8a69b24-8304-4447-b76f-e98e93cb7715\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4p4x6" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.806085 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhw28\" (UniqueName: \"kubernetes.io/projected/109a952b-18eb-4217-884d-f40b3be18878-kube-api-access-xhw28\") pod \"barbican-operator-controller-manager-7d9dfd778-bc6fp\" (UID: \"109a952b-18eb-4217-884d-f40b3be18878\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bc6fp" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.811102 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-6cv59"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.820741 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njbgr\" (UniqueName: \"kubernetes.io/projected/f509e9a0-673f-45ba-a4f5-f3f5834ac86a-kube-api-access-njbgr\") pod \"designate-operator-controller-manager-78b4bc895b-5d9nk\" (UID: \"f509e9a0-673f-45ba-a4f5-f3f5834ac86a\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-5d9nk" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.821178 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f4qm\" (UniqueName: \"kubernetes.io/projected/5417e33f-dead-459e-933b-58ad3ae2da48-kube-api-access-9f4qm\") pod \"cinder-operator-controller-manager-859b6ccc6-qk4xg\" (UID: \"5417e33f-dead-459e-933b-58ad3ae2da48\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qk4xg" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.839570 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-d65l6"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.855550 4895 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-cgv2z"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.856965 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgv2z" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.859335 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-8j8fs" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.865057 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bc6fp" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.865979 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr9bb\" (UniqueName: \"kubernetes.io/projected/68495243-fc02-458e-af78-61702a2dda83-kube-api-access-gr9bb\") pod \"ironic-operator-controller-manager-6c548fd776-f6j2r\" (UID: \"68495243-fc02-458e-af78-61702a2dda83\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f6j2r" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.866020 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert\") pod \"infra-operator-controller-manager-57548d458d-6cv59\" (UID: \"512638f7-8e17-493b-a34b-3da3c65f445a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6cv59" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.866045 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5qrc\" (UniqueName: \"kubernetes.io/projected/8e9001cb-7a62-4617-8143-f4a51ad1c13a-kube-api-access-l5qrc\") pod \"keystone-operator-controller-manager-7765d96ddf-d65l6\" (UID: \"8e9001cb-7a62-4617-8143-f4a51ad1c13a\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d65l6" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.866111 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2wqx\" (UniqueName: \"kubernetes.io/projected/e46a2036-66cd-420c-9920-a3e8ef0e17df-kube-api-access-h2wqx\") pod \"horizon-operator-controller-manager-68c6d99b8f-rt4b2\" (UID: \"e46a2036-66cd-420c-9920-a3e8ef0e17df\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rt4b2" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.866149 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks5pf\" (UniqueName: \"kubernetes.io/projected/abcee2d9-1cac-4e62-88a6-79b249832e9b-kube-api-access-ks5pf\") pod \"heat-operator-controller-manager-5f64f6f8bb-8m9qs\" (UID: \"abcee2d9-1cac-4e62-88a6-79b249832e9b\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8m9qs" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.866170 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8klrm\" (UniqueName: \"kubernetes.io/projected/512638f7-8e17-493b-a34b-3da3c65f445a-kube-api-access-8klrm\") pod \"infra-operator-controller-manager-57548d458d-6cv59\" (UID: \"512638f7-8e17-493b-a34b-3da3c65f445a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6cv59" Dec 06 07:18:45 crc kubenswrapper[4895]: 
I1206 07:18:45.883498 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-cgv2z"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.902673 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-5d9nk" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.915572 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2lbh2"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.920693 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks5pf\" (UniqueName: \"kubernetes.io/projected/abcee2d9-1cac-4e62-88a6-79b249832e9b-kube-api-access-ks5pf\") pod \"heat-operator-controller-manager-5f64f6f8bb-8m9qs\" (UID: \"abcee2d9-1cac-4e62-88a6-79b249832e9b\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8m9qs" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.926995 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2lbh2"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.927232 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2lbh2" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.934042 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2wqx\" (UniqueName: \"kubernetes.io/projected/e46a2036-66cd-420c-9920-a3e8ef0e17df-kube-api-access-h2wqx\") pod \"horizon-operator-controller-manager-68c6d99b8f-rt4b2\" (UID: \"e46a2036-66cd-420c-9920-a3e8ef0e17df\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rt4b2" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.940915 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-pwjph" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.941143 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9r6m4"] Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.943299 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9r6m4" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.945799 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4p4x6" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.948787 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-t49k5" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.967235 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qsb4\" (UniqueName: \"kubernetes.io/projected/d61c0a11-5736-4747-889a-6dd520cbe269-kube-api-access-8qsb4\") pod \"mariadb-operator-controller-manager-56bbcc9d85-2lbh2\" (UID: \"d61c0a11-5736-4747-889a-6dd520cbe269\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2lbh2" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.967569 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8klrm\" (UniqueName: \"kubernetes.io/projected/512638f7-8e17-493b-a34b-3da3c65f445a-kube-api-access-8klrm\") pod \"infra-operator-controller-manager-57548d458d-6cv59\" (UID: \"512638f7-8e17-493b-a34b-3da3c65f445a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6cv59" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.967672 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z98fd\" (UniqueName: \"kubernetes.io/projected/0d925507-837e-438e-8f19-34c15b8b208e-kube-api-access-z98fd\") pod \"manila-operator-controller-manager-7c79b5df47-cgv2z\" (UID: \"0d925507-837e-438e-8f19-34c15b8b208e\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgv2z" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.967776 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pstk\" (UniqueName: \"kubernetes.io/projected/d11ece89-3325-4b95-aac8-776e2eaffecb-kube-api-access-5pstk\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-9r6m4\" (UID: \"d11ece89-3325-4b95-aac8-776e2eaffecb\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9r6m4" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.967859 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr9bb\" (UniqueName: \"kubernetes.io/projected/68495243-fc02-458e-af78-61702a2dda83-kube-api-access-gr9bb\") pod \"ironic-operator-controller-manager-6c548fd776-f6j2r\" (UID: \"68495243-fc02-458e-af78-61702a2dda83\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f6j2r" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.968340 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert\") pod \"infra-operator-controller-manager-57548d458d-6cv59\" (UID: \"512638f7-8e17-493b-a34b-3da3c65f445a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6cv59" Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.968417 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5qrc\" (UniqueName: \"kubernetes.io/projected/8e9001cb-7a62-4617-8143-f4a51ad1c13a-kube-api-access-l5qrc\") pod \"keystone-operator-controller-manager-7765d96ddf-d65l6\" (UID: \"8e9001cb-7a62-4617-8143-f4a51ad1c13a\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d65l6" Dec 06 
Dec 06 07:18:45 crc kubenswrapper[4895]: E1206 07:18:45.969430 4895 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.969549 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9r6m4"]
Dec 06 07:18:45 crc kubenswrapper[4895]: E1206 07:18:45.969574 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert podName:512638f7-8e17-493b-a34b-3da3c65f445a nodeName:}" failed. No retries permitted until 2025-12-06 07:18:46.469555738 +0000 UTC m=+1288.870944608 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert") pod "infra-operator-controller-manager-57548d458d-6cv59" (UID: "512638f7-8e17-493b-a34b-3da3c65f445a") : secret "infra-operator-webhook-server-cert" not found
Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.976494 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8m9qs"
Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.992836 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-tbd9g"]
Dec 06 07:18:45 crc kubenswrapper[4895]: I1206 07:18:45.994115 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbd9g"
Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.000565 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-dmspk"]
Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.001814 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-dmspk"
Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.009527 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-q8jcf"
Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.009781 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-g9lh9"
Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.011112 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5qrc\" (UniqueName: \"kubernetes.io/projected/8e9001cb-7a62-4617-8143-f4a51ad1c13a-kube-api-access-l5qrc\") pod \"keystone-operator-controller-manager-7765d96ddf-d65l6\" (UID: \"8e9001cb-7a62-4617-8143-f4a51ad1c13a\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d65l6"
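
The secret.go/nestedpendingoperations pair above explains why infra-operator-controller-manager-57548d458d-6cv59 cannot start yet: its cert volume references infra-operator-webhook-server-cert, which does not exist at this point, so MountVolume.SetUp fails and the operation is re-queued 500ms out. The next failure doubles the wait to 1s (visible further down), and the pod sits in ContainerCreating until the secret appears. A sketch of that doubling; the 500ms start matches the log, while the cap is an assumed illustration value:

    package main

    import (
        "fmt"
        "time"
    )

    // nextRetry doubles the wait after each failed volume operation,
    // starting at 500ms, up to an assumed ceiling.
    func nextRetry(last time.Duration) time.Duration {
        const initial = 500 * time.Millisecond
        const maxDelay = 2*time.Minute + 2*time.Second // assumption for illustration
        switch {
        case last < initial:
            return initial
        case 2*last > maxDelay:
            return maxDelay
        default:
            return 2 * last
        }
    }

    func main() {
        var d time.Duration
        for i := 0; i < 4; i++ {
            d = nextRetry(d)
            fmt.Println(d) // 500ms, 1s, 2s, 4s
        }
    }
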
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rt4b2" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.021196 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-tbd9g"] Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.025314 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8klrm\" (UniqueName: \"kubernetes.io/projected/512638f7-8e17-493b-a34b-3da3c65f445a-kube-api-access-8klrm\") pod \"infra-operator-controller-manager-57548d458d-6cv59\" (UID: \"512638f7-8e17-493b-a34b-3da3c65f445a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6cv59" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.037647 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-dmspk"] Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.044966 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr9bb\" (UniqueName: \"kubernetes.io/projected/68495243-fc02-458e-af78-61702a2dda83-kube-api-access-gr9bb\") pod \"ironic-operator-controller-manager-6c548fd776-f6j2r\" (UID: \"68495243-fc02-458e-af78-61702a2dda83\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f6j2r" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.070636 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-dk54h"] Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.071784 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pstk\" (UniqueName: \"kubernetes.io/projected/d11ece89-3325-4b95-aac8-776e2eaffecb-kube-api-access-5pstk\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-9r6m4\" (UID: \"d11ece89-3325-4b95-aac8-776e2eaffecb\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9r6m4" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.071851 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58s44\" (UniqueName: \"kubernetes.io/projected/05bfd83c-3643-4dc8-bd25-2204bc8bc8f6-kube-api-access-58s44\") pod \"nova-operator-controller-manager-697bc559fc-dmspk\" (UID: \"05bfd83c-3643-4dc8-bd25-2204bc8bc8f6\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-dmspk" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.071890 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-gv8vt"] Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.071944 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qsb4\" (UniqueName: \"kubernetes.io/projected/d61c0a11-5736-4747-889a-6dd520cbe269-kube-api-access-8qsb4\") pod \"mariadb-operator-controller-manager-56bbcc9d85-2lbh2\" (UID: \"d61c0a11-5736-4747-889a-6dd520cbe269\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2lbh2" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.072014 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tjnq\" (UniqueName: \"kubernetes.io/projected/88ab7b06-3be3-44a1-acbf-8ba5ced20251-kube-api-access-9tjnq\") pod \"octavia-operator-controller-manager-998648c74-tbd9g\" (UID: 
\"88ab7b06-3be3-44a1-acbf-8ba5ced20251\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbd9g" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.072038 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z98fd\" (UniqueName: \"kubernetes.io/projected/0d925507-837e-438e-8f19-34c15b8b208e-kube-api-access-z98fd\") pod \"manila-operator-controller-manager-7c79b5df47-cgv2z\" (UID: \"0d925507-837e-438e-8f19-34c15b8b208e\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgv2z" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.072770 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gv8vt" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.072809 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dk54h" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.102702 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-xdgrk" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.102872 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-zsgw5" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.116166 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qsb4\" (UniqueName: \"kubernetes.io/projected/d61c0a11-5736-4747-889a-6dd520cbe269-kube-api-access-8qsb4\") pod \"mariadb-operator-controller-manager-56bbcc9d85-2lbh2\" (UID: \"d61c0a11-5736-4747-889a-6dd520cbe269\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2lbh2" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.118845 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f6j2r" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.139824 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pstk\" (UniqueName: \"kubernetes.io/projected/d11ece89-3325-4b95-aac8-776e2eaffecb-kube-api-access-5pstk\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-9r6m4\" (UID: \"d11ece89-3325-4b95-aac8-776e2eaffecb\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9r6m4" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.144920 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z98fd\" (UniqueName: \"kubernetes.io/projected/0d925507-837e-438e-8f19-34c15b8b208e-kube-api-access-z98fd\") pod \"manila-operator-controller-manager-7c79b5df47-cgv2z\" (UID: \"0d925507-837e-438e-8f19-34c15b8b208e\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgv2z" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.156582 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf"] Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.158366 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.165624 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.165623 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-djdm7" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.174013 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tjnq\" (UniqueName: \"kubernetes.io/projected/88ab7b06-3be3-44a1-acbf-8ba5ced20251-kube-api-access-9tjnq\") pod \"octavia-operator-controller-manager-998648c74-tbd9g\" (UID: \"88ab7b06-3be3-44a1-acbf-8ba5ced20251\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbd9g" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.174094 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhrgr\" (UniqueName: \"kubernetes.io/projected/98110cff-712b-414c-9965-14d895f4b99f-kube-api-access-zhrgr\") pod \"placement-operator-controller-manager-78f8948974-gv8vt\" (UID: \"98110cff-712b-414c-9965-14d895f4b99f\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-gv8vt" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.174148 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58s44\" (UniqueName: \"kubernetes.io/projected/05bfd83c-3643-4dc8-bd25-2204bc8bc8f6-kube-api-access-58s44\") pod \"nova-operator-controller-manager-697bc559fc-dmspk\" (UID: \"05bfd83c-3643-4dc8-bd25-2204bc8bc8f6\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-dmspk" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.174193 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdjpd\" (UniqueName: \"kubernetes.io/projected/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-kube-api-access-pdjpd\") pod \"openstack-baremetal-operator-controller-manager-55c85496f522sgf\" (UID: \"a8ffcb7e-0e4b-42c3-b778-4706cbd59792\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.174273 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wfvn\" (UniqueName: \"kubernetes.io/projected/c08a88ee-75c4-450b-8cc0-6159127f6a8c-kube-api-access-6wfvn\") pod \"ovn-operator-controller-manager-b6456fdb6-dk54h\" (UID: \"c08a88ee-75c4-450b-8cc0-6159127f6a8c\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dk54h" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.174304 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f522sgf\" (UID: \"a8ffcb7e-0e4b-42c3-b778-4706cbd59792\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.176367 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-dk54h"] Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.181914 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d65l6" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.207945 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-gv8vt"] Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.208220 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgv2z" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.261821 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58s44\" (UniqueName: \"kubernetes.io/projected/05bfd83c-3643-4dc8-bd25-2204bc8bc8f6-kube-api-access-58s44\") pod \"nova-operator-controller-manager-697bc559fc-dmspk\" (UID: \"05bfd83c-3643-4dc8-bd25-2204bc8bc8f6\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-dmspk" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.263044 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf"] Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.273878 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tjnq\" (UniqueName: \"kubernetes.io/projected/88ab7b06-3be3-44a1-acbf-8ba5ced20251-kube-api-access-9tjnq\") pod \"octavia-operator-controller-manager-998648c74-tbd9g\" (UID: \"88ab7b06-3be3-44a1-acbf-8ba5ced20251\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbd9g" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.275371 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f522sgf\" (UID: \"a8ffcb7e-0e4b-42c3-b778-4706cbd59792\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.275489 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhrgr\" (UniqueName: \"kubernetes.io/projected/98110cff-712b-414c-9965-14d895f4b99f-kube-api-access-zhrgr\") pod \"placement-operator-controller-manager-78f8948974-gv8vt\" (UID: \"98110cff-712b-414c-9965-14d895f4b99f\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-gv8vt" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.275606 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdjpd\" (UniqueName: \"kubernetes.io/projected/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-kube-api-access-pdjpd\") pod \"openstack-baremetal-operator-controller-manager-55c85496f522sgf\" (UID: \"a8ffcb7e-0e4b-42c3-b778-4706cbd59792\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.275697 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wfvn\" (UniqueName: \"kubernetes.io/projected/c08a88ee-75c4-450b-8cc0-6159127f6a8c-kube-api-access-6wfvn\") pod \"ovn-operator-controller-manager-b6456fdb6-dk54h\" (UID: 
\"c08a88ee-75c4-450b-8cc0-6159127f6a8c\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dk54h" Dec 06 07:18:46 crc kubenswrapper[4895]: E1206 07:18:46.276412 4895 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 07:18:46 crc kubenswrapper[4895]: E1206 07:18:46.285548 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert podName:a8ffcb7e-0e4b-42c3-b778-4706cbd59792 nodeName:}" failed. No retries permitted until 2025-12-06 07:18:46.776459907 +0000 UTC m=+1289.177848777 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert") pod "openstack-baremetal-operator-controller-manager-55c85496f522sgf" (UID: "a8ffcb7e-0e4b-42c3-b778-4706cbd59792") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.367108 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2lbh2" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.385696 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhrgr\" (UniqueName: \"kubernetes.io/projected/98110cff-712b-414c-9965-14d895f4b99f-kube-api-access-zhrgr\") pod \"placement-operator-controller-manager-78f8948974-gv8vt\" (UID: \"98110cff-712b-414c-9965-14d895f4b99f\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-gv8vt" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.399061 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdjpd\" (UniqueName: \"kubernetes.io/projected/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-kube-api-access-pdjpd\") pod \"openstack-baremetal-operator-controller-manager-55c85496f522sgf\" (UID: \"a8ffcb7e-0e4b-42c3-b778-4706cbd59792\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.413507 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9r6m4" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.444150 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wfvn\" (UniqueName: \"kubernetes.io/projected/c08a88ee-75c4-450b-8cc0-6159127f6a8c-kube-api-access-6wfvn\") pod \"ovn-operator-controller-manager-b6456fdb6-dk54h\" (UID: \"c08a88ee-75c4-450b-8cc0-6159127f6a8c\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dk54h" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.458581 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tr76"] Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.507368 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbd9g" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.509830 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert\") pod \"infra-operator-controller-manager-57548d458d-6cv59\" (UID: \"512638f7-8e17-493b-a34b-3da3c65f445a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6cv59" Dec 06 07:18:46 crc kubenswrapper[4895]: E1206 07:18:46.510065 4895 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 06 07:18:46 crc kubenswrapper[4895]: E1206 07:18:46.519030 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert podName:512638f7-8e17-493b-a34b-3da3c65f445a nodeName:}" failed. No retries permitted until 2025-12-06 07:18:47.51891309 +0000 UTC m=+1289.920301960 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert") pod "infra-operator-controller-manager-57548d458d-6cv59" (UID: "512638f7-8e17-493b-a34b-3da3c65f445a") : secret "infra-operator-webhook-server-cert" not found Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.536055 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-dmspk" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.548766 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-dq42h"] Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.551677 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tr76" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.575835 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gv8vt" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.584961 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-qzpq7" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.594142 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-dq42h" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.599218 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-n98g7" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.611497 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dk54h" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.694978 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tr76"] Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.700130 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-dq42h"] Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.721640 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-rph9c"] Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.723425 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rph9c" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.726720 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v2p6\" (UniqueName: \"kubernetes.io/projected/fa381d85-af76-4af0-a49a-1722c746f7c2-kube-api-access-2v2p6\") pod \"swift-operator-controller-manager-5f8c65bbfc-8tr76\" (UID: \"fa381d85-af76-4af0-a49a-1722c746f7c2\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tr76" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.726817 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrbp8\" (UniqueName: \"kubernetes.io/projected/3e2fc835-9cf2-4e21-a1fc-d76cfafba632-kube-api-access-wrbp8\") pod \"telemetry-operator-controller-manager-76cc84c6bb-dq42h\" (UID: \"3e2fc835-9cf2-4e21-a1fc-d76cfafba632\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-dq42h" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.728154 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-rph9c"] Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.729894 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-599kd" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.763646 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-gzb86"] Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.764936 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-gzb86" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.769807 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-c57tg" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.781091 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-gzb86"] Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.827639 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x58wb\" (UniqueName: \"kubernetes.io/projected/49bde36b-bda3-4622-87b2-6df2a2bee7f7-kube-api-access-x58wb\") pod \"watcher-operator-controller-manager-769dc69bc-gzb86\" (UID: \"49bde36b-bda3-4622-87b2-6df2a2bee7f7\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-gzb86" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.828082 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v2p6\" (UniqueName: \"kubernetes.io/projected/fa381d85-af76-4af0-a49a-1722c746f7c2-kube-api-access-2v2p6\") pod \"swift-operator-controller-manager-5f8c65bbfc-8tr76\" (UID: \"fa381d85-af76-4af0-a49a-1722c746f7c2\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tr76" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.829890 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr4nz\" (UniqueName: \"kubernetes.io/projected/5fb63748-1c10-4a17-9dcc-862fc1b29b46-kube-api-access-tr4nz\") pod \"test-operator-controller-manager-5854674fcc-rph9c\" (UID: \"5fb63748-1c10-4a17-9dcc-862fc1b29b46\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-rph9c" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.830226 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrbp8\" (UniqueName: \"kubernetes.io/projected/3e2fc835-9cf2-4e21-a1fc-d76cfafba632-kube-api-access-wrbp8\") pod \"telemetry-operator-controller-manager-76cc84c6bb-dq42h\" (UID: \"3e2fc835-9cf2-4e21-a1fc-d76cfafba632\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-dq42h" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.830408 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f522sgf\" (UID: \"a8ffcb7e-0e4b-42c3-b778-4706cbd59792\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf" Dec 06 07:18:46 crc kubenswrapper[4895]: E1206 07:18:46.830695 4895 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 07:18:46 crc kubenswrapper[4895]: E1206 07:18:46.830758 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert podName:a8ffcb7e-0e4b-42c3-b778-4706cbd59792 nodeName:}" failed. No retries permitted until 2025-12-06 07:18:47.830737903 +0000 UTC m=+1290.232126783 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert") pod "openstack-baremetal-operator-controller-manager-55c85496f522sgf" (UID: "a8ffcb7e-0e4b-42c3-b778-4706cbd59792") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.842313 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6"] Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.843409 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.846689 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.846747 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-68mj2" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.846913 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.854937 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrbp8\" (UniqueName: \"kubernetes.io/projected/3e2fc835-9cf2-4e21-a1fc-d76cfafba632-kube-api-access-wrbp8\") pod \"telemetry-operator-controller-manager-76cc84c6bb-dq42h\" (UID: \"3e2fc835-9cf2-4e21-a1fc-d76cfafba632\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-dq42h" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.855159 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v2p6\" (UniqueName: \"kubernetes.io/projected/fa381d85-af76-4af0-a49a-1722c746f7c2-kube-api-access-2v2p6\") pod \"swift-operator-controller-manager-5f8c65bbfc-8tr76\" (UID: \"fa381d85-af76-4af0-a49a-1722c746f7c2\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tr76" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.858972 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6"] Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.876879 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czmdp"] Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.878648 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czmdp" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.881465 4895 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qk4xg" secret="" err="failed to sync secret cache: timed out waiting for the condition" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.881577 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qk4xg" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.881943 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czmdp"] Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.881961 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-twc5c" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.937085 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr4nz\" (UniqueName: \"kubernetes.io/projected/5fb63748-1c10-4a17-9dcc-862fc1b29b46-kube-api-access-tr4nz\") pod \"test-operator-controller-manager-5854674fcc-rph9c\" (UID: \"5fb63748-1c10-4a17-9dcc-862fc1b29b46\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-rph9c" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.937165 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj6wr\" (UniqueName: \"kubernetes.io/projected/d5cafedb-1052-4cd3-9212-3f642e07c18d-kube-api-access-qj6wr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-czmdp\" (UID: \"d5cafedb-1052-4cd3-9212-3f642e07c18d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czmdp" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.937345 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: \"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.937462 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x58wb\" (UniqueName: \"kubernetes.io/projected/49bde36b-bda3-4622-87b2-6df2a2bee7f7-kube-api-access-x58wb\") pod \"watcher-operator-controller-manager-769dc69bc-gzb86\" (UID: \"49bde36b-bda3-4622-87b2-6df2a2bee7f7\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-gzb86" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.937563 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: \"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.937592 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2zdf\" (UniqueName: \"kubernetes.io/projected/14ae4729-3f50-4990-9b10-8a06e7e78060-kube-api-access-l2zdf\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: \"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.952715 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-wbjv2" Dec 06 07:18:46 crc 
kubenswrapper[4895]: I1206 07:18:46.964314 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr4nz\" (UniqueName: \"kubernetes.io/projected/5fb63748-1c10-4a17-9dcc-862fc1b29b46-kube-api-access-tr4nz\") pod \"test-operator-controller-manager-5854674fcc-rph9c\" (UID: \"5fb63748-1c10-4a17-9dcc-862fc1b29b46\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-rph9c" Dec 06 07:18:46 crc kubenswrapper[4895]: I1206 07:18:46.971864 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x58wb\" (UniqueName: \"kubernetes.io/projected/49bde36b-bda3-4622-87b2-6df2a2bee7f7-kube-api-access-x58wb\") pod \"watcher-operator-controller-manager-769dc69bc-gzb86\" (UID: \"49bde36b-bda3-4622-87b2-6df2a2bee7f7\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-gzb86" Dec 06 07:18:47 crc kubenswrapper[4895]: I1206 07:18:47.038850 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: \"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:18:47 crc kubenswrapper[4895]: I1206 07:18:47.038988 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: \"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:18:47 crc kubenswrapper[4895]: I1206 07:18:47.039025 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2zdf\" (UniqueName: \"kubernetes.io/projected/14ae4729-3f50-4990-9b10-8a06e7e78060-kube-api-access-l2zdf\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: \"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:18:47 crc kubenswrapper[4895]: I1206 07:18:47.039108 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj6wr\" (UniqueName: \"kubernetes.io/projected/d5cafedb-1052-4cd3-9212-3f642e07c18d-kube-api-access-qj6wr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-czmdp\" (UID: \"d5cafedb-1052-4cd3-9212-3f642e07c18d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czmdp" Dec 06 07:18:47 crc kubenswrapper[4895]: E1206 07:18:47.039433 4895 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 07:18:47 crc kubenswrapper[4895]: E1206 07:18:47.039539 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs podName:14ae4729-3f50-4990-9b10-8a06e7e78060 nodeName:}" failed. No retries permitted until 2025-12-06 07:18:47.539515953 +0000 UTC m=+1289.940904893 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs") pod "openstack-operator-controller-manager-54bdf956c4-6h5v6" (UID: "14ae4729-3f50-4990-9b10-8a06e7e78060") : secret "metrics-server-cert" not found Dec 06 07:18:47 crc kubenswrapper[4895]: E1206 07:18:47.039704 4895 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 07:18:47 crc kubenswrapper[4895]: E1206 07:18:47.039733 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs podName:14ae4729-3f50-4990-9b10-8a06e7e78060 nodeName:}" failed. No retries permitted until 2025-12-06 07:18:47.539724658 +0000 UTC m=+1289.941113628 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs") pod "openstack-operator-controller-manager-54bdf956c4-6h5v6" (UID: "14ae4729-3f50-4990-9b10-8a06e7e78060") : secret "webhook-server-cert" not found Dec 06 07:18:47 crc kubenswrapper[4895]: I1206 07:18:47.067815 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj6wr\" (UniqueName: \"kubernetes.io/projected/d5cafedb-1052-4cd3-9212-3f642e07c18d-kube-api-access-qj6wr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-czmdp\" (UID: \"d5cafedb-1052-4cd3-9212-3f642e07c18d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czmdp" Dec 06 07:18:47 crc kubenswrapper[4895]: I1206 07:18:47.075628 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2zdf\" (UniqueName: \"kubernetes.io/projected/14ae4729-3f50-4990-9b10-8a06e7e78060-kube-api-access-l2zdf\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: \"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:18:47 crc kubenswrapper[4895]: I1206 07:18:47.076314 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tr76" Dec 06 07:18:47 crc kubenswrapper[4895]: I1206 07:18:47.097008 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-dq42h" Dec 06 07:18:47 crc kubenswrapper[4895]: I1206 07:18:47.121224 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rph9c" Dec 06 07:18:47 crc kubenswrapper[4895]: I1206 07:18:47.149444 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-gzb86" Dec 06 07:18:47 crc kubenswrapper[4895]: I1206 07:18:47.229368 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czmdp" Dec 06 07:18:47 crc kubenswrapper[4895]: I1206 07:18:47.540518 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-5d9nk"] Dec 06 07:18:47 crc kubenswrapper[4895]: I1206 07:18:47.557857 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: \"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:18:47 crc kubenswrapper[4895]: I1206 07:18:47.557938 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: \"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:18:47 crc kubenswrapper[4895]: I1206 07:18:47.557989 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert\") pod \"infra-operator-controller-manager-57548d458d-6cv59\" (UID: \"512638f7-8e17-493b-a34b-3da3c65f445a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6cv59" Dec 06 07:18:47 crc kubenswrapper[4895]: E1206 07:18:47.558121 4895 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 06 07:18:47 crc kubenswrapper[4895]: E1206 07:18:47.558167 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert podName:512638f7-8e17-493b-a34b-3da3c65f445a nodeName:}" failed. No retries permitted until 2025-12-06 07:18:49.558153243 +0000 UTC m=+1291.959542113 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert") pod "infra-operator-controller-manager-57548d458d-6cv59" (UID: "512638f7-8e17-493b-a34b-3da3c65f445a") : secret "infra-operator-webhook-server-cert" not found Dec 06 07:18:47 crc kubenswrapper[4895]: E1206 07:18:47.558555 4895 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 07:18:47 crc kubenswrapper[4895]: E1206 07:18:47.558585 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs podName:14ae4729-3f50-4990-9b10-8a06e7e78060 nodeName:}" failed. No retries permitted until 2025-12-06 07:18:48.558577694 +0000 UTC m=+1290.959966564 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs") pod "openstack-operator-controller-manager-54bdf956c4-6h5v6" (UID: "14ae4729-3f50-4990-9b10-8a06e7e78060") : secret "webhook-server-cert" not found Dec 06 07:18:47 crc kubenswrapper[4895]: E1206 07:18:47.558663 4895 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 07:18:47 crc kubenswrapper[4895]: E1206 07:18:47.558741 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs podName:14ae4729-3f50-4990-9b10-8a06e7e78060 nodeName:}" failed. No retries permitted until 2025-12-06 07:18:48.558722908 +0000 UTC m=+1290.960111778 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs") pod "openstack-operator-controller-manager-54bdf956c4-6h5v6" (UID: "14ae4729-3f50-4990-9b10-8a06e7e78060") : secret "metrics-server-cert" not found Dec 06 07:18:47 crc kubenswrapper[4895]: I1206 07:18:47.562660 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rt4b2"] Dec 06 07:18:47 crc kubenswrapper[4895]: I1206 07:18:47.878634 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f522sgf\" (UID: \"a8ffcb7e-0e4b-42c3-b778-4706cbd59792\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf" Dec 06 07:18:47 crc kubenswrapper[4895]: E1206 07:18:47.879152 4895 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 07:18:47 crc kubenswrapper[4895]: E1206 07:18:47.879210 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert podName:a8ffcb7e-0e4b-42c3-b778-4706cbd59792 nodeName:}" failed. No retries permitted until 2025-12-06 07:18:49.879192336 +0000 UTC m=+1292.280581206 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert") pod "openstack-baremetal-operator-controller-manager-55c85496f522sgf" (UID: "a8ffcb7e-0e4b-42c3-b778-4706cbd59792") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.027278 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9r6m4"] Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.034186 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-dmspk"] Dec 06 07:18:48 crc kubenswrapper[4895]: W1206 07:18:48.117149 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d925507_837e_438e_8f19_34c15b8b208e.slice/crio-736031cff3743d47e9f1836448b9fe68fd1449177dcc82902e655e1cbea08cc5 WatchSource:0}: Error finding container 736031cff3743d47e9f1836448b9fe68fd1449177dcc82902e655e1cbea08cc5: Status 404 returned error can't find the container with id 736031cff3743d47e9f1836448b9fe68fd1449177dcc82902e655e1cbea08cc5 Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.157154 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-f6j2r"] Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.157238 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-cgv2z"] Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.157322 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-bc6fp"] Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.208520 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-4p4x6"] Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.215008 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-dk54h"] Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.225867 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-d65l6"] Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.228317 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8m9qs"] Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.255844 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgv2z" event={"ID":"0d925507-837e-438e-8f19-34c15b8b208e","Type":"ContainerStarted","Data":"736031cff3743d47e9f1836448b9fe68fd1449177dcc82902e655e1cbea08cc5"} Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.258189 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-dmspk" event={"ID":"05bfd83c-3643-4dc8-bd25-2204bc8bc8f6","Type":"ContainerStarted","Data":"5d7b4ac026831bf218ab78d78dbf80a9fbc49bdc058bc4ebb75440aaba8ccc52"} Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.259338 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dk54h" 
event={"ID":"c08a88ee-75c4-450b-8cc0-6159127f6a8c","Type":"ContainerStarted","Data":"14487da462c914a8cd42b5e98b14bd68d7672124cc5bba26d98c6cd3f163a57e"} Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.260647 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rt4b2" event={"ID":"e46a2036-66cd-420c-9920-a3e8ef0e17df","Type":"ContainerStarted","Data":"62f601e0dab8b17f325e6b94efc6b6351fb2e30895c49ba35b2453ecd55724ef"} Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.261656 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bc6fp" event={"ID":"109a952b-18eb-4217-884d-f40b3be18878","Type":"ContainerStarted","Data":"5f829d375c9d9f044c007fcb9271a7af0c57d8f7611d3404ec1efdbbf322becc"} Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.262857 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9r6m4" event={"ID":"d11ece89-3325-4b95-aac8-776e2eaffecb","Type":"ContainerStarted","Data":"64cfae5e2b7cd013bab6a445db12caa3080c68f6af69bde8a074d80d89ad18bf"} Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.264220 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4p4x6" event={"ID":"e8a69b24-8304-4447-b76f-e98e93cb7715","Type":"ContainerStarted","Data":"475324c4616cc7ac66f0d331d927bc457e194999a494ead147e2c8782a75540c"} Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.265463 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-5d9nk" event={"ID":"f509e9a0-673f-45ba-a4f5-f3f5834ac86a","Type":"ContainerStarted","Data":"bcf0152ea5e2258036e7421c70fb5027bae05620ae576a5ab4845c1bf0bfd12d"} Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.266456 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f6j2r" event={"ID":"68495243-fc02-458e-af78-61702a2dda83","Type":"ContainerStarted","Data":"44c39cc92c479274372da50526f0de29042620451850e3fa3c2e0d1a3f7c1b68"} Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.270442 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8m9qs" event={"ID":"abcee2d9-1cac-4e62-88a6-79b249832e9b","Type":"ContainerStarted","Data":"da6c678237407f384b464fac038f3cf8ae429a1037ef845b47ff2edf8853d714"} Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.271928 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d65l6" event={"ID":"8e9001cb-7a62-4617-8143-f4a51ad1c13a","Type":"ContainerStarted","Data":"a65901135e38d0f1286c630e67295308ec94cb036611c60b5d7f983f01601577"} Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.411689 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-gzb86"] Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.419554 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-rph9c"] Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.430484 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tr76"] Dec 06 07:18:48 crc 
kubenswrapper[4895]: W1206 07:18:48.431294 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fb63748_1c10_4a17_9dcc_862fc1b29b46.slice/crio-e3504a21025c6ebdebc8ce81f63429720660beb02743e3ee06787ec032475b1e WatchSource:0}: Error finding container e3504a21025c6ebdebc8ce81f63429720660beb02743e3ee06787ec032475b1e: Status 404 returned error can't find the container with id e3504a21025c6ebdebc8ce81f63429720660beb02743e3ee06787ec032475b1e Dec 06 07:18:48 crc kubenswrapper[4895]: W1206 07:18:48.432745 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd61c0a11_5736_4747_889a_6dd520cbe269.slice/crio-76566f4f366ce6903b840de8fc5615e5aeaf06dccf889758e3bd5991435ef5d4 WatchSource:0}: Error finding container 76566f4f366ce6903b840de8fc5615e5aeaf06dccf889758e3bd5991435ef5d4: Status 404 returned error can't find the container with id 76566f4f366ce6903b840de8fc5615e5aeaf06dccf889758e3bd5991435ef5d4 Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.447007 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2lbh2"] Dec 06 07:18:48 crc kubenswrapper[4895]: E1206 07:18:48.450066 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8qsb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-2lbh2_openstack-operators(d61c0a11-5736-4747-889a-6dd520cbe269): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 07:18:48 crc kubenswrapper[4895]: E1206 07:18:48.454328 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8qsb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-2lbh2_openstack-operators(d61c0a11-5736-4747-889a-6dd520cbe269): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.457315 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-tbd9g"] Dec 06 07:18:48 crc kubenswrapper[4895]: E1206 07:18:48.457448 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2lbh2" podUID="d61c0a11-5736-4747-889a-6dd520cbe269" Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.464639 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-qk4xg"] Dec 06 07:18:48 crc 
kubenswrapper[4895]: E1206 07:18:48.472957 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9tjnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-tbd9g_openstack-operators(88ab7b06-3be3-44a1-acbf-8ba5ced20251): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.473031 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-gv8vt"] Dec 06 07:18:48 crc kubenswrapper[4895]: E1206 07:18:48.474829 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9tjnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-tbd9g_openstack-operators(88ab7b06-3be3-44a1-acbf-8ba5ced20251): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 07:18:48 crc kubenswrapper[4895]: E1206 07:18:48.476108 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbd9g" podUID="88ab7b06-3be3-44a1-acbf-8ba5ced20251" Dec 06 07:18:48 crc kubenswrapper[4895]: W1206 07:18:48.477304 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5417e33f_dead_459e_933b_58ad3ae2da48.slice/crio-707ef6335fb0f678fba92addb06a35beccb09873c90a10165156a424f4875fe0 WatchSource:0}: Error finding container 707ef6335fb0f678fba92addb06a35beccb09873c90a10165156a424f4875fe0: Status 404 returned error can't find the container with id 707ef6335fb0f678fba92addb06a35beccb09873c90a10165156a424f4875fe0 Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.478652 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-dq42h"] Dec 06 07:18:48 crc kubenswrapper[4895]: E1206 07:18:48.487094 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9f4qm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-859b6ccc6-qk4xg_openstack-operators(5417e33f-dead-459e-933b-58ad3ae2da48): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.487998 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czmdp"] Dec 06 07:18:48 crc kubenswrapper[4895]: E1206 07:18:48.488742 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zhrgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-gv8vt_openstack-operators(98110cff-712b-414c-9965-14d895f4b99f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 07:18:48 crc kubenswrapper[4895]: E1206 07:18:48.489336 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9f4qm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-859b6ccc6-qk4xg_openstack-operators(5417e33f-dead-459e-933b-58ad3ae2da48): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 07:18:48 crc kubenswrapper[4895]: E1206 07:18:48.490135 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wrbp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-dq42h_openstack-operators(3e2fc835-9cf2-4e21-a1fc-d76cfafba632): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 07:18:48 crc kubenswrapper[4895]: E1206 07:18:48.490541 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qk4xg" podUID="5417e33f-dead-459e-933b-58ad3ae2da48" Dec 06 07:18:48 crc kubenswrapper[4895]: E1206 07:18:48.493236 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zhrgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-gv8vt_openstack-operators(98110cff-712b-414c-9965-14d895f4b99f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 07:18:48 crc kubenswrapper[4895]: E1206 07:18:48.493328 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wrbp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-dq42h_openstack-operators(3e2fc835-9cf2-4e21-a1fc-d76cfafba632): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 07:18:48 crc kubenswrapper[4895]: E1206 07:18:48.494394 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-dq42h" podUID="3e2fc835-9cf2-4e21-a1fc-d76cfafba632" Dec 06 07:18:48 crc kubenswrapper[4895]: E1206 07:18:48.494408 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" 
pod="openstack-operators/placement-operator-controller-manager-78f8948974-gv8vt" podUID="98110cff-712b-414c-9965-14d895f4b99f" Dec 06 07:18:48 crc kubenswrapper[4895]: E1206 07:18:48.495132 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qj6wr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-czmdp_openstack-operators(d5cafedb-1052-4cd3-9212-3f642e07c18d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 07:18:48 crc kubenswrapper[4895]: E1206 07:18:48.496535 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czmdp" podUID="d5cafedb-1052-4cd3-9212-3f642e07c18d" Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.594354 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: \"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:18:48 crc kubenswrapper[4895]: I1206 07:18:48.594881 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: \"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:18:48 crc 
kubenswrapper[4895]: E1206 07:18:48.594507 4895 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 07:18:48 crc kubenswrapper[4895]: E1206 07:18:48.594950 4895 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 07:18:48 crc kubenswrapper[4895]: E1206 07:18:48.595046 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs podName:14ae4729-3f50-4990-9b10-8a06e7e78060 nodeName:}" failed. No retries permitted until 2025-12-06 07:18:50.59500666 +0000 UTC m=+1292.996395530 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs") pod "openstack-operator-controller-manager-54bdf956c4-6h5v6" (UID: "14ae4729-3f50-4990-9b10-8a06e7e78060") : secret "webhook-server-cert" not found Dec 06 07:18:48 crc kubenswrapper[4895]: E1206 07:18:48.595085 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs podName:14ae4729-3f50-4990-9b10-8a06e7e78060 nodeName:}" failed. No retries permitted until 2025-12-06 07:18:50.595070362 +0000 UTC m=+1292.996459232 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs") pod "openstack-operator-controller-manager-54bdf956c4-6h5v6" (UID: "14ae4729-3f50-4990-9b10-8a06e7e78060") : secret "metrics-server-cert" not found Dec 06 07:18:49 crc kubenswrapper[4895]: I1206 07:18:49.279571 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tr76" event={"ID":"fa381d85-af76-4af0-a49a-1722c746f7c2","Type":"ContainerStarted","Data":"e5481913ea497f191c02d5549f49de887d98aff6273233ba54717425787c12b6"} Dec 06 07:18:49 crc kubenswrapper[4895]: I1206 07:18:49.281014 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gv8vt" event={"ID":"98110cff-712b-414c-9965-14d895f4b99f","Type":"ContainerStarted","Data":"f5afcc7a80048d77f630ee901d29c62a186a10f293d32af265478e44f5cd2633"} Dec 06 07:18:49 crc kubenswrapper[4895]: I1206 07:18:49.282039 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rph9c" event={"ID":"5fb63748-1c10-4a17-9dcc-862fc1b29b46","Type":"ContainerStarted","Data":"e3504a21025c6ebdebc8ce81f63429720660beb02743e3ee06787ec032475b1e"} Dec 06 07:18:49 crc kubenswrapper[4895]: I1206 07:18:49.283535 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2lbh2" event={"ID":"d61c0a11-5736-4747-889a-6dd520cbe269","Type":"ContainerStarted","Data":"76566f4f366ce6903b840de8fc5615e5aeaf06dccf889758e3bd5991435ef5d4"} Dec 06 07:18:49 crc kubenswrapper[4895]: E1206 07:18:49.284356 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gv8vt" podUID="98110cff-712b-414c-9965-14d895f4b99f" Dec 06 07:18:49 crc kubenswrapper[4895]: I1206 07:18:49.285419 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-gzb86" event={"ID":"49bde36b-bda3-4622-87b2-6df2a2bee7f7","Type":"ContainerStarted","Data":"c0ad5f43286d4608f8501490517ad6712af5f7ec583d50045c40a6bd962143b7"} Dec 06 07:18:49 crc kubenswrapper[4895]: E1206 07:18:49.285455 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2lbh2" podUID="d61c0a11-5736-4747-889a-6dd520cbe269" Dec 06 07:18:49 crc kubenswrapper[4895]: I1206 07:18:49.286419 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbd9g" event={"ID":"88ab7b06-3be3-44a1-acbf-8ba5ced20251","Type":"ContainerStarted","Data":"6371f336fc30c3672c79dd93ea48e7889c24ff67c9faa1ac1284e070d8458bda"} Dec 06 07:18:49 crc kubenswrapper[4895]: I1206 07:18:49.287436 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-dq42h" event={"ID":"3e2fc835-9cf2-4e21-a1fc-d76cfafba632","Type":"ContainerStarted","Data":"9aaa5612e5bb2cc688ae8e355295b15b20cb6bb940c7321f0816fe19a483dcd3"} Dec 06 07:18:49 crc kubenswrapper[4895]: E1206 07:18:49.289073 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbd9g" podUID="88ab7b06-3be3-44a1-acbf-8ba5ced20251" Dec 06 07:18:49 crc kubenswrapper[4895]: E1206 07:18:49.289655 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-dq42h" podUID="3e2fc835-9cf2-4e21-a1fc-d76cfafba632" Dec 06 07:18:49 crc kubenswrapper[4895]: I1206 07:18:49.289659 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czmdp" event={"ID":"d5cafedb-1052-4cd3-9212-3f642e07c18d","Type":"ContainerStarted","Data":"0eab420493e59f6861747332d876a08f627bc3c7f4df6c3222aa6fd637d63ed0"} Dec 06 07:18:49 crc 
kubenswrapper[4895]: E1206 07:18:49.290648 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czmdp" podUID="d5cafedb-1052-4cd3-9212-3f642e07c18d" Dec 06 07:18:49 crc kubenswrapper[4895]: I1206 07:18:49.291551 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qk4xg" event={"ID":"5417e33f-dead-459e-933b-58ad3ae2da48","Type":"ContainerStarted","Data":"707ef6335fb0f678fba92addb06a35beccb09873c90a10165156a424f4875fe0"} Dec 06 07:18:49 crc kubenswrapper[4895]: E1206 07:18:49.293658 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qk4xg" podUID="5417e33f-dead-459e-933b-58ad3ae2da48" Dec 06 07:18:49 crc kubenswrapper[4895]: I1206 07:18:49.612135 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert\") pod \"infra-operator-controller-manager-57548d458d-6cv59\" (UID: \"512638f7-8e17-493b-a34b-3da3c65f445a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6cv59" Dec 06 07:18:49 crc kubenswrapper[4895]: E1206 07:18:49.612281 4895 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 06 07:18:49 crc kubenswrapper[4895]: E1206 07:18:49.612344 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert podName:512638f7-8e17-493b-a34b-3da3c65f445a nodeName:}" failed. No retries permitted until 2025-12-06 07:18:53.612326309 +0000 UTC m=+1296.013715179 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert") pod "infra-operator-controller-manager-57548d458d-6cv59" (UID: "512638f7-8e17-493b-a34b-3da3c65f445a") : secret "infra-operator-webhook-server-cert" not found Dec 06 07:18:49 crc kubenswrapper[4895]: I1206 07:18:49.925307 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f522sgf\" (UID: \"a8ffcb7e-0e4b-42c3-b778-4706cbd59792\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf" Dec 06 07:18:49 crc kubenswrapper[4895]: E1206 07:18:49.930049 4895 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 07:18:49 crc kubenswrapper[4895]: E1206 07:18:49.930148 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert podName:a8ffcb7e-0e4b-42c3-b778-4706cbd59792 nodeName:}" failed. No retries permitted until 2025-12-06 07:18:53.930125054 +0000 UTC m=+1296.331513924 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert") pod "openstack-baremetal-operator-controller-manager-55c85496f522sgf" (UID: "a8ffcb7e-0e4b-42c3-b778-4706cbd59792") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 07:18:50 crc kubenswrapper[4895]: E1206 07:18:50.308630 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czmdp" podUID="d5cafedb-1052-4cd3-9212-3f642e07c18d" Dec 06 07:18:50 crc kubenswrapper[4895]: E1206 07:18:50.308821 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-dq42h" podUID="3e2fc835-9cf2-4e21-a1fc-d76cfafba632" Dec 06 07:18:50 crc kubenswrapper[4895]: E1206 07:18:50.308874 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbd9g" podUID="88ab7b06-3be3-44a1-acbf-8ba5ced20251" Dec 06 07:18:50 crc kubenswrapper[4895]: E1206 07:18:50.309824 4895 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qk4xg" podUID="5417e33f-dead-459e-933b-58ad3ae2da48" Dec 06 07:18:50 crc kubenswrapper[4895]: E1206 07:18:50.309880 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gv8vt" podUID="98110cff-712b-414c-9965-14d895f4b99f" Dec 06 07:18:50 crc kubenswrapper[4895]: E1206 07:18:50.310200 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2lbh2" podUID="d61c0a11-5736-4747-889a-6dd520cbe269" Dec 06 07:18:50 crc kubenswrapper[4895]: I1206 07:18:50.644175 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: \"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:18:50 crc kubenswrapper[4895]: E1206 07:18:50.644448 4895 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 07:18:50 crc kubenswrapper[4895]: I1206 07:18:50.645009 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: \"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:18:50 crc kubenswrapper[4895]: E1206 07:18:50.645076 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs podName:14ae4729-3f50-4990-9b10-8a06e7e78060 nodeName:}" failed. No retries permitted until 2025-12-06 07:18:54.645046434 +0000 UTC m=+1297.046435304 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs") pod "openstack-operator-controller-manager-54bdf956c4-6h5v6" (UID: "14ae4729-3f50-4990-9b10-8a06e7e78060") : secret "webhook-server-cert" not found Dec 06 07:18:50 crc kubenswrapper[4895]: E1206 07:18:50.645273 4895 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 07:18:50 crc kubenswrapper[4895]: E1206 07:18:50.645382 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs podName:14ae4729-3f50-4990-9b10-8a06e7e78060 nodeName:}" failed. No retries permitted until 2025-12-06 07:18:54.645357964 +0000 UTC m=+1297.046747014 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs") pod "openstack-operator-controller-manager-54bdf956c4-6h5v6" (UID: "14ae4729-3f50-4990-9b10-8a06e7e78060") : secret "metrics-server-cert" not found Dec 06 07:18:53 crc kubenswrapper[4895]: I1206 07:18:53.704274 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert\") pod \"infra-operator-controller-manager-57548d458d-6cv59\" (UID: \"512638f7-8e17-493b-a34b-3da3c65f445a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6cv59" Dec 06 07:18:53 crc kubenswrapper[4895]: E1206 07:18:53.704591 4895 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 06 07:18:53 crc kubenswrapper[4895]: E1206 07:18:53.705027 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert podName:512638f7-8e17-493b-a34b-3da3c65f445a nodeName:}" failed. No retries permitted until 2025-12-06 07:19:01.704999877 +0000 UTC m=+1304.106388747 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert") pod "infra-operator-controller-manager-57548d458d-6cv59" (UID: "512638f7-8e17-493b-a34b-3da3c65f445a") : secret "infra-operator-webhook-server-cert" not found Dec 06 07:18:54 crc kubenswrapper[4895]: I1206 07:18:54.015943 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f522sgf\" (UID: \"a8ffcb7e-0e4b-42c3-b778-4706cbd59792\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf" Dec 06 07:18:54 crc kubenswrapper[4895]: E1206 07:18:54.016225 4895 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 07:18:54 crc kubenswrapper[4895]: E1206 07:18:54.016363 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert podName:a8ffcb7e-0e4b-42c3-b778-4706cbd59792 nodeName:}" failed. No retries permitted until 2025-12-06 07:19:02.016328457 +0000 UTC m=+1304.417717327 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert") pod "openstack-baremetal-operator-controller-manager-55c85496f522sgf" (UID: "a8ffcb7e-0e4b-42c3-b778-4706cbd59792") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 07:18:54 crc kubenswrapper[4895]: I1206 07:18:54.727076 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: \"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:18:54 crc kubenswrapper[4895]: I1206 07:18:54.727215 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: \"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:18:54 crc kubenswrapper[4895]: E1206 07:18:54.727422 4895 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 07:18:54 crc kubenswrapper[4895]: E1206 07:18:54.727510 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs podName:14ae4729-3f50-4990-9b10-8a06e7e78060 nodeName:}" failed. No retries permitted until 2025-12-06 07:19:02.727488418 +0000 UTC m=+1305.128877288 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs") pod "openstack-operator-controller-manager-54bdf956c4-6h5v6" (UID: "14ae4729-3f50-4990-9b10-8a06e7e78060") : secret "webhook-server-cert" not found Dec 06 07:18:54 crc kubenswrapper[4895]: E1206 07:18:54.728018 4895 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 07:18:54 crc kubenswrapper[4895]: E1206 07:18:54.728052 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs podName:14ae4729-3f50-4990-9b10-8a06e7e78060 nodeName:}" failed. No retries permitted until 2025-12-06 07:19:02.728044234 +0000 UTC m=+1305.129433104 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs") pod "openstack-operator-controller-manager-54bdf956c4-6h5v6" (UID: "14ae4729-3f50-4990-9b10-8a06e7e78060") : secret "metrics-server-cert" not found Dec 06 07:18:59 crc kubenswrapper[4895]: I1206 07:18:59.696135 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:18:59 crc kubenswrapper[4895]: I1206 07:18:59.696744 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:19:01 crc kubenswrapper[4895]: I1206 07:19:01.712896 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert\") pod \"infra-operator-controller-manager-57548d458d-6cv59\" (UID: \"512638f7-8e17-493b-a34b-3da3c65f445a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6cv59" Dec 06 07:19:01 crc kubenswrapper[4895]: E1206 07:19:01.713073 4895 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 06 07:19:01 crc kubenswrapper[4895]: E1206 07:19:01.713322 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert podName:512638f7-8e17-493b-a34b-3da3c65f445a nodeName:}" failed. No retries permitted until 2025-12-06 07:19:17.713303033 +0000 UTC m=+1320.114691903 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert") pod "infra-operator-controller-manager-57548d458d-6cv59" (UID: "512638f7-8e17-493b-a34b-3da3c65f445a") : secret "infra-operator-webhook-server-cert" not found Dec 06 07:19:02 crc kubenswrapper[4895]: I1206 07:19:02.018261 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f522sgf\" (UID: \"a8ffcb7e-0e4b-42c3-b778-4706cbd59792\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf" Dec 06 07:19:02 crc kubenswrapper[4895]: E1206 07:19:02.018571 4895 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 07:19:02 crc kubenswrapper[4895]: E1206 07:19:02.018627 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert podName:a8ffcb7e-0e4b-42c3-b778-4706cbd59792 nodeName:}" failed. No retries permitted until 2025-12-06 07:19:18.018610236 +0000 UTC m=+1320.419999106 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert") pod "openstack-baremetal-operator-controller-manager-55c85496f522sgf" (UID: "a8ffcb7e-0e4b-42c3-b778-4706cbd59792") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 07:19:02 crc kubenswrapper[4895]: E1206 07:19:02.338773 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d" Dec 06 07:19:02 crc kubenswrapper[4895]: E1206 07:19:02.338973 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2v2p6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-8tr76_openstack-operators(fa381d85-af76-4af0-a49a-1722c746f7c2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:19:02 crc kubenswrapper[4895]: I1206 07:19:02.729198 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: 
\"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:19:02 crc kubenswrapper[4895]: I1206 07:19:02.729322 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: \"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:19:02 crc kubenswrapper[4895]: E1206 07:19:02.729410 4895 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 07:19:02 crc kubenswrapper[4895]: E1206 07:19:02.729505 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs podName:14ae4729-3f50-4990-9b10-8a06e7e78060 nodeName:}" failed. No retries permitted until 2025-12-06 07:19:18.729487403 +0000 UTC m=+1321.130876273 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs") pod "openstack-operator-controller-manager-54bdf956c4-6h5v6" (UID: "14ae4729-3f50-4990-9b10-8a06e7e78060") : secret "metrics-server-cert" not found Dec 06 07:19:02 crc kubenswrapper[4895]: E1206 07:19:02.729520 4895 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 07:19:02 crc kubenswrapper[4895]: E1206 07:19:02.729587 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs podName:14ae4729-3f50-4990-9b10-8a06e7e78060 nodeName:}" failed. No retries permitted until 2025-12-06 07:19:18.729568435 +0000 UTC m=+1321.130957365 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs") pod "openstack-operator-controller-manager-54bdf956c4-6h5v6" (UID: "14ae4729-3f50-4990-9b10-8a06e7e78060") : secret "webhook-server-cert" not found Dec 06 07:19:03 crc kubenswrapper[4895]: E1206 07:19:03.341825 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9" Dec 06 07:19:03 crc kubenswrapper[4895]: E1206 07:19:03.342369 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z98fd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7c79b5df47-cgv2z_openstack-operators(0d925507-837e-438e-8f19-34c15b8b208e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:19:07 crc kubenswrapper[4895]: E1206 07:19:07.736991 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" Dec 06 07:19:07 crc 
kubenswrapper[4895]: E1206 07:19:07.737739 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ks5pf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-8m9qs_openstack-operators(abcee2d9-1cac-4e62-88a6-79b249832e9b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:19:08 crc kubenswrapper[4895]: E1206 07:19:08.441096 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621" Dec 06 07:19:08 crc kubenswrapper[4895]: E1206 07:19:08.441624 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x58wb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-gzb86_openstack-operators(49bde36b-bda3-4622-87b2-6df2a2bee7f7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:19:16 crc kubenswrapper[4895]: E1206 07:19:16.134781 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" Dec 06 07:19:16 crc kubenswrapper[4895]: E1206 07:19:16.136075 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tr4nz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-rph9c_openstack-operators(5fb63748-1c10-4a17-9dcc-862fc1b29b46): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:19:17 crc kubenswrapper[4895]: E1206 07:19:17.230499 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Dec 06 07:19:17 crc kubenswrapper[4895]: E1206 07:19:17.230947 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6wfvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-dk54h_openstack-operators(c08a88ee-75c4-450b-8cc0-6159127f6a8c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:19:17 crc kubenswrapper[4895]: I1206 07:19:17.781268 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert\") pod \"infra-operator-controller-manager-57548d458d-6cv59\" (UID: \"512638f7-8e17-493b-a34b-3da3c65f445a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6cv59" Dec 06 07:19:17 crc kubenswrapper[4895]: I1206 07:19:17.787768 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/512638f7-8e17-493b-a34b-3da3c65f445a-cert\") pod \"infra-operator-controller-manager-57548d458d-6cv59\" (UID: \"512638f7-8e17-493b-a34b-3da3c65f445a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6cv59" Dec 06 07:19:17 crc kubenswrapper[4895]: I1206 07:19:17.969444 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-6cv59" Dec 06 07:19:18 crc kubenswrapper[4895]: I1206 07:19:18.086080 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f522sgf\" (UID: \"a8ffcb7e-0e4b-42c3-b778-4706cbd59792\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf" Dec 06 07:19:18 crc kubenswrapper[4895]: I1206 07:19:18.091458 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8ffcb7e-0e4b-42c3-b778-4706cbd59792-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f522sgf\" (UID: \"a8ffcb7e-0e4b-42c3-b778-4706cbd59792\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf" Dec 06 07:19:18 crc kubenswrapper[4895]: I1206 07:19:18.198330 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-djdm7" Dec 06 07:19:18 crc kubenswrapper[4895]: I1206 07:19:18.207793 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf" Dec 06 07:19:18 crc kubenswrapper[4895]: E1206 07:19:18.279030 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809" Dec 06 07:19:18 crc kubenswrapper[4895]: E1206 07:19:18.279215 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmzmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987cd8cd-4p4x6_openstack-operators(e8a69b24-8304-4447-b76f-e98e93cb7715): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:19:18 crc kubenswrapper[4895]: I1206 07:19:18.828583 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: \"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:19:18 crc kubenswrapper[4895]: I1206 07:19:18.829146 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: \"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:19:18 crc kubenswrapper[4895]: I1206 07:19:18.832591 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: \"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:19:18 crc kubenswrapper[4895]: I1206 07:19:18.833653 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/14ae4729-3f50-4990-9b10-8a06e7e78060-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-6h5v6\" (UID: \"14ae4729-3f50-4990-9b10-8a06e7e78060\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:19:18 crc kubenswrapper[4895]: I1206 07:19:18.993542 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-68mj2" Dec 06 07:19:19 crc kubenswrapper[4895]: I1206 07:19:19.000806 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:19:19 crc kubenswrapper[4895]: E1206 07:19:19.073835 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 06 07:19:19 crc kubenswrapper[4895]: E1206 07:19:19.074025 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58s44,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-dmspk_openstack-operators(05bfd83c-3643-4dc8-bd25-2204bc8bc8f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:19:29 crc kubenswrapper[4895]: I1206 07:19:29.696406 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:19:29 crc kubenswrapper[4895]: I1206 07:19:29.697184 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" 
podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:19:36 crc kubenswrapper[4895]: E1206 07:19:36.020169 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Dec 06 07:19:36 crc kubenswrapper[4895]: E1206 07:19:36.021140 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zhrgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-gv8vt_openstack-operators(98110cff-712b-414c-9965-14d895f4b99f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:19:37 crc kubenswrapper[4895]: E1206 07:19:37.350840 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7" Dec 06 07:19:37 crc kubenswrapper[4895]: E1206 07:19:37.351041 4895 kuberuntime_manager.go:1274] "Unhandled 
Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8qsb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-2lbh2_openstack-operators(d61c0a11-5736-4747-889a-6dd520cbe269): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:19:38 crc kubenswrapper[4895]: E1206 07:19:38.338322 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385" Dec 06 07:19:38 crc kubenswrapper[4895]: E1206 07:19:38.338899 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wrbp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-dq42h_openstack-operators(3e2fc835-9cf2-4e21-a1fc-d76cfafba632): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:19:40 crc kubenswrapper[4895]: E1206 07:19:40.582300 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Dec 06 07:19:40 crc kubenswrapper[4895]: E1206 07:19:40.582508 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9tjnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-tbd9g_openstack-operators(88ab7b06-3be3-44a1-acbf-8ba5ced20251): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:19:43 crc kubenswrapper[4895]: E1206 07:19:43.758141 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 06 07:19:43 crc kubenswrapper[4895]: E1206 07:19:43.758898 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l5qrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-d65l6_openstack-operators(8e9001cb-7a62-4617-8143-f4a51ad1c13a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:19:44 crc kubenswrapper[4895]: E1206 07:19:44.071919 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 06 07:19:44 crc kubenswrapper[4895]: E1206 07:19:44.072086 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z98fd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7c79b5df47-cgv2z_openstack-operators(0d925507-837e-438e-8f19-34c15b8b208e): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 06 07:19:44 crc kubenswrapper[4895]: E1206 07:19:44.073446 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc 
error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgv2z" podUID="0d925507-837e-438e-8f19-34c15b8b208e" Dec 06 07:19:44 crc kubenswrapper[4895]: E1206 07:19:44.464168 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 06 07:19:44 crc kubenswrapper[4895]: E1206 07:19:44.465254 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tr4nz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-rph9c_openstack-operators(5fb63748-1c10-4a17-9dcc-862fc1b29b46): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 06 07:19:44 crc kubenswrapper[4895]: E1206 07:19:44.466732 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rph9c" podUID="5fb63748-1c10-4a17-9dcc-862fc1b29b46" Dec 06 07:19:44 crc kubenswrapper[4895]: E1206 07:19:44.466826 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 06 07:19:44 crc kubenswrapper[4895]: E1206 07:19:44.467208 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qj6wr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-czmdp_openstack-operators(d5cafedb-1052-4cd3-9212-3f642e07c18d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:19:44 crc kubenswrapper[4895]: E1206 07:19:44.468361 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czmdp" podUID="d5cafedb-1052-4cd3-9212-3f642e07c18d" Dec 06 07:19:44 crc kubenswrapper[4895]: E1206 07:19:44.514404 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 06 07:19:44 crc kubenswrapper[4895]: E1206 07:19:44.515012 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ks5pf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-8m9qs_openstack-operators(abcee2d9-1cac-4e62-88a6-79b249832e9b): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 06 07:19:44 crc kubenswrapper[4895]: E1206 07:19:44.517648 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8m9qs" podUID="abcee2d9-1cac-4e62-88a6-79b249832e9b" Dec 06 07:19:44 crc kubenswrapper[4895]: E1206 07:19:44.537049 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 06 07:19:44 crc kubenswrapper[4895]: E1206 07:19:44.537282 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6wfvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-dk54h_openstack-operators(c08a88ee-75c4-450b-8cc0-6159127f6a8c): ErrImagePull: rpc 
error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 06 07:19:44 crc kubenswrapper[4895]: E1206 07:19:44.538456 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dk54h" podUID="c08a88ee-75c4-450b-8cc0-6159127f6a8c" Dec 06 07:19:44 crc kubenswrapper[4895]: I1206 07:19:44.958292 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-6cv59"] Dec 06 07:19:45 crc kubenswrapper[4895]: E1206 07:19:45.063320 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 06 07:19:45 crc kubenswrapper[4895]: E1206 07:19:45.063528 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2v2p6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-8tr76_openstack-operators(fa381d85-af76-4af0-a49a-1722c746f7c2): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 06 07:19:45 crc kubenswrapper[4895]: E1206 07:19:45.066435 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tr76" podUID="fa381d85-af76-4af0-a49a-1722c746f7c2" Dec 06 07:19:45 crc kubenswrapper[4895]: E1206 07:19:45.072299 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" 
image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 06 07:19:45 crc kubenswrapper[4895]: E1206 07:19:45.072562 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x58wb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-gzb86_openstack-operators(49bde36b-bda3-4622-87b2-6df2a2bee7f7): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 06 07:19:45 crc kubenswrapper[4895]: E1206 07:19:45.074245 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-gzb86" podUID="49bde36b-bda3-4622-87b2-6df2a2bee7f7" Dec 06 07:19:45 crc kubenswrapper[4895]: I1206 07:19:45.442433 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6"] Dec 06 07:19:45 crc kubenswrapper[4895]: I1206 07:19:45.573087 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf"] Dec 06 07:19:45 crc kubenswrapper[4895]: I1206 07:19:45.881732 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rt4b2" event={"ID":"e46a2036-66cd-420c-9920-a3e8ef0e17df","Type":"ContainerStarted","Data":"13e72411fa4e6fbbf1e5397232bdf5b96fdf1270ec08e8feeb750283d37f5aa8"} Dec 06 07:19:45 crc kubenswrapper[4895]: I1206 07:19:45.883078 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-6cv59" event={"ID":"512638f7-8e17-493b-a34b-3da3c65f445a","Type":"ContainerStarted","Data":"3545cbbf63853b77c568d74bc3f7b66ec6fe3dba503ed2171572feaef7d24208"} Dec 06 07:19:45 crc kubenswrapper[4895]: I1206 07:19:45.887062 4895 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9r6m4" event={"ID":"d11ece89-3325-4b95-aac8-776e2eaffecb","Type":"ContainerStarted","Data":"298896be9d1fa1bb45abe50cb3a1194728307119183c4e8a9e60941da0a0c931"} Dec 06 07:19:45 crc kubenswrapper[4895]: I1206 07:19:45.889663 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-5d9nk" event={"ID":"f509e9a0-673f-45ba-a4f5-f3f5834ac86a","Type":"ContainerStarted","Data":"6ff62008869ab1c1d3af252c67d92dfdd601664f6091431cc583556c28a8febf"} Dec 06 07:19:46 crc kubenswrapper[4895]: W1206 07:19:46.185621 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14ae4729_3f50_4990_9b10_8a06e7e78060.slice/crio-b872ef9bd0e07bf767f160e7351d9fdbdd4365c21ac7899e2307c12fa0b29918 WatchSource:0}: Error finding container b872ef9bd0e07bf767f160e7351d9fdbdd4365c21ac7899e2307c12fa0b29918: Status 404 returned error can't find the container with id b872ef9bd0e07bf767f160e7351d9fdbdd4365c21ac7899e2307c12fa0b29918 Dec 06 07:19:46 crc kubenswrapper[4895]: E1206 07:19:46.735659 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8m9qs" podUID="abcee2d9-1cac-4e62-88a6-79b249832e9b" Dec 06 07:19:46 crc kubenswrapper[4895]: I1206 07:19:46.902886 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" event={"ID":"14ae4729-3f50-4990-9b10-8a06e7e78060","Type":"ContainerStarted","Data":"88b5aca23135917505281dac9c187f0aefda82322378ffca3e42cd3702a473ab"} Dec 06 07:19:46 crc kubenswrapper[4895]: I1206 07:19:46.902937 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" event={"ID":"14ae4729-3f50-4990-9b10-8a06e7e78060","Type":"ContainerStarted","Data":"b872ef9bd0e07bf767f160e7351d9fdbdd4365c21ac7899e2307c12fa0b29918"} Dec 06 07:19:46 crc kubenswrapper[4895]: I1206 07:19:46.903759 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" Dec 06 07:19:46 crc kubenswrapper[4895]: I1206 07:19:46.908147 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bc6fp" event={"ID":"109a952b-18eb-4217-884d-f40b3be18878","Type":"ContainerStarted","Data":"f910e90d330ac3e4494bd52f5231f66882213f6e81f81920d81d811061bf92ba"} Dec 06 07:19:46 crc kubenswrapper[4895]: I1206 07:19:46.936694 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f6j2r" event={"ID":"68495243-fc02-458e-af78-61702a2dda83","Type":"ContainerStarted","Data":"e7bf19242f710a3cafe7f1579e287a19994c2dd8c5a14d873c23e9f4c539ac94"} Dec 06 07:19:46 crc kubenswrapper[4895]: I1206 07:19:46.950921 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6" podStartSLOduration=60.950901707 podStartE2EDuration="1m0.950901707s" podCreationTimestamp="2025-12-06 07:18:46 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:19:46.949828707 +0000 UTC m=+1349.351217577" watchObservedRunningTime="2025-12-06 07:19:46.950901707 +0000 UTC m=+1349.352290577" Dec 06 07:19:46 crc kubenswrapper[4895]: I1206 07:19:46.959164 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8m9qs" event={"ID":"abcee2d9-1cac-4e62-88a6-79b249832e9b","Type":"ContainerStarted","Data":"794fd7491b26663bc6627b1dc1dac940424128c45c5d941043bd6fb7ef9c9eea"} Dec 06 07:19:46 crc kubenswrapper[4895]: I1206 07:19:46.960667 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8m9qs" Dec 06 07:19:46 crc kubenswrapper[4895]: I1206 07:19:46.962266 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qk4xg" event={"ID":"5417e33f-dead-459e-933b-58ad3ae2da48","Type":"ContainerStarted","Data":"dc29082412a18d5c3d5cde266a9c25c44059d01344ae53d7c5567e38911e7f4d"} Dec 06 07:19:46 crc kubenswrapper[4895]: E1206 07:19:46.964636 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8m9qs" podUID="abcee2d9-1cac-4e62-88a6-79b249832e9b" Dec 06 07:19:46 crc kubenswrapper[4895]: I1206 07:19:46.965787 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf" event={"ID":"a8ffcb7e-0e4b-42c3-b778-4706cbd59792","Type":"ContainerStarted","Data":"a44641e133268d1c7d3a0cda53bd965d7a0e9a173c828233ee6e2bf1d6eb4d72"} Dec 06 07:19:47 crc kubenswrapper[4895]: E1206 07:19:47.984560 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8m9qs" podUID="abcee2d9-1cac-4e62-88a6-79b249832e9b" Dec 06 07:19:48 crc kubenswrapper[4895]: E1206 07:19:48.134240 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rph9c" podUID="5fb63748-1c10-4a17-9dcc-862fc1b29b46" Dec 06 07:19:48 crc kubenswrapper[4895]: E1206 07:19:48.227998 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgv2z" podUID="0d925507-837e-438e-8f19-34c15b8b208e" Dec 06 07:19:49 crc kubenswrapper[4895]: I1206 07:19:49.003290 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rph9c" event={"ID":"5fb63748-1c10-4a17-9dcc-862fc1b29b46","Type":"ContainerStarted","Data":"8aef87c72cda2dfe7f7fe58c2fb8287774ef8ae50c6cde7ff9ef77d539e5d33d"} Dec 06 07:19:49 crc kubenswrapper[4895]: I1206 
07:19:49.004342 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rph9c" Dec 06 07:19:49 crc kubenswrapper[4895]: E1206 07:19:49.007858 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rph9c" podUID="5fb63748-1c10-4a17-9dcc-862fc1b29b46" Dec 06 07:19:49 crc kubenswrapper[4895]: I1206 07:19:49.031274 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgv2z" event={"ID":"0d925507-837e-438e-8f19-34c15b8b208e","Type":"ContainerStarted","Data":"0d4868c27af15c157f9d66ae62c1eb71b5199ab4d1428b0e404ce5d404e007c6"} Dec 06 07:19:49 crc kubenswrapper[4895]: I1206 07:19:49.031896 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgv2z" Dec 06 07:19:49 crc kubenswrapper[4895]: E1206 07:19:49.036720 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgv2z" podUID="0d925507-837e-438e-8f19-34c15b8b208e" Dec 06 07:19:50 crc kubenswrapper[4895]: E1206 07:19:50.052988 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgv2z" podUID="0d925507-837e-438e-8f19-34c15b8b208e" Dec 06 07:19:50 crc kubenswrapper[4895]: E1206 07:19:50.053374 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rph9c" podUID="5fb63748-1c10-4a17-9dcc-862fc1b29b46" Dec 06 07:19:50 crc kubenswrapper[4895]: E1206 07:19:50.411004 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dk54h" podUID="c08a88ee-75c4-450b-8cc0-6159127f6a8c" Dec 06 07:19:51 crc kubenswrapper[4895]: I1206 07:19:51.060124 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dk54h" event={"ID":"c08a88ee-75c4-450b-8cc0-6159127f6a8c","Type":"ContainerStarted","Data":"a7aa0809ec5671943b9352b8716538dfbe4dc5fb7e14e102f6bf11bd111bdc36"} Dec 06 07:19:51 crc kubenswrapper[4895]: I1206 07:19:51.060374 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dk54h" Dec 06 07:19:51 crc kubenswrapper[4895]: E1206 07:19:51.062094 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dk54h" podUID="c08a88ee-75c4-450b-8cc0-6159127f6a8c" Dec 06 07:19:52 crc kubenswrapper[4895]: E1206 07:19:52.070529 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dk54h" podUID="c08a88ee-75c4-450b-8cc0-6159127f6a8c" Dec 06 07:19:55 crc kubenswrapper[4895]: I1206 07:19:55.980463 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8m9qs" Dec 06 07:19:56 crc kubenswrapper[4895]: E1206 07:19:56.053374 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czmdp" podUID="d5cafedb-1052-4cd3-9212-3f642e07c18d" Dec 06 07:19:56 crc kubenswrapper[4895]: I1206 07:19:56.111600 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf" event={"ID":"a8ffcb7e-0e4b-42c3-b778-4706cbd59792","Type":"ContainerStarted","Data":"4323701eaf669fe1db123aaea2ce40f7acc94ad3fc77aeb0b7e8d1340958f251"} Dec 06 07:19:56 crc kubenswrapper[4895]: I1206 07:19:56.211903 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgv2z" Dec 06 07:19:56 crc kubenswrapper[4895]: E1206 07:19:56.306407 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2lbh2" podUID="d61c0a11-5736-4747-889a-6dd520cbe269" Dec 06 07:19:56 crc kubenswrapper[4895]: E1206 07:19:56.307387 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-dmspk" podUID="05bfd83c-3643-4dc8-bd25-2204bc8bc8f6" Dec 06 07:19:56 crc kubenswrapper[4895]: E1206 07:19:56.307575 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gv8vt" podUID="98110cff-712b-414c-9965-14d895f4b99f" Dec 06 07:19:56 crc kubenswrapper[4895]: E1206 07:19:56.497961 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4p4x6" podUID="e8a69b24-8304-4447-b76f-e98e93cb7715" Dec 06 07:19:56 crc kubenswrapper[4895]: E1206 07:19:56.504956 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d65l6" podUID="8e9001cb-7a62-4617-8143-f4a51ad1c13a" Dec 06 07:19:56 crc kubenswrapper[4895]: E1206 07:19:56.627294 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbd9g" podUID="88ab7b06-3be3-44a1-acbf-8ba5ced20251" Dec 06 07:19:56 crc kubenswrapper[4895]: I1206 07:19:56.632543 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dk54h" Dec 06 07:19:56 crc kubenswrapper[4895]: E1206 07:19:56.688837 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-dq42h" podUID="3e2fc835-9cf2-4e21-a1fc-d76cfafba632" Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.124296 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rph9c" Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.132003 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-dq42h" event={"ID":"3e2fc835-9cf2-4e21-a1fc-d76cfafba632","Type":"ContainerStarted","Data":"66d47edf559b2a05bcf76c12de91b36fecb02d1c27fb38be523af62401701387"} Dec 06 07:19:57 crc kubenswrapper[4895]: E1206 07:19:57.133955 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-dq42h" podUID="3e2fc835-9cf2-4e21-a1fc-d76cfafba632" Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.137722 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-dmspk" event={"ID":"05bfd83c-3643-4dc8-bd25-2204bc8bc8f6","Type":"ContainerStarted","Data":"2bf09d1cc81487f1edf583f2b20d2c8d43aa1b00eaae07bac09dd8e7989d6f1f"} Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.155737 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f6j2r" event={"ID":"68495243-fc02-458e-af78-61702a2dda83","Type":"ContainerStarted","Data":"e9baf17202cb84f0a2d5c92ebb9e6d369d236c457f6fa50135bb7fc6674a003b"} Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.156701 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f6j2r" Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.162568 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f6j2r" Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.172973 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2lbh2" event={"ID":"d61c0a11-5736-4747-889a-6dd520cbe269","Type":"ContainerStarted","Data":"e402aca3678dd8965660b94e6512c803e5486b4fe5687fbb006fadf4d76d7fc5"} Dec 06 07:19:57 crc kubenswrapper[4895]: E1206 07:19:57.174782 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2lbh2" podUID="d61c0a11-5736-4747-889a-6dd520cbe269" Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.187148 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rt4b2" event={"ID":"e46a2036-66cd-420c-9920-a3e8ef0e17df","Type":"ContainerStarted","Data":"a2f8571394eca1aba15f5d3ce763c575d7da9d14de4e74f21d7c185c8ddeaeaf"} Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.188549 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rt4b2" Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.196785 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rt4b2" Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.203884 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-6cv59" event={"ID":"512638f7-8e17-493b-a34b-3da3c65f445a","Type":"ContainerStarted","Data":"0b3d828cb7527e454a5585fd07e56fe699e71331bd0c87ebbe9d2211d466ec89"} Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.203938 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-6cv59" event={"ID":"512638f7-8e17-493b-a34b-3da3c65f445a","Type":"ContainerStarted","Data":"2939fec3ce19dcf758e37d11f6e425ed0c145248719c2332039c70605980a554"} Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.204097 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-6cv59" Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.216104 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf" event={"ID":"a8ffcb7e-0e4b-42c3-b778-4706cbd59792","Type":"ContainerStarted","Data":"f50f56981780f66e5770139c1a1f589fd57725bbef87e2a86662a4baeb5be57e"} Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.216393 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf" Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.232690 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4p4x6" event={"ID":"e8a69b24-8304-4447-b76f-e98e93cb7715","Type":"ContainerStarted","Data":"576647b2da429006856b33312fc2e2a56652f61dbda3b370027386cf7746b28a"} Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.240527 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f6j2r" 
podStartSLOduration=4.5735345259999995 podStartE2EDuration="1m12.240509411s" podCreationTimestamp="2025-12-06 07:18:45 +0000 UTC" firstStartedPulling="2025-12-06 07:18:48.091667446 +0000 UTC m=+1290.493056316" lastFinishedPulling="2025-12-06 07:19:55.758642321 +0000 UTC m=+1358.160031201" observedRunningTime="2025-12-06 07:19:57.237237402 +0000 UTC m=+1359.638626272" watchObservedRunningTime="2025-12-06 07:19:57.240509411 +0000 UTC m=+1359.641898281" Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.249402 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbd9g" event={"ID":"88ab7b06-3be3-44a1-acbf-8ba5ced20251","Type":"ContainerStarted","Data":"b46dd3eafbc26cbcb80a2ed64ecc1fbd13c803b0efcfe073bd4a1645876f8149"} Dec 06 07:19:57 crc kubenswrapper[4895]: E1206 07:19:57.255086 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbd9g" podUID="88ab7b06-3be3-44a1-acbf-8ba5ced20251" Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.256315 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d65l6" event={"ID":"8e9001cb-7a62-4617-8143-f4a51ad1c13a","Type":"ContainerStarted","Data":"62249fe0b9b238e587084be5521d7dc66e2250c6d2c9ac4a4b9075f30251a085"} Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.318602 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tr76" event={"ID":"fa381d85-af76-4af0-a49a-1722c746f7c2","Type":"ContainerStarted","Data":"5a7b3e8fa4701349222b5376df45e091c311f52646a32655ef44c03c52a02fc1"} Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.340728 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rt4b2" podStartSLOduration=4.132464179 podStartE2EDuration="1m12.340559471s" podCreationTimestamp="2025-12-06 07:18:45 +0000 UTC" firstStartedPulling="2025-12-06 07:18:47.574722582 +0000 UTC m=+1289.976111452" lastFinishedPulling="2025-12-06 07:19:55.782817874 +0000 UTC m=+1358.184206744" observedRunningTime="2025-12-06 07:19:57.292232337 +0000 UTC m=+1359.693621207" watchObservedRunningTime="2025-12-06 07:19:57.340559471 +0000 UTC m=+1359.741948341" Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.346817 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gv8vt" event={"ID":"98110cff-712b-414c-9965-14d895f4b99f","Type":"ContainerStarted","Data":"8ed2f7b3899709249e146b015f82d2eca50ebf37ce83f9541d789c0f4fe401b7"} Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.381293 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bc6fp" event={"ID":"109a952b-18eb-4217-884d-f40b3be18878","Type":"ContainerStarted","Data":"8a2c27e594405e24c87f479b8f2f21a87d5fa325896fc1c189ea20364b4fa2cf"} Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.382197 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bc6fp" Dec 06 
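The pod_startup_latency_tracker entries encode a single relation: podStartSLOduration is podStartE2EDuration minus the image pull window (lastFinishedPulling − firstStartedPulling), i.e. the SLO metric excludes time spent pulling images, and podStartE2EDuration is watchObservedRunningTime − podCreationTimestamp. Checking it against the ironic-operator entry above: 1m12.240509411s − 67.666974885s ≈ 4.5735345s, which matches the logged podStartSLOduration. A sketch of that arithmetic, with timestamps copied from the log (the last digits differ slightly because the kubelet subtracts the monotonic m=+... readings rather than wall-clock timestamps):

```go
// Verifies the relation implied by the ironic-operator startup-latency entry:
// podStartSLOduration == podStartE2EDuration - (lastFinishedPulling - firstStartedPulling).
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-12-06 07:18:45 +0000 UTC")
	firstPull, _ := time.Parse(layout, "2025-12-06 07:18:48.091667446 +0000 UTC")
	lastPull, _ := time.Parse(layout, "2025-12-06 07:19:55.758642321 +0000 UTC")
	running, _ := time.Parse(layout, "2025-12-06 07:19:57.240509411 +0000 UTC")

	e2e := running.Sub(created)          // 1m12.240509411s == podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // subtract the image pull window
	fmt.Println(e2e, slo)                // slo ~= 4.5735345s == podStartSLOduration
}
```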
Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.386957 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9r6m4" event={"ID":"d11ece89-3325-4b95-aac8-776e2eaffecb","Type":"ContainerStarted","Data":"2dccc1a0ba5fb7d7e5c6e68818f9908314e1e765a1593622c8cf88e265ca603f"}
Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.388092 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9r6m4"
Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.396748 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9r6m4"
Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.399333 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-gzb86" event={"ID":"49bde36b-bda3-4622-87b2-6df2a2bee7f7","Type":"ContainerStarted","Data":"f2ee4e26011ff5c21def4aeaf1c974214297bc9f2c75fbaadfbeeb826ca0ffc4"}
Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.400935 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-5d9nk" event={"ID":"f509e9a0-673f-45ba-a4f5-f3f5834ac86a","Type":"ContainerStarted","Data":"3c807d944bb1b895f00c997f8d4e0e571c83f41f105cbf0287ca3270112502b3"}
Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.407032 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-5d9nk"
Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.413767 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-5d9nk"
Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.435009 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-6cv59" podStartSLOduration=61.890911156 podStartE2EDuration="1m12.43497881s" podCreationTimestamp="2025-12-06 07:18:45 +0000 UTC" firstStartedPulling="2025-12-06 07:19:45.128647778 +0000 UTC m=+1347.530036648" lastFinishedPulling="2025-12-06 07:19:55.672715432 +0000 UTC m=+1358.074104302" observedRunningTime="2025-12-06 07:19:57.414142528 +0000 UTC m=+1359.815531398" watchObservedRunningTime="2025-12-06 07:19:57.43497881 +0000 UTC m=+1359.836367690"
Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.474637 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8m9qs" event={"ID":"abcee2d9-1cac-4e62-88a6-79b249832e9b","Type":"ContainerStarted","Data":"915eea721deaf49756d083aaec3a3de2525d5d084fc7d956f69be57f001681e5"}
Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.529636 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qk4xg" event={"ID":"5417e33f-dead-459e-933b-58ad3ae2da48","Type":"ContainerStarted","Data":"bcd7bdc1e79e609e98a25992e2a6cf99bc825b1979518ce881b3d734c22b61b6"}
Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.532061 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qk4xg"
Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.532970 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qk4xg"
Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.558440 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf" podStartSLOduration=63.067867027 podStartE2EDuration="1m12.558422182s" podCreationTimestamp="2025-12-06 07:18:45 +0000 UTC" firstStartedPulling="2025-12-06 07:19:46.194020506 +0000 UTC m=+1348.595409376" lastFinishedPulling="2025-12-06 07:19:55.684575661 +0000 UTC m=+1358.085964531" observedRunningTime="2025-12-06 07:19:57.542823711 +0000 UTC m=+1359.944212581" watchObservedRunningTime="2025-12-06 07:19:57.558422182 +0000 UTC m=+1359.959811052"
Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.680042 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-5d9nk" podStartSLOduration=4.591591095 podStartE2EDuration="1m12.680023895s" podCreationTimestamp="2025-12-06 07:18:45 +0000 UTC" firstStartedPulling="2025-12-06 07:18:47.58351054 +0000 UTC m=+1289.984899420" lastFinishedPulling="2025-12-06 07:19:55.67194335 +0000 UTC m=+1358.073332220" observedRunningTime="2025-12-06 07:19:57.677800574 +0000 UTC m=+1360.079189444" watchObservedRunningTime="2025-12-06 07:19:57.680023895 +0000 UTC m=+1360.081412765"
Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.724370 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9r6m4" podStartSLOduration=5.063806615 podStartE2EDuration="1m12.724352111s" podCreationTimestamp="2025-12-06 07:18:45 +0000 UTC" firstStartedPulling="2025-12-06 07:18:48.051307432 +0000 UTC m=+1290.452696302" lastFinishedPulling="2025-12-06 07:19:55.711852928 +0000 UTC m=+1358.113241798" observedRunningTime="2025-12-06 07:19:57.723851988 +0000 UTC m=+1360.125240858" watchObservedRunningTime="2025-12-06 07:19:57.724352111 +0000 UTC m=+1360.125740981"
Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.789982 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8m9qs" podStartSLOduration=14.77349998 podStartE2EDuration="1m12.789958352s" podCreationTimestamp="2025-12-06 07:18:45 +0000 UTC" firstStartedPulling="2025-12-06 07:18:48.173670398 +0000 UTC m=+1290.575059268" lastFinishedPulling="2025-12-06 07:19:46.19012876 +0000 UTC m=+1348.591517640" observedRunningTime="2025-12-06 07:19:57.767067634 +0000 UTC m=+1360.168456504" watchObservedRunningTime="2025-12-06 07:19:57.789958352 +0000 UTC m=+1360.191347222"
Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.809454 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bc6fp" podStartSLOduration=5.13935911 podStartE2EDuration="1m12.809431957s" podCreationTimestamp="2025-12-06 07:18:45 +0000 UTC" firstStartedPulling="2025-12-06 07:18:48.113427255 +0000 UTC m=+1290.514816115" lastFinishedPulling="2025-12-06 07:19:55.783500092 +0000 UTC m=+1358.184888962" observedRunningTime="2025-12-06 07:19:57.807521656 +0000 UTC m=+1360.208910526" watchObservedRunningTime="2025-12-06 07:19:57.809431957 +0000 UTC m=+1360.210820827"
Dec 06 07:19:57 crc kubenswrapper[4895]: I1206 07:19:57.860082 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qk4xg" podStartSLOduration=5.536422283 podStartE2EDuration="1m12.860056684s" podCreationTimestamp="2025-12-06 07:18:45 +0000 UTC" firstStartedPulling="2025-12-06 07:18:48.486782167 +0000 UTC m=+1290.888171037" lastFinishedPulling="2025-12-06 07:19:55.810416568 +0000 UTC m=+1358.211805438" observedRunningTime="2025-12-06 07:19:57.852162631 +0000 UTC m=+1360.253551511" watchObservedRunningTime="2025-12-06 07:19:57.860056684 +0000 UTC m=+1360.261445554"
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.559692 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rph9c" event={"ID":"5fb63748-1c10-4a17-9dcc-862fc1b29b46","Type":"ContainerStarted","Data":"fe89cb526980d3dcb6522342f26839b9d134ab334cb9d7a178e18a2d94e7cd84"}
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.561405 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-gzb86" event={"ID":"49bde36b-bda3-4622-87b2-6df2a2bee7f7","Type":"ContainerStarted","Data":"dc1d8c1533a3e1e3c9f2de8837f98f58c31ecc746d42d7add0e71730356c908f"}
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.561841 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-gzb86"
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.563399 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4p4x6" event={"ID":"e8a69b24-8304-4447-b76f-e98e93cb7715","Type":"ContainerStarted","Data":"4db33686654f2c572dc57ef4e6af97ebf0a73a8bd32313da2113abe35a459482"}
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.563805 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4p4x6"
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.565761 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dk54h" event={"ID":"c08a88ee-75c4-450b-8cc0-6159127f6a8c","Type":"ContainerStarted","Data":"20e8ade6dfbe66533ea487bab32cf26c2e01f372a9970ff42f3d9375266569d1"}
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.577041 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gv8vt" event={"ID":"98110cff-712b-414c-9965-14d895f4b99f","Type":"ContainerStarted","Data":"5b0f3a0d0dc6a38c0b8fcc9fa7309ff293ae19123041c6e2418c8841fe569c25"}
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.577842 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gv8vt"
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.588435 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rph9c" podStartSLOduration=13.183514603 podStartE2EDuration="1m12.588411724s" podCreationTimestamp="2025-12-06 07:18:46 +0000 UTC" firstStartedPulling="2025-12-06 07:18:48.434774767 +0000 UTC m=+1290.836163637" lastFinishedPulling="2025-12-06 07:19:47.839671888 +0000 UTC m=+1350.241060758" observedRunningTime="2025-12-06 07:19:58.583505332 +0000 UTC m=+1360.984894212" watchObservedRunningTime="2025-12-06 07:19:58.588411724 +0000 UTC m=+1360.989800594"
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.589541 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgv2z" event={"ID":"0d925507-837e-438e-8f19-34c15b8b208e","Type":"ContainerStarted","Data":"a56a711c23039687fd7bd428c21e97661ece3ed2327111c76a3081c9c5e89afc"}
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.601796 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-dmspk" event={"ID":"05bfd83c-3643-4dc8-bd25-2204bc8bc8f6","Type":"ContainerStarted","Data":"9e64b38ea15f58ef765174130999f29aee365bbf155231b89f36a6d59e7ff116"}
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.602636 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-dmspk"
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.606905 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d65l6" event={"ID":"8e9001cb-7a62-4617-8143-f4a51ad1c13a","Type":"ContainerStarted","Data":"38cb4505ed29ce86571790fd1b4ffe423812b655b6782adb625ca688b959ba24"}
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.607667 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d65l6"
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.612950 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4p4x6" podStartSLOduration=3.708601618 podStartE2EDuration="1m13.612930086s" podCreationTimestamp="2025-12-06 07:18:45 +0000 UTC" firstStartedPulling="2025-12-06 07:18:48.173876714 +0000 UTC m=+1290.575265584" lastFinishedPulling="2025-12-06 07:19:58.078205182 +0000 UTC m=+1360.479594052" observedRunningTime="2025-12-06 07:19:58.609791942 +0000 UTC m=+1361.011180822" watchObservedRunningTime="2025-12-06 07:19:58.612930086 +0000 UTC m=+1361.014318956"
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.621970 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tr76" event={"ID":"fa381d85-af76-4af0-a49a-1722c746f7c2","Type":"ContainerStarted","Data":"deeabb8ec62630837034d2c8eca4281924972139d54a8a788fbcb4c3593df802"}
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.622021 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tr76"
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.636698 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gv8vt" podStartSLOduration=4.04486622 podStartE2EDuration="1m13.636676837s" podCreationTimestamp="2025-12-06 07:18:45 +0000 UTC" firstStartedPulling="2025-12-06 07:18:48.488619926 +0000 UTC m=+1290.890008796" lastFinishedPulling="2025-12-06 07:19:58.080430543 +0000 UTC m=+1360.481819413" observedRunningTime="2025-12-06 07:19:58.634324773 +0000 UTC m=+1361.035713643" watchObservedRunningTime="2025-12-06 07:19:58.636676837 +0000 UTC m=+1361.038065707"
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.668416 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dk54h" podStartSLOduration=14.004234895 podStartE2EDuration="1m13.668397974s" podCreationTimestamp="2025-12-06 07:18:45 +0000 UTC" firstStartedPulling="2025-12-06 07:18:48.177672207 +0000 UTC m=+1290.579061077" lastFinishedPulling="2025-12-06 07:19:47.841835286 +0000 UTC m=+1350.243224156" observedRunningTime="2025-12-06 07:19:58.662190766 +0000 UTC m=+1361.063579636" watchObservedRunningTime="2025-12-06 07:19:58.668397974 +0000 UTC m=+1361.069786844"
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.693579 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-gzb86" podStartSLOduration=6.181241596 podStartE2EDuration="1m12.693557312s" podCreationTimestamp="2025-12-06 07:18:46 +0000 UTC" firstStartedPulling="2025-12-06 07:18:48.44927341 +0000 UTC m=+1290.850662290" lastFinishedPulling="2025-12-06 07:19:54.961589136 +0000 UTC m=+1357.362978006" observedRunningTime="2025-12-06 07:19:58.683934293 +0000 UTC m=+1361.085323163" watchObservedRunningTime="2025-12-06 07:19:58.693557312 +0000 UTC m=+1361.094946182"
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.708415 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tr76" podStartSLOduration=6.694476256 podStartE2EDuration="1m13.708396063s" podCreationTimestamp="2025-12-06 07:18:45 +0000 UTC" firstStartedPulling="2025-12-06 07:18:48.452347383 +0000 UTC m=+1290.853736253" lastFinishedPulling="2025-12-06 07:19:55.46626719 +0000 UTC m=+1357.867656060" observedRunningTime="2025-12-06 07:19:58.707204971 +0000 UTC m=+1361.108593841" watchObservedRunningTime="2025-12-06 07:19:58.708396063 +0000 UTC m=+1361.109784933"
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.746311 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgv2z" podStartSLOduration=14.050522269 podStartE2EDuration="1m13.746290806s" podCreationTimestamp="2025-12-06 07:18:45 +0000 UTC" firstStartedPulling="2025-12-06 07:18:48.13276297 +0000 UTC m=+1290.534151840" lastFinishedPulling="2025-12-06 07:19:47.828531507 +0000 UTC m=+1350.229920377" observedRunningTime="2025-12-06 07:19:58.74273862 +0000 UTC m=+1361.144127500" watchObservedRunningTime="2025-12-06 07:19:58.746290806 +0000 UTC m=+1361.147679676"
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.775995 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d65l6" podStartSLOduration=3.8750401009999997 podStartE2EDuration="1m13.775974627s" podCreationTimestamp="2025-12-06 07:18:45 +0000 UTC" firstStartedPulling="2025-12-06 07:18:48.177704718 +0000 UTC m=+1290.579093588" lastFinishedPulling="2025-12-06 07:19:58.078639244 +0000 UTC m=+1360.480028114" observedRunningTime="2025-12-06 07:19:58.771808964 +0000 UTC m=+1361.173197844" watchObservedRunningTime="2025-12-06 07:19:58.775974627 +0000 UTC m=+1361.177363497"
Dec 06 07:19:58 crc kubenswrapper[4895]: I1206 07:19:58.795334 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-dmspk" podStartSLOduration=4.156847702 podStartE2EDuration="1m13.795315569s" podCreationTimestamp="2025-12-06 07:18:45 +0000 UTC" firstStartedPulling="2025-12-06 07:18:48.051595859 +0000 UTC m=+1290.452984729" lastFinishedPulling="2025-12-06 07:19:57.690063726 +0000 UTC m=+1360.091452596" observedRunningTime="2025-12-06 07:19:58.792994116 +0000 UTC m=+1361.194382986" watchObservedRunningTime="2025-12-06 07:19:58.795315569 +0000 UTC m=+1361.196704439"
Dec 06 07:19:59 crc kubenswrapper[4895]: I1206 07:19:59.014646 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-6h5v6"
Dec 06 07:19:59 crc kubenswrapper[4895]: I1206 07:19:59.696437 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 07:19:59 crc kubenswrapper[4895]: I1206 07:19:59.696596 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 07:19:59 crc kubenswrapper[4895]: I1206 07:19:59.696649 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2"
Dec 06 07:19:59 crc kubenswrapper[4895]: I1206 07:19:59.697397 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ccc9113d0ff0776606793bc5166b14a5fc6157da50c2a82c90bf46796771601"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 07:19:59 crc kubenswrapper[4895]: I1206 07:19:59.697454 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://6ccc9113d0ff0776606793bc5166b14a5fc6157da50c2a82c90bf46796771601" gracePeriod=600
Dec 06 07:20:00 crc kubenswrapper[4895]: I1206 07:20:00.657184 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="6ccc9113d0ff0776606793bc5166b14a5fc6157da50c2a82c90bf46796771601" exitCode=0
Dec 06 07:20:00 crc kubenswrapper[4895]: I1206 07:20:00.657258 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"6ccc9113d0ff0776606793bc5166b14a5fc6157da50c2a82c90bf46796771601"}
Dec 06 07:20:00 crc kubenswrapper[4895]: I1206 07:20:00.657632 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115"}
Dec 06 07:20:00 crc kubenswrapper[4895]: I1206 07:20:00.657661 4895 scope.go:117] "RemoveContainer" containerID="663e0971fd031efe73576c4d0575b9ee19ff771a93759108ec40df72da6692c9"
containerID="663e0971fd031efe73576c4d0575b9ee19ff771a93759108ec40df72da6692c9" Dec 06 07:20:05 crc kubenswrapper[4895]: I1206 07:20:05.949194 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4p4x6" Dec 06 07:20:06 crc kubenswrapper[4895]: I1206 07:20:06.185919 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d65l6" Dec 06 07:20:06 crc kubenswrapper[4895]: I1206 07:20:06.540297 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-dmspk" Dec 06 07:20:06 crc kubenswrapper[4895]: I1206 07:20:06.581135 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gv8vt" Dec 06 07:20:07 crc kubenswrapper[4895]: I1206 07:20:07.080433 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tr76" Dec 06 07:20:07 crc kubenswrapper[4895]: I1206 07:20:07.152750 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-gzb86" Dec 06 07:20:07 crc kubenswrapper[4895]: I1206 07:20:07.984851 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-6cv59" Dec 06 07:20:08 crc kubenswrapper[4895]: I1206 07:20:08.216709 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f522sgf" Dec 06 07:20:09 crc kubenswrapper[4895]: I1206 07:20:09.727180 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czmdp" event={"ID":"d5cafedb-1052-4cd3-9212-3f642e07c18d","Type":"ContainerStarted","Data":"331e759a5a3af2b4f05c9e6df36a334bd58f9118261e5b243d82b509a1e40e6f"} Dec 06 07:20:09 crc kubenswrapper[4895]: I1206 07:20:09.744295 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czmdp" podStartSLOduration=3.284529251 podStartE2EDuration="1m23.74426915s" podCreationTimestamp="2025-12-06 07:18:46 +0000 UTC" firstStartedPulling="2025-12-06 07:18:48.494971149 +0000 UTC m=+1290.896360019" lastFinishedPulling="2025-12-06 07:20:08.954711048 +0000 UTC m=+1371.356099918" observedRunningTime="2025-12-06 07:20:09.743502629 +0000 UTC m=+1372.144891489" watchObservedRunningTime="2025-12-06 07:20:09.74426915 +0000 UTC m=+1372.145658020" Dec 06 07:20:10 crc kubenswrapper[4895]: I1206 07:20:10.738431 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbd9g" event={"ID":"88ab7b06-3be3-44a1-acbf-8ba5ced20251","Type":"ContainerStarted","Data":"fb49fff66533fcf628e88175f3fa35ab5fdc9874101987f9611463b24c544069"} Dec 06 07:20:10 crc kubenswrapper[4895]: I1206 07:20:10.739393 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbd9g" Dec 06 07:20:10 crc kubenswrapper[4895]: I1206 07:20:10.757132 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbd9g" podStartSLOduration=4.107193291 podStartE2EDuration="1m25.757113139s" podCreationTimestamp="2025-12-06 07:18:45 +0000 UTC" firstStartedPulling="2025-12-06 07:18:48.472811368 +0000 UTC m=+1290.874200238" lastFinishedPulling="2025-12-06 07:20:10.122731216 +0000 UTC m=+1372.524120086" observedRunningTime="2025-12-06 07:20:10.755466785 +0000 UTC m=+1373.156855655" watchObservedRunningTime="2025-12-06 07:20:10.757113139 +0000 UTC m=+1373.158502009" Dec 06 07:20:14 crc kubenswrapper[4895]: I1206 07:20:14.826347 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-dq42h" event={"ID":"3e2fc835-9cf2-4e21-a1fc-d76cfafba632","Type":"ContainerStarted","Data":"bb552455cbc14e5f5dd990ebc37ef46b796919affa1f2c39a00ca71cbdb3b024"} Dec 06 07:20:14 crc kubenswrapper[4895]: I1206 07:20:14.827609 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-dq42h" Dec 06 07:20:14 crc kubenswrapper[4895]: I1206 07:20:14.829851 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2lbh2" event={"ID":"d61c0a11-5736-4747-889a-6dd520cbe269","Type":"ContainerStarted","Data":"ec2ea2e375743fe1bdc2c0da1647444bef2f9a78dc81620521e6e45a91fa6a20"} Dec 06 07:20:14 crc kubenswrapper[4895]: I1206 07:20:14.830132 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2lbh2" Dec 06 07:20:14 crc kubenswrapper[4895]: I1206 07:20:14.853143 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-dq42h" podStartSLOduration=7.40869827 podStartE2EDuration="1m29.853119042s" podCreationTimestamp="2025-12-06 07:18:45 +0000 UTC" firstStartedPulling="2025-12-06 07:18:48.489963903 +0000 UTC m=+1290.891352773" lastFinishedPulling="2025-12-06 07:20:10.934384675 +0000 UTC m=+1373.335773545" observedRunningTime="2025-12-06 07:20:14.843511773 +0000 UTC m=+1377.244900643" watchObservedRunningTime="2025-12-06 07:20:14.853119042 +0000 UTC m=+1377.254507912" Dec 06 07:20:14 crc kubenswrapper[4895]: I1206 07:20:14.863400 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2lbh2" podStartSLOduration=4.35131989 podStartE2EDuration="1m29.863381119s" podCreationTimestamp="2025-12-06 07:18:45 +0000 UTC" firstStartedPulling="2025-12-06 07:18:48.449949039 +0000 UTC m=+1290.851337909" lastFinishedPulling="2025-12-06 07:20:13.962010268 +0000 UTC m=+1376.363399138" observedRunningTime="2025-12-06 07:20:14.863233635 +0000 UTC m=+1377.264622515" watchObservedRunningTime="2025-12-06 07:20:14.863381119 +0000 UTC m=+1377.264769989" Dec 06 07:20:16 crc kubenswrapper[4895]: I1206 07:20:16.510841 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbd9g" Dec 06 07:20:26 crc kubenswrapper[4895]: I1206 07:20:26.370390 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2lbh2" Dec 06 07:20:27 crc kubenswrapper[4895]: I1206 07:20:27.099901 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-dq42h" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.445985 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cd484bb89-kwmgn"] Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.447760 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd484bb89-kwmgn" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.450809 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.451179 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.451355 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-2njsw" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.457903 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.458023 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cd484bb89-kwmgn"] Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.492891 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-567c455747-92nnt"] Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.500285 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-567c455747-92nnt" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.506866 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-567c455747-92nnt"] Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.508028 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.580123 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgxfc\" (UniqueName: \"kubernetes.io/projected/6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17-kube-api-access-sgxfc\") pod \"dnsmasq-dns-567c455747-92nnt\" (UID: \"6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17\") " pod="openstack/dnsmasq-dns-567c455747-92nnt" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.580165 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17-dns-svc\") pod \"dnsmasq-dns-567c455747-92nnt\" (UID: \"6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17\") " pod="openstack/dnsmasq-dns-567c455747-92nnt" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.580198 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/332535da-f309-484f-bf83-5bba59939235-config\") pod \"dnsmasq-dns-5cd484bb89-kwmgn\" (UID: \"332535da-f309-484f-bf83-5bba59939235\") " pod="openstack/dnsmasq-dns-5cd484bb89-kwmgn" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.580217 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jhjs\" (UniqueName: \"kubernetes.io/projected/332535da-f309-484f-bf83-5bba59939235-kube-api-access-5jhjs\") pod \"dnsmasq-dns-5cd484bb89-kwmgn\" (UID: \"332535da-f309-484f-bf83-5bba59939235\") " pod="openstack/dnsmasq-dns-5cd484bb89-kwmgn" Dec 06 
07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.580347 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17-config\") pod \"dnsmasq-dns-567c455747-92nnt\" (UID: \"6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17\") " pod="openstack/dnsmasq-dns-567c455747-92nnt" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.681994 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgxfc\" (UniqueName: \"kubernetes.io/projected/6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17-kube-api-access-sgxfc\") pod \"dnsmasq-dns-567c455747-92nnt\" (UID: \"6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17\") " pod="openstack/dnsmasq-dns-567c455747-92nnt" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.682056 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17-dns-svc\") pod \"dnsmasq-dns-567c455747-92nnt\" (UID: \"6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17\") " pod="openstack/dnsmasq-dns-567c455747-92nnt" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.682105 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/332535da-f309-484f-bf83-5bba59939235-config\") pod \"dnsmasq-dns-5cd484bb89-kwmgn\" (UID: \"332535da-f309-484f-bf83-5bba59939235\") " pod="openstack/dnsmasq-dns-5cd484bb89-kwmgn" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.682134 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jhjs\" (UniqueName: \"kubernetes.io/projected/332535da-f309-484f-bf83-5bba59939235-kube-api-access-5jhjs\") pod \"dnsmasq-dns-5cd484bb89-kwmgn\" (UID: \"332535da-f309-484f-bf83-5bba59939235\") " pod="openstack/dnsmasq-dns-5cd484bb89-kwmgn" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.682172 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17-config\") pod \"dnsmasq-dns-567c455747-92nnt\" (UID: \"6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17\") " pod="openstack/dnsmasq-dns-567c455747-92nnt" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.683729 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17-config\") pod \"dnsmasq-dns-567c455747-92nnt\" (UID: \"6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17\") " pod="openstack/dnsmasq-dns-567c455747-92nnt" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.683778 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/332535da-f309-484f-bf83-5bba59939235-config\") pod \"dnsmasq-dns-5cd484bb89-kwmgn\" (UID: \"332535da-f309-484f-bf83-5bba59939235\") " pod="openstack/dnsmasq-dns-5cd484bb89-kwmgn" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.684648 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17-dns-svc\") pod \"dnsmasq-dns-567c455747-92nnt\" (UID: \"6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17\") " pod="openstack/dnsmasq-dns-567c455747-92nnt" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.705606 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sgxfc\" (UniqueName: \"kubernetes.io/projected/6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17-kube-api-access-sgxfc\") pod \"dnsmasq-dns-567c455747-92nnt\" (UID: \"6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17\") " pod="openstack/dnsmasq-dns-567c455747-92nnt" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.707316 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jhjs\" (UniqueName: \"kubernetes.io/projected/332535da-f309-484f-bf83-5bba59939235-kube-api-access-5jhjs\") pod \"dnsmasq-dns-5cd484bb89-kwmgn\" (UID: \"332535da-f309-484f-bf83-5bba59939235\") " pod="openstack/dnsmasq-dns-5cd484bb89-kwmgn" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.770008 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd484bb89-kwmgn" Dec 06 07:20:42 crc kubenswrapper[4895]: I1206 07:20:42.823021 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-567c455747-92nnt" Dec 06 07:20:43 crc kubenswrapper[4895]: I1206 07:20:43.045372 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cd484bb89-kwmgn"] Dec 06 07:20:43 crc kubenswrapper[4895]: I1206 07:20:43.298831 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-567c455747-92nnt"] Dec 06 07:20:43 crc kubenswrapper[4895]: W1206 07:20:43.302338 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c7fd79f_2de0_4f93_8ac4_0a039b6f7b17.slice/crio-e4ffa2a5a78983fa0e9e02d7bd556cf7f1cf75cbb180f6d6ae0de78d56d5d99b WatchSource:0}: Error finding container e4ffa2a5a78983fa0e9e02d7bd556cf7f1cf75cbb180f6d6ae0de78d56d5d99b: Status 404 returned error can't find the container with id e4ffa2a5a78983fa0e9e02d7bd556cf7f1cf75cbb180f6d6ae0de78d56d5d99b Dec 06 07:20:44 crc kubenswrapper[4895]: I1206 07:20:44.062105 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567c455747-92nnt" event={"ID":"6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17","Type":"ContainerStarted","Data":"e4ffa2a5a78983fa0e9e02d7bd556cf7f1cf75cbb180f6d6ae0de78d56d5d99b"} Dec 06 07:20:44 crc kubenswrapper[4895]: I1206 07:20:44.062435 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd484bb89-kwmgn" event={"ID":"332535da-f309-484f-bf83-5bba59939235","Type":"ContainerStarted","Data":"9fd66cfd9ca69e07269dac7dc971d8cf02f2fe52315e1253fb85f19f947bd674"} Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.260110 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-567c455747-92nnt"] Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.328028 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bc4b48fc9-2zhlg"] Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.329697 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bc4b48fc9-2zhlg" Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.357757 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bc4b48fc9-2zhlg"] Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.427239 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99e640e9-4db4-4798-aaab-67d34ca04a5f-dns-svc\") pod \"dnsmasq-dns-bc4b48fc9-2zhlg\" (UID: \"99e640e9-4db4-4798-aaab-67d34ca04a5f\") " pod="openstack/dnsmasq-dns-bc4b48fc9-2zhlg" Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.427574 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g84x\" (UniqueName: \"kubernetes.io/projected/99e640e9-4db4-4798-aaab-67d34ca04a5f-kube-api-access-4g84x\") pod \"dnsmasq-dns-bc4b48fc9-2zhlg\" (UID: \"99e640e9-4db4-4798-aaab-67d34ca04a5f\") " pod="openstack/dnsmasq-dns-bc4b48fc9-2zhlg" Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.427677 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99e640e9-4db4-4798-aaab-67d34ca04a5f-config\") pod \"dnsmasq-dns-bc4b48fc9-2zhlg\" (UID: \"99e640e9-4db4-4798-aaab-67d34ca04a5f\") " pod="openstack/dnsmasq-dns-bc4b48fc9-2zhlg" Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.532063 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g84x\" (UniqueName: \"kubernetes.io/projected/99e640e9-4db4-4798-aaab-67d34ca04a5f-kube-api-access-4g84x\") pod \"dnsmasq-dns-bc4b48fc9-2zhlg\" (UID: \"99e640e9-4db4-4798-aaab-67d34ca04a5f\") " pod="openstack/dnsmasq-dns-bc4b48fc9-2zhlg" Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.532151 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99e640e9-4db4-4798-aaab-67d34ca04a5f-config\") pod \"dnsmasq-dns-bc4b48fc9-2zhlg\" (UID: \"99e640e9-4db4-4798-aaab-67d34ca04a5f\") " pod="openstack/dnsmasq-dns-bc4b48fc9-2zhlg" Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.532199 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99e640e9-4db4-4798-aaab-67d34ca04a5f-dns-svc\") pod \"dnsmasq-dns-bc4b48fc9-2zhlg\" (UID: \"99e640e9-4db4-4798-aaab-67d34ca04a5f\") " pod="openstack/dnsmasq-dns-bc4b48fc9-2zhlg" Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.533427 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99e640e9-4db4-4798-aaab-67d34ca04a5f-dns-svc\") pod \"dnsmasq-dns-bc4b48fc9-2zhlg\" (UID: \"99e640e9-4db4-4798-aaab-67d34ca04a5f\") " pod="openstack/dnsmasq-dns-bc4b48fc9-2zhlg" Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.535172 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99e640e9-4db4-4798-aaab-67d34ca04a5f-config\") pod \"dnsmasq-dns-bc4b48fc9-2zhlg\" (UID: \"99e640e9-4db4-4798-aaab-67d34ca04a5f\") " pod="openstack/dnsmasq-dns-bc4b48fc9-2zhlg" Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.571260 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g84x\" (UniqueName: 
\"kubernetes.io/projected/99e640e9-4db4-4798-aaab-67d34ca04a5f-kube-api-access-4g84x\") pod \"dnsmasq-dns-bc4b48fc9-2zhlg\" (UID: \"99e640e9-4db4-4798-aaab-67d34ca04a5f\") " pod="openstack/dnsmasq-dns-bc4b48fc9-2zhlg" Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.690367 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bc4b48fc9-2zhlg" Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.721152 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd484bb89-kwmgn"] Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.807053 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb666b895-jbl7g"] Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.814152 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb666b895-jbl7g" Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.844601 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb666b895-jbl7g"] Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.940436 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/734306a5-889f-46d6-a6cf-e8e83a01909b-dns-svc\") pod \"dnsmasq-dns-cb666b895-jbl7g\" (UID: \"734306a5-889f-46d6-a6cf-e8e83a01909b\") " pod="openstack/dnsmasq-dns-cb666b895-jbl7g" Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.941133 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t882p\" (UniqueName: \"kubernetes.io/projected/734306a5-889f-46d6-a6cf-e8e83a01909b-kube-api-access-t882p\") pod \"dnsmasq-dns-cb666b895-jbl7g\" (UID: \"734306a5-889f-46d6-a6cf-e8e83a01909b\") " pod="openstack/dnsmasq-dns-cb666b895-jbl7g" Dec 06 07:20:45 crc kubenswrapper[4895]: I1206 07:20:45.941267 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734306a5-889f-46d6-a6cf-e8e83a01909b-config\") pod \"dnsmasq-dns-cb666b895-jbl7g\" (UID: \"734306a5-889f-46d6-a6cf-e8e83a01909b\") " pod="openstack/dnsmasq-dns-cb666b895-jbl7g" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.043343 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/734306a5-889f-46d6-a6cf-e8e83a01909b-dns-svc\") pod \"dnsmasq-dns-cb666b895-jbl7g\" (UID: \"734306a5-889f-46d6-a6cf-e8e83a01909b\") " pod="openstack/dnsmasq-dns-cb666b895-jbl7g" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.043415 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t882p\" (UniqueName: \"kubernetes.io/projected/734306a5-889f-46d6-a6cf-e8e83a01909b-kube-api-access-t882p\") pod \"dnsmasq-dns-cb666b895-jbl7g\" (UID: \"734306a5-889f-46d6-a6cf-e8e83a01909b\") " pod="openstack/dnsmasq-dns-cb666b895-jbl7g" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.043456 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734306a5-889f-46d6-a6cf-e8e83a01909b-config\") pod \"dnsmasq-dns-cb666b895-jbl7g\" (UID: \"734306a5-889f-46d6-a6cf-e8e83a01909b\") " pod="openstack/dnsmasq-dns-cb666b895-jbl7g" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.044860 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/734306a5-889f-46d6-a6cf-e8e83a01909b-config\") pod \"dnsmasq-dns-cb666b895-jbl7g\" (UID: \"734306a5-889f-46d6-a6cf-e8e83a01909b\") " pod="openstack/dnsmasq-dns-cb666b895-jbl7g" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.045160 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/734306a5-889f-46d6-a6cf-e8e83a01909b-dns-svc\") pod \"dnsmasq-dns-cb666b895-jbl7g\" (UID: \"734306a5-889f-46d6-a6cf-e8e83a01909b\") " pod="openstack/dnsmasq-dns-cb666b895-jbl7g" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.066801 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t882p\" (UniqueName: \"kubernetes.io/projected/734306a5-889f-46d6-a6cf-e8e83a01909b-kube-api-access-t882p\") pod \"dnsmasq-dns-cb666b895-jbl7g\" (UID: \"734306a5-889f-46d6-a6cf-e8e83a01909b\") " pod="openstack/dnsmasq-dns-cb666b895-jbl7g" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.148448 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb666b895-jbl7g" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.163290 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bc4b48fc9-2zhlg"] Dec 06 07:20:46 crc kubenswrapper[4895]: W1206 07:20:46.221673 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99e640e9_4db4_4798_aaab_67d34ca04a5f.slice/crio-89b59e52f6a4b971750fb06eb6cfa4a75c0ecc92ba1ceed19fd37622da5ce94d WatchSource:0}: Error finding container 89b59e52f6a4b971750fb06eb6cfa4a75c0ecc92ba1ceed19fd37622da5ce94d: Status 404 returned error can't find the container with id 89b59e52f6a4b971750fb06eb6cfa4a75c0ecc92ba1ceed19fd37622da5ce94d Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.500008 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb666b895-jbl7g"] Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.601331 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.602858 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.605900 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-w9np8" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.606358 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.606410 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.606585 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.606726 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.606836 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.607601 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.631155 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.658136 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr2wx\" (UniqueName: \"kubernetes.io/projected/8fa39160-bfb2-49ae-b2ca-12c0e5788996-kube-api-access-tr2wx\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.658791 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8fa39160-bfb2-49ae-b2ca-12c0e5788996-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.659928 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.660055 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-config-data\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.660143 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.660328 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.661228 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.661501 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.661617 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8fa39160-bfb2-49ae-b2ca-12c0e5788996-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.661727 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.661817 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.764152 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.764213 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.764267 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr2wx\" (UniqueName: \"kubernetes.io/projected/8fa39160-bfb2-49ae-b2ca-12c0e5788996-kube-api-access-tr2wx\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.764293 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8fa39160-bfb2-49ae-b2ca-12c0e5788996-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.764331 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.764358 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-config-data\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.764383 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.764411 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.764569 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.764611 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.764628 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8fa39160-bfb2-49ae-b2ca-12c0e5788996-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.765238 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.766145 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-config-data\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.768920 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.769260 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.769712 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.770017 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.775277 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8fa39160-bfb2-49ae-b2ca-12c0e5788996-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.778112 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.778702 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8fa39160-bfb2-49ae-b2ca-12c0e5788996-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.790483 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.792150 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr2wx\" (UniqueName: \"kubernetes.io/projected/8fa39160-bfb2-49ae-b2ca-12c0e5788996-kube-api-access-tr2wx\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.857217 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.936630 4895 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.938787 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.945613 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.946534 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.946636 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.949969 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.953785 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-vl67g" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.954065 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.955240 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.960979 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.962565 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.991583 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.991659 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.991684 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.991707 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.998784 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.998993 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.999148 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd2ks\" (UniqueName: \"kubernetes.io/projected/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-kube-api-access-zd2ks\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.999195 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.999264 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.999360 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:46 crc kubenswrapper[4895]: I1206 07:20:46.999412 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.101483 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.101553 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd2ks\" (UniqueName: \"kubernetes.io/projected/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-kube-api-access-zd2ks\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.101574 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.101600 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.101630 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.101653 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.101683 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.101699 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.101715 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.101729 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.101762 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.104086 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") device mount path 
\"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.105741 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.110463 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.111739 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.111902 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.112640 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.131795 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.132049 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.132093 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.137398 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.138713 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-bc4b48fc9-2zhlg" event={"ID":"99e640e9-4db4-4798-aaab-67d34ca04a5f","Type":"ContainerStarted","Data":"89b59e52f6a4b971750fb06eb6cfa4a75c0ecc92ba1ceed19fd37622da5ce94d"} Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.139262 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd2ks\" (UniqueName: \"kubernetes.io/projected/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-kube-api-access-zd2ks\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.149697 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.156529 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb666b895-jbl7g" event={"ID":"734306a5-889f-46d6-a6cf-e8e83a01909b","Type":"ContainerStarted","Data":"48987c2e80a9b39efd1c49d4f25ae9782f5450d22887059e812bc20eff326624"} Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.351266 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:20:47 crc kubenswrapper[4895]: I1206 07:20:47.638010 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.000677 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.105265 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.107404 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.111041 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.111227 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.111983 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.113964 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-bd26w" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.118263 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.143901 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.183236 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e963d73b-d3f2-4c70-8dbd-687b3fc1962d","Type":"ContainerStarted","Data":"2b011e6c246ba4d129721d6e7f5a32b485b5cc59be1e5282f80fad17c8855132"} Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.194448 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8fa39160-bfb2-49ae-b2ca-12c0e5788996","Type":"ContainerStarted","Data":"3e4d540ae4e537163c24343f17c06d066bffd73daa70d6bb742d75a4b127b246"} Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.238930 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abb614a-de81-4c59-8c5b-27e6761f93c9-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.238977 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lksz9\" (UniqueName: \"kubernetes.io/projected/4abb614a-de81-4c59-8c5b-27e6761f93c9-kube-api-access-lksz9\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.239004 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.239027 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4abb614a-de81-4c59-8c5b-27e6761f93c9-config-data-default\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.239059 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4abb614a-de81-4c59-8c5b-27e6761f93c9-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.239102 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4abb614a-de81-4c59-8c5b-27e6761f93c9-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.239125 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4abb614a-de81-4c59-8c5b-27e6761f93c9-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.239169 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4abb614a-de81-4c59-8c5b-27e6761f93c9-kolla-config\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.341368 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4abb614a-de81-4c59-8c5b-27e6761f93c9-kolla-config\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.341509 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abb614a-de81-4c59-8c5b-27e6761f93c9-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.341545 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lksz9\" (UniqueName: \"kubernetes.io/projected/4abb614a-de81-4c59-8c5b-27e6761f93c9-kube-api-access-lksz9\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.341578 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.341611 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4abb614a-de81-4c59-8c5b-27e6761f93c9-config-data-default\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.341639 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4abb614a-de81-4c59-8c5b-27e6761f93c9-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.341663 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4abb614a-de81-4c59-8c5b-27e6761f93c9-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.341678 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4abb614a-de81-4c59-8c5b-27e6761f93c9-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.342951 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.343581 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4abb614a-de81-4c59-8c5b-27e6761f93c9-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.343978 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4abb614a-de81-4c59-8c5b-27e6761f93c9-kolla-config\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.344130 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4abb614a-de81-4c59-8c5b-27e6761f93c9-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.344646 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4abb614a-de81-4c59-8c5b-27e6761f93c9-config-data-default\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.359634 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4abb614a-de81-4c59-8c5b-27e6761f93c9-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.364553 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abb614a-de81-4c59-8c5b-27e6761f93c9-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.397484 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: 
\"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.409952 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lksz9\" (UniqueName: \"kubernetes.io/projected/4abb614a-de81-4c59-8c5b-27e6761f93c9-kube-api-access-lksz9\") pod \"openstack-galera-0\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " pod="openstack/openstack-galera-0" Dec 06 07:20:48 crc kubenswrapper[4895]: I1206 07:20:48.447651 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.150386 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.265223 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4abb614a-de81-4c59-8c5b-27e6761f93c9","Type":"ContainerStarted","Data":"c2603757c5c38237952c060bc1ee8fb4b69347282ed65082ce00f1d2840856f5"} Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.344615 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.347221 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.353170 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.353460 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.353904 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-pr2gm" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.354375 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.360172 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.450953 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.451000 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.451054 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xczxs\" (UniqueName: \"kubernetes.io/projected/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-kube-api-access-xczxs\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 
07:20:49.451077 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.451124 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.451180 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.451200 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.451396 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.511013 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.512578 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.517639 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.517714 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.517884 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-nsjts" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.552700 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.553386 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.553505 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\") " pod="openstack/memcached-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.553579 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-config-data\") pod \"memcached-0\" (UID: \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\") " pod="openstack/memcached-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.553607 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.553694 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.553832 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.554118 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.554154 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" 
(UniqueName: \"kubernetes.io/empty-dir/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.555051 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.555526 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.556099 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.556142 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.556180 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xczxs\" (UniqueName: \"kubernetes.io/projected/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-kube-api-access-xczxs\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.556219 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-kolla-config\") pod \"memcached-0\" (UID: \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\") " pod="openstack/memcached-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.556248 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.556276 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\") " pod="openstack/memcached-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.556302 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zll96\" (UniqueName: 
\"kubernetes.io/projected/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-kube-api-access-zll96\") pod \"memcached-0\" (UID: \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\") " pod="openstack/memcached-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.560402 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.578023 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.589253 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xczxs\" (UniqueName: \"kubernetes.io/projected/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-kube-api-access-xczxs\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.605356 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.631544 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.662273 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\") " pod="openstack/memcached-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.662346 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-config-data\") pod \"memcached-0\" (UID: \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\") " pod="openstack/memcached-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.662414 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-kolla-config\") pod \"memcached-0\" (UID: \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\") " pod="openstack/memcached-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.662440 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\") " pod="openstack/memcached-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.662461 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zll96\" (UniqueName: \"kubernetes.io/projected/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-kube-api-access-zll96\") pod \"memcached-0\" (UID: \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\") " pod="openstack/memcached-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.666426 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-config-data\") pod \"memcached-0\" (UID: \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\") " pod="openstack/memcached-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.668441 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-kolla-config\") pod \"memcached-0\" (UID: \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\") " pod="openstack/memcached-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.695112 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\") " pod="openstack/memcached-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.695523 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.699384 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\") " pod="openstack/memcached-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.724038 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zll96\" (UniqueName: \"kubernetes.io/projected/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-kube-api-access-zll96\") pod \"memcached-0\" (UID: \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\") " pod="openstack/memcached-0" Dec 06 07:20:49 crc kubenswrapper[4895]: I1206 07:20:49.848896 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 06 07:20:50 crc kubenswrapper[4895]: I1206 07:20:50.296598 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 06 07:20:50 crc kubenswrapper[4895]: W1206 07:20:50.329004 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31a6771e_d46b_42cc_bbca_9d2ddbf24bb5.slice/crio-621698424d5fe4a179660d3c7fcacb49d9f383e8e276df690c9ad31e17e8fa8d WatchSource:0}: Error finding container 621698424d5fe4a179660d3c7fcacb49d9f383e8e276df690c9ad31e17e8fa8d: Status 404 returned error can't find the container with id 621698424d5fe4a179660d3c7fcacb49d9f383e8e276df690c9ad31e17e8fa8d Dec 06 07:20:50 crc kubenswrapper[4895]: I1206 07:20:50.588214 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 06 07:20:51 crc kubenswrapper[4895]: I1206 07:20:51.369534 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5","Type":"ContainerStarted","Data":"621698424d5fe4a179660d3c7fcacb49d9f383e8e276df690c9ad31e17e8fa8d"} Dec 06 07:20:51 crc kubenswrapper[4895]: I1206 07:20:51.378642 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6","Type":"ContainerStarted","Data":"afecd2b271bf0233ba1afba101d50f61eef3adfd809474ed6c74434cf5d0edd5"} Dec 06 07:20:51 crc kubenswrapper[4895]: I1206 07:20:51.535152 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 07:20:51 crc kubenswrapper[4895]: I1206 07:20:51.537290 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 06 07:20:51 crc kubenswrapper[4895]: I1206 07:20:51.541255 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-4l4lv" Dec 06 07:20:51 crc kubenswrapper[4895]: I1206 07:20:51.565554 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 07:20:51 crc kubenswrapper[4895]: I1206 07:20:51.609284 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q82w\" (UniqueName: \"kubernetes.io/projected/0be2f95c-5bc7-4080-8396-382e4d3bd7da-kube-api-access-2q82w\") pod \"kube-state-metrics-0\" (UID: \"0be2f95c-5bc7-4080-8396-382e4d3bd7da\") " pod="openstack/kube-state-metrics-0" Dec 06 07:20:51 crc kubenswrapper[4895]: I1206 07:20:51.710498 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q82w\" (UniqueName: \"kubernetes.io/projected/0be2f95c-5bc7-4080-8396-382e4d3bd7da-kube-api-access-2q82w\") pod \"kube-state-metrics-0\" (UID: \"0be2f95c-5bc7-4080-8396-382e4d3bd7da\") " pod="openstack/kube-state-metrics-0" Dec 06 07:20:51 crc kubenswrapper[4895]: I1206 07:20:51.740189 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q82w\" (UniqueName: \"kubernetes.io/projected/0be2f95c-5bc7-4080-8396-382e4d3bd7da-kube-api-access-2q82w\") pod \"kube-state-metrics-0\" (UID: \"0be2f95c-5bc7-4080-8396-382e4d3bd7da\") " pod="openstack/kube-state-metrics-0" Dec 06 07:20:51 crc kubenswrapper[4895]: I1206 07:20:51.887255 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 06 07:20:52 crc kubenswrapper[4895]: I1206 07:20:52.682048 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 07:20:52 crc kubenswrapper[4895]: W1206 07:20:52.792341 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0be2f95c_5bc7_4080_8396_382e4d3bd7da.slice/crio-ff5e8cfb94fa46356d4e381b21cc25c0971edd4d8d9847505fd7ddf866c7d7a7 WatchSource:0}: Error finding container ff5e8cfb94fa46356d4e381b21cc25c0971edd4d8d9847505fd7ddf866c7d7a7: Status 404 returned error can't find the container with id ff5e8cfb94fa46356d4e381b21cc25c0971edd4d8d9847505fd7ddf866c7d7a7 Dec 06 07:20:53 crc kubenswrapper[4895]: I1206 07:20:53.547914 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0be2f95c-5bc7-4080-8396-382e4d3bd7da","Type":"ContainerStarted","Data":"ff5e8cfb94fa46356d4e381b21cc25c0971edd4d8d9847505fd7ddf866c7d7a7"} Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.167639 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.172247 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.179638 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.179876 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.180005 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.180139 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.180281 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-xxlsc" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.212657 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.254340 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ec28d57e-8ecf-4415-b18f-69bfa0514187-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.254410 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec28d57e-8ecf-4415-b18f-69bfa0514187-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.254435 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec28d57e-8ecf-4415-b18f-69bfa0514187-config\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 
07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.254486 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g8w7\" (UniqueName: \"kubernetes.io/projected/ec28d57e-8ecf-4415-b18f-69bfa0514187-kube-api-access-5g8w7\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.254527 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec28d57e-8ecf-4415-b18f-69bfa0514187-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.254559 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.254580 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec28d57e-8ecf-4415-b18f-69bfa0514187-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.254600 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec28d57e-8ecf-4415-b18f-69bfa0514187-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.357019 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec28d57e-8ecf-4415-b18f-69bfa0514187-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.357174 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.357290 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec28d57e-8ecf-4415-b18f-69bfa0514187-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.357650 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec28d57e-8ecf-4415-b18f-69bfa0514187-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.357786 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/ec28d57e-8ecf-4415-b18f-69bfa0514187-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.357851 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec28d57e-8ecf-4415-b18f-69bfa0514187-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.357880 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec28d57e-8ecf-4415-b18f-69bfa0514187-config\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.358359 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g8w7\" (UniqueName: \"kubernetes.io/projected/ec28d57e-8ecf-4415-b18f-69bfa0514187-kube-api-access-5g8w7\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.358860 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.359008 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ec28d57e-8ecf-4415-b18f-69bfa0514187-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.360362 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec28d57e-8ecf-4415-b18f-69bfa0514187-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.360820 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec28d57e-8ecf-4415-b18f-69bfa0514187-config\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.366997 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec28d57e-8ecf-4415-b18f-69bfa0514187-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.367894 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec28d57e-8ecf-4415-b18f-69bfa0514187-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.374062 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec28d57e-8ecf-4415-b18f-69bfa0514187-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.397675 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g8w7\" (UniqueName: \"kubernetes.io/projected/ec28d57e-8ecf-4415-b18f-69bfa0514187-kube-api-access-5g8w7\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.405999 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cwrlp"] Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.407218 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.411516 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.419050 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.419513 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.419644 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-6d2ds" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.458716 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-qnbdj"] Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.461263 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.467198 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/81e2c836-79af-46e7-8be8-a9b0ffdab060-ovn-controller-tls-certs\") pod \"ovn-controller-cwrlp\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.467257 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/81e2c836-79af-46e7-8be8-a9b0ffdab060-var-log-ovn\") pod \"ovn-controller-cwrlp\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.467287 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81e2c836-79af-46e7-8be8-a9b0ffdab060-scripts\") pod \"ovn-controller-cwrlp\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.467313 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/81e2c836-79af-46e7-8be8-a9b0ffdab060-var-run-ovn\") pod \"ovn-controller-cwrlp\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.467344 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81e2c836-79af-46e7-8be8-a9b0ffdab060-combined-ca-bundle\") pod \"ovn-controller-cwrlp\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.467375 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/81e2c836-79af-46e7-8be8-a9b0ffdab060-var-run\") pod \"ovn-controller-cwrlp\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.467446 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s89q6\" (UniqueName: \"kubernetes.io/projected/81e2c836-79af-46e7-8be8-a9b0ffdab060-kube-api-access-s89q6\") pod \"ovn-controller-cwrlp\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.481274 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cwrlp"] Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.499713 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qnbdj"] Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.513017 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.569008 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-var-run\") pod \"ovn-controller-ovs-qnbdj\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.569073 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/81e2c836-79af-46e7-8be8-a9b0ffdab060-ovn-controller-tls-certs\") pod \"ovn-controller-cwrlp\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.569108 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/81e2c836-79af-46e7-8be8-a9b0ffdab060-var-log-ovn\") pod \"ovn-controller-cwrlp\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.569140 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81e2c836-79af-46e7-8be8-a9b0ffdab060-scripts\") pod \"ovn-controller-cwrlp\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.569175 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81e2c836-79af-46e7-8be8-a9b0ffdab060-combined-ca-bundle\") pod \"ovn-controller-cwrlp\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.569230 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vctn\" (UniqueName: \"kubernetes.io/projected/588e7e7b-f1fb-4e68-846a-04c6a23bec39-kube-api-access-8vctn\") pod \"ovn-controller-ovs-qnbdj\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.569264 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s89q6\" (UniqueName: \"kubernetes.io/projected/81e2c836-79af-46e7-8be8-a9b0ffdab060-kube-api-access-s89q6\") pod \"ovn-controller-cwrlp\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.569309 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-etc-ovs\") pod \"ovn-controller-ovs-qnbdj\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.570571 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/81e2c836-79af-46e7-8be8-a9b0ffdab060-var-run-ovn\") pod \"ovn-controller-cwrlp\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.570643 
4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-var-lib\") pod \"ovn-controller-ovs-qnbdj\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.570684 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/81e2c836-79af-46e7-8be8-a9b0ffdab060-var-run\") pod \"ovn-controller-cwrlp\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.570744 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/588e7e7b-f1fb-4e68-846a-04c6a23bec39-scripts\") pod \"ovn-controller-ovs-qnbdj\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.570770 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-var-log\") pod \"ovn-controller-ovs-qnbdj\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.571189 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/81e2c836-79af-46e7-8be8-a9b0ffdab060-var-log-ovn\") pod \"ovn-controller-cwrlp\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.575599 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81e2c836-79af-46e7-8be8-a9b0ffdab060-scripts\") pod \"ovn-controller-cwrlp\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.582196 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81e2c836-79af-46e7-8be8-a9b0ffdab060-combined-ca-bundle\") pod \"ovn-controller-cwrlp\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.582566 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/81e2c836-79af-46e7-8be8-a9b0ffdab060-ovn-controller-tls-certs\") pod \"ovn-controller-cwrlp\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.588142 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/81e2c836-79af-46e7-8be8-a9b0ffdab060-var-run-ovn\") pod \"ovn-controller-cwrlp\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.588274 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/81e2c836-79af-46e7-8be8-a9b0ffdab060-var-run\") pod \"ovn-controller-cwrlp\" (UID: 
\"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.597618 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s89q6\" (UniqueName: \"kubernetes.io/projected/81e2c836-79af-46e7-8be8-a9b0ffdab060-kube-api-access-s89q6\") pod \"ovn-controller-cwrlp\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.693584 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vctn\" (UniqueName: \"kubernetes.io/projected/588e7e7b-f1fb-4e68-846a-04c6a23bec39-kube-api-access-8vctn\") pod \"ovn-controller-ovs-qnbdj\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.693703 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-etc-ovs\") pod \"ovn-controller-ovs-qnbdj\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.693774 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-var-lib\") pod \"ovn-controller-ovs-qnbdj\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.693827 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/588e7e7b-f1fb-4e68-846a-04c6a23bec39-scripts\") pod \"ovn-controller-ovs-qnbdj\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.693856 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-var-log\") pod \"ovn-controller-ovs-qnbdj\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.693900 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-var-run\") pod \"ovn-controller-ovs-qnbdj\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.694059 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-var-run\") pod \"ovn-controller-ovs-qnbdj\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.694244 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-var-lib\") pod \"ovn-controller-ovs-qnbdj\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.694370 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-etc-ovs\") pod \"ovn-controller-ovs-qnbdj\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.694456 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-var-log\") pod \"ovn-controller-ovs-qnbdj\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.703972 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/588e7e7b-f1fb-4e68-846a-04c6a23bec39-scripts\") pod \"ovn-controller-ovs-qnbdj\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.718298 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vctn\" (UniqueName: \"kubernetes.io/projected/588e7e7b-f1fb-4e68-846a-04c6a23bec39-kube-api-access-8vctn\") pod \"ovn-controller-ovs-qnbdj\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.800096 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cwrlp" Dec 06 07:20:56 crc kubenswrapper[4895]: I1206 07:20:56.816634 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:20:57 crc kubenswrapper[4895]: I1206 07:20:57.526005 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v8zlq"] Dec 06 07:20:57 crc kubenswrapper[4895]: I1206 07:20:57.532779 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8zlq" Dec 06 07:20:57 crc kubenswrapper[4895]: I1206 07:20:57.544322 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8zlq"] Dec 06 07:20:57 crc kubenswrapper[4895]: I1206 07:20:57.613002 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c9b12cc-003a-4eb7-b13c-9e310e7b3587-utilities\") pod \"redhat-marketplace-v8zlq\" (UID: \"5c9b12cc-003a-4eb7-b13c-9e310e7b3587\") " pod="openshift-marketplace/redhat-marketplace-v8zlq" Dec 06 07:20:57 crc kubenswrapper[4895]: I1206 07:20:57.613063 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlcnb\" (UniqueName: \"kubernetes.io/projected/5c9b12cc-003a-4eb7-b13c-9e310e7b3587-kube-api-access-tlcnb\") pod \"redhat-marketplace-v8zlq\" (UID: \"5c9b12cc-003a-4eb7-b13c-9e310e7b3587\") " pod="openshift-marketplace/redhat-marketplace-v8zlq" Dec 06 07:20:57 crc kubenswrapper[4895]: I1206 07:20:57.613134 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c9b12cc-003a-4eb7-b13c-9e310e7b3587-catalog-content\") pod \"redhat-marketplace-v8zlq\" (UID: \"5c9b12cc-003a-4eb7-b13c-9e310e7b3587\") " pod="openshift-marketplace/redhat-marketplace-v8zlq" Dec 06 07:20:57 crc kubenswrapper[4895]: I1206 07:20:57.714569 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c9b12cc-003a-4eb7-b13c-9e310e7b3587-catalog-content\") pod \"redhat-marketplace-v8zlq\" (UID: \"5c9b12cc-003a-4eb7-b13c-9e310e7b3587\") " pod="openshift-marketplace/redhat-marketplace-v8zlq" Dec 06 07:20:57 crc kubenswrapper[4895]: I1206 07:20:57.714674 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c9b12cc-003a-4eb7-b13c-9e310e7b3587-utilities\") pod \"redhat-marketplace-v8zlq\" (UID: \"5c9b12cc-003a-4eb7-b13c-9e310e7b3587\") " pod="openshift-marketplace/redhat-marketplace-v8zlq" Dec 06 07:20:57 crc kubenswrapper[4895]: I1206 07:20:57.714703 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlcnb\" (UniqueName: \"kubernetes.io/projected/5c9b12cc-003a-4eb7-b13c-9e310e7b3587-kube-api-access-tlcnb\") pod \"redhat-marketplace-v8zlq\" (UID: \"5c9b12cc-003a-4eb7-b13c-9e310e7b3587\") " pod="openshift-marketplace/redhat-marketplace-v8zlq" Dec 06 07:20:57 crc kubenswrapper[4895]: I1206 07:20:57.715574 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c9b12cc-003a-4eb7-b13c-9e310e7b3587-catalog-content\") pod \"redhat-marketplace-v8zlq\" (UID: \"5c9b12cc-003a-4eb7-b13c-9e310e7b3587\") " pod="openshift-marketplace/redhat-marketplace-v8zlq" Dec 06 07:20:57 crc kubenswrapper[4895]: I1206 07:20:57.715808 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c9b12cc-003a-4eb7-b13c-9e310e7b3587-utilities\") pod \"redhat-marketplace-v8zlq\" (UID: \"5c9b12cc-003a-4eb7-b13c-9e310e7b3587\") " pod="openshift-marketplace/redhat-marketplace-v8zlq" Dec 06 07:20:57 crc kubenswrapper[4895]: I1206 07:20:57.732623 4895 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tlcnb\" (UniqueName: \"kubernetes.io/projected/5c9b12cc-003a-4eb7-b13c-9e310e7b3587-kube-api-access-tlcnb\") pod \"redhat-marketplace-v8zlq\" (UID: \"5c9b12cc-003a-4eb7-b13c-9e310e7b3587\") " pod="openshift-marketplace/redhat-marketplace-v8zlq" Dec 06 07:20:57 crc kubenswrapper[4895]: I1206 07:20:57.872658 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8zlq" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.016816 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.020700 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.023320 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.023406 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.023850 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.024178 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-d489j" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.034131 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.145026 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnthg\" (UniqueName: \"kubernetes.io/projected/a617d6b4-721c-4087-bc16-70bcb58b9c69-kube-api-access-fnthg\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.145169 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a617d6b4-721c-4087-bc16-70bcb58b9c69-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.145198 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a617d6b4-721c-4087-bc16-70bcb58b9c69-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.145307 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.145385 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a617d6b4-721c-4087-bc16-70bcb58b9c69-config\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 
07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.145415 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a617d6b4-721c-4087-bc16-70bcb58b9c69-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.145446 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a617d6b4-721c-4087-bc16-70bcb58b9c69-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.145507 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a617d6b4-721c-4087-bc16-70bcb58b9c69-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.247258 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.247343 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a617d6b4-721c-4087-bc16-70bcb58b9c69-config\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.247385 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a617d6b4-721c-4087-bc16-70bcb58b9c69-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.247418 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a617d6b4-721c-4087-bc16-70bcb58b9c69-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.247459 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a617d6b4-721c-4087-bc16-70bcb58b9c69-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.247512 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnthg\" (UniqueName: \"kubernetes.io/projected/a617d6b4-721c-4087-bc16-70bcb58b9c69-kube-api-access-fnthg\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.247564 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a617d6b4-721c-4087-bc16-70bcb58b9c69-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.247588 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a617d6b4-721c-4087-bc16-70bcb58b9c69-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.247810 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.248918 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a617d6b4-721c-4087-bc16-70bcb58b9c69-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.249653 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a617d6b4-721c-4087-bc16-70bcb58b9c69-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.250400 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a617d6b4-721c-4087-bc16-70bcb58b9c69-config\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.256399 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a617d6b4-721c-4087-bc16-70bcb58b9c69-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.258720 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a617d6b4-721c-4087-bc16-70bcb58b9c69-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.264536 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a617d6b4-721c-4087-bc16-70bcb58b9c69-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.274289 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnthg\" (UniqueName: \"kubernetes.io/projected/a617d6b4-721c-4087-bc16-70bcb58b9c69-kube-api-access-fnthg\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.285657 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " pod="openstack/ovsdbserver-sb-0" Dec 06 07:20:59 crc kubenswrapper[4895]: I1206 07:20:59.352858 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 06 07:21:12 crc kubenswrapper[4895]: E1206 07:21:12.927975 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:d9a2d8b19d3de4658dd26a2e781d00002e937738bfe6a1d0cf6c68c015085f4a" Dec 06 07:21:12 crc kubenswrapper[4895]: E1206 07:21:12.929058 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:d9a2d8b19d3de4658dd26a2e781d00002e937738bfe6a1d0cf6c68c015085f4a,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xczxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(31a6771e-d46b-42cc-bbca-9d2ddbf24bb5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:21:12 crc kubenswrapper[4895]: E1206 07:21:12.930326 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" Dec 06 07:21:12 
crc kubenswrapper[4895]: E1206 07:21:12.949277 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:d9a2d8b19d3de4658dd26a2e781d00002e937738bfe6a1d0cf6c68c015085f4a"
Dec 06 07:21:12 crc kubenswrapper[4895]: E1206 07:21:12.949491 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:d9a2d8b19d3de4658dd26a2e781d00002e937738bfe6a1d0cf6c68c015085f4a,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lksz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(4abb614a-de81-4c59-8c5b-27e6761f93c9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 06 07:21:12 crc kubenswrapper[4895]: E1206 07:21:12.950675 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="4abb614a-de81-4c59-8c5b-27e6761f93c9"
Dec 06 07:21:13 crc kubenswrapper[4895]: E1206 07:21:13.788051 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:d9a2d8b19d3de4658dd26a2e781d00002e937738bfe6a1d0cf6c68c015085f4a\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="31a6771e-d46b-42cc-bbca-9d2ddbf24bb5"
Dec 06 07:21:13 crc kubenswrapper[4895]: E1206 07:21:13.788128 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:d9a2d8b19d3de4658dd26a2e781d00002e937738bfe6a1d0cf6c68c015085f4a\\\"\"" pod="openstack/openstack-galera-0" podUID="4abb614a-de81-4c59-8c5b-27e6761f93c9"
Dec 06 07:21:14 crc kubenswrapper[4895]: E1206 07:21:14.134831 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c2e607504d3c90c201f29ca74f0ee51de1ef2c5444d597d2ff3160ab9b3ce84d"
Dec 06 07:21:14 crc kubenswrapper[4895]: E1206 07:21:14.135163 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c2e607504d3c90c201f29ca74f0ee51de1ef2c5444d597d2ff3160ab9b3ce84d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tr2wx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(8fa39160-bfb2-49ae-b2ca-12c0e5788996): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 06 07:21:14 crc kubenswrapper[4895]: E1206 07:21:14.136365 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="8fa39160-bfb2-49ae-b2ca-12c0e5788996"
Dec 06 07:21:14 crc kubenswrapper[4895]: E1206 07:21:14.796685 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c2e607504d3c90c201f29ca74f0ee51de1ef2c5444d597d2ff3160ab9b3ce84d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="8fa39160-bfb2-49ae-b2ca-12c0e5788996"
Dec 06 07:21:20 crc kubenswrapper[4895]: E1206 07:21:20.914036 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached@sha256:dad2336390cae6705133deefaa09c9e39512cf29133aa009006e3962c8022108"
Dec 06 07:21:20 crc kubenswrapper[4895]: E1206 07:21:20.914755 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached@sha256:dad2336390cae6705133deefaa09c9e39512cf29133aa009006e3962c8022108,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n564h5bbhbfhc9h5f7h5f9h588h5fbh687h7ch5c5h5f5h649h58dh68fh575h7bh5bfh9chcch657h7fh56h5ffh587h5cch696h556hbh548h56ch58q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zll96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(fba8bc40-d348-4f8f-aeb6-aa2e46d908d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 06 07:21:20 crc kubenswrapper[4895]: E1206 07:21:20.916021 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="fba8bc40-d348-4f8f-aeb6-aa2e46d908d6"
Dec 06 07:21:20 crc kubenswrapper[4895]: E1206 07:21:20.927023 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c2e607504d3c90c201f29ca74f0ee51de1ef2c5444d597d2ff3160ab9b3ce84d"
Dec 06 07:21:20 crc kubenswrapper[4895]: E1206 07:21:20.927252 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c2e607504d3c90c201f29ca74f0ee51de1ef2c5444d597d2ff3160ab9b3ce84d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zd2ks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(e963d73b-d3f2-4c70-8dbd-687b3fc1962d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 06 07:21:20 crc kubenswrapper[4895]: E1206 07:21:20.928510 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="e963d73b-d3f2-4c70-8dbd-687b3fc1962d"
Dec 06 07:21:21 crc kubenswrapper[4895]: E1206 07:21:21.834446 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792"
Dec 06 07:21:21 crc kubenswrapper[4895]: E1206 07:21:21.834847 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4g84x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-bc4b48fc9-2zhlg_openstack(99e640e9-4db4-4798-aaab-67d34ca04a5f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 06 07:21:21 crc kubenswrapper[4895]: E1206 07:21:21.836013 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-bc4b48fc9-2zhlg" podUID="99e640e9-4db4-4798-aaab-67d34ca04a5f"
Dec 06 07:21:21 crc kubenswrapper[4895]: E1206 07:21:21.860615 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792"
Dec 06 07:21:21 crc kubenswrapper[4895]: E1206 07:21:21.860811 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t882p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-cb666b895-jbl7g_openstack(734306a5-889f-46d6-a6cf-e8e83a01909b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 06 07:21:21 crc kubenswrapper[4895]: E1206 07:21:21.864651 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-cb666b895-jbl7g" podUID="734306a5-889f-46d6-a6cf-e8e83a01909b"
Dec 06 07:21:21 crc kubenswrapper[4895]: E1206 07:21:21.904226 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached@sha256:dad2336390cae6705133deefaa09c9e39512cf29133aa009006e3962c8022108\\\"\"" pod="openstack/memcached-0" podUID="fba8bc40-d348-4f8f-aeb6-aa2e46d908d6"
Dec 06 07:21:21 crc kubenswrapper[4895]: E1206 07:21:21.904860 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c2e607504d3c90c201f29ca74f0ee51de1ef2c5444d597d2ff3160ab9b3ce84d\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="e963d73b-d3f2-4c70-8dbd-687b3fc1962d"
Dec 06 07:21:21 crc kubenswrapper[4895]: E1206 07:21:21.904927 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792\\\"\"" pod="openstack/dnsmasq-dns-bc4b48fc9-2zhlg" podUID="99e640e9-4db4-4798-aaab-67d34ca04a5f"
Dec 06 07:21:21 crc kubenswrapper[4895]: E1206 07:21:21.906350 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792\\\"\"" pod="openstack/dnsmasq-dns-cb666b895-jbl7g" podUID="734306a5-889f-46d6-a6cf-e8e83a01909b"
Dec 06 07:21:22 crc kubenswrapper[4895]: I1206 07:21:22.221411 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cwrlp"]
Dec 06 07:21:22 crc kubenswrapper[4895]: W1206 07:21:22.228575 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81e2c836_79af_46e7_8be8_a9b0ffdab060.slice/crio-7caeda06fed1ebef38d062bd202da4cb174b8733061b87620d3f54ff5648b873 WatchSource:0}: Error finding container 7caeda06fed1ebef38d062bd202da4cb174b8733061b87620d3f54ff5648b873: Status 404 returned error can't find the container with id 7caeda06fed1ebef38d062bd202da4cb174b8733061b87620d3f54ff5648b873
Dec 06 07:21:22 crc kubenswrapper[4895]: I1206 07:21:22.231456 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 06 07:21:22 crc kubenswrapper[4895]: E1206 07:21:22.322601 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792"
Dec 06 07:21:22 crc kubenswrapper[4895]: E1206 07:21:22.323555 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sgxfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-567c455747-92nnt_openstack(6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 06 07:21:22 crc kubenswrapper[4895]: E1206 07:21:22.324848 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-567c455747-92nnt" podUID="6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17"
Dec 06 07:21:22 crc kubenswrapper[4895]: E1206 07:21:22.354698 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792"
Dec 06 07:21:22 crc kubenswrapper[4895]: E1206 07:21:22.354927 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jhjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5cd484bb89-kwmgn_openstack(332535da-f309-484f-bf83-5bba59939235): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 06 07:21:22 crc kubenswrapper[4895]: E1206 07:21:22.356348 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5cd484bb89-kwmgn" podUID="332535da-f309-484f-bf83-5bba59939235"
Dec 06 07:21:22 crc kubenswrapper[4895]: I1206 07:21:22.576240 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Dec 06 07:21:22 crc kubenswrapper[4895]: I1206 07:21:22.601317 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8zlq"]
Dec 06 07:21:22 crc kubenswrapper[4895]: W1206 07:21:22.626683 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c9b12cc_003a_4eb7_b13c_9e310e7b3587.slice/crio-33de42ced3b04ec4c81f3c0cb929c6a4df54fa181e13f73777f930fe7bc18ee8 WatchSource:0}: Error finding container 33de42ced3b04ec4c81f3c0cb929c6a4df54fa181e13f73777f930fe7bc18ee8: Status 404 returned error can't find the container with id 33de42ced3b04ec4c81f3c0cb929c6a4df54fa181e13f73777f930fe7bc18ee8
Dec 06 07:21:22 crc kubenswrapper[4895]: W1206 07:21:22.633832 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda617d6b4_721c_4087_bc16_70bcb58b9c69.slice/crio-4397b88d2b64bc2761866ba342fa4bfe20c8e256a4f354bc51bf1798cb2ed16b WatchSource:0}: Error finding container 4397b88d2b64bc2761866ba342fa4bfe20c8e256a4f354bc51bf1798cb2ed16b: Status 404 returned error can't find the container with id 4397b88d2b64bc2761866ba342fa4bfe20c8e256a4f354bc51bf1798cb2ed16b
Dec 06 07:21:22 crc kubenswrapper[4895]: I1206 07:21:22.728235 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qnbdj"]
Dec 06 07:21:22 crc kubenswrapper[4895]: W1206 07:21:22.866753 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod588e7e7b_f1fb_4e68_846a_04c6a23bec39.slice/crio-d3669cae0eb9b06d41cf7b5e39ddf41b4ba898dfcaf5121ad003bb69b6722906 WatchSource:0}: Error finding container d3669cae0eb9b06d41cf7b5e39ddf41b4ba898dfcaf5121ad003bb69b6722906: Status 404 returned error can't find the container with id d3669cae0eb9b06d41cf7b5e39ddf41b4ba898dfcaf5121ad003bb69b6722906
Dec 06 07:21:22 crc kubenswrapper[4895]: I1206 07:21:22.907131 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cwrlp" event={"ID":"81e2c836-79af-46e7-8be8-a9b0ffdab060","Type":"ContainerStarted","Data":"7caeda06fed1ebef38d062bd202da4cb174b8733061b87620d3f54ff5648b873"}
Dec 06 07:21:22 crc kubenswrapper[4895]: I1206 07:21:22.908657 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8zlq" event={"ID":"5c9b12cc-003a-4eb7-b13c-9e310e7b3587","Type":"ContainerStarted","Data":"33de42ced3b04ec4c81f3c0cb929c6a4df54fa181e13f73777f930fe7bc18ee8"}
Dec 06 07:21:22 crc kubenswrapper[4895]: I1206 07:21:22.910721 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a617d6b4-721c-4087-bc16-70bcb58b9c69","Type":"ContainerStarted","Data":"4397b88d2b64bc2761866ba342fa4bfe20c8e256a4f354bc51bf1798cb2ed16b"}
Dec 06 07:21:22 crc kubenswrapper[4895]: I1206 07:21:22.912157 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qnbdj" event={"ID":"588e7e7b-f1fb-4e68-846a-04c6a23bec39","Type":"ContainerStarted","Data":"d3669cae0eb9b06d41cf7b5e39ddf41b4ba898dfcaf5121ad003bb69b6722906"}
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.475050 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd484bb89-kwmgn"
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.500660 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-567c455747-92nnt"
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.576507 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgxfc\" (UniqueName: \"kubernetes.io/projected/6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17-kube-api-access-sgxfc\") pod \"6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17\" (UID: \"6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17\") "
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.576618 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17-dns-svc\") pod \"6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17\" (UID: \"6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17\") "
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.576766 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jhjs\" (UniqueName: \"kubernetes.io/projected/332535da-f309-484f-bf83-5bba59939235-kube-api-access-5jhjs\") pod \"332535da-f309-484f-bf83-5bba59939235\" (UID: \"332535da-f309-484f-bf83-5bba59939235\") "
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.576963 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17-config\") pod \"6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17\" (UID: \"6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17\") "
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.576988 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/332535da-f309-484f-bf83-5bba59939235-config\") pod \"332535da-f309-484f-bf83-5bba59939235\" (UID: \"332535da-f309-484f-bf83-5bba59939235\") "
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.577427 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17" (UID: "6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.577564 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17-config" (OuterVolumeSpecName: "config") pod "6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17" (UID: "6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.578012 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/332535da-f309-484f-bf83-5bba59939235-config" (OuterVolumeSpecName: "config") pod "332535da-f309-484f-bf83-5bba59939235" (UID: "332535da-f309-484f-bf83-5bba59939235"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.578058 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17-config\") on node \"crc\" DevicePath \"\""
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.578074 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.595568 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/332535da-f309-484f-bf83-5bba59939235-kube-api-access-5jhjs" (OuterVolumeSpecName: "kube-api-access-5jhjs") pod "332535da-f309-484f-bf83-5bba59939235" (UID: "332535da-f309-484f-bf83-5bba59939235"). InnerVolumeSpecName "kube-api-access-5jhjs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.599298 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17-kube-api-access-sgxfc" (OuterVolumeSpecName: "kube-api-access-sgxfc") pod "6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17" (UID: "6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17"). InnerVolumeSpecName "kube-api-access-sgxfc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.666462 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.679417 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jhjs\" (UniqueName: \"kubernetes.io/projected/332535da-f309-484f-bf83-5bba59939235-kube-api-access-5jhjs\") on node \"crc\" DevicePath \"\""
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.679455 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/332535da-f309-484f-bf83-5bba59939235-config\") on node \"crc\" DevicePath \"\""
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.679489 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgxfc\" (UniqueName: \"kubernetes.io/projected/6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17-kube-api-access-sgxfc\") on node \"crc\" DevicePath \"\""
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.923957 4895 generic.go:334] "Generic (PLEG): container finished" podID="5c9b12cc-003a-4eb7-b13c-9e310e7b3587" containerID="0362091c183b18e143e0f980adc1e675f28484668f4e5c944aa104b681c32000" exitCode=0
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.924053 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8zlq" event={"ID":"5c9b12cc-003a-4eb7-b13c-9e310e7b3587","Type":"ContainerDied","Data":"0362091c183b18e143e0f980adc1e675f28484668f4e5c944aa104b681c32000"}
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.926070 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567c455747-92nnt" event={"ID":"6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17","Type":"ContainerDied","Data":"e4ffa2a5a78983fa0e9e02d7bd556cf7f1cf75cbb180f6d6ae0de78d56d5d99b"}
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.926206 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-567c455747-92nnt"
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.952766 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd484bb89-kwmgn" event={"ID":"332535da-f309-484f-bf83-5bba59939235","Type":"ContainerDied","Data":"9fd66cfd9ca69e07269dac7dc971d8cf02f2fe52315e1253fb85f19f947bd674"}
Dec 06 07:21:23 crc kubenswrapper[4895]: I1206 07:21:23.952862 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd484bb89-kwmgn"
Dec 06 07:21:24 crc kubenswrapper[4895]: I1206 07:21:24.000799 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-567c455747-92nnt"]
Dec 06 07:21:24 crc kubenswrapper[4895]: I1206 07:21:24.009775 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-567c455747-92nnt"]
Dec 06 07:21:24 crc kubenswrapper[4895]: I1206 07:21:24.032356 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd484bb89-kwmgn"]
Dec 06 07:21:24 crc kubenswrapper[4895]: I1206 07:21:24.040579 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cd484bb89-kwmgn"]
Dec 06 07:21:24 crc kubenswrapper[4895]: I1206 07:21:24.065633 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="332535da-f309-484f-bf83-5bba59939235" path="/var/lib/kubelet/pods/332535da-f309-484f-bf83-5bba59939235/volumes"
Dec 06 07:21:24 crc kubenswrapper[4895]: I1206 07:21:24.066109 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17" path="/var/lib/kubelet/pods/6c7fd79f-2de0-4f93-8ac4-0a039b6f7b17/volumes"
Dec 06 07:21:24 crc kubenswrapper[4895]: W1206 07:21:24.077219 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec28d57e_8ecf_4415_b18f_69bfa0514187.slice/crio-594536aae5c0a83a981308fc9927279c916f8b6760d5058163af37a69c11e9d5 WatchSource:0}: Error finding container 594536aae5c0a83a981308fc9927279c916f8b6760d5058163af37a69c11e9d5: Status 404 returned error can't find the container with id 594536aae5c0a83a981308fc9927279c916f8b6760d5058163af37a69c11e9d5
Dec 06 07:21:24 crc kubenswrapper[4895]: I1206 07:21:24.962114 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ec28d57e-8ecf-4415-b18f-69bfa0514187","Type":"ContainerStarted","Data":"594536aae5c0a83a981308fc9927279c916f8b6760d5058163af37a69c11e9d5"}
Dec 06 07:21:34 crc kubenswrapper[4895]: I1206 07:21:34.074986 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ec28d57e-8ecf-4415-b18f-69bfa0514187","Type":"ContainerStarted","Data":"214631b61aad2658bf3fd72e9c59668fc147ff6e83cf726ca8f32206cfcc972a"}
Dec 06 07:21:34 crc kubenswrapper[4895]: I1206 07:21:34.078971 4895 generic.go:334] "Generic (PLEG): container finished" podID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerID="250d7b8ff11407089b4da6523ff53810b8d0ae52221f94509678cf3767a8a85d" exitCode=0
Dec 06 07:21:34 crc kubenswrapper[4895]: I1206 07:21:34.079071 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qnbdj" event={"ID":"588e7e7b-f1fb-4e68-846a-04c6a23bec39","Type":"ContainerDied","Data":"250d7b8ff11407089b4da6523ff53810b8d0ae52221f94509678cf3767a8a85d"}
Dec 06 07:21:34 crc kubenswrapper[4895]: I1206 07:21:34.085408 4895 generic.go:334] "Generic (PLEG): container finished" podID="5c9b12cc-003a-4eb7-b13c-9e310e7b3587" containerID="d0d16ec43f6e195a9b2aefef6386759a0c575eccfccbd1003840320eef59eddb" exitCode=0
Dec 06 07:21:34 crc kubenswrapper[4895]: I1206 07:21:34.085522 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8zlq" event={"ID":"5c9b12cc-003a-4eb7-b13c-9e310e7b3587","Type":"ContainerDied","Data":"d0d16ec43f6e195a9b2aefef6386759a0c575eccfccbd1003840320eef59eddb"}
Dec 06 07:21:34 crc kubenswrapper[4895]: I1206 07:21:34.089110 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a617d6b4-721c-4087-bc16-70bcb58b9c69","Type":"ContainerStarted","Data":"84e285fd54592923b82baf0ba0638e2645a70b6ae64c423fef3c15fbd472c99c"}
Dec 06 07:21:34 crc kubenswrapper[4895]: I1206 07:21:34.091696 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0be2f95c-5bc7-4080-8396-382e4d3bd7da","Type":"ContainerStarted","Data":"f7fc0eeeac61074a45e8a57b78362927f6a3286a4c8d6c19aa3d157afad047b4"}
Dec 06 07:21:34 crc kubenswrapper[4895]: I1206 07:21:34.092432 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Dec 06 07:21:34 crc kubenswrapper[4895]: I1206 07:21:34.093652 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cwrlp" event={"ID":"81e2c836-79af-46e7-8be8-a9b0ffdab060","Type":"ContainerStarted","Data":"edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475"}
Dec 06 07:21:34 crc kubenswrapper[4895]: I1206 07:21:34.094144 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-cwrlp"
Dec 06 07:21:34 crc kubenswrapper[4895]: I1206 07:21:34.095624 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4abb614a-de81-4c59-8c5b-27e6761f93c9","Type":"ContainerStarted","Data":"c628e12ea50228621f1e41f4485c674a0036ba8ca8c24cb7cfcef246a700dd15"}
Dec 06 07:21:34 crc kubenswrapper[4895]: I1206 07:21:34.098118 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5","Type":"ContainerStarted","Data":"b92b460ccf694f6a4184f0124b88195172ac8c58d597a8666b73801a8c04c66e"}
Dec 06 07:21:34 crc kubenswrapper[4895]: I1206 07:21:34.171180 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-cwrlp" podStartSLOduration=28.278044159 podStartE2EDuration="38.171152956s" podCreationTimestamp="2025-12-06 07:20:56 +0000 UTC" firstStartedPulling="2025-12-06 07:21:22.231133077 +0000 UTC m=+1444.632521947" lastFinishedPulling="2025-12-06 07:21:32.124241834 +0000 UTC m=+1454.525630744" observedRunningTime="2025-12-06 07:21:34.154701566 +0000 UTC m=+1456.556090436" watchObservedRunningTime="2025-12-06 07:21:34.171152956 +0000 UTC m=+1456.572541836"
Dec 06 07:21:34 crc kubenswrapper[4895]: I1206 07:21:34.242831 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.930555263 podStartE2EDuration="43.242811513s" podCreationTimestamp="2025-12-06 07:20:51 +0000 UTC" firstStartedPulling="2025-12-06 07:20:52.811974233 +0000 UTC m=+1415.213363093" lastFinishedPulling="2025-12-06 07:21:32.124230473 +0000 UTC m=+1454.525619343" observedRunningTime="2025-12-06 07:21:34.235614271 +0000 UTC m=+1456.637003141" watchObservedRunningTime="2025-12-06 07:21:34.242811513 +0000 UTC m=+1456.644200383"
Dec 06 07:21:36 crc kubenswrapper[4895]: I1206 07:21:36.123290 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8fa39160-bfb2-49ae-b2ca-12c0e5788996","Type":"ContainerStarted","Data":"a69a5c75210b542e25ce1c72e591c0e062baf33da7df98918d86673278bac167"}
Dec 06 07:21:36 crc kubenswrapper[4895]: I1206 07:21:36.126525 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qnbdj" event={"ID":"588e7e7b-f1fb-4e68-846a-04c6a23bec39","Type":"ContainerStarted","Data":"6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be"}
Dec 06 07:21:36 crc kubenswrapper[4895]: I1206 07:21:36.126573 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qnbdj" event={"ID":"588e7e7b-f1fb-4e68-846a-04c6a23bec39","Type":"ContainerStarted","Data":"84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863"}
Dec 06 07:21:36 crc kubenswrapper[4895]: I1206 07:21:36.126922 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qnbdj"
Dec 06 07:21:36 crc kubenswrapper[4895]: I1206 07:21:36.816743 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qnbdj"
Dec 06 07:21:37 crc kubenswrapper[4895]: I1206 07:21:37.073071 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-qnbdj" podStartSLOduration=32.901587016 podStartE2EDuration="41.073051972s" podCreationTimestamp="2025-12-06 07:20:56 +0000 UTC" firstStartedPulling="2025-12-06 07:21:22.870950145 +0000 UTC m=+1445.272339015" lastFinishedPulling="2025-12-06 07:21:31.042415101 +0000 UTC m=+1453.443803971" observedRunningTime="2025-12-06 07:21:36.17143914 +0000 UTC m=+1458.572828010" watchObservedRunningTime="2025-12-06 07:21:37.073051972 +0000 UTC m=+1459.474440842"
Dec 06 07:21:38 crc kubenswrapper[4895]: I1206 07:21:38.145968 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8zlq" event={"ID":"5c9b12cc-003a-4eb7-b13c-9e310e7b3587","Type":"ContainerStarted","Data":"22907925c23f950daa15facf713a40ec0d8cb81e475176c73d04722d37a05635"}
Dec 06 07:21:38 crc kubenswrapper[4895]: I1206 07:21:38.175642 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v8zlq" podStartSLOduration=27.969825868 podStartE2EDuration="41.17561441s" podCreationTimestamp="2025-12-06 07:20:57 +0000 UTC" firstStartedPulling="2025-12-06 07:21:24.068638997 +0000 UTC m=+1446.470027867" lastFinishedPulling="2025-12-06 07:21:37.274427539 +0000 UTC m=+1459.675816409" observedRunningTime="2025-12-06 07:21:38.172945968 +0000 UTC m=+1460.574334838" watchObservedRunningTime="2025-12-06 07:21:38.17561441 +0000 UTC m=+1460.577003280"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.066959 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-mpfpb"]
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.068610 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mpfpb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.075716 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.085667 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mpfpb"]
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.214813 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d54fb90e-29d7-4df9-b09f-bd992972dc88-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mpfpb\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " pod="openstack/ovn-controller-metrics-mpfpb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.214903 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkzwp\" (UniqueName: \"kubernetes.io/projected/d54fb90e-29d7-4df9-b09f-bd992972dc88-kube-api-access-jkzwp\") pod \"ovn-controller-metrics-mpfpb\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " pod="openstack/ovn-controller-metrics-mpfpb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.215157 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d54fb90e-29d7-4df9-b09f-bd992972dc88-ovs-rundir\") pod \"ovn-controller-metrics-mpfpb\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " pod="openstack/ovn-controller-metrics-mpfpb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.215224 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d54fb90e-29d7-4df9-b09f-bd992972dc88-combined-ca-bundle\") pod \"ovn-controller-metrics-mpfpb\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " pod="openstack/ovn-controller-metrics-mpfpb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.215417 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d54fb90e-29d7-4df9-b09f-bd992972dc88-ovn-rundir\") pod \"ovn-controller-metrics-mpfpb\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " pod="openstack/ovn-controller-metrics-mpfpb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.215497 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d54fb90e-29d7-4df9-b09f-bd992972dc88-config\") pod \"ovn-controller-metrics-mpfpb\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " pod="openstack/ovn-controller-metrics-mpfpb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.227226 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bc4b48fc9-2zhlg"]
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.269881 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c67bcdbf5-bcp56"]
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.271463 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.275941 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.287058 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c67bcdbf5-bcp56"]
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.317587 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d54fb90e-29d7-4df9-b09f-bd992972dc88-ovn-rundir\") pod \"ovn-controller-metrics-mpfpb\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " pod="openstack/ovn-controller-metrics-mpfpb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.317646 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d54fb90e-29d7-4df9-b09f-bd992972dc88-config\") pod \"ovn-controller-metrics-mpfpb\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " pod="openstack/ovn-controller-metrics-mpfpb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.317695 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d54fb90e-29d7-4df9-b09f-bd992972dc88-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mpfpb\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " pod="openstack/ovn-controller-metrics-mpfpb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.317750 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkzwp\" (UniqueName: \"kubernetes.io/projected/d54fb90e-29d7-4df9-b09f-bd992972dc88-kube-api-access-jkzwp\") pod \"ovn-controller-metrics-mpfpb\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " pod="openstack/ovn-controller-metrics-mpfpb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.317801 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d54fb90e-29d7-4df9-b09f-bd992972dc88-ovs-rundir\") pod \"ovn-controller-metrics-mpfpb\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " pod="openstack/ovn-controller-metrics-mpfpb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.317833 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d54fb90e-29d7-4df9-b09f-bd992972dc88-combined-ca-bundle\") pod \"ovn-controller-metrics-mpfpb\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " pod="openstack/ovn-controller-metrics-mpfpb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.317892 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d54fb90e-29d7-4df9-b09f-bd992972dc88-ovn-rundir\") pod \"ovn-controller-metrics-mpfpb\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " pod="openstack/ovn-controller-metrics-mpfpb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.317980 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d54fb90e-29d7-4df9-b09f-bd992972dc88-ovs-rundir\") pod \"ovn-controller-metrics-mpfpb\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " pod="openstack/ovn-controller-metrics-mpfpb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.319017 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d54fb90e-29d7-4df9-b09f-bd992972dc88-config\") pod \"ovn-controller-metrics-mpfpb\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " pod="openstack/ovn-controller-metrics-mpfpb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.335061 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d54fb90e-29d7-4df9-b09f-bd992972dc88-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mpfpb\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " pod="openstack/ovn-controller-metrics-mpfpb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.335654 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d54fb90e-29d7-4df9-b09f-bd992972dc88-combined-ca-bundle\") pod \"ovn-controller-metrics-mpfpb\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " pod="openstack/ovn-controller-metrics-mpfpb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.349688 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkzwp\" (UniqueName: \"kubernetes.io/projected/d54fb90e-29d7-4df9-b09f-bd992972dc88-kube-api-access-jkzwp\") pod \"ovn-controller-metrics-mpfpb\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " pod="openstack/ovn-controller-metrics-mpfpb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.399738 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mpfpb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.414093 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb666b895-jbl7g"]
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.419812 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9502c36e-91ed-4d1e-9c95-220a6c3669ef-dns-svc\") pod \"dnsmasq-dns-6c67bcdbf5-bcp56\" (UID: \"9502c36e-91ed-4d1e-9c95-220a6c3669ef\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.419915 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lwxw\" (UniqueName: \"kubernetes.io/projected/9502c36e-91ed-4d1e-9c95-220a6c3669ef-kube-api-access-8lwxw\") pod \"dnsmasq-dns-6c67bcdbf5-bcp56\" (UID: \"9502c36e-91ed-4d1e-9c95-220a6c3669ef\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.420241 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9502c36e-91ed-4d1e-9c95-220a6c3669ef-config\") pod \"dnsmasq-dns-6c67bcdbf5-bcp56\" (UID: \"9502c36e-91ed-4d1e-9c95-220a6c3669ef\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.420640 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9502c36e-91ed-4d1e-9c95-220a6c3669ef-ovsdbserver-nb\") pod \"dnsmasq-dns-6c67bcdbf5-bcp56\" (UID: \"9502c36e-91ed-4d1e-9c95-220a6c3669ef\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.446847 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-984c76dd7-t5x8j"]
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.448206 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-984c76dd7-t5x8j"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.451654 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.472258 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-984c76dd7-t5x8j"]
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.523025 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9502c36e-91ed-4d1e-9c95-220a6c3669ef-ovsdbserver-nb\") pod \"dnsmasq-dns-6c67bcdbf5-bcp56\" (UID: \"9502c36e-91ed-4d1e-9c95-220a6c3669ef\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.523122 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9502c36e-91ed-4d1e-9c95-220a6c3669ef-dns-svc\") pod \"dnsmasq-dns-6c67bcdbf5-bcp56\" (UID: \"9502c36e-91ed-4d1e-9c95-220a6c3669ef\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.523215 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lwxw\" (UniqueName: \"kubernetes.io/projected/9502c36e-91ed-4d1e-9c95-220a6c3669ef-kube-api-access-8lwxw\") pod \"dnsmasq-dns-6c67bcdbf5-bcp56\" (UID: \"9502c36e-91ed-4d1e-9c95-220a6c3669ef\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.523271 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9502c36e-91ed-4d1e-9c95-220a6c3669ef-config\") pod \"dnsmasq-dns-6c67bcdbf5-bcp56\" (UID: \"9502c36e-91ed-4d1e-9c95-220a6c3669ef\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.524189 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9502c36e-91ed-4d1e-9c95-220a6c3669ef-ovsdbserver-nb\") pod \"dnsmasq-dns-6c67bcdbf5-bcp56\" (UID: \"9502c36e-91ed-4d1e-9c95-220a6c3669ef\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.524659 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9502c36e-91ed-4d1e-9c95-220a6c3669ef-config\") pod \"dnsmasq-dns-6c67bcdbf5-bcp56\" (UID: \"9502c36e-91ed-4d1e-9c95-220a6c3669ef\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.526002 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9502c36e-91ed-4d1e-9c95-220a6c3669ef-dns-svc\") pod \"dnsmasq-dns-6c67bcdbf5-bcp56\" (UID: \"9502c36e-91ed-4d1e-9c95-220a6c3669ef\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.547685 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lwxw\" (UniqueName: \"kubernetes.io/projected/9502c36e-91ed-4d1e-9c95-220a6c3669ef-kube-api-access-8lwxw\") pod \"dnsmasq-dns-6c67bcdbf5-bcp56\" (UID: \"9502c36e-91ed-4d1e-9c95-220a6c3669ef\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.593672 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.625843 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-ovsdbserver-sb\") pod \"dnsmasq-dns-984c76dd7-t5x8j\" (UID: \"73b833aa-beae-480b-9299-c9ad31acafd7\") " pod="openstack/dnsmasq-dns-984c76dd7-t5x8j"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.626521 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2rnt\" (UniqueName: \"kubernetes.io/projected/73b833aa-beae-480b-9299-c9ad31acafd7-kube-api-access-t2rnt\") pod \"dnsmasq-dns-984c76dd7-t5x8j\" (UID: \"73b833aa-beae-480b-9299-c9ad31acafd7\") " pod="openstack/dnsmasq-dns-984c76dd7-t5x8j"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.626566 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-ovsdbserver-nb\") pod \"dnsmasq-dns-984c76dd7-t5x8j\" (UID: \"73b833aa-beae-480b-9299-c9ad31acafd7\") " pod="openstack/dnsmasq-dns-984c76dd7-t5x8j"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.626635 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-config\") pod \"dnsmasq-dns-984c76dd7-t5x8j\" (UID: \"73b833aa-beae-480b-9299-c9ad31acafd7\") " pod="openstack/dnsmasq-dns-984c76dd7-t5x8j"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.626698 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-dns-svc\") pod \"dnsmasq-dns-984c76dd7-t5x8j\" (UID: \"73b833aa-beae-480b-9299-c9ad31acafd7\") " pod="openstack/dnsmasq-dns-984c76dd7-t5x8j"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.727816 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-config\") pod \"dnsmasq-dns-984c76dd7-t5x8j\" (UID: \"73b833aa-beae-480b-9299-c9ad31acafd7\") " pod="openstack/dnsmasq-dns-984c76dd7-t5x8j"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.727896 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-dns-svc\") pod \"dnsmasq-dns-984c76dd7-t5x8j\" (UID: \"73b833aa-beae-480b-9299-c9ad31acafd7\") " pod="openstack/dnsmasq-dns-984c76dd7-t5x8j"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.728007 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-ovsdbserver-sb\") pod \"dnsmasq-dns-984c76dd7-t5x8j\" (UID: \"73b833aa-beae-480b-9299-c9ad31acafd7\") " pod="openstack/dnsmasq-dns-984c76dd7-t5x8j"
Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.728041 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-t2rnt\" (UniqueName: \"kubernetes.io/projected/73b833aa-beae-480b-9299-c9ad31acafd7-kube-api-access-t2rnt\") pod \"dnsmasq-dns-984c76dd7-t5x8j\" (UID: \"73b833aa-beae-480b-9299-c9ad31acafd7\") " pod="openstack/dnsmasq-dns-984c76dd7-t5x8j" Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.728067 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-ovsdbserver-nb\") pod \"dnsmasq-dns-984c76dd7-t5x8j\" (UID: \"73b833aa-beae-480b-9299-c9ad31acafd7\") " pod="openstack/dnsmasq-dns-984c76dd7-t5x8j" Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.729456 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-dns-svc\") pod \"dnsmasq-dns-984c76dd7-t5x8j\" (UID: \"73b833aa-beae-480b-9299-c9ad31acafd7\") " pod="openstack/dnsmasq-dns-984c76dd7-t5x8j" Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.729454 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-ovsdbserver-nb\") pod \"dnsmasq-dns-984c76dd7-t5x8j\" (UID: \"73b833aa-beae-480b-9299-c9ad31acafd7\") " pod="openstack/dnsmasq-dns-984c76dd7-t5x8j" Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.729823 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-config\") pod \"dnsmasq-dns-984c76dd7-t5x8j\" (UID: \"73b833aa-beae-480b-9299-c9ad31acafd7\") " pod="openstack/dnsmasq-dns-984c76dd7-t5x8j" Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.730012 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-ovsdbserver-sb\") pod \"dnsmasq-dns-984c76dd7-t5x8j\" (UID: \"73b833aa-beae-480b-9299-c9ad31acafd7\") " pod="openstack/dnsmasq-dns-984c76dd7-t5x8j" Dec 06 07:21:40 crc kubenswrapper[4895]: I1206 07:21:40.767448 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2rnt\" (UniqueName: \"kubernetes.io/projected/73b833aa-beae-480b-9299-c9ad31acafd7-kube-api-access-t2rnt\") pod \"dnsmasq-dns-984c76dd7-t5x8j\" (UID: \"73b833aa-beae-480b-9299-c9ad31acafd7\") " pod="openstack/dnsmasq-dns-984c76dd7-t5x8j" Dec 06 07:21:41 crc kubenswrapper[4895]: I1206 07:21:41.066084 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-984c76dd7-t5x8j" Dec 06 07:21:41 crc kubenswrapper[4895]: I1206 07:21:41.892901 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 06 07:21:42 crc kubenswrapper[4895]: I1206 07:21:42.182169 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e963d73b-d3f2-4c70-8dbd-687b3fc1962d","Type":"ContainerStarted","Data":"4bd51d355b0c80b5ee327f7a9d32abed17e794deedf336792232c641ae56041e"} Dec 06 07:21:47 crc kubenswrapper[4895]: I1206 07:21:47.872853 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v8zlq" Dec 06 07:21:47 crc kubenswrapper[4895]: I1206 07:21:47.873267 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v8zlq" Dec 06 07:21:47 crc kubenswrapper[4895]: I1206 07:21:47.921983 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v8zlq" Dec 06 07:21:48 crc kubenswrapper[4895]: I1206 07:21:48.279710 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v8zlq" Dec 06 07:21:48 crc kubenswrapper[4895]: I1206 07:21:48.338948 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8zlq"] Dec 06 07:21:50 crc kubenswrapper[4895]: I1206 07:21:50.251345 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v8zlq" podUID="5c9b12cc-003a-4eb7-b13c-9e310e7b3587" containerName="registry-server" containerID="cri-o://22907925c23f950daa15facf713a40ec0d8cb81e475176c73d04722d37a05635" gracePeriod=2 Dec 06 07:21:52 crc kubenswrapper[4895]: I1206 07:21:52.268764 4895 generic.go:334] "Generic (PLEG): container finished" podID="5c9b12cc-003a-4eb7-b13c-9e310e7b3587" containerID="22907925c23f950daa15facf713a40ec0d8cb81e475176c73d04722d37a05635" exitCode=0 Dec 06 07:21:52 crc kubenswrapper[4895]: I1206 07:21:52.270063 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8zlq" event={"ID":"5c9b12cc-003a-4eb7-b13c-9e310e7b3587","Type":"ContainerDied","Data":"22907925c23f950daa15facf713a40ec0d8cb81e475176c73d04722d37a05635"} Dec 06 07:21:57 crc kubenswrapper[4895]: E1206 07:21:57.874171 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 22907925c23f950daa15facf713a40ec0d8cb81e475176c73d04722d37a05635 is running failed: container process not found" containerID="22907925c23f950daa15facf713a40ec0d8cb81e475176c73d04722d37a05635" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 07:21:57 crc kubenswrapper[4895]: E1206 07:21:57.875108 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 22907925c23f950daa15facf713a40ec0d8cb81e475176c73d04722d37a05635 is running failed: container process not found" containerID="22907925c23f950daa15facf713a40ec0d8cb81e475176c73d04722d37a05635" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 07:21:57 crc kubenswrapper[4895]: E1206 07:21:57.875537 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
22907925c23f950daa15facf713a40ec0d8cb81e475176c73d04722d37a05635 is running failed: container process not found" containerID="22907925c23f950daa15facf713a40ec0d8cb81e475176c73d04722d37a05635" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 07:21:57 crc kubenswrapper[4895]: E1206 07:21:57.875584 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 22907925c23f950daa15facf713a40ec0d8cb81e475176c73d04722d37a05635 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-v8zlq" podUID="5c9b12cc-003a-4eb7-b13c-9e310e7b3587" containerName="registry-server" Dec 06 07:22:02 crc kubenswrapper[4895]: E1206 07:22:02.102614 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached@sha256:dad2336390cae6705133deefaa09c9e39512cf29133aa009006e3962c8022108" Dec 06 07:22:02 crc kubenswrapper[4895]: E1206 07:22:02.104669 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached@sha256:dad2336390cae6705133deefaa09c9e39512cf29133aa009006e3962c8022108,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n564h5bbhbfhc9h5f7h5f9h588h5fbh687h7ch5c5h5f5h649h58dh68fh575h7bh5bfh9chcch657h7fh56h5ffh587h5cch696h556hbh548h56ch58q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zll96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(fba8bc40-d348-4f8f-aeb6-aa2e46d908d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:22:02 crc kubenswrapper[4895]: E1206 07:22:02.106119 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="fba8bc40-d348-4f8f-aeb6-aa2e46d908d6" Dec 06 07:22:02 crc kubenswrapper[4895]: E1206 07:22:02.106643 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792" Dec 06 07:22:02 crc kubenswrapper[4895]: E1206 07:22:02.106837 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t882p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-cb666b895-jbl7g_openstack(734306a5-889f-46d6-a6cf-e8e83a01909b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:22:02 crc kubenswrapper[4895]: E1206 07:22:02.107967 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-cb666b895-jbl7g" podUID="734306a5-889f-46d6-a6cf-e8e83a01909b" Dec 06 07:22:02 crc kubenswrapper[4895]: I1206 07:22:02.530002 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mpfpb"] Dec 06 07:22:05 crc kubenswrapper[4895]: W1206 07:22:05.168167 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd54fb90e_29d7_4df9_b09f_bd992972dc88.slice/crio-7dea9abf6c160a0fa3db2198607eebe06d2c62c98eab5afa76f348efad0c356b WatchSource:0}: Error finding container 7dea9abf6c160a0fa3db2198607eebe06d2c62c98eab5afa76f348efad0c356b: Status 404 returned error can't find the container with id 7dea9abf6c160a0fa3db2198607eebe06d2c62c98eab5afa76f348efad0c356b Dec 06 07:22:05 crc kubenswrapper[4895]: E1206 07:22:05.171246 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792" Dec 06 07:22:05 crc kubenswrapper[4895]: E1206 07:22:05.171744 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4g84x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-bc4b48fc9-2zhlg_openstack(99e640e9-4db4-4798-aaab-67d34ca04a5f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:22:05 crc kubenswrapper[4895]: E1206 07:22:05.174062 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-bc4b48fc9-2zhlg" podUID="99e640e9-4db4-4798-aaab-67d34ca04a5f" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.262634 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb666b895-jbl7g" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.270966 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8zlq" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.394331 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mpfpb" event={"ID":"d54fb90e-29d7-4df9-b09f-bd992972dc88","Type":"ContainerStarted","Data":"7dea9abf6c160a0fa3db2198607eebe06d2c62c98eab5afa76f348efad0c356b"} Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.395324 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb666b895-jbl7g" event={"ID":"734306a5-889f-46d6-a6cf-e8e83a01909b","Type":"ContainerDied","Data":"48987c2e80a9b39efd1c49d4f25ae9782f5450d22887059e812bc20eff326624"} Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.395446 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb666b895-jbl7g" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.401234 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8zlq" event={"ID":"5c9b12cc-003a-4eb7-b13c-9e310e7b3587","Type":"ContainerDied","Data":"33de42ced3b04ec4c81f3c0cb929c6a4df54fa181e13f73777f930fe7bc18ee8"} Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.401327 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8zlq" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.401337 4895 scope.go:117] "RemoveContainer" containerID="22907925c23f950daa15facf713a40ec0d8cb81e475176c73d04722d37a05635" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.405220 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/734306a5-889f-46d6-a6cf-e8e83a01909b-dns-svc\") pod \"734306a5-889f-46d6-a6cf-e8e83a01909b\" (UID: \"734306a5-889f-46d6-a6cf-e8e83a01909b\") " Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.405426 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t882p\" (UniqueName: \"kubernetes.io/projected/734306a5-889f-46d6-a6cf-e8e83a01909b-kube-api-access-t882p\") pod \"734306a5-889f-46d6-a6cf-e8e83a01909b\" (UID: \"734306a5-889f-46d6-a6cf-e8e83a01909b\") " Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.405491 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734306a5-889f-46d6-a6cf-e8e83a01909b-config\") pod \"734306a5-889f-46d6-a6cf-e8e83a01909b\" (UID: \"734306a5-889f-46d6-a6cf-e8e83a01909b\") " Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.405599 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlcnb\" (UniqueName: \"kubernetes.io/projected/5c9b12cc-003a-4eb7-b13c-9e310e7b3587-kube-api-access-tlcnb\") pod \"5c9b12cc-003a-4eb7-b13c-9e310e7b3587\" (UID: \"5c9b12cc-003a-4eb7-b13c-9e310e7b3587\") " Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.405635 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c9b12cc-003a-4eb7-b13c-9e310e7b3587-catalog-content\") pod \"5c9b12cc-003a-4eb7-b13c-9e310e7b3587\" (UID: \"5c9b12cc-003a-4eb7-b13c-9e310e7b3587\") " Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.405659 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5c9b12cc-003a-4eb7-b13c-9e310e7b3587-utilities\") pod \"5c9b12cc-003a-4eb7-b13c-9e310e7b3587\" (UID: \"5c9b12cc-003a-4eb7-b13c-9e310e7b3587\") " Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.407283 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c9b12cc-003a-4eb7-b13c-9e310e7b3587-utilities" (OuterVolumeSpecName: "utilities") pod "5c9b12cc-003a-4eb7-b13c-9e310e7b3587" (UID: "5c9b12cc-003a-4eb7-b13c-9e310e7b3587"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.407371 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/734306a5-889f-46d6-a6cf-e8e83a01909b-config" (OuterVolumeSpecName: "config") pod "734306a5-889f-46d6-a6cf-e8e83a01909b" (UID: "734306a5-889f-46d6-a6cf-e8e83a01909b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.408972 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/734306a5-889f-46d6-a6cf-e8e83a01909b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "734306a5-889f-46d6-a6cf-e8e83a01909b" (UID: "734306a5-889f-46d6-a6cf-e8e83a01909b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.414700 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9b12cc-003a-4eb7-b13c-9e310e7b3587-kube-api-access-tlcnb" (OuterVolumeSpecName: "kube-api-access-tlcnb") pod "5c9b12cc-003a-4eb7-b13c-9e310e7b3587" (UID: "5c9b12cc-003a-4eb7-b13c-9e310e7b3587"). InnerVolumeSpecName "kube-api-access-tlcnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.414929 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/734306a5-889f-46d6-a6cf-e8e83a01909b-kube-api-access-t882p" (OuterVolumeSpecName: "kube-api-access-t882p") pod "734306a5-889f-46d6-a6cf-e8e83a01909b" (UID: "734306a5-889f-46d6-a6cf-e8e83a01909b"). InnerVolumeSpecName "kube-api-access-t882p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.437060 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c9b12cc-003a-4eb7-b13c-9e310e7b3587-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c9b12cc-003a-4eb7-b13c-9e310e7b3587" (UID: "5c9b12cc-003a-4eb7-b13c-9e310e7b3587"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.507896 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/734306a5-889f-46d6-a6cf-e8e83a01909b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.507934 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t882p\" (UniqueName: \"kubernetes.io/projected/734306a5-889f-46d6-a6cf-e8e83a01909b-kube-api-access-t882p\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.507946 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734306a5-889f-46d6-a6cf-e8e83a01909b-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.507955 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlcnb\" (UniqueName: \"kubernetes.io/projected/5c9b12cc-003a-4eb7-b13c-9e310e7b3587-kube-api-access-tlcnb\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.507967 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c9b12cc-003a-4eb7-b13c-9e310e7b3587-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.507977 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c9b12cc-003a-4eb7-b13c-9e310e7b3587-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.512699 4895 scope.go:117] "RemoveContainer" containerID="d0d16ec43f6e195a9b2aefef6386759a0c575eccfccbd1003840320eef59eddb" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.542349 4895 scope.go:117] "RemoveContainer" containerID="0362091c183b18e143e0f980adc1e675f28484668f4e5c944aa104b681c32000" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.688203 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-984c76dd7-t5x8j"] Dec 06 07:22:05 crc kubenswrapper[4895]: E1206 07:22:05.709724 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7" Dec 06 07:22:05 crc kubenswrapper[4895]: E1206 07:22:05.710300 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:nf6h58fh55h75h5d9h64dh6dh76h57h54h5ffh79h94hcdh565h5dhb8h675h5c7h64dh67ch5b5hfbh675h5d6h648h689hch656h5c9h55ch75q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnthg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(a617d6b4-721c-4087-bc16-70bcb58b9c69): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:22:05 crc kubenswrapper[4895]: E1206 07:22:05.711793 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="a617d6b4-721c-4087-bc16-70bcb58b9c69" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.752544 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8zlq"] Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.766175 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8zlq"] Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.803336 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bc4b48fc9-2zhlg" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.839446 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb666b895-jbl7g"] Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.852905 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cb666b895-jbl7g"] Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.859993 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c67bcdbf5-bcp56"] Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.925699 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g84x\" (UniqueName: \"kubernetes.io/projected/99e640e9-4db4-4798-aaab-67d34ca04a5f-kube-api-access-4g84x\") pod \"99e640e9-4db4-4798-aaab-67d34ca04a5f\" (UID: \"99e640e9-4db4-4798-aaab-67d34ca04a5f\") " Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.926062 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99e640e9-4db4-4798-aaab-67d34ca04a5f-config\") pod \"99e640e9-4db4-4798-aaab-67d34ca04a5f\" (UID: \"99e640e9-4db4-4798-aaab-67d34ca04a5f\") " Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.926336 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99e640e9-4db4-4798-aaab-67d34ca04a5f-dns-svc\") pod \"99e640e9-4db4-4798-aaab-67d34ca04a5f\" (UID: \"99e640e9-4db4-4798-aaab-67d34ca04a5f\") " Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.927415 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99e640e9-4db4-4798-aaab-67d34ca04a5f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "99e640e9-4db4-4798-aaab-67d34ca04a5f" (UID: "99e640e9-4db4-4798-aaab-67d34ca04a5f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.927434 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99e640e9-4db4-4798-aaab-67d34ca04a5f-config" (OuterVolumeSpecName: "config") pod "99e640e9-4db4-4798-aaab-67d34ca04a5f" (UID: "99e640e9-4db4-4798-aaab-67d34ca04a5f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:05 crc kubenswrapper[4895]: I1206 07:22:05.930075 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99e640e9-4db4-4798-aaab-67d34ca04a5f-kube-api-access-4g84x" (OuterVolumeSpecName: "kube-api-access-4g84x") pod "99e640e9-4db4-4798-aaab-67d34ca04a5f" (UID: "99e640e9-4db4-4798-aaab-67d34ca04a5f"). InnerVolumeSpecName "kube-api-access-4g84x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:22:06 crc kubenswrapper[4895]: I1206 07:22:06.028689 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99e640e9-4db4-4798-aaab-67d34ca04a5f-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:06 crc kubenswrapper[4895]: I1206 07:22:06.028741 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99e640e9-4db4-4798-aaab-67d34ca04a5f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:06 crc kubenswrapper[4895]: I1206 07:22:06.028753 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g84x\" (UniqueName: \"kubernetes.io/projected/99e640e9-4db4-4798-aaab-67d34ca04a5f-kube-api-access-4g84x\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:06 crc kubenswrapper[4895]: I1206 07:22:06.066978 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c9b12cc-003a-4eb7-b13c-9e310e7b3587" path="/var/lib/kubelet/pods/5c9b12cc-003a-4eb7-b13c-9e310e7b3587/volumes" Dec 06 07:22:06 crc kubenswrapper[4895]: I1206 07:22:06.067957 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="734306a5-889f-46d6-a6cf-e8e83a01909b" path="/var/lib/kubelet/pods/734306a5-889f-46d6-a6cf-e8e83a01909b/volumes" Dec 06 07:22:06 crc kubenswrapper[4895]: I1206 07:22:06.411235 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc4b48fc9-2zhlg" event={"ID":"99e640e9-4db4-4798-aaab-67d34ca04a5f","Type":"ContainerDied","Data":"89b59e52f6a4b971750fb06eb6cfa4a75c0ecc92ba1ceed19fd37622da5ce94d"} Dec 06 07:22:06 crc kubenswrapper[4895]: I1206 07:22:06.411255 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bc4b48fc9-2zhlg" Dec 06 07:22:06 crc kubenswrapper[4895]: I1206 07:22:06.412393 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56" event={"ID":"9502c36e-91ed-4d1e-9c95-220a6c3669ef","Type":"ContainerStarted","Data":"6c1d0e3c512ca8608fcf7b6f5bc1671417fcc2881c8156b3328901eb6a8227f8"} Dec 06 07:22:06 crc kubenswrapper[4895]: I1206 07:22:06.416828 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-984c76dd7-t5x8j" event={"ID":"73b833aa-beae-480b-9299-c9ad31acafd7","Type":"ContainerStarted","Data":"d4b75b73795fad501c33e3a0151cb059417e352753c2e044d74c9c0495fc6c04"} Dec 06 07:22:06 crc kubenswrapper[4895]: E1206 07:22:06.419331 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="a617d6b4-721c-4087-bc16-70bcb58b9c69" Dec 06 07:22:06 crc kubenswrapper[4895]: I1206 07:22:06.496309 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bc4b48fc9-2zhlg"] Dec 06 07:22:06 crc kubenswrapper[4895]: I1206 07:22:06.507046 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bc4b48fc9-2zhlg"] Dec 06 07:22:06 crc kubenswrapper[4895]: I1206 07:22:06.842898 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cwrlp" podUID="81e2c836-79af-46e7-8be8-a9b0ffdab060" containerName="ovn-controller" probeResult="failure" output=< Dec 06 07:22:06 crc 
kubenswrapper[4895]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 06 07:22:06 crc kubenswrapper[4895]: > Dec 06 07:22:06 crc kubenswrapper[4895]: I1206 07:22:06.867985 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:22:06 crc kubenswrapper[4895]: I1206 07:22:06.906726 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:22:07 crc kubenswrapper[4895]: E1206 07:22:07.877421 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7" Dec 06 07:22:07 crc kubenswrapper[4895]: E1206 07:22:07.877605 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n5c8h66h6fhf5h5d9hf6h566h56h646h675h9dh658h6bh66ch5fch676h5f7h87h57bh678h598hf8h65ch687h5b8h68h5dch58fh564hf4h57h665q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5g8w7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(ec28d57e-8ecf-4415-b18f-69bfa0514187): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:22:07 crc kubenswrapper[4895]: E1206 07:22:07.878743 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="ec28d57e-8ecf-4415-b18f-69bfa0514187" Dec 06 07:22:08 crc kubenswrapper[4895]: I1206 07:22:08.064949 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99e640e9-4db4-4798-aaab-67d34ca04a5f" path="/var/lib/kubelet/pods/99e640e9-4db4-4798-aaab-67d34ca04a5f/volumes" Dec 06 07:22:08 crc kubenswrapper[4895]: I1206 07:22:08.354143 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 06 07:22:08 crc kubenswrapper[4895]: E1206 07:22:08.361444 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="a617d6b4-721c-4087-bc16-70bcb58b9c69" Dec 06 07:22:08 crc kubenswrapper[4895]: I1206 07:22:08.407799 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 06 07:22:08 crc kubenswrapper[4895]: I1206 07:22:08.449973 4895 generic.go:334] "Generic (PLEG): container finished" podID="8fa39160-bfb2-49ae-b2ca-12c0e5788996" containerID="a69a5c75210b542e25ce1c72e591c0e062baf33da7df98918d86673278bac167" exitCode=0 Dec 06 07:22:08 crc kubenswrapper[4895]: I1206 07:22:08.450106 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8fa39160-bfb2-49ae-b2ca-12c0e5788996","Type":"ContainerDied","Data":"a69a5c75210b542e25ce1c72e591c0e062baf33da7df98918d86673278bac167"} Dec 06 07:22:08 crc kubenswrapper[4895]: I1206 07:22:08.450455 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 06 07:22:08 crc kubenswrapper[4895]: E1206 07:22:08.452380 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="a617d6b4-721c-4087-bc16-70bcb58b9c69" Dec 06 07:22:08 crc kubenswrapper[4895]: E1206 07:22:08.452425 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="ec28d57e-8ecf-4415-b18f-69bfa0514187" Dec 06 07:22:08 crc kubenswrapper[4895]: I1206 07:22:08.498608 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 06 07:22:08 crc kubenswrapper[4895]: I1206 07:22:08.522236 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 06 07:22:08 crc kubenswrapper[4895]: I1206 07:22:08.598883 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 06 07:22:09 crc kubenswrapper[4895]: I1206 07:22:09.457647 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ovsdbserver-nb-0" Dec 06 07:22:09 crc kubenswrapper[4895]: E1206 07:22:09.459112 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="ec28d57e-8ecf-4415-b18f-69bfa0514187" Dec 06 07:22:09 crc kubenswrapper[4895]: E1206 07:22:09.459327 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="a617d6b4-721c-4087-bc16-70bcb58b9c69" Dec 06 07:22:09 crc kubenswrapper[4895]: I1206 07:22:09.516039 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 06 07:22:10 crc kubenswrapper[4895]: I1206 07:22:10.475868 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8fa39160-bfb2-49ae-b2ca-12c0e5788996","Type":"ContainerStarted","Data":"c65a6f976e5fa9af5eaf596cfeeff15bdcc75ba1010c01bacc5174bb022b9e1c"} Dec 06 07:22:10 crc kubenswrapper[4895]: I1206 07:22:10.521141 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.616915478 podStartE2EDuration="1m25.521116819s" podCreationTimestamp="2025-12-06 07:20:45 +0000 UTC" firstStartedPulling="2025-12-06 07:20:47.656705369 +0000 UTC m=+1410.058094239" lastFinishedPulling="2025-12-06 07:21:33.56090671 +0000 UTC m=+1455.962295580" observedRunningTime="2025-12-06 07:22:10.516692611 +0000 UTC m=+1492.918081481" watchObservedRunningTime="2025-12-06 07:22:10.521116819 +0000 UTC m=+1492.922505699" Dec 06 07:22:11 crc kubenswrapper[4895]: I1206 07:22:11.484418 4895 generic.go:334] "Generic (PLEG): container finished" podID="73b833aa-beae-480b-9299-c9ad31acafd7" containerID="a3affc340d318ede7e6229edcf6f9ff19a0bd8f75c661aaf13b9651ef9c1fa94" exitCode=0 Dec 06 07:22:11 crc kubenswrapper[4895]: I1206 07:22:11.484636 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-984c76dd7-t5x8j" event={"ID":"73b833aa-beae-480b-9299-c9ad31acafd7","Type":"ContainerDied","Data":"a3affc340d318ede7e6229edcf6f9ff19a0bd8f75c661aaf13b9651ef9c1fa94"} Dec 06 07:22:11 crc kubenswrapper[4895]: I1206 07:22:11.488228 4895 generic.go:334] "Generic (PLEG): container finished" podID="9502c36e-91ed-4d1e-9c95-220a6c3669ef" containerID="4519cd44d3880615c3e0df3b27fba21345f0d0e8828003bf6ddddf66628a8ffd" exitCode=0 Dec 06 07:22:11 crc kubenswrapper[4895]: I1206 07:22:11.488279 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56" event={"ID":"9502c36e-91ed-4d1e-9c95-220a6c3669ef","Type":"ContainerDied","Data":"4519cd44d3880615c3e0df3b27fba21345f0d0e8828003bf6ddddf66628a8ffd"} Dec 06 07:22:11 crc kubenswrapper[4895]: I1206 07:22:11.491682 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ec28d57e-8ecf-4415-b18f-69bfa0514187","Type":"ContainerStarted","Data":"156f7c175f2582a5ecac54004c4a02a79357a47a8e30e1f567ebf0c9892821d7"} Dec 06 07:22:11 crc kubenswrapper[4895]: I1206 07:22:11.502656 4895 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mpfpb" event={"ID":"d54fb90e-29d7-4df9-b09f-bd992972dc88","Type":"ContainerStarted","Data":"b6a7604abefe1444b026209f2779ebe71e0863f38495e270864aa2d5eaa61e31"} Dec 06 07:22:11 crc kubenswrapper[4895]: I1206 07:22:11.534375 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a617d6b4-721c-4087-bc16-70bcb58b9c69","Type":"ContainerStarted","Data":"5e79ef9fb346fc367bb36aff8fa7a7e5ecd4ed177678c52a5893ae9529169abe"} Dec 06 07:22:11 crc kubenswrapper[4895]: I1206 07:22:11.653846 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=67.174704752 podStartE2EDuration="1m16.653787742s" podCreationTimestamp="2025-12-06 07:20:55 +0000 UTC" firstStartedPulling="2025-12-06 07:21:24.082354464 +0000 UTC m=+1446.483743334" lastFinishedPulling="2025-12-06 07:21:33.561437454 +0000 UTC m=+1455.962826324" observedRunningTime="2025-12-06 07:22:11.593298194 +0000 UTC m=+1493.994687074" watchObservedRunningTime="2025-12-06 07:22:11.653787742 +0000 UTC m=+1494.055176622" Dec 06 07:22:11 crc kubenswrapper[4895]: I1206 07:22:11.687188 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-mpfpb" podStartSLOduration=26.463609427 podStartE2EDuration="31.687165605s" podCreationTimestamp="2025-12-06 07:21:40 +0000 UTC" firstStartedPulling="2025-12-06 07:22:05.18277096 +0000 UTC m=+1487.584159830" lastFinishedPulling="2025-12-06 07:22:10.406327138 +0000 UTC m=+1492.807716008" observedRunningTime="2025-12-06 07:22:11.65142854 +0000 UTC m=+1494.052817430" watchObservedRunningTime="2025-12-06 07:22:11.687165605 +0000 UTC m=+1494.088554475" Dec 06 07:22:11 crc kubenswrapper[4895]: I1206 07:22:11.749385 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=65.260859618 podStartE2EDuration="1m14.74936565s" podCreationTimestamp="2025-12-06 07:20:57 +0000 UTC" firstStartedPulling="2025-12-06 07:21:22.635711761 +0000 UTC m=+1445.037100631" lastFinishedPulling="2025-12-06 07:21:32.124217793 +0000 UTC m=+1454.525606663" observedRunningTime="2025-12-06 07:22:11.736808934 +0000 UTC m=+1494.138197824" watchObservedRunningTime="2025-12-06 07:22:11.74936565 +0000 UTC m=+1494.150754520" Dec 06 07:22:11 crc kubenswrapper[4895]: I1206 07:22:11.883661 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cwrlp" podUID="81e2c836-79af-46e7-8be8-a9b0ffdab060" containerName="ovn-controller" probeResult="failure" output=< Dec 06 07:22:11 crc kubenswrapper[4895]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 06 07:22:11 crc kubenswrapper[4895]: > Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.174617 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 06 07:22:12 crc kubenswrapper[4895]: E1206 07:22:12.174973 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9b12cc-003a-4eb7-b13c-9e310e7b3587" containerName="registry-server" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.174986 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9b12cc-003a-4eb7-b13c-9e310e7b3587" containerName="registry-server" Dec 06 07:22:12 crc kubenswrapper[4895]: E1206 07:22:12.175000 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5c9b12cc-003a-4eb7-b13c-9e310e7b3587" containerName="extract-content" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.175006 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9b12cc-003a-4eb7-b13c-9e310e7b3587" containerName="extract-content" Dec 06 07:22:12 crc kubenswrapper[4895]: E1206 07:22:12.175017 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9b12cc-003a-4eb7-b13c-9e310e7b3587" containerName="extract-utilities" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.175023 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9b12cc-003a-4eb7-b13c-9e310e7b3587" containerName="extract-utilities" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.175182 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c9b12cc-003a-4eb7-b13c-9e310e7b3587" containerName="registry-server" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.176105 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.185016 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.187199 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.187572 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.187764 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-ntjsd" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.199556 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.245458 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cccbd9be-50fa-413b-bb47-1af68ecdda2d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.245530 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cccbd9be-50fa-413b-bb47-1af68ecdda2d-config\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.245578 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cccbd9be-50fa-413b-bb47-1af68ecdda2d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.245707 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cccbd9be-50fa-413b-bb47-1af68ecdda2d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.245932 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/cccbd9be-50fa-413b-bb47-1af68ecdda2d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.246082 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6bc4\" (UniqueName: \"kubernetes.io/projected/cccbd9be-50fa-413b-bb47-1af68ecdda2d-kube-api-access-l6bc4\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.246161 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cccbd9be-50fa-413b-bb47-1af68ecdda2d-scripts\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.267024 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cwrlp-config-fq48g"] Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.268328 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.277971 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.298585 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cwrlp-config-fq48g"] Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.348162 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cccbd9be-50fa-413b-bb47-1af68ecdda2d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.348221 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/77d0506d-6222-4204-946d-00cce98ac212-var-run-ovn\") pod \"ovn-controller-cwrlp-config-fq48g\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.348259 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cccbd9be-50fa-413b-bb47-1af68ecdda2d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.348299 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/77d0506d-6222-4204-946d-00cce98ac212-var-log-ovn\") pod \"ovn-controller-cwrlp-config-fq48g\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.348321 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/77d0506d-6222-4204-946d-00cce98ac212-additional-scripts\") pod \"ovn-controller-cwrlp-config-fq48g\" (UID: 
\"77d0506d-6222-4204-946d-00cce98ac212\") " pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.348351 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cccbd9be-50fa-413b-bb47-1af68ecdda2d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.348399 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6bc4\" (UniqueName: \"kubernetes.io/projected/cccbd9be-50fa-413b-bb47-1af68ecdda2d-kube-api-access-l6bc4\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.348422 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cccbd9be-50fa-413b-bb47-1af68ecdda2d-scripts\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.348438 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77d0506d-6222-4204-946d-00cce98ac212-scripts\") pod \"ovn-controller-cwrlp-config-fq48g\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.348501 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cccbd9be-50fa-413b-bb47-1af68ecdda2d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.348522 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlpgd\" (UniqueName: \"kubernetes.io/projected/77d0506d-6222-4204-946d-00cce98ac212-kube-api-access-tlpgd\") pod \"ovn-controller-cwrlp-config-fq48g\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.348543 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cccbd9be-50fa-413b-bb47-1af68ecdda2d-config\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.348568 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77d0506d-6222-4204-946d-00cce98ac212-var-run\") pod \"ovn-controller-cwrlp-config-fq48g\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.349035 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cccbd9be-50fa-413b-bb47-1af68ecdda2d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.350510 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cccbd9be-50fa-413b-bb47-1af68ecdda2d-scripts\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.351009 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cccbd9be-50fa-413b-bb47-1af68ecdda2d-config\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.355219 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cccbd9be-50fa-413b-bb47-1af68ecdda2d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.355719 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cccbd9be-50fa-413b-bb47-1af68ecdda2d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.356256 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cccbd9be-50fa-413b-bb47-1af68ecdda2d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.371557 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6bc4\" (UniqueName: \"kubernetes.io/projected/cccbd9be-50fa-413b-bb47-1af68ecdda2d-kube-api-access-l6bc4\") pod \"ovn-northd-0\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.449830 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/77d0506d-6222-4204-946d-00cce98ac212-var-log-ovn\") pod \"ovn-controller-cwrlp-config-fq48g\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.449914 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/77d0506d-6222-4204-946d-00cce98ac212-additional-scripts\") pod \"ovn-controller-cwrlp-config-fq48g\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.450005 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77d0506d-6222-4204-946d-00cce98ac212-scripts\") pod \"ovn-controller-cwrlp-config-fq48g\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.450062 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlpgd\" (UniqueName: \"kubernetes.io/projected/77d0506d-6222-4204-946d-00cce98ac212-kube-api-access-tlpgd\") pod \"ovn-controller-cwrlp-config-fq48g\" (UID: 
\"77d0506d-6222-4204-946d-00cce98ac212\") " pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.450094 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77d0506d-6222-4204-946d-00cce98ac212-var-run\") pod \"ovn-controller-cwrlp-config-fq48g\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.450130 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/77d0506d-6222-4204-946d-00cce98ac212-var-run-ovn\") pod \"ovn-controller-cwrlp-config-fq48g\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.450587 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/77d0506d-6222-4204-946d-00cce98ac212-var-log-ovn\") pod \"ovn-controller-cwrlp-config-fq48g\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.450606 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/77d0506d-6222-4204-946d-00cce98ac212-var-run-ovn\") pod \"ovn-controller-cwrlp-config-fq48g\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.450606 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77d0506d-6222-4204-946d-00cce98ac212-var-run\") pod \"ovn-controller-cwrlp-config-fq48g\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.451274 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/77d0506d-6222-4204-946d-00cce98ac212-additional-scripts\") pod \"ovn-controller-cwrlp-config-fq48g\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.452353 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77d0506d-6222-4204-946d-00cce98ac212-scripts\") pod \"ovn-controller-cwrlp-config-fq48g\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.470765 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlpgd\" (UniqueName: \"kubernetes.io/projected/77d0506d-6222-4204-946d-00cce98ac212-kube-api-access-tlpgd\") pod \"ovn-controller-cwrlp-config-fq48g\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.503074 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.546456 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-984c76dd7-t5x8j" event={"ID":"73b833aa-beae-480b-9299-c9ad31acafd7","Type":"ContainerStarted","Data":"19957def4807e6ac19e20749d34ae5df0c79380f25a36ee5ce88247a1f7adb2d"} Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.546924 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-984c76dd7-t5x8j" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.554322 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56" event={"ID":"9502c36e-91ed-4d1e-9c95-220a6c3669ef","Type":"ContainerStarted","Data":"edd9f1005230a2ea7440c82cfa2aadd4da98c6fa7552cdf01ed8e7c552eb9a13"} Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.554390 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.573952 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-984c76dd7-t5x8j" podStartSLOduration=27.889765771 podStartE2EDuration="32.573932999s" podCreationTimestamp="2025-12-06 07:21:40 +0000 UTC" firstStartedPulling="2025-12-06 07:22:05.723742262 +0000 UTC m=+1488.125131132" lastFinishedPulling="2025-12-06 07:22:10.40790949 +0000 UTC m=+1492.809298360" observedRunningTime="2025-12-06 07:22:12.566895161 +0000 UTC m=+1494.968284051" watchObservedRunningTime="2025-12-06 07:22:12.573932999 +0000 UTC m=+1494.975321869" Dec 06 07:22:12 crc kubenswrapper[4895]: I1206 07:22:12.583004 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:13 crc kubenswrapper[4895]: I1206 07:22:13.033922 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56" podStartSLOduration=28.514113055 podStartE2EDuration="33.033886515s" podCreationTimestamp="2025-12-06 07:21:40 +0000 UTC" firstStartedPulling="2025-12-06 07:22:05.886468986 +0000 UTC m=+1488.287857856" lastFinishedPulling="2025-12-06 07:22:10.406242446 +0000 UTC m=+1492.807631316" observedRunningTime="2025-12-06 07:22:12.606953813 +0000 UTC m=+1495.008342693" watchObservedRunningTime="2025-12-06 07:22:13.033886515 +0000 UTC m=+1495.435275385" Dec 06 07:22:13 crc kubenswrapper[4895]: I1206 07:22:13.039205 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 06 07:22:13 crc kubenswrapper[4895]: W1206 07:22:13.042093 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcccbd9be_50fa_413b_bb47_1af68ecdda2d.slice/crio-ed4aa267c4cf7e430d22bb6216dca25df5479195c139d204ede2101777a9b21d WatchSource:0}: Error finding container ed4aa267c4cf7e430d22bb6216dca25df5479195c139d204ede2101777a9b21d: Status 404 returned error can't find the container with id ed4aa267c4cf7e430d22bb6216dca25df5479195c139d204ede2101777a9b21d Dec 06 07:22:13 crc kubenswrapper[4895]: I1206 07:22:13.162222 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cwrlp-config-fq48g"] Dec 06 07:22:13 crc kubenswrapper[4895]: W1206 07:22:13.165189 4895 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77d0506d_6222_4204_946d_00cce98ac212.slice/crio-04d9f6a605c318ea7a9a7730c6db4d9d31042475ecaebadc32dcbd2100ac1935 WatchSource:0}: Error finding container 04d9f6a605c318ea7a9a7730c6db4d9d31042475ecaebadc32dcbd2100ac1935: Status 404 returned error can't find the container with id 04d9f6a605c318ea7a9a7730c6db4d9d31042475ecaebadc32dcbd2100ac1935 Dec 06 07:22:13 crc kubenswrapper[4895]: I1206 07:22:13.567891 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cccbd9be-50fa-413b-bb47-1af68ecdda2d","Type":"ContainerStarted","Data":"ed4aa267c4cf7e430d22bb6216dca25df5479195c139d204ede2101777a9b21d"} Dec 06 07:22:13 crc kubenswrapper[4895]: I1206 07:22:13.571795 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cwrlp-config-fq48g" event={"ID":"77d0506d-6222-4204-946d-00cce98ac212","Type":"ContainerStarted","Data":"04d9f6a605c318ea7a9a7730c6db4d9d31042475ecaebadc32dcbd2100ac1935"} Dec 06 07:22:13 crc kubenswrapper[4895]: I1206 07:22:13.574532 4895 generic.go:334] "Generic (PLEG): container finished" podID="31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" containerID="b92b460ccf694f6a4184f0124b88195172ac8c58d597a8666b73801a8c04c66e" exitCode=0 Dec 06 07:22:13 crc kubenswrapper[4895]: I1206 07:22:13.574636 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5","Type":"ContainerDied","Data":"b92b460ccf694f6a4184f0124b88195172ac8c58d597a8666b73801a8c04c66e"} Dec 06 07:22:14 crc kubenswrapper[4895]: I1206 07:22:14.583659 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cwrlp-config-fq48g" event={"ID":"77d0506d-6222-4204-946d-00cce98ac212","Type":"ContainerStarted","Data":"901a82362dfab6727871904d5dfc6172f99ee2967f942712fc164c74972b6553"} Dec 06 07:22:14 crc kubenswrapper[4895]: I1206 07:22:14.588671 4895 generic.go:334] "Generic (PLEG): container finished" podID="e963d73b-d3f2-4c70-8dbd-687b3fc1962d" containerID="4bd51d355b0c80b5ee327f7a9d32abed17e794deedf336792232c641ae56041e" exitCode=0 Dec 06 07:22:14 crc kubenswrapper[4895]: I1206 07:22:14.588760 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e963d73b-d3f2-4c70-8dbd-687b3fc1962d","Type":"ContainerDied","Data":"4bd51d355b0c80b5ee327f7a9d32abed17e794deedf336792232c641ae56041e"} Dec 06 07:22:14 crc kubenswrapper[4895]: I1206 07:22:14.592791 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5","Type":"ContainerStarted","Data":"f907b06f8ee70e6e66e5862860a2218d5683f85f48108d1b129302c31f3a7602"} Dec 06 07:22:14 crc kubenswrapper[4895]: I1206 07:22:14.632441 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-cwrlp-config-fq48g" podStartSLOduration=2.6324212620000003 podStartE2EDuration="2.632421262s" podCreationTimestamp="2025-12-06 07:22:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:22:14.621031118 +0000 UTC m=+1497.022420008" watchObservedRunningTime="2025-12-06 07:22:14.632421262 +0000 UTC m=+1497.033810132" Dec 06 07:22:14 crc kubenswrapper[4895]: I1206 07:22:14.680426 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" 
podStartSLOduration=44.91465996 podStartE2EDuration="1m26.680402776s" podCreationTimestamp="2025-12-06 07:20:48 +0000 UTC" firstStartedPulling="2025-12-06 07:20:50.357965003 +0000 UTC m=+1412.759353873" lastFinishedPulling="2025-12-06 07:21:32.123707819 +0000 UTC m=+1454.525096689" observedRunningTime="2025-12-06 07:22:14.679208433 +0000 UTC m=+1497.080597303" watchObservedRunningTime="2025-12-06 07:22:14.680402776 +0000 UTC m=+1497.081791646" Dec 06 07:22:15 crc kubenswrapper[4895]: E1206 07:22:15.052885 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached@sha256:dad2336390cae6705133deefaa09c9e39512cf29133aa009006e3962c8022108\\\"\"" pod="openstack/memcached-0" podUID="fba8bc40-d348-4f8f-aeb6-aa2e46d908d6" Dec 06 07:22:15 crc kubenswrapper[4895]: I1206 07:22:15.604066 4895 generic.go:334] "Generic (PLEG): container finished" podID="4abb614a-de81-4c59-8c5b-27e6761f93c9" containerID="c628e12ea50228621f1e41f4485c674a0036ba8ca8c24cb7cfcef246a700dd15" exitCode=0 Dec 06 07:22:15 crc kubenswrapper[4895]: I1206 07:22:15.604165 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4abb614a-de81-4c59-8c5b-27e6761f93c9","Type":"ContainerDied","Data":"c628e12ea50228621f1e41f4485c674a0036ba8ca8c24cb7cfcef246a700dd15"} Dec 06 07:22:15 crc kubenswrapper[4895]: I1206 07:22:15.611254 4895 generic.go:334] "Generic (PLEG): container finished" podID="77d0506d-6222-4204-946d-00cce98ac212" containerID="901a82362dfab6727871904d5dfc6172f99ee2967f942712fc164c74972b6553" exitCode=0 Dec 06 07:22:15 crc kubenswrapper[4895]: I1206 07:22:15.611344 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cwrlp-config-fq48g" event={"ID":"77d0506d-6222-4204-946d-00cce98ac212","Type":"ContainerDied","Data":"901a82362dfab6727871904d5dfc6172f99ee2967f942712fc164c74972b6553"} Dec 06 07:22:16 crc kubenswrapper[4895]: I1206 07:22:16.849040 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-cwrlp" Dec 06 07:22:16 crc kubenswrapper[4895]: I1206 07:22:16.952217 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:16 crc kubenswrapper[4895]: I1206 07:22:16.963898 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.075353 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/77d0506d-6222-4204-946d-00cce98ac212-var-run-ovn\") pod \"77d0506d-6222-4204-946d-00cce98ac212\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.075432 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77d0506d-6222-4204-946d-00cce98ac212-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "77d0506d-6222-4204-946d-00cce98ac212" (UID: "77d0506d-6222-4204-946d-00cce98ac212"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.075518 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/77d0506d-6222-4204-946d-00cce98ac212-var-log-ovn\") pod \"77d0506d-6222-4204-946d-00cce98ac212\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.075557 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlpgd\" (UniqueName: \"kubernetes.io/projected/77d0506d-6222-4204-946d-00cce98ac212-kube-api-access-tlpgd\") pod \"77d0506d-6222-4204-946d-00cce98ac212\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.075635 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/77d0506d-6222-4204-946d-00cce98ac212-additional-scripts\") pod \"77d0506d-6222-4204-946d-00cce98ac212\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.075681 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77d0506d-6222-4204-946d-00cce98ac212-var-run\") pod \"77d0506d-6222-4204-946d-00cce98ac212\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.075721 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77d0506d-6222-4204-946d-00cce98ac212-scripts\") pod \"77d0506d-6222-4204-946d-00cce98ac212\" (UID: \"77d0506d-6222-4204-946d-00cce98ac212\") " Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.076057 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77d0506d-6222-4204-946d-00cce98ac212-var-run" (OuterVolumeSpecName: "var-run") pod "77d0506d-6222-4204-946d-00cce98ac212" (UID: "77d0506d-6222-4204-946d-00cce98ac212"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.076082 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77d0506d-6222-4204-946d-00cce98ac212-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "77d0506d-6222-4204-946d-00cce98ac212" (UID: "77d0506d-6222-4204-946d-00cce98ac212"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.076318 4895 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/77d0506d-6222-4204-946d-00cce98ac212-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.076337 4895 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/77d0506d-6222-4204-946d-00cce98ac212-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.076349 4895 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77d0506d-6222-4204-946d-00cce98ac212-var-run\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.077128 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77d0506d-6222-4204-946d-00cce98ac212-scripts" (OuterVolumeSpecName: "scripts") pod "77d0506d-6222-4204-946d-00cce98ac212" (UID: "77d0506d-6222-4204-946d-00cce98ac212"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.077174 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77d0506d-6222-4204-946d-00cce98ac212-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "77d0506d-6222-4204-946d-00cce98ac212" (UID: "77d0506d-6222-4204-946d-00cce98ac212"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.084697 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77d0506d-6222-4204-946d-00cce98ac212-kube-api-access-tlpgd" (OuterVolumeSpecName: "kube-api-access-tlpgd") pod "77d0506d-6222-4204-946d-00cce98ac212" (UID: "77d0506d-6222-4204-946d-00cce98ac212"). InnerVolumeSpecName "kube-api-access-tlpgd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.178271 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlpgd\" (UniqueName: \"kubernetes.io/projected/77d0506d-6222-4204-946d-00cce98ac212-kube-api-access-tlpgd\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.178322 4895 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/77d0506d-6222-4204-946d-00cce98ac212-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.178336 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77d0506d-6222-4204-946d-00cce98ac212-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.626319 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cwrlp-config-fq48g" event={"ID":"77d0506d-6222-4204-946d-00cce98ac212","Type":"ContainerDied","Data":"04d9f6a605c318ea7a9a7730c6db4d9d31042475ecaebadc32dcbd2100ac1935"} Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.626371 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04d9f6a605c318ea7a9a7730c6db4d9d31042475ecaebadc32dcbd2100ac1935" Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.626370 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cwrlp-config-fq48g" Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.742615 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-cwrlp-config-fq48g"] Dec 06 07:22:17 crc kubenswrapper[4895]: I1206 07:22:17.748830 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-cwrlp-config-fq48g"] Dec 06 07:22:18 crc kubenswrapper[4895]: I1206 07:22:18.060180 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77d0506d-6222-4204-946d-00cce98ac212" path="/var/lib/kubelet/pods/77d0506d-6222-4204-946d-00cce98ac212/volumes" Dec 06 07:22:18 crc kubenswrapper[4895]: I1206 07:22:18.634979 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e963d73b-d3f2-4c70-8dbd-687b3fc1962d","Type":"ContainerStarted","Data":"86a69fa460c2d02239e4fcca0e82a4cac5dc6968a1de1b5762253021bb623d96"} Dec 06 07:22:18 crc kubenswrapper[4895]: I1206 07:22:18.637936 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4abb614a-de81-4c59-8c5b-27e6761f93c9","Type":"ContainerStarted","Data":"ec6651c03e585a366b7286c6c9b9b5a1379defbc33a3578d889176fd4d166811"} Dec 06 07:22:19 crc kubenswrapper[4895]: I1206 07:22:19.652002 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:22:19 crc kubenswrapper[4895]: I1206 07:22:19.695793 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 06 07:22:19 crc kubenswrapper[4895]: I1206 07:22:19.696810 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 06 07:22:19 crc kubenswrapper[4895]: I1206 07:22:19.732167 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=48.332295495 
podStartE2EDuration="1m32.732142449s" podCreationTimestamp="2025-12-06 07:20:47 +0000 UTC" firstStartedPulling="2025-12-06 07:20:49.162683319 +0000 UTC m=+1411.564072189" lastFinishedPulling="2025-12-06 07:21:33.562530273 +0000 UTC m=+1455.963919143" observedRunningTime="2025-12-06 07:22:19.69667576 +0000 UTC m=+1502.098064840" watchObservedRunningTime="2025-12-06 07:22:19.732142449 +0000 UTC m=+1502.133531329" Dec 06 07:22:19 crc kubenswrapper[4895]: I1206 07:22:19.738776 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371942.116024 podStartE2EDuration="1m34.738751186s" podCreationTimestamp="2025-12-06 07:20:45 +0000 UTC" firstStartedPulling="2025-12-06 07:20:48.068023971 +0000 UTC m=+1410.469412841" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:22:19.726809896 +0000 UTC m=+1502.128198766" watchObservedRunningTime="2025-12-06 07:22:19.738751186 +0000 UTC m=+1502.140140056" Dec 06 07:22:20 crc kubenswrapper[4895]: I1206 07:22:20.596660 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56" Dec 06 07:22:20 crc kubenswrapper[4895]: I1206 07:22:20.665296 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cccbd9be-50fa-413b-bb47-1af68ecdda2d","Type":"ContainerStarted","Data":"2ce6192aa1275c19e07c7558055c8aacdb4300950e03766d48c588da1997c632"} Dec 06 07:22:20 crc kubenswrapper[4895]: I1206 07:22:20.665344 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cccbd9be-50fa-413b-bb47-1af68ecdda2d","Type":"ContainerStarted","Data":"c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49"} Dec 06 07:22:20 crc kubenswrapper[4895]: I1206 07:22:20.665896 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 06 07:22:20 crc kubenswrapper[4895]: I1206 07:22:20.694916 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.0760644089999998 podStartE2EDuration="8.694874506s" podCreationTimestamp="2025-12-06 07:22:12 +0000 UTC" firstStartedPulling="2025-12-06 07:22:13.044804308 +0000 UTC m=+1495.446193178" lastFinishedPulling="2025-12-06 07:22:19.663614415 +0000 UTC m=+1502.065003275" observedRunningTime="2025-12-06 07:22:20.687533768 +0000 UTC m=+1503.088922648" watchObservedRunningTime="2025-12-06 07:22:20.694874506 +0000 UTC m=+1503.096263376" Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.067729 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-984c76dd7-t5x8j" Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.139757 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c67bcdbf5-bcp56"] Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.140065 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56" podUID="9502c36e-91ed-4d1e-9c95-220a6c3669ef" containerName="dnsmasq-dns" containerID="cri-o://edd9f1005230a2ea7440c82cfa2aadd4da98c6fa7552cdf01ed8e7c552eb9a13" gracePeriod=10 Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.642947 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56" Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.672680 4895 generic.go:334] "Generic (PLEG): container finished" podID="9502c36e-91ed-4d1e-9c95-220a6c3669ef" containerID="edd9f1005230a2ea7440c82cfa2aadd4da98c6fa7552cdf01ed8e7c552eb9a13" exitCode=0 Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.672753 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56" event={"ID":"9502c36e-91ed-4d1e-9c95-220a6c3669ef","Type":"ContainerDied","Data":"edd9f1005230a2ea7440c82cfa2aadd4da98c6fa7552cdf01ed8e7c552eb9a13"} Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.672811 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56" event={"ID":"9502c36e-91ed-4d1e-9c95-220a6c3669ef","Type":"ContainerDied","Data":"6c1d0e3c512ca8608fcf7b6f5bc1671417fcc2881c8156b3328901eb6a8227f8"} Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.672836 4895 scope.go:117] "RemoveContainer" containerID="edd9f1005230a2ea7440c82cfa2aadd4da98c6fa7552cdf01ed8e7c552eb9a13" Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.673932 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c67bcdbf5-bcp56" Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.699079 4895 scope.go:117] "RemoveContainer" containerID="4519cd44d3880615c3e0df3b27fba21345f0d0e8828003bf6ddddf66628a8ffd" Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.727194 4895 scope.go:117] "RemoveContainer" containerID="edd9f1005230a2ea7440c82cfa2aadd4da98c6fa7552cdf01ed8e7c552eb9a13" Dec 06 07:22:21 crc kubenswrapper[4895]: E1206 07:22:21.728264 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edd9f1005230a2ea7440c82cfa2aadd4da98c6fa7552cdf01ed8e7c552eb9a13\": container with ID starting with edd9f1005230a2ea7440c82cfa2aadd4da98c6fa7552cdf01ed8e7c552eb9a13 not found: ID does not exist" containerID="edd9f1005230a2ea7440c82cfa2aadd4da98c6fa7552cdf01ed8e7c552eb9a13" Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.728319 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edd9f1005230a2ea7440c82cfa2aadd4da98c6fa7552cdf01ed8e7c552eb9a13"} err="failed to get container status \"edd9f1005230a2ea7440c82cfa2aadd4da98c6fa7552cdf01ed8e7c552eb9a13\": rpc error: code = NotFound desc = could not find container \"edd9f1005230a2ea7440c82cfa2aadd4da98c6fa7552cdf01ed8e7c552eb9a13\": container with ID starting with edd9f1005230a2ea7440c82cfa2aadd4da98c6fa7552cdf01ed8e7c552eb9a13 not found: ID does not exist" Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.728348 4895 scope.go:117] "RemoveContainer" containerID="4519cd44d3880615c3e0df3b27fba21345f0d0e8828003bf6ddddf66628a8ffd" Dec 06 07:22:21 crc kubenswrapper[4895]: E1206 07:22:21.729836 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4519cd44d3880615c3e0df3b27fba21345f0d0e8828003bf6ddddf66628a8ffd\": container with ID starting with 4519cd44d3880615c3e0df3b27fba21345f0d0e8828003bf6ddddf66628a8ffd not found: ID does not exist" containerID="4519cd44d3880615c3e0df3b27fba21345f0d0e8828003bf6ddddf66628a8ffd" Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.729875 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4519cd44d3880615c3e0df3b27fba21345f0d0e8828003bf6ddddf66628a8ffd"} err="failed to get container status \"4519cd44d3880615c3e0df3b27fba21345f0d0e8828003bf6ddddf66628a8ffd\": rpc error: code = NotFound desc = could not find container \"4519cd44d3880615c3e0df3b27fba21345f0d0e8828003bf6ddddf66628a8ffd\": container with ID starting with 4519cd44d3880615c3e0df3b27fba21345f0d0e8828003bf6ddddf66628a8ffd not found: ID does not exist" Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.799082 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9502c36e-91ed-4d1e-9c95-220a6c3669ef-ovsdbserver-nb\") pod \"9502c36e-91ed-4d1e-9c95-220a6c3669ef\" (UID: \"9502c36e-91ed-4d1e-9c95-220a6c3669ef\") " Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.799592 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9502c36e-91ed-4d1e-9c95-220a6c3669ef-config\") pod \"9502c36e-91ed-4d1e-9c95-220a6c3669ef\" (UID: \"9502c36e-91ed-4d1e-9c95-220a6c3669ef\") " Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.799837 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lwxw\" (UniqueName: \"kubernetes.io/projected/9502c36e-91ed-4d1e-9c95-220a6c3669ef-kube-api-access-8lwxw\") pod \"9502c36e-91ed-4d1e-9c95-220a6c3669ef\" (UID: \"9502c36e-91ed-4d1e-9c95-220a6c3669ef\") " Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.800000 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9502c36e-91ed-4d1e-9c95-220a6c3669ef-dns-svc\") pod \"9502c36e-91ed-4d1e-9c95-220a6c3669ef\" (UID: \"9502c36e-91ed-4d1e-9c95-220a6c3669ef\") " Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.806059 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9502c36e-91ed-4d1e-9c95-220a6c3669ef-kube-api-access-8lwxw" (OuterVolumeSpecName: "kube-api-access-8lwxw") pod "9502c36e-91ed-4d1e-9c95-220a6c3669ef" (UID: "9502c36e-91ed-4d1e-9c95-220a6c3669ef"). InnerVolumeSpecName "kube-api-access-8lwxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.863119 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9502c36e-91ed-4d1e-9c95-220a6c3669ef-config" (OuterVolumeSpecName: "config") pod "9502c36e-91ed-4d1e-9c95-220a6c3669ef" (UID: "9502c36e-91ed-4d1e-9c95-220a6c3669ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.871936 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9502c36e-91ed-4d1e-9c95-220a6c3669ef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9502c36e-91ed-4d1e-9c95-220a6c3669ef" (UID: "9502c36e-91ed-4d1e-9c95-220a6c3669ef"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.893069 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9502c36e-91ed-4d1e-9c95-220a6c3669ef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9502c36e-91ed-4d1e-9c95-220a6c3669ef" (UID: "9502c36e-91ed-4d1e-9c95-220a6c3669ef"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.903010 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lwxw\" (UniqueName: \"kubernetes.io/projected/9502c36e-91ed-4d1e-9c95-220a6c3669ef-kube-api-access-8lwxw\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.903058 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9502c36e-91ed-4d1e-9c95-220a6c3669ef-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.903072 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9502c36e-91ed-4d1e-9c95-220a6c3669ef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:21 crc kubenswrapper[4895]: I1206 07:22:21.903084 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9502c36e-91ed-4d1e-9c95-220a6c3669ef-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:22 crc kubenswrapper[4895]: I1206 07:22:22.010312 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c67bcdbf5-bcp56"] Dec 06 07:22:22 crc kubenswrapper[4895]: I1206 07:22:22.021084 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c67bcdbf5-bcp56"] Dec 06 07:22:22 crc kubenswrapper[4895]: I1206 07:22:22.066327 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9502c36e-91ed-4d1e-9c95-220a6c3669ef" path="/var/lib/kubelet/pods/9502c36e-91ed-4d1e-9c95-220a6c3669ef/volumes" Dec 06 07:22:22 crc kubenswrapper[4895]: I1206 07:22:22.520456 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 06 07:22:22 crc kubenswrapper[4895]: I1206 07:22:22.600405 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 06 07:22:26 crc kubenswrapper[4895]: I1206 07:22:26.965670 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 06 07:22:28 crc kubenswrapper[4895]: I1206 07:22:28.448287 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 06 07:22:28 crc kubenswrapper[4895]: I1206 07:22:28.448900 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 06 07:22:28 crc kubenswrapper[4895]: I1206 07:22:28.735350 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6","Type":"ContainerStarted","Data":"9d2fe9419d1d71a0bda4ee42e39fc68443c6fa554ca6ce504dec36d80c892830"} Dec 06 07:22:28 crc kubenswrapper[4895]: I1206 07:22:28.735619 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 06 07:22:28 crc kubenswrapper[4895]: I1206 07:22:28.757370 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 06 07:22:28 crc kubenswrapper[4895]: I1206 07:22:28.767300 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.786895445 podStartE2EDuration="1m39.767286703s" podCreationTimestamp="2025-12-06 07:20:49 +0000 UTC" firstStartedPulling="2025-12-06 07:20:50.620073568 +0000 UTC m=+1413.021462438" 
lastFinishedPulling="2025-12-06 07:22:27.600464826 +0000 UTC m=+1510.001853696" observedRunningTime="2025-12-06 07:22:28.761528929 +0000 UTC m=+1511.162917799" watchObservedRunningTime="2025-12-06 07:22:28.767286703 +0000 UTC m=+1511.168675573" Dec 06 07:22:28 crc kubenswrapper[4895]: I1206 07:22:28.831294 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.548082 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-nv6m9"] Dec 06 07:22:29 crc kubenswrapper[4895]: E1206 07:22:29.548597 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9502c36e-91ed-4d1e-9c95-220a6c3669ef" containerName="init" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.548612 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9502c36e-91ed-4d1e-9c95-220a6c3669ef" containerName="init" Dec 06 07:22:29 crc kubenswrapper[4895]: E1206 07:22:29.548626 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9502c36e-91ed-4d1e-9c95-220a6c3669ef" containerName="dnsmasq-dns" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.548634 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9502c36e-91ed-4d1e-9c95-220a6c3669ef" containerName="dnsmasq-dns" Dec 06 07:22:29 crc kubenswrapper[4895]: E1206 07:22:29.548643 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77d0506d-6222-4204-946d-00cce98ac212" containerName="ovn-config" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.548651 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="77d0506d-6222-4204-946d-00cce98ac212" containerName="ovn-config" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.548881 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="77d0506d-6222-4204-946d-00cce98ac212" containerName="ovn-config" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.548907 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="9502c36e-91ed-4d1e-9c95-220a6c3669ef" containerName="dnsmasq-dns" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.549652 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nv6m9" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.557832 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-10a2-account-create-update-4nnhv"] Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.559485 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-10a2-account-create-update-4nnhv" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.561713 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.566716 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-10a2-account-create-update-4nnhv"] Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.584865 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nv6m9"] Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.638979 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8af04787-363a-4359-b668-61407d87da63-operator-scripts\") pod \"keystone-10a2-account-create-update-4nnhv\" (UID: \"8af04787-363a-4359-b668-61407d87da63\") " pod="openstack/keystone-10a2-account-create-update-4nnhv" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.639073 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzfx4\" (UniqueName: \"kubernetes.io/projected/8af04787-363a-4359-b668-61407d87da63-kube-api-access-xzfx4\") pod \"keystone-10a2-account-create-update-4nnhv\" (UID: \"8af04787-363a-4359-b668-61407d87da63\") " pod="openstack/keystone-10a2-account-create-update-4nnhv" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.639128 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bphbt\" (UniqueName: \"kubernetes.io/projected/693e844e-e267-4ca9-b338-f5c4c709067f-kube-api-access-bphbt\") pod \"keystone-db-create-nv6m9\" (UID: \"693e844e-e267-4ca9-b338-f5c4c709067f\") " pod="openstack/keystone-db-create-nv6m9" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.639196 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/693e844e-e267-4ca9-b338-f5c4c709067f-operator-scripts\") pod \"keystone-db-create-nv6m9\" (UID: \"693e844e-e267-4ca9-b338-f5c4c709067f\") " pod="openstack/keystone-db-create-nv6m9" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.696162 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.696221 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.740322 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8af04787-363a-4359-b668-61407d87da63-operator-scripts\") pod \"keystone-10a2-account-create-update-4nnhv\" (UID: \"8af04787-363a-4359-b668-61407d87da63\") " pod="openstack/keystone-10a2-account-create-update-4nnhv" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.740644 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xzfx4\" (UniqueName: \"kubernetes.io/projected/8af04787-363a-4359-b668-61407d87da63-kube-api-access-xzfx4\") pod \"keystone-10a2-account-create-update-4nnhv\" (UID: \"8af04787-363a-4359-b668-61407d87da63\") " pod="openstack/keystone-10a2-account-create-update-4nnhv" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.740679 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bphbt\" (UniqueName: \"kubernetes.io/projected/693e844e-e267-4ca9-b338-f5c4c709067f-kube-api-access-bphbt\") pod \"keystone-db-create-nv6m9\" (UID: \"693e844e-e267-4ca9-b338-f5c4c709067f\") " pod="openstack/keystone-db-create-nv6m9" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.740732 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/693e844e-e267-4ca9-b338-f5c4c709067f-operator-scripts\") pod \"keystone-db-create-nv6m9\" (UID: \"693e844e-e267-4ca9-b338-f5c4c709067f\") " pod="openstack/keystone-db-create-nv6m9" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.741224 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8af04787-363a-4359-b668-61407d87da63-operator-scripts\") pod \"keystone-10a2-account-create-update-4nnhv\" (UID: \"8af04787-363a-4359-b668-61407d87da63\") " pod="openstack/keystone-10a2-account-create-update-4nnhv" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.741385 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/693e844e-e267-4ca9-b338-f5c4c709067f-operator-scripts\") pod \"keystone-db-create-nv6m9\" (UID: \"693e844e-e267-4ca9-b338-f5c4c709067f\") " pod="openstack/keystone-db-create-nv6m9" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.754974 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-7zmvs"] Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.756189 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-7zmvs" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.764813 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bphbt\" (UniqueName: \"kubernetes.io/projected/693e844e-e267-4ca9-b338-f5c4c709067f-kube-api-access-bphbt\") pod \"keystone-db-create-nv6m9\" (UID: \"693e844e-e267-4ca9-b338-f5c4c709067f\") " pod="openstack/keystone-db-create-nv6m9" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.766373 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzfx4\" (UniqueName: \"kubernetes.io/projected/8af04787-363a-4359-b668-61407d87da63-kube-api-access-xzfx4\") pod \"keystone-10a2-account-create-update-4nnhv\" (UID: \"8af04787-363a-4359-b668-61407d87da63\") " pod="openstack/keystone-10a2-account-create-update-4nnhv" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.784430 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7zmvs"] Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.842908 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75e09ed8-5a57-4021-bdf8-f4e260aabac9-operator-scripts\") pod \"placement-db-create-7zmvs\" (UID: \"75e09ed8-5a57-4021-bdf8-f4e260aabac9\") " pod="openstack/placement-db-create-7zmvs" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.843297 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spfdf\" (UniqueName: \"kubernetes.io/projected/75e09ed8-5a57-4021-bdf8-f4e260aabac9-kube-api-access-spfdf\") pod \"placement-db-create-7zmvs\" (UID: \"75e09ed8-5a57-4021-bdf8-f4e260aabac9\") " pod="openstack/placement-db-create-7zmvs" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.877846 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-00ad-account-create-update-qswlt"] Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.879288 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-00ad-account-create-update-qswlt" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.881761 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.883794 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-00ad-account-create-update-qswlt"] Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.910670 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nv6m9" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.923185 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-10a2-account-create-update-4nnhv" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.945390 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/946d5931-677d-44a2-8fa9-ab695a62ce1e-operator-scripts\") pod \"placement-00ad-account-create-update-qswlt\" (UID: \"946d5931-677d-44a2-8fa9-ab695a62ce1e\") " pod="openstack/placement-00ad-account-create-update-qswlt" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.945907 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spfdf\" (UniqueName: \"kubernetes.io/projected/75e09ed8-5a57-4021-bdf8-f4e260aabac9-kube-api-access-spfdf\") pod \"placement-db-create-7zmvs\" (UID: \"75e09ed8-5a57-4021-bdf8-f4e260aabac9\") " pod="openstack/placement-db-create-7zmvs" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.946067 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rn8x\" (UniqueName: \"kubernetes.io/projected/946d5931-677d-44a2-8fa9-ab695a62ce1e-kube-api-access-9rn8x\") pod \"placement-00ad-account-create-update-qswlt\" (UID: \"946d5931-677d-44a2-8fa9-ab695a62ce1e\") " pod="openstack/placement-00ad-account-create-update-qswlt" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.946216 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75e09ed8-5a57-4021-bdf8-f4e260aabac9-operator-scripts\") pod \"placement-db-create-7zmvs\" (UID: \"75e09ed8-5a57-4021-bdf8-f4e260aabac9\") " pod="openstack/placement-db-create-7zmvs" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.946995 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75e09ed8-5a57-4021-bdf8-f4e260aabac9-operator-scripts\") pod \"placement-db-create-7zmvs\" (UID: \"75e09ed8-5a57-4021-bdf8-f4e260aabac9\") " pod="openstack/placement-db-create-7zmvs" Dec 06 07:22:29 crc kubenswrapper[4895]: I1206 07:22:29.962082 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spfdf\" (UniqueName: \"kubernetes.io/projected/75e09ed8-5a57-4021-bdf8-f4e260aabac9-kube-api-access-spfdf\") pod \"placement-db-create-7zmvs\" (UID: \"75e09ed8-5a57-4021-bdf8-f4e260aabac9\") " pod="openstack/placement-db-create-7zmvs" Dec 06 07:22:30 crc kubenswrapper[4895]: I1206 07:22:30.049409 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rn8x\" (UniqueName: \"kubernetes.io/projected/946d5931-677d-44a2-8fa9-ab695a62ce1e-kube-api-access-9rn8x\") pod \"placement-00ad-account-create-update-qswlt\" (UID: \"946d5931-677d-44a2-8fa9-ab695a62ce1e\") " pod="openstack/placement-00ad-account-create-update-qswlt" Dec 06 07:22:30 crc kubenswrapper[4895]: I1206 07:22:30.049810 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/946d5931-677d-44a2-8fa9-ab695a62ce1e-operator-scripts\") pod \"placement-00ad-account-create-update-qswlt\" (UID: \"946d5931-677d-44a2-8fa9-ab695a62ce1e\") " pod="openstack/placement-00ad-account-create-update-qswlt" Dec 06 07:22:30 crc kubenswrapper[4895]: I1206 07:22:30.050563 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/946d5931-677d-44a2-8fa9-ab695a62ce1e-operator-scripts\") pod \"placement-00ad-account-create-update-qswlt\" (UID: \"946d5931-677d-44a2-8fa9-ab695a62ce1e\") " pod="openstack/placement-00ad-account-create-update-qswlt" Dec 06 07:22:30 crc kubenswrapper[4895]: I1206 07:22:30.092775 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rn8x\" (UniqueName: \"kubernetes.io/projected/946d5931-677d-44a2-8fa9-ab695a62ce1e-kube-api-access-9rn8x\") pod \"placement-00ad-account-create-update-qswlt\" (UID: \"946d5931-677d-44a2-8fa9-ab695a62ce1e\") " pod="openstack/placement-00ad-account-create-update-qswlt" Dec 06 07:22:30 crc kubenswrapper[4895]: I1206 07:22:30.170252 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7zmvs" Dec 06 07:22:30 crc kubenswrapper[4895]: I1206 07:22:30.197116 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-00ad-account-create-update-qswlt" Dec 06 07:22:30 crc kubenswrapper[4895]: I1206 07:22:30.384962 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nv6m9"] Dec 06 07:22:30 crc kubenswrapper[4895]: W1206 07:22:30.457603 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8af04787_363a_4359_b668_61407d87da63.slice/crio-579c87dfc22fac471635a5ea85ac9117e6ea9cab4d6299072e1da0ded66c8114 WatchSource:0}: Error finding container 579c87dfc22fac471635a5ea85ac9117e6ea9cab4d6299072e1da0ded66c8114: Status 404 returned error can't find the container with id 579c87dfc22fac471635a5ea85ac9117e6ea9cab4d6299072e1da0ded66c8114 Dec 06 07:22:30 crc kubenswrapper[4895]: I1206 07:22:30.459199 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-10a2-account-create-update-4nnhv"] Dec 06 07:22:30 crc kubenswrapper[4895]: I1206 07:22:30.744319 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-00ad-account-create-update-qswlt"] Dec 06 07:22:30 crc kubenswrapper[4895]: I1206 07:22:30.754350 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7zmvs"] Dec 06 07:22:30 crc kubenswrapper[4895]: W1206 07:22:30.762646 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75e09ed8_5a57_4021_bdf8_f4e260aabac9.slice/crio-0dbcf87765828dfe113cb6c3ba9b44cd8c923977b6461c72637b939780e6fde7 WatchSource:0}: Error finding container 0dbcf87765828dfe113cb6c3ba9b44cd8c923977b6461c72637b939780e6fde7: Status 404 returned error can't find the container with id 0dbcf87765828dfe113cb6c3ba9b44cd8c923977b6461c72637b939780e6fde7 Dec 06 07:22:30 crc kubenswrapper[4895]: I1206 07:22:30.780466 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nv6m9" event={"ID":"693e844e-e267-4ca9-b338-f5c4c709067f","Type":"ContainerStarted","Data":"98b44f0f1e2a073a15d9551f71f5c355998b03a99d9436ed949aa6d7fed91a0d"} Dec 06 07:22:30 crc kubenswrapper[4895]: I1206 07:22:30.780557 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nv6m9" event={"ID":"693e844e-e267-4ca9-b338-f5c4c709067f","Type":"ContainerStarted","Data":"10bae739599816230e351bc0b451d10f378d41b2157833fd98b8167b7d898d56"} Dec 06 07:22:30 crc kubenswrapper[4895]: I1206 07:22:30.785994 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-10a2-account-create-update-4nnhv" event={"ID":"8af04787-363a-4359-b668-61407d87da63","Type":"ContainerStarted","Data":"579c87dfc22fac471635a5ea85ac9117e6ea9cab4d6299072e1da0ded66c8114"} Dec 06 07:22:30 crc kubenswrapper[4895]: I1206 07:22:30.824794 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-nv6m9" podStartSLOduration=1.824774608 podStartE2EDuration="1.824774608s" podCreationTimestamp="2025-12-06 07:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:22:30.817699488 +0000 UTC m=+1513.219088378" watchObservedRunningTime="2025-12-06 07:22:30.824774608 +0000 UTC m=+1513.226163478" Dec 06 07:22:30 crc kubenswrapper[4895]: I1206 07:22:30.857950 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-10a2-account-create-update-4nnhv" podStartSLOduration=1.857924035 podStartE2EDuration="1.857924035s" podCreationTimestamp="2025-12-06 07:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:22:30.845938125 +0000 UTC m=+1513.247327015" watchObservedRunningTime="2025-12-06 07:22:30.857924035 +0000 UTC m=+1513.259312905" Dec 06 07:22:31 crc kubenswrapper[4895]: I1206 07:22:31.799629 4895 generic.go:334] "Generic (PLEG): container finished" podID="946d5931-677d-44a2-8fa9-ab695a62ce1e" containerID="8acd647173ea9348de51669e8f55bf35c74c189bbfe0a82f8badbb80b5baef39" exitCode=0 Dec 06 07:22:31 crc kubenswrapper[4895]: I1206 07:22:31.799734 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-00ad-account-create-update-qswlt" event={"ID":"946d5931-677d-44a2-8fa9-ab695a62ce1e","Type":"ContainerDied","Data":"8acd647173ea9348de51669e8f55bf35c74c189bbfe0a82f8badbb80b5baef39"} Dec 06 07:22:31 crc kubenswrapper[4895]: I1206 07:22:31.800080 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-00ad-account-create-update-qswlt" event={"ID":"946d5931-677d-44a2-8fa9-ab695a62ce1e","Type":"ContainerStarted","Data":"aab1eaa2948ae18cac508da61c5c35fd8c10222cd453ac996b935fd1022ba000"} Dec 06 07:22:31 crc kubenswrapper[4895]: I1206 07:22:31.802623 4895 generic.go:334] "Generic (PLEG): container finished" podID="693e844e-e267-4ca9-b338-f5c4c709067f" containerID="98b44f0f1e2a073a15d9551f71f5c355998b03a99d9436ed949aa6d7fed91a0d" exitCode=0 Dec 06 07:22:31 crc kubenswrapper[4895]: I1206 07:22:31.802777 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nv6m9" event={"ID":"693e844e-e267-4ca9-b338-f5c4c709067f","Type":"ContainerDied","Data":"98b44f0f1e2a073a15d9551f71f5c355998b03a99d9436ed949aa6d7fed91a0d"} Dec 06 07:22:31 crc kubenswrapper[4895]: I1206 07:22:31.804727 4895 generic.go:334] "Generic (PLEG): container finished" podID="75e09ed8-5a57-4021-bdf8-f4e260aabac9" containerID="33d31e9aedcfefe23175b6c6b234217c7c44b103b5a898c80f6cefb0800cc0d3" exitCode=0 Dec 06 07:22:31 crc kubenswrapper[4895]: I1206 07:22:31.804783 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7zmvs" event={"ID":"75e09ed8-5a57-4021-bdf8-f4e260aabac9","Type":"ContainerDied","Data":"33d31e9aedcfefe23175b6c6b234217c7c44b103b5a898c80f6cefb0800cc0d3"} Dec 06 07:22:31 crc kubenswrapper[4895]: I1206 07:22:31.804801 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-create-7zmvs" event={"ID":"75e09ed8-5a57-4021-bdf8-f4e260aabac9","Type":"ContainerStarted","Data":"0dbcf87765828dfe113cb6c3ba9b44cd8c923977b6461c72637b939780e6fde7"} Dec 06 07:22:31 crc kubenswrapper[4895]: I1206 07:22:31.806261 4895 generic.go:334] "Generic (PLEG): container finished" podID="8af04787-363a-4359-b668-61407d87da63" containerID="b6998f94fefd3637ae3fc950603c2fcc0900a3d69d0e7e9061355de8045469a1" exitCode=0 Dec 06 07:22:31 crc kubenswrapper[4895]: I1206 07:22:31.806290 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-10a2-account-create-update-4nnhv" event={"ID":"8af04787-363a-4359-b668-61407d87da63","Type":"ContainerDied","Data":"b6998f94fefd3637ae3fc950603c2fcc0900a3d69d0e7e9061355de8045469a1"} Dec 06 07:22:32 crc kubenswrapper[4895]: I1206 07:22:32.562174 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.281789 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nv6m9" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.289309 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7zmvs" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.364914 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75e09ed8-5a57-4021-bdf8-f4e260aabac9-operator-scripts\") pod \"75e09ed8-5a57-4021-bdf8-f4e260aabac9\" (UID: \"75e09ed8-5a57-4021-bdf8-f4e260aabac9\") " Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.364972 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spfdf\" (UniqueName: \"kubernetes.io/projected/75e09ed8-5a57-4021-bdf8-f4e260aabac9-kube-api-access-spfdf\") pod \"75e09ed8-5a57-4021-bdf8-f4e260aabac9\" (UID: \"75e09ed8-5a57-4021-bdf8-f4e260aabac9\") " Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.365018 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bphbt\" (UniqueName: \"kubernetes.io/projected/693e844e-e267-4ca9-b338-f5c4c709067f-kube-api-access-bphbt\") pod \"693e844e-e267-4ca9-b338-f5c4c709067f\" (UID: \"693e844e-e267-4ca9-b338-f5c4c709067f\") " Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.365130 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/693e844e-e267-4ca9-b338-f5c4c709067f-operator-scripts\") pod \"693e844e-e267-4ca9-b338-f5c4c709067f\" (UID: \"693e844e-e267-4ca9-b338-f5c4c709067f\") " Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.365738 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e09ed8-5a57-4021-bdf8-f4e260aabac9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75e09ed8-5a57-4021-bdf8-f4e260aabac9" (UID: "75e09ed8-5a57-4021-bdf8-f4e260aabac9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.366375 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/693e844e-e267-4ca9-b338-f5c4c709067f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "693e844e-e267-4ca9-b338-f5c4c709067f" (UID: "693e844e-e267-4ca9-b338-f5c4c709067f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.372677 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/693e844e-e267-4ca9-b338-f5c4c709067f-kube-api-access-bphbt" (OuterVolumeSpecName: "kube-api-access-bphbt") pod "693e844e-e267-4ca9-b338-f5c4c709067f" (UID: "693e844e-e267-4ca9-b338-f5c4c709067f"). InnerVolumeSpecName "kube-api-access-bphbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.372710 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75e09ed8-5a57-4021-bdf8-f4e260aabac9-kube-api-access-spfdf" (OuterVolumeSpecName: "kube-api-access-spfdf") pod "75e09ed8-5a57-4021-bdf8-f4e260aabac9" (UID: "75e09ed8-5a57-4021-bdf8-f4e260aabac9"). InnerVolumeSpecName "kube-api-access-spfdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.466548 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75e09ed8-5a57-4021-bdf8-f4e260aabac9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.466856 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spfdf\" (UniqueName: \"kubernetes.io/projected/75e09ed8-5a57-4021-bdf8-f4e260aabac9-kube-api-access-spfdf\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.466871 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bphbt\" (UniqueName: \"kubernetes.io/projected/693e844e-e267-4ca9-b338-f5c4c709067f-kube-api-access-bphbt\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.466880 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/693e844e-e267-4ca9-b338-f5c4c709067f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.485339 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-10a2-account-create-update-4nnhv" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.493755 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-00ad-account-create-update-qswlt" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.567735 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rn8x\" (UniqueName: \"kubernetes.io/projected/946d5931-677d-44a2-8fa9-ab695a62ce1e-kube-api-access-9rn8x\") pod \"946d5931-677d-44a2-8fa9-ab695a62ce1e\" (UID: \"946d5931-677d-44a2-8fa9-ab695a62ce1e\") " Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.567797 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/946d5931-677d-44a2-8fa9-ab695a62ce1e-operator-scripts\") pod \"946d5931-677d-44a2-8fa9-ab695a62ce1e\" (UID: \"946d5931-677d-44a2-8fa9-ab695a62ce1e\") " Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.567832 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzfx4\" (UniqueName: \"kubernetes.io/projected/8af04787-363a-4359-b668-61407d87da63-kube-api-access-xzfx4\") pod \"8af04787-363a-4359-b668-61407d87da63\" (UID: \"8af04787-363a-4359-b668-61407d87da63\") " Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.567883 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8af04787-363a-4359-b668-61407d87da63-operator-scripts\") pod \"8af04787-363a-4359-b668-61407d87da63\" (UID: \"8af04787-363a-4359-b668-61407d87da63\") " Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.568260 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/946d5931-677d-44a2-8fa9-ab695a62ce1e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "946d5931-677d-44a2-8fa9-ab695a62ce1e" (UID: "946d5931-677d-44a2-8fa9-ab695a62ce1e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.568551 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8af04787-363a-4359-b668-61407d87da63-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8af04787-363a-4359-b668-61407d87da63" (UID: "8af04787-363a-4359-b668-61407d87da63"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.572523 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af04787-363a-4359-b668-61407d87da63-kube-api-access-xzfx4" (OuterVolumeSpecName: "kube-api-access-xzfx4") pod "8af04787-363a-4359-b668-61407d87da63" (UID: "8af04787-363a-4359-b668-61407d87da63"). InnerVolumeSpecName "kube-api-access-xzfx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.572700 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/946d5931-677d-44a2-8fa9-ab695a62ce1e-kube-api-access-9rn8x" (OuterVolumeSpecName: "kube-api-access-9rn8x") pod "946d5931-677d-44a2-8fa9-ab695a62ce1e" (UID: "946d5931-677d-44a2-8fa9-ab695a62ce1e"). InnerVolumeSpecName "kube-api-access-9rn8x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.670095 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rn8x\" (UniqueName: \"kubernetes.io/projected/946d5931-677d-44a2-8fa9-ab695a62ce1e-kube-api-access-9rn8x\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.670132 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/946d5931-677d-44a2-8fa9-ab695a62ce1e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.670142 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzfx4\" (UniqueName: \"kubernetes.io/projected/8af04787-363a-4359-b668-61407d87da63-kube-api-access-xzfx4\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.670150 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8af04787-363a-4359-b668-61407d87da63-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.821648 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7zmvs" event={"ID":"75e09ed8-5a57-4021-bdf8-f4e260aabac9","Type":"ContainerDied","Data":"0dbcf87765828dfe113cb6c3ba9b44cd8c923977b6461c72637b939780e6fde7"} Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.821719 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dbcf87765828dfe113cb6c3ba9b44cd8c923977b6461c72637b939780e6fde7" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.821674 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7zmvs" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.823101 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-10a2-account-create-update-4nnhv" event={"ID":"8af04787-363a-4359-b668-61407d87da63","Type":"ContainerDied","Data":"579c87dfc22fac471635a5ea85ac9117e6ea9cab4d6299072e1da0ded66c8114"} Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.823134 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="579c87dfc22fac471635a5ea85ac9117e6ea9cab4d6299072e1da0ded66c8114" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.823169 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-10a2-account-create-update-4nnhv" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.825305 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-00ad-account-create-update-qswlt" event={"ID":"946d5931-677d-44a2-8fa9-ab695a62ce1e","Type":"ContainerDied","Data":"aab1eaa2948ae18cac508da61c5c35fd8c10222cd453ac996b935fd1022ba000"} Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.825331 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aab1eaa2948ae18cac508da61c5c35fd8c10222cd453ac996b935fd1022ba000" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.825374 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-00ad-account-create-update-qswlt" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.838413 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nv6m9" event={"ID":"693e844e-e267-4ca9-b338-f5c4c709067f","Type":"ContainerDied","Data":"10bae739599816230e351bc0b451d10f378d41b2157833fd98b8167b7d898d56"} Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.838460 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10bae739599816230e351bc0b451d10f378d41b2157833fd98b8167b7d898d56" Dec 06 07:22:33 crc kubenswrapper[4895]: I1206 07:22:33.838542 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nv6m9" Dec 06 07:22:34 crc kubenswrapper[4895]: I1206 07:22:34.851942 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.074960 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-7l2zn"] Dec 06 07:22:35 crc kubenswrapper[4895]: E1206 07:22:35.075496 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946d5931-677d-44a2-8fa9-ab695a62ce1e" containerName="mariadb-account-create-update" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.075508 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="946d5931-677d-44a2-8fa9-ab695a62ce1e" containerName="mariadb-account-create-update" Dec 06 07:22:35 crc kubenswrapper[4895]: E1206 07:22:35.075530 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75e09ed8-5a57-4021-bdf8-f4e260aabac9" containerName="mariadb-database-create" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.075537 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="75e09ed8-5a57-4021-bdf8-f4e260aabac9" containerName="mariadb-database-create" Dec 06 07:22:35 crc kubenswrapper[4895]: E1206 07:22:35.075549 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af04787-363a-4359-b668-61407d87da63" containerName="mariadb-account-create-update" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.075556 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af04787-363a-4359-b668-61407d87da63" containerName="mariadb-account-create-update" Dec 06 07:22:35 crc kubenswrapper[4895]: E1206 07:22:35.075567 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="693e844e-e267-4ca9-b338-f5c4c709067f" containerName="mariadb-database-create" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.075574 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="693e844e-e267-4ca9-b338-f5c4c709067f" containerName="mariadb-database-create" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.075716 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="946d5931-677d-44a2-8fa9-ab695a62ce1e" containerName="mariadb-account-create-update" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.075730 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="693e844e-e267-4ca9-b338-f5c4c709067f" containerName="mariadb-database-create" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.075741 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="75e09ed8-5a57-4021-bdf8-f4e260aabac9" containerName="mariadb-database-create" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.075756 4895 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="8af04787-363a-4359-b668-61407d87da63" containerName="mariadb-account-create-update" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.076236 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7l2zn" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.092014 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7l2zn"] Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.170181 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-vtjtg"] Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.171765 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vtjtg" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.177587 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-55c9-account-create-update-g9g8t"] Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.178748 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-55c9-account-create-update-g9g8t" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.184701 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.193449 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad446d6d-ed17-4ce4-8cae-0d570aa84483-operator-scripts\") pod \"cinder-db-create-7l2zn\" (UID: \"ad446d6d-ed17-4ce4-8cae-0d570aa84483\") " pod="openstack/cinder-db-create-7l2zn" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.193521 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2cpq\" (UniqueName: \"kubernetes.io/projected/ad446d6d-ed17-4ce4-8cae-0d570aa84483-kube-api-access-f2cpq\") pod \"cinder-db-create-7l2zn\" (UID: \"ad446d6d-ed17-4ce4-8cae-0d570aa84483\") " pod="openstack/cinder-db-create-7l2zn" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.194894 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vtjtg"] Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.217333 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-55c9-account-create-update-g9g8t"] Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.294803 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db70-account-create-update-7v5lv"] Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.295826 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef61367b-1e67-4933-9dad-c02352f97789-operator-scripts\") pod \"glance-55c9-account-create-update-g9g8t\" (UID: \"ef61367b-1e67-4933-9dad-c02352f97789\") " pod="openstack/glance-55c9-account-create-update-g9g8t" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.295912 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cdfee21-9315-4df0-9ac9-7f02483a05e3-operator-scripts\") pod \"glance-db-create-vtjtg\" (UID: \"6cdfee21-9315-4df0-9ac9-7f02483a05e3\") " pod="openstack/glance-db-create-vtjtg" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.295959 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgqs6\" (UniqueName: \"kubernetes.io/projected/ef61367b-1e67-4933-9dad-c02352f97789-kube-api-access-xgqs6\") pod \"glance-55c9-account-create-update-g9g8t\" (UID: \"ef61367b-1e67-4933-9dad-c02352f97789\") " pod="openstack/glance-55c9-account-create-update-g9g8t" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.296021 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad446d6d-ed17-4ce4-8cae-0d570aa84483-operator-scripts\") pod \"cinder-db-create-7l2zn\" (UID: \"ad446d6d-ed17-4ce4-8cae-0d570aa84483\") " pod="openstack/cinder-db-create-7l2zn" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.296060 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2cpq\" (UniqueName: \"kubernetes.io/projected/ad446d6d-ed17-4ce4-8cae-0d570aa84483-kube-api-access-f2cpq\") pod \"cinder-db-create-7l2zn\" (UID: \"ad446d6d-ed17-4ce4-8cae-0d570aa84483\") " pod="openstack/cinder-db-create-7l2zn" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.296088 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6x9p\" (UniqueName: \"kubernetes.io/projected/6cdfee21-9315-4df0-9ac9-7f02483a05e3-kube-api-access-z6x9p\") pod \"glance-db-create-vtjtg\" (UID: \"6cdfee21-9315-4df0-9ac9-7f02483a05e3\") " pod="openstack/glance-db-create-vtjtg" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.296120 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db70-account-create-update-7v5lv" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.296953 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad446d6d-ed17-4ce4-8cae-0d570aa84483-operator-scripts\") pod \"cinder-db-create-7l2zn\" (UID: \"ad446d6d-ed17-4ce4-8cae-0d570aa84483\") " pod="openstack/cinder-db-create-7l2zn" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.298349 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.335553 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db70-account-create-update-7v5lv"] Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.336137 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2cpq\" (UniqueName: \"kubernetes.io/projected/ad446d6d-ed17-4ce4-8cae-0d570aa84483-kube-api-access-f2cpq\") pod \"cinder-db-create-7l2zn\" (UID: \"ad446d6d-ed17-4ce4-8cae-0d570aa84483\") " pod="openstack/cinder-db-create-7l2zn" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.391874 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7l2zn" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.397915 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef61367b-1e67-4933-9dad-c02352f97789-operator-scripts\") pod \"glance-55c9-account-create-update-g9g8t\" (UID: \"ef61367b-1e67-4933-9dad-c02352f97789\") " pod="openstack/glance-55c9-account-create-update-g9g8t" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.397990 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cdfee21-9315-4df0-9ac9-7f02483a05e3-operator-scripts\") pod \"glance-db-create-vtjtg\" (UID: \"6cdfee21-9315-4df0-9ac9-7f02483a05e3\") " pod="openstack/glance-db-create-vtjtg" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.398019 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c822196a-b31e-4f00-9f88-8749f5394fc2-operator-scripts\") pod \"cinder-db70-account-create-update-7v5lv\" (UID: \"c822196a-b31e-4f00-9f88-8749f5394fc2\") " pod="openstack/cinder-db70-account-create-update-7v5lv" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.398070 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgqs6\" (UniqueName: \"kubernetes.io/projected/ef61367b-1e67-4933-9dad-c02352f97789-kube-api-access-xgqs6\") pod \"glance-55c9-account-create-update-g9g8t\" (UID: \"ef61367b-1e67-4933-9dad-c02352f97789\") " pod="openstack/glance-55c9-account-create-update-g9g8t" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.398116 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf2bt\" (UniqueName: \"kubernetes.io/projected/c822196a-b31e-4f00-9f88-8749f5394fc2-kube-api-access-vf2bt\") pod \"cinder-db70-account-create-update-7v5lv\" (UID: \"c822196a-b31e-4f00-9f88-8749f5394fc2\") " pod="openstack/cinder-db70-account-create-update-7v5lv" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.398143 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6x9p\" (UniqueName: \"kubernetes.io/projected/6cdfee21-9315-4df0-9ac9-7f02483a05e3-kube-api-access-z6x9p\") pod \"glance-db-create-vtjtg\" (UID: \"6cdfee21-9315-4df0-9ac9-7f02483a05e3\") " pod="openstack/glance-db-create-vtjtg" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.399143 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cdfee21-9315-4df0-9ac9-7f02483a05e3-operator-scripts\") pod \"glance-db-create-vtjtg\" (UID: \"6cdfee21-9315-4df0-9ac9-7f02483a05e3\") " pod="openstack/glance-db-create-vtjtg" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.399323 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef61367b-1e67-4933-9dad-c02352f97789-operator-scripts\") pod \"glance-55c9-account-create-update-g9g8t\" (UID: \"ef61367b-1e67-4933-9dad-c02352f97789\") " pod="openstack/glance-55c9-account-create-update-g9g8t" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.420500 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6x9p\" (UniqueName: 
\"kubernetes.io/projected/6cdfee21-9315-4df0-9ac9-7f02483a05e3-kube-api-access-z6x9p\") pod \"glance-db-create-vtjtg\" (UID: \"6cdfee21-9315-4df0-9ac9-7f02483a05e3\") " pod="openstack/glance-db-create-vtjtg" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.420702 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgqs6\" (UniqueName: \"kubernetes.io/projected/ef61367b-1e67-4933-9dad-c02352f97789-kube-api-access-xgqs6\") pod \"glance-55c9-account-create-update-g9g8t\" (UID: \"ef61367b-1e67-4933-9dad-c02352f97789\") " pod="openstack/glance-55c9-account-create-update-g9g8t" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.446254 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-86j7g"] Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.447766 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-86j7g" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.453573 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.453884 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.454433 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-s6vzn" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.454556 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.468978 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-86j7g"] Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.484028 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-dsfwz"] Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.485852 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dsfwz" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.499708 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vtjtg" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.500159 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrkff\" (UniqueName: \"kubernetes.io/projected/cac89f8a-b835-4356-af05-a02cd8f079ea-kube-api-access-xrkff\") pod \"keystone-db-sync-86j7g\" (UID: \"cac89f8a-b835-4356-af05-a02cd8f079ea\") " pod="openstack/keystone-db-sync-86j7g" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.500217 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf2bt\" (UniqueName: \"kubernetes.io/projected/c822196a-b31e-4f00-9f88-8749f5394fc2-kube-api-access-vf2bt\") pod \"cinder-db70-account-create-update-7v5lv\" (UID: \"c822196a-b31e-4f00-9f88-8749f5394fc2\") " pod="openstack/cinder-db70-account-create-update-7v5lv" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.500293 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac89f8a-b835-4356-af05-a02cd8f079ea-combined-ca-bundle\") pod \"keystone-db-sync-86j7g\" (UID: \"cac89f8a-b835-4356-af05-a02cd8f079ea\") " pod="openstack/keystone-db-sync-86j7g" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.500319 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac89f8a-b835-4356-af05-a02cd8f079ea-config-data\") pod \"keystone-db-sync-86j7g\" (UID: \"cac89f8a-b835-4356-af05-a02cd8f079ea\") " pod="openstack/keystone-db-sync-86j7g" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.500569 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c822196a-b31e-4f00-9f88-8749f5394fc2-operator-scripts\") pod \"cinder-db70-account-create-update-7v5lv\" (UID: \"c822196a-b31e-4f00-9f88-8749f5394fc2\") " pod="openstack/cinder-db70-account-create-update-7v5lv" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.501444 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c822196a-b31e-4f00-9f88-8749f5394fc2-operator-scripts\") pod \"cinder-db70-account-create-update-7v5lv\" (UID: \"c822196a-b31e-4f00-9f88-8749f5394fc2\") " pod="openstack/cinder-db70-account-create-update-7v5lv" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.513227 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-55c9-account-create-update-g9g8t" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.516392 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dsfwz"] Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.529178 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf2bt\" (UniqueName: \"kubernetes.io/projected/c822196a-b31e-4f00-9f88-8749f5394fc2-kube-api-access-vf2bt\") pod \"cinder-db70-account-create-update-7v5lv\" (UID: \"c822196a-b31e-4f00-9f88-8749f5394fc2\") " pod="openstack/cinder-db70-account-create-update-7v5lv" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.593987 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c074-account-create-update-925t5"] Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.598765 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c074-account-create-update-925t5" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.600466 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.602603 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b2fc6ab-79ba-4a70-905f-2b3f87437296-operator-scripts\") pod \"neutron-db-create-dsfwz\" (UID: \"4b2fc6ab-79ba-4a70-905f-2b3f87437296\") " pod="openstack/neutron-db-create-dsfwz" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.602785 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sfp9\" (UniqueName: \"kubernetes.io/projected/4b2fc6ab-79ba-4a70-905f-2b3f87437296-kube-api-access-9sfp9\") pod \"neutron-db-create-dsfwz\" (UID: \"4b2fc6ab-79ba-4a70-905f-2b3f87437296\") " pod="openstack/neutron-db-create-dsfwz" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.602825 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrkff\" (UniqueName: \"kubernetes.io/projected/cac89f8a-b835-4356-af05-a02cd8f079ea-kube-api-access-xrkff\") pod \"keystone-db-sync-86j7g\" (UID: \"cac89f8a-b835-4356-af05-a02cd8f079ea\") " pod="openstack/keystone-db-sync-86j7g" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.602882 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac89f8a-b835-4356-af05-a02cd8f079ea-combined-ca-bundle\") pod \"keystone-db-sync-86j7g\" (UID: \"cac89f8a-b835-4356-af05-a02cd8f079ea\") " pod="openstack/keystone-db-sync-86j7g" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.602908 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac89f8a-b835-4356-af05-a02cd8f079ea-config-data\") pod \"keystone-db-sync-86j7g\" (UID: \"cac89f8a-b835-4356-af05-a02cd8f079ea\") " pod="openstack/keystone-db-sync-86j7g" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.614241 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db70-account-create-update-7v5lv" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.616970 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac89f8a-b835-4356-af05-a02cd8f079ea-combined-ca-bundle\") pod \"keystone-db-sync-86j7g\" (UID: \"cac89f8a-b835-4356-af05-a02cd8f079ea\") " pod="openstack/keystone-db-sync-86j7g" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.621579 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrkff\" (UniqueName: \"kubernetes.io/projected/cac89f8a-b835-4356-af05-a02cd8f079ea-kube-api-access-xrkff\") pod \"keystone-db-sync-86j7g\" (UID: \"cac89f8a-b835-4356-af05-a02cd8f079ea\") " pod="openstack/keystone-db-sync-86j7g" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.621780 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac89f8a-b835-4356-af05-a02cd8f079ea-config-data\") pod \"keystone-db-sync-86j7g\" (UID: \"cac89f8a-b835-4356-af05-a02cd8f079ea\") " pod="openstack/keystone-db-sync-86j7g" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.624943 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c074-account-create-update-925t5"] Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.705225 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b2fc6ab-79ba-4a70-905f-2b3f87437296-operator-scripts\") pod \"neutron-db-create-dsfwz\" (UID: \"4b2fc6ab-79ba-4a70-905f-2b3f87437296\") " pod="openstack/neutron-db-create-dsfwz" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.705317 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8k8h\" (UniqueName: \"kubernetes.io/projected/d3922fe8-d6ed-4204-90b7-e90cbde97e1b-kube-api-access-j8k8h\") pod \"neutron-c074-account-create-update-925t5\" (UID: \"d3922fe8-d6ed-4204-90b7-e90cbde97e1b\") " pod="openstack/neutron-c074-account-create-update-925t5" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.705356 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3922fe8-d6ed-4204-90b7-e90cbde97e1b-operator-scripts\") pod \"neutron-c074-account-create-update-925t5\" (UID: \"d3922fe8-d6ed-4204-90b7-e90cbde97e1b\") " pod="openstack/neutron-c074-account-create-update-925t5" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.705385 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sfp9\" (UniqueName: \"kubernetes.io/projected/4b2fc6ab-79ba-4a70-905f-2b3f87437296-kube-api-access-9sfp9\") pod \"neutron-db-create-dsfwz\" (UID: \"4b2fc6ab-79ba-4a70-905f-2b3f87437296\") " pod="openstack/neutron-db-create-dsfwz" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.706891 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b2fc6ab-79ba-4a70-905f-2b3f87437296-operator-scripts\") pod \"neutron-db-create-dsfwz\" (UID: \"4b2fc6ab-79ba-4a70-905f-2b3f87437296\") " pod="openstack/neutron-db-create-dsfwz" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.721090 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9sfp9\" (UniqueName: \"kubernetes.io/projected/4b2fc6ab-79ba-4a70-905f-2b3f87437296-kube-api-access-9sfp9\") pod \"neutron-db-create-dsfwz\" (UID: \"4b2fc6ab-79ba-4a70-905f-2b3f87437296\") " pod="openstack/neutron-db-create-dsfwz" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.814859 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8k8h\" (UniqueName: \"kubernetes.io/projected/d3922fe8-d6ed-4204-90b7-e90cbde97e1b-kube-api-access-j8k8h\") pod \"neutron-c074-account-create-update-925t5\" (UID: \"d3922fe8-d6ed-4204-90b7-e90cbde97e1b\") " pod="openstack/neutron-c074-account-create-update-925t5" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.814943 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3922fe8-d6ed-4204-90b7-e90cbde97e1b-operator-scripts\") pod \"neutron-c074-account-create-update-925t5\" (UID: \"d3922fe8-d6ed-4204-90b7-e90cbde97e1b\") " pod="openstack/neutron-c074-account-create-update-925t5" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.816719 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3922fe8-d6ed-4204-90b7-e90cbde97e1b-operator-scripts\") pod \"neutron-c074-account-create-update-925t5\" (UID: \"d3922fe8-d6ed-4204-90b7-e90cbde97e1b\") " pod="openstack/neutron-c074-account-create-update-925t5" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.822903 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-86j7g" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.837058 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8k8h\" (UniqueName: \"kubernetes.io/projected/d3922fe8-d6ed-4204-90b7-e90cbde97e1b-kube-api-access-j8k8h\") pod \"neutron-c074-account-create-update-925t5\" (UID: \"d3922fe8-d6ed-4204-90b7-e90cbde97e1b\") " pod="openstack/neutron-c074-account-create-update-925t5" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.924280 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dsfwz" Dec 06 07:22:35 crc kubenswrapper[4895]: I1206 07:22:35.950544 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c074-account-create-update-925t5" Dec 06 07:22:36 crc kubenswrapper[4895]: I1206 07:22:36.026801 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7l2zn"] Dec 06 07:22:36 crc kubenswrapper[4895]: I1206 07:22:36.223534 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vtjtg"] Dec 06 07:22:36 crc kubenswrapper[4895]: I1206 07:22:36.238358 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-55c9-account-create-update-g9g8t"] Dec 06 07:22:36 crc kubenswrapper[4895]: I1206 07:22:36.269851 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db70-account-create-update-7v5lv"] Dec 06 07:22:36 crc kubenswrapper[4895]: W1206 07:22:36.285616 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cdfee21_9315_4df0_9ac9_7f02483a05e3.slice/crio-e1e2ebf2603cc858f4a1f8b56f78121972b816a834a3877c195521fe54804ebd WatchSource:0}: Error finding container e1e2ebf2603cc858f4a1f8b56f78121972b816a834a3877c195521fe54804ebd: Status 404 returned error can't find the container with id e1e2ebf2603cc858f4a1f8b56f78121972b816a834a3877c195521fe54804ebd Dec 06 07:22:36 crc kubenswrapper[4895]: W1206 07:22:36.313166 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef61367b_1e67_4933_9dad_c02352f97789.slice/crio-0b14e64b693c02b4b7a89c26434cf80d3eec7fa631e742cae86b566a6eece369 WatchSource:0}: Error finding container 0b14e64b693c02b4b7a89c26434cf80d3eec7fa631e742cae86b566a6eece369: Status 404 returned error can't find the container with id 0b14e64b693c02b4b7a89c26434cf80d3eec7fa631e742cae86b566a6eece369 Dec 06 07:22:36 crc kubenswrapper[4895]: I1206 07:22:36.329371 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-86j7g"] Dec 06 07:22:36 crc kubenswrapper[4895]: W1206 07:22:36.687757 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b2fc6ab_79ba_4a70_905f_2b3f87437296.slice/crio-32f09d14248b41ae393174618d1549b098a754e8b8091d638cddba60769ff44c WatchSource:0}: Error finding container 32f09d14248b41ae393174618d1549b098a754e8b8091d638cddba60769ff44c: Status 404 returned error can't find the container with id 32f09d14248b41ae393174618d1549b098a754e8b8091d638cddba60769ff44c Dec 06 07:22:36 crc kubenswrapper[4895]: I1206 07:22:36.701164 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dsfwz"] Dec 06 07:22:36 crc kubenswrapper[4895]: I1206 07:22:36.777050 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c074-account-create-update-925t5"] Dec 06 07:22:36 crc kubenswrapper[4895]: I1206 07:22:36.877329 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db70-account-create-update-7v5lv" event={"ID":"c822196a-b31e-4f00-9f88-8749f5394fc2","Type":"ContainerStarted","Data":"e0e7b338ba0a1ebcdb361679e39ebce77ee6d41ed0ac4888aaf8a563ab325cf5"} Dec 06 07:22:36 crc kubenswrapper[4895]: I1206 07:22:36.881337 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-55c9-account-create-update-g9g8t" event={"ID":"ef61367b-1e67-4933-9dad-c02352f97789","Type":"ContainerStarted","Data":"0b14e64b693c02b4b7a89c26434cf80d3eec7fa631e742cae86b566a6eece369"} Dec 06 07:22:36 crc 
kubenswrapper[4895]: I1206 07:22:36.884817 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vtjtg" event={"ID":"6cdfee21-9315-4df0-9ac9-7f02483a05e3","Type":"ContainerStarted","Data":"e1e2ebf2603cc858f4a1f8b56f78121972b816a834a3877c195521fe54804ebd"} Dec 06 07:22:36 crc kubenswrapper[4895]: I1206 07:22:36.889886 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-86j7g" event={"ID":"cac89f8a-b835-4356-af05-a02cd8f079ea","Type":"ContainerStarted","Data":"f60021497d5fe1df2e20d6cd83beed203a2c477037df4e5e0af1db315f47524e"} Dec 06 07:22:36 crc kubenswrapper[4895]: I1206 07:22:36.891081 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dsfwz" event={"ID":"4b2fc6ab-79ba-4a70-905f-2b3f87437296","Type":"ContainerStarted","Data":"32f09d14248b41ae393174618d1549b098a754e8b8091d638cddba60769ff44c"} Dec 06 07:22:36 crc kubenswrapper[4895]: I1206 07:22:36.892057 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7l2zn" event={"ID":"ad446d6d-ed17-4ce4-8cae-0d570aa84483","Type":"ContainerStarted","Data":"b612ceadfc79cc011f548231ccf3915cb94aa8b3084d1846677f4d3fb31f15c6"} Dec 06 07:22:36 crc kubenswrapper[4895]: I1206 07:22:36.893219 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c074-account-create-update-925t5" event={"ID":"d3922fe8-d6ed-4204-90b7-e90cbde97e1b","Type":"ContainerStarted","Data":"44e3d7565d93c31b6869c31ca1b0c7fd14dad07809d9d2032f0eb301fc5a5849"} Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.217707 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-k87l7"] Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.219841 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-k87l7" Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.228578 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-k87l7"] Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.250494 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ccf0195-32f5-499f-95d0-ee3996f78016-operator-scripts\") pod \"barbican-db-create-k87l7\" (UID: \"4ccf0195-32f5-499f-95d0-ee3996f78016\") " pod="openstack/barbican-db-create-k87l7" Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.250620 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7l67\" (UniqueName: \"kubernetes.io/projected/4ccf0195-32f5-499f-95d0-ee3996f78016-kube-api-access-b7l67\") pod \"barbican-db-create-k87l7\" (UID: \"4ccf0195-32f5-499f-95d0-ee3996f78016\") " pod="openstack/barbican-db-create-k87l7" Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.289276 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-18b2-account-create-update-wdx9r"] Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.291603 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-18b2-account-create-update-wdx9r" Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.294161 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.300491 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-18b2-account-create-update-wdx9r"] Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.355406 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ccf0195-32f5-499f-95d0-ee3996f78016-operator-scripts\") pod \"barbican-db-create-k87l7\" (UID: \"4ccf0195-32f5-499f-95d0-ee3996f78016\") " pod="openstack/barbican-db-create-k87l7" Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.355509 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qvx5\" (UniqueName: \"kubernetes.io/projected/e181a816-272c-4eb5-8ff3-0b920d27d996-kube-api-access-7qvx5\") pod \"barbican-18b2-account-create-update-wdx9r\" (UID: \"e181a816-272c-4eb5-8ff3-0b920d27d996\") " pod="openstack/barbican-18b2-account-create-update-wdx9r" Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.355581 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7l67\" (UniqueName: \"kubernetes.io/projected/4ccf0195-32f5-499f-95d0-ee3996f78016-kube-api-access-b7l67\") pod \"barbican-db-create-k87l7\" (UID: \"4ccf0195-32f5-499f-95d0-ee3996f78016\") " pod="openstack/barbican-db-create-k87l7" Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.355651 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e181a816-272c-4eb5-8ff3-0b920d27d996-operator-scripts\") pod \"barbican-18b2-account-create-update-wdx9r\" (UID: \"e181a816-272c-4eb5-8ff3-0b920d27d996\") " pod="openstack/barbican-18b2-account-create-update-wdx9r" Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.356426 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ccf0195-32f5-499f-95d0-ee3996f78016-operator-scripts\") pod \"barbican-db-create-k87l7\" (UID: \"4ccf0195-32f5-499f-95d0-ee3996f78016\") " pod="openstack/barbican-db-create-k87l7" Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.359347 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.395131 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7l67\" (UniqueName: \"kubernetes.io/projected/4ccf0195-32f5-499f-95d0-ee3996f78016-kube-api-access-b7l67\") pod \"barbican-db-create-k87l7\" (UID: \"4ccf0195-32f5-499f-95d0-ee3996f78016\") " pod="openstack/barbican-db-create-k87l7" Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.457274 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e181a816-272c-4eb5-8ff3-0b920d27d996-operator-scripts\") pod \"barbican-18b2-account-create-update-wdx9r\" (UID: \"e181a816-272c-4eb5-8ff3-0b920d27d996\") " pod="openstack/barbican-18b2-account-create-update-wdx9r" Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.457404 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7qvx5\" (UniqueName: \"kubernetes.io/projected/e181a816-272c-4eb5-8ff3-0b920d27d996-kube-api-access-7qvx5\") pod \"barbican-18b2-account-create-update-wdx9r\" (UID: \"e181a816-272c-4eb5-8ff3-0b920d27d996\") " pod="openstack/barbican-18b2-account-create-update-wdx9r" Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.458007 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e181a816-272c-4eb5-8ff3-0b920d27d996-operator-scripts\") pod \"barbican-18b2-account-create-update-wdx9r\" (UID: \"e181a816-272c-4eb5-8ff3-0b920d27d996\") " pod="openstack/barbican-18b2-account-create-update-wdx9r" Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.481234 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qvx5\" (UniqueName: \"kubernetes.io/projected/e181a816-272c-4eb5-8ff3-0b920d27d996-kube-api-access-7qvx5\") pod \"barbican-18b2-account-create-update-wdx9r\" (UID: \"e181a816-272c-4eb5-8ff3-0b920d27d996\") " pod="openstack/barbican-18b2-account-create-update-wdx9r" Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.538510 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-k87l7" Dec 06 07:22:37 crc kubenswrapper[4895]: I1206 07:22:37.757519 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-18b2-account-create-update-wdx9r" Dec 06 07:22:38 crc kubenswrapper[4895]: I1206 07:22:38.104199 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-k87l7"] Dec 06 07:22:38 crc kubenswrapper[4895]: I1206 07:22:38.234935 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-18b2-account-create-update-wdx9r"] Dec 06 07:22:38 crc kubenswrapper[4895]: I1206 07:22:38.912228 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k87l7" event={"ID":"4ccf0195-32f5-499f-95d0-ee3996f78016","Type":"ContainerStarted","Data":"8a1b5763ec47db56d8a3d960d8777c8db8c3fc314a29bacd5c4abccec0f31148"} Dec 06 07:22:38 crc kubenswrapper[4895]: I1206 07:22:38.912605 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k87l7" event={"ID":"4ccf0195-32f5-499f-95d0-ee3996f78016","Type":"ContainerStarted","Data":"28a8479f87cc6d107cea890b51870efa2a6ae69e398a1615e2a25ec003642786"} Dec 06 07:22:38 crc kubenswrapper[4895]: I1206 07:22:38.915511 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dsfwz" event={"ID":"4b2fc6ab-79ba-4a70-905f-2b3f87437296","Type":"ContainerStarted","Data":"903a61307cb2aad6db35d5637fc506893147f1633d2d088458f6893942be9522"} Dec 06 07:22:38 crc kubenswrapper[4895]: I1206 07:22:38.917669 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7l2zn" event={"ID":"ad446d6d-ed17-4ce4-8cae-0d570aa84483","Type":"ContainerStarted","Data":"bfb674c6a5da2b3cedce99fd5b05c4ff83c3d7fbbbb50b8baa1146520aa3235d"} Dec 06 07:22:38 crc kubenswrapper[4895]: I1206 07:22:38.919962 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c074-account-create-update-925t5" event={"ID":"d3922fe8-d6ed-4204-90b7-e90cbde97e1b","Type":"ContainerStarted","Data":"f99e8cb999be03cf9e185900bd38c8c7207dd178e7aa039c48d115277b98f6d7"} Dec 06 07:22:38 crc kubenswrapper[4895]: I1206 07:22:38.922293 4895 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-db70-account-create-update-7v5lv" event={"ID":"c822196a-b31e-4f00-9f88-8749f5394fc2","Type":"ContainerStarted","Data":"d412c2ee11d8f4f926bebc48cbf18089f618c5e089a94582d14e647031554d1a"} Dec 06 07:22:38 crc kubenswrapper[4895]: I1206 07:22:38.934629 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-55c9-account-create-update-g9g8t" event={"ID":"ef61367b-1e67-4933-9dad-c02352f97789","Type":"ContainerStarted","Data":"c86bf6e0b8a7f3b3d6f687082507be81f68d6591166955ceee6b1d2dca543c4d"} Dec 06 07:22:38 crc kubenswrapper[4895]: I1206 07:22:38.938604 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vtjtg" event={"ID":"6cdfee21-9315-4df0-9ac9-7f02483a05e3","Type":"ContainerStarted","Data":"1cc0732ee9960229ad2e7c33a85923ad4eec361fd616faf483d86023b612af30"} Dec 06 07:22:38 crc kubenswrapper[4895]: I1206 07:22:38.939605 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-k87l7" podStartSLOduration=1.9395785989999998 podStartE2EDuration="1.939578599s" podCreationTimestamp="2025-12-06 07:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:22:38.93174074 +0000 UTC m=+1521.333129610" watchObservedRunningTime="2025-12-06 07:22:38.939578599 +0000 UTC m=+1521.340967479" Dec 06 07:22:38 crc kubenswrapper[4895]: I1206 07:22:38.941506 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-18b2-account-create-update-wdx9r" event={"ID":"e181a816-272c-4eb5-8ff3-0b920d27d996","Type":"ContainerStarted","Data":"646c7cf440f5622553df0b11a1660ff28b88b59df2381deaedd5c44b26c3a8a9"} Dec 06 07:22:38 crc kubenswrapper[4895]: I1206 07:22:38.941809 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-18b2-account-create-update-wdx9r" event={"ID":"e181a816-272c-4eb5-8ff3-0b920d27d996","Type":"ContainerStarted","Data":"561d8f08340a70ead33aedac2a05fd1b87e3b94a6653027fa17fc118ba719c7b"} Dec 06 07:22:38 crc kubenswrapper[4895]: I1206 07:22:38.961231 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c074-account-create-update-925t5" podStartSLOduration=3.961210478 podStartE2EDuration="3.961210478s" podCreationTimestamp="2025-12-06 07:22:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:22:38.954629942 +0000 UTC m=+1521.356018812" watchObservedRunningTime="2025-12-06 07:22:38.961210478 +0000 UTC m=+1521.362599338" Dec 06 07:22:38 crc kubenswrapper[4895]: I1206 07:22:38.979543 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-7l2zn" podStartSLOduration=3.979523938 podStartE2EDuration="3.979523938s" podCreationTimestamp="2025-12-06 07:22:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:22:38.975010196 +0000 UTC m=+1521.376399066" watchObservedRunningTime="2025-12-06 07:22:38.979523938 +0000 UTC m=+1521.380912808" Dec 06 07:22:39 crc kubenswrapper[4895]: I1206 07:22:39.011522 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-dsfwz" podStartSLOduration=4.011499173 podStartE2EDuration="4.011499173s" podCreationTimestamp="2025-12-06 07:22:35 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:22:39.001747583 +0000 UTC m=+1521.403136453" watchObservedRunningTime="2025-12-06 07:22:39.011499173 +0000 UTC m=+1521.412888043" Dec 06 07:22:39 crc kubenswrapper[4895]: I1206 07:22:39.026419 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db70-account-create-update-7v5lv" podStartSLOduration=4.026399531 podStartE2EDuration="4.026399531s" podCreationTimestamp="2025-12-06 07:22:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:22:39.022035425 +0000 UTC m=+1521.423424305" watchObservedRunningTime="2025-12-06 07:22:39.026399531 +0000 UTC m=+1521.427788401" Dec 06 07:22:39 crc kubenswrapper[4895]: I1206 07:22:39.052677 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-55c9-account-create-update-g9g8t" podStartSLOduration=4.052653654 podStartE2EDuration="4.052653654s" podCreationTimestamp="2025-12-06 07:22:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:22:39.040593431 +0000 UTC m=+1521.441982301" watchObservedRunningTime="2025-12-06 07:22:39.052653654 +0000 UTC m=+1521.454042524" Dec 06 07:22:39 crc kubenswrapper[4895]: I1206 07:22:39.102437 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-vtjtg" podStartSLOduration=4.102417195 podStartE2EDuration="4.102417195s" podCreationTimestamp="2025-12-06 07:22:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:22:39.086788577 +0000 UTC m=+1521.488177447" watchObservedRunningTime="2025-12-06 07:22:39.102417195 +0000 UTC m=+1521.503806065" Dec 06 07:22:39 crc kubenswrapper[4895]: I1206 07:22:39.957928 4895 generic.go:334] "Generic (PLEG): container finished" podID="6cdfee21-9315-4df0-9ac9-7f02483a05e3" containerID="1cc0732ee9960229ad2e7c33a85923ad4eec361fd616faf483d86023b612af30" exitCode=0 Dec 06 07:22:39 crc kubenswrapper[4895]: I1206 07:22:39.959141 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vtjtg" event={"ID":"6cdfee21-9315-4df0-9ac9-7f02483a05e3","Type":"ContainerDied","Data":"1cc0732ee9960229ad2e7c33a85923ad4eec361fd616faf483d86023b612af30"} Dec 06 07:22:40 crc kubenswrapper[4895]: I1206 07:22:40.006620 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-18b2-account-create-update-wdx9r" podStartSLOduration=3.006593395 podStartE2EDuration="3.006593395s" podCreationTimestamp="2025-12-06 07:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:22:40.00416386 +0000 UTC m=+1522.405552720" watchObservedRunningTime="2025-12-06 07:22:40.006593395 +0000 UTC m=+1522.407982265" Dec 06 07:22:40 crc kubenswrapper[4895]: I1206 07:22:40.972546 4895 generic.go:334] "Generic (PLEG): container finished" podID="4ccf0195-32f5-499f-95d0-ee3996f78016" containerID="8a1b5763ec47db56d8a3d960d8777c8db8c3fc314a29bacd5c4abccec0f31148" exitCode=0 Dec 06 07:22:40 crc kubenswrapper[4895]: I1206 07:22:40.972640 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k87l7" 
event={"ID":"4ccf0195-32f5-499f-95d0-ee3996f78016","Type":"ContainerDied","Data":"8a1b5763ec47db56d8a3d960d8777c8db8c3fc314a29bacd5c4abccec0f31148"} Dec 06 07:22:40 crc kubenswrapper[4895]: I1206 07:22:40.979636 4895 generic.go:334] "Generic (PLEG): container finished" podID="4b2fc6ab-79ba-4a70-905f-2b3f87437296" containerID="903a61307cb2aad6db35d5637fc506893147f1633d2d088458f6893942be9522" exitCode=0 Dec 06 07:22:40 crc kubenswrapper[4895]: I1206 07:22:40.979724 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dsfwz" event={"ID":"4b2fc6ab-79ba-4a70-905f-2b3f87437296","Type":"ContainerDied","Data":"903a61307cb2aad6db35d5637fc506893147f1633d2d088458f6893942be9522"} Dec 06 07:22:40 crc kubenswrapper[4895]: I1206 07:22:40.982231 4895 generic.go:334] "Generic (PLEG): container finished" podID="ad446d6d-ed17-4ce4-8cae-0d570aa84483" containerID="bfb674c6a5da2b3cedce99fd5b05c4ff83c3d7fbbbb50b8baa1146520aa3235d" exitCode=0 Dec 06 07:22:40 crc kubenswrapper[4895]: I1206 07:22:40.982429 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7l2zn" event={"ID":"ad446d6d-ed17-4ce4-8cae-0d570aa84483","Type":"ContainerDied","Data":"bfb674c6a5da2b3cedce99fd5b05c4ff83c3d7fbbbb50b8baa1146520aa3235d"} Dec 06 07:22:41 crc kubenswrapper[4895]: I1206 07:22:41.902548 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-784d65c867-skwcf"] Dec 06 07:22:41 crc kubenswrapper[4895]: I1206 07:22:41.904502 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784d65c867-skwcf" Dec 06 07:22:41 crc kubenswrapper[4895]: I1206 07:22:41.914978 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-784d65c867-skwcf"] Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.004552 4895 generic.go:334] "Generic (PLEG): container finished" podID="ef61367b-1e67-4933-9dad-c02352f97789" containerID="c86bf6e0b8a7f3b3d6f687082507be81f68d6591166955ceee6b1d2dca543c4d" exitCode=0 Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.004678 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-55c9-account-create-update-g9g8t" event={"ID":"ef61367b-1e67-4933-9dad-c02352f97789","Type":"ContainerDied","Data":"c86bf6e0b8a7f3b3d6f687082507be81f68d6591166955ceee6b1d2dca543c4d"} Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.022930 4895 generic.go:334] "Generic (PLEG): container finished" podID="e181a816-272c-4eb5-8ff3-0b920d27d996" containerID="646c7cf440f5622553df0b11a1660ff28b88b59df2381deaedd5c44b26c3a8a9" exitCode=0 Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.023105 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-18b2-account-create-update-wdx9r" event={"ID":"e181a816-272c-4eb5-8ff3-0b920d27d996","Type":"ContainerDied","Data":"646c7cf440f5622553df0b11a1660ff28b88b59df2381deaedd5c44b26c3a8a9"} Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.032534 4895 generic.go:334] "Generic (PLEG): container finished" podID="d3922fe8-d6ed-4204-90b7-e90cbde97e1b" containerID="f99e8cb999be03cf9e185900bd38c8c7207dd178e7aa039c48d115277b98f6d7" exitCode=0 Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.032621 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c074-account-create-update-925t5" event={"ID":"d3922fe8-d6ed-4204-90b7-e90cbde97e1b","Type":"ContainerDied","Data":"f99e8cb999be03cf9e185900bd38c8c7207dd178e7aa039c48d115277b98f6d7"} Dec 06 07:22:42 crc 
kubenswrapper[4895]: I1206 07:22:42.045510 4895 generic.go:334] "Generic (PLEG): container finished" podID="c822196a-b31e-4f00-9f88-8749f5394fc2" containerID="d412c2ee11d8f4f926bebc48cbf18089f618c5e089a94582d14e647031554d1a" exitCode=0 Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.045738 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db70-account-create-update-7v5lv" event={"ID":"c822196a-b31e-4f00-9f88-8749f5394fc2","Type":"ContainerDied","Data":"d412c2ee11d8f4f926bebc48cbf18089f618c5e089a94582d14e647031554d1a"} Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.066856 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-config\") pod \"dnsmasq-dns-784d65c867-skwcf\" (UID: \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\") " pod="openstack/dnsmasq-dns-784d65c867-skwcf" Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.066970 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-ovsdbserver-nb\") pod \"dnsmasq-dns-784d65c867-skwcf\" (UID: \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\") " pod="openstack/dnsmasq-dns-784d65c867-skwcf" Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.067006 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w68x\" (UniqueName: \"kubernetes.io/projected/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-kube-api-access-8w68x\") pod \"dnsmasq-dns-784d65c867-skwcf\" (UID: \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\") " pod="openstack/dnsmasq-dns-784d65c867-skwcf" Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.067031 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-dns-svc\") pod \"dnsmasq-dns-784d65c867-skwcf\" (UID: \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\") " pod="openstack/dnsmasq-dns-784d65c867-skwcf" Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.067124 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-ovsdbserver-sb\") pod \"dnsmasq-dns-784d65c867-skwcf\" (UID: \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\") " pod="openstack/dnsmasq-dns-784d65c867-skwcf" Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.168629 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-config\") pod \"dnsmasq-dns-784d65c867-skwcf\" (UID: \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\") " pod="openstack/dnsmasq-dns-784d65c867-skwcf" Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.168710 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-dns-svc\") pod \"dnsmasq-dns-784d65c867-skwcf\" (UID: \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\") " pod="openstack/dnsmasq-dns-784d65c867-skwcf" Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.168728 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-ovsdbserver-nb\") pod \"dnsmasq-dns-784d65c867-skwcf\" (UID: \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\") " pod="openstack/dnsmasq-dns-784d65c867-skwcf" Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.168744 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w68x\" (UniqueName: \"kubernetes.io/projected/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-kube-api-access-8w68x\") pod \"dnsmasq-dns-784d65c867-skwcf\" (UID: \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\") " pod="openstack/dnsmasq-dns-784d65c867-skwcf" Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.168814 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-ovsdbserver-sb\") pod \"dnsmasq-dns-784d65c867-skwcf\" (UID: \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\") " pod="openstack/dnsmasq-dns-784d65c867-skwcf" Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.169915 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-ovsdbserver-sb\") pod \"dnsmasq-dns-784d65c867-skwcf\" (UID: \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\") " pod="openstack/dnsmasq-dns-784d65c867-skwcf" Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.170215 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-dns-svc\") pod \"dnsmasq-dns-784d65c867-skwcf\" (UID: \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\") " pod="openstack/dnsmasq-dns-784d65c867-skwcf" Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.170651 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-ovsdbserver-nb\") pod \"dnsmasq-dns-784d65c867-skwcf\" (UID: \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\") " pod="openstack/dnsmasq-dns-784d65c867-skwcf" Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.170844 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-config\") pod \"dnsmasq-dns-784d65c867-skwcf\" (UID: \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\") " pod="openstack/dnsmasq-dns-784d65c867-skwcf" Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.206033 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w68x\" (UniqueName: \"kubernetes.io/projected/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-kube-api-access-8w68x\") pod \"dnsmasq-dns-784d65c867-skwcf\" (UID: \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\") " pod="openstack/dnsmasq-dns-784d65c867-skwcf" Dec 06 07:22:42 crc kubenswrapper[4895]: I1206 07:22:42.250406 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784d65c867-skwcf" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.083562 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.102225 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.105208 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.105442 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.105611 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.105978 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-9679l" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.127692 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.245746 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.245818 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-lock\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.246003 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcs95\" (UniqueName: \"kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-kube-api-access-pcs95\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.246113 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-cache\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.246141 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.348556 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcs95\" (UniqueName: \"kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-kube-api-access-pcs95\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.348639 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-cache\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.348671 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.348765 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.348799 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-lock\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0" Dec 06 07:22:43 crc kubenswrapper[4895]: E1206 07:22:43.348934 4895 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 06 07:22:43 crc kubenswrapper[4895]: E1206 07:22:43.348968 4895 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 06 07:22:43 crc kubenswrapper[4895]: E1206 07:22:43.349031 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift podName:43a2bfd7-f0c6-4b55-b629-2e11d6b45a42 nodeName:}" failed. No retries permitted until 2025-12-06 07:22:43.849001676 +0000 UTC m=+1526.250390546 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift") pod "swift-storage-0" (UID: "43a2bfd7-f0c6-4b55-b629-2e11d6b45a42") : configmap "swift-ring-files" not found Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.349373 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/swift-storage-0" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.350060 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-cache\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.350142 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-lock\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.373119 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcs95\" (UniqueName: \"kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-kube-api-access-pcs95\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.383531 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.736365 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-8ntsw"] Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.738311 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.741641 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.741699 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.744972 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.749949 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-8ntsw"] Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.777850 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-dispersionconf\") pod \"swift-ring-rebalance-8ntsw\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") " pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.777899 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-626gg\" (UniqueName: \"kubernetes.io/projected/4f394cd1-aa14-48fa-8643-30d896f0823e-kube-api-access-626gg\") pod \"swift-ring-rebalance-8ntsw\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") " pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.777931 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-swiftconf\") pod \"swift-ring-rebalance-8ntsw\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") " pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.778005 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4f394cd1-aa14-48fa-8643-30d896f0823e-ring-data-devices\") pod \"swift-ring-rebalance-8ntsw\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") " pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.778023 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4f394cd1-aa14-48fa-8643-30d896f0823e-etc-swift\") pod \"swift-ring-rebalance-8ntsw\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") " pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.778071 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-combined-ca-bundle\") pod \"swift-ring-rebalance-8ntsw\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") " 
pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.778095 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f394cd1-aa14-48fa-8643-30d896f0823e-scripts\") pod \"swift-ring-rebalance-8ntsw\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") " pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.886112 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-dispersionconf\") pod \"swift-ring-rebalance-8ntsw\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") " pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.886191 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-626gg\" (UniqueName: \"kubernetes.io/projected/4f394cd1-aa14-48fa-8643-30d896f0823e-kube-api-access-626gg\") pod \"swift-ring-rebalance-8ntsw\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") " pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.886244 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-swiftconf\") pod \"swift-ring-rebalance-8ntsw\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") " pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.886320 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.886368 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4f394cd1-aa14-48fa-8643-30d896f0823e-ring-data-devices\") pod \"swift-ring-rebalance-8ntsw\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") " pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.886391 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4f394cd1-aa14-48fa-8643-30d896f0823e-etc-swift\") pod \"swift-ring-rebalance-8ntsw\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") " pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: E1206 07:22:43.886510 4895 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 06 07:22:43 crc kubenswrapper[4895]: E1206 07:22:43.886540 4895 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 06 07:22:43 crc kubenswrapper[4895]: E1206 07:22:43.886599 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift podName:43a2bfd7-f0c6-4b55-b629-2e11d6b45a42 nodeName:}" failed. No retries permitted until 2025-12-06 07:22:44.886578569 +0000 UTC m=+1527.287967449 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift") pod "swift-storage-0" (UID: "43a2bfd7-f0c6-4b55-b629-2e11d6b45a42") : configmap "swift-ring-files" not found Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.886510 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-combined-ca-bundle\") pod \"swift-ring-rebalance-8ntsw\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") " pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.886663 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f394cd1-aa14-48fa-8643-30d896f0823e-scripts\") pod \"swift-ring-rebalance-8ntsw\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") " pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.887421 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4f394cd1-aa14-48fa-8643-30d896f0823e-etc-swift\") pod \"swift-ring-rebalance-8ntsw\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") " pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.887729 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f394cd1-aa14-48fa-8643-30d896f0823e-scripts\") pod \"swift-ring-rebalance-8ntsw\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") " pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.887902 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4f394cd1-aa14-48fa-8643-30d896f0823e-ring-data-devices\") pod \"swift-ring-rebalance-8ntsw\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") " pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.893608 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-dispersionconf\") pod \"swift-ring-rebalance-8ntsw\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") " pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.893881 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-combined-ca-bundle\") pod \"swift-ring-rebalance-8ntsw\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") " pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.909274 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-swiftconf\") pod \"swift-ring-rebalance-8ntsw\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") " pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:43 crc kubenswrapper[4895]: I1206 07:22:43.913343 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-626gg\" (UniqueName: \"kubernetes.io/projected/4f394cd1-aa14-48fa-8643-30d896f0823e-kube-api-access-626gg\") pod \"swift-ring-rebalance-8ntsw\" (UID: 
\"4f394cd1-aa14-48fa-8643-30d896f0823e\") " pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:44 crc kubenswrapper[4895]: I1206 07:22:44.059333 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8ntsw" Dec 06 07:22:44 crc kubenswrapper[4895]: I1206 07:22:44.872432 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-18b2-account-create-update-wdx9r" Dec 06 07:22:44 crc kubenswrapper[4895]: I1206 07:22:44.878877 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-55c9-account-create-update-g9g8t" Dec 06 07:22:44 crc kubenswrapper[4895]: I1206 07:22:44.888801 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c074-account-create-update-925t5" Dec 06 07:22:44 crc kubenswrapper[4895]: I1206 07:22:44.907685 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8k8h\" (UniqueName: \"kubernetes.io/projected/d3922fe8-d6ed-4204-90b7-e90cbde97e1b-kube-api-access-j8k8h\") pod \"d3922fe8-d6ed-4204-90b7-e90cbde97e1b\" (UID: \"d3922fe8-d6ed-4204-90b7-e90cbde97e1b\") " Dec 06 07:22:44 crc kubenswrapper[4895]: I1206 07:22:44.907834 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef61367b-1e67-4933-9dad-c02352f97789-operator-scripts\") pod \"ef61367b-1e67-4933-9dad-c02352f97789\" (UID: \"ef61367b-1e67-4933-9dad-c02352f97789\") " Dec 06 07:22:44 crc kubenswrapper[4895]: I1206 07:22:44.907874 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3922fe8-d6ed-4204-90b7-e90cbde97e1b-operator-scripts\") pod \"d3922fe8-d6ed-4204-90b7-e90cbde97e1b\" (UID: \"d3922fe8-d6ed-4204-90b7-e90cbde97e1b\") " Dec 06 07:22:44 crc kubenswrapper[4895]: I1206 07:22:44.907933 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qvx5\" (UniqueName: \"kubernetes.io/projected/e181a816-272c-4eb5-8ff3-0b920d27d996-kube-api-access-7qvx5\") pod \"e181a816-272c-4eb5-8ff3-0b920d27d996\" (UID: \"e181a816-272c-4eb5-8ff3-0b920d27d996\") " Dec 06 07:22:44 crc kubenswrapper[4895]: I1206 07:22:44.907999 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgqs6\" (UniqueName: \"kubernetes.io/projected/ef61367b-1e67-4933-9dad-c02352f97789-kube-api-access-xgqs6\") pod \"ef61367b-1e67-4933-9dad-c02352f97789\" (UID: \"ef61367b-1e67-4933-9dad-c02352f97789\") " Dec 06 07:22:44 crc kubenswrapper[4895]: I1206 07:22:44.908034 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e181a816-272c-4eb5-8ff3-0b920d27d996-operator-scripts\") pod \"e181a816-272c-4eb5-8ff3-0b920d27d996\" (UID: \"e181a816-272c-4eb5-8ff3-0b920d27d996\") " Dec 06 07:22:44 crc kubenswrapper[4895]: I1206 07:22:44.908404 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0" Dec 06 07:22:44 crc kubenswrapper[4895]: E1206 07:22:44.908833 4895 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap 
"swift-ring-files" not found Dec 06 07:22:44 crc kubenswrapper[4895]: E1206 07:22:44.908852 4895 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 06 07:22:44 crc kubenswrapper[4895]: E1206 07:22:44.909026 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift podName:43a2bfd7-f0c6-4b55-b629-2e11d6b45a42 nodeName:}" failed. No retries permitted until 2025-12-06 07:22:46.909008603 +0000 UTC m=+1529.310397473 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift") pod "swift-storage-0" (UID: "43a2bfd7-f0c6-4b55-b629-2e11d6b45a42") : configmap "swift-ring-files" not found Dec 06 07:22:44 crc kubenswrapper[4895]: I1206 07:22:44.912319 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef61367b-1e67-4933-9dad-c02352f97789-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef61367b-1e67-4933-9dad-c02352f97789" (UID: "ef61367b-1e67-4933-9dad-c02352f97789"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:44 crc kubenswrapper[4895]: I1206 07:22:44.913924 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3922fe8-d6ed-4204-90b7-e90cbde97e1b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3922fe8-d6ed-4204-90b7-e90cbde97e1b" (UID: "d3922fe8-d6ed-4204-90b7-e90cbde97e1b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:44 crc kubenswrapper[4895]: I1206 07:22:44.920689 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e181a816-272c-4eb5-8ff3-0b920d27d996-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e181a816-272c-4eb5-8ff3-0b920d27d996" (UID: "e181a816-272c-4eb5-8ff3-0b920d27d996"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:44 crc kubenswrapper[4895]: I1206 07:22:44.920980 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3922fe8-d6ed-4204-90b7-e90cbde97e1b-kube-api-access-j8k8h" (OuterVolumeSpecName: "kube-api-access-j8k8h") pod "d3922fe8-d6ed-4204-90b7-e90cbde97e1b" (UID: "d3922fe8-d6ed-4204-90b7-e90cbde97e1b"). InnerVolumeSpecName "kube-api-access-j8k8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:22:44 crc kubenswrapper[4895]: I1206 07:22:44.921047 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef61367b-1e67-4933-9dad-c02352f97789-kube-api-access-xgqs6" (OuterVolumeSpecName: "kube-api-access-xgqs6") pod "ef61367b-1e67-4933-9dad-c02352f97789" (UID: "ef61367b-1e67-4933-9dad-c02352f97789"). InnerVolumeSpecName "kube-api-access-xgqs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:22:44 crc kubenswrapper[4895]: I1206 07:22:44.922565 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7l2zn" Dec 06 07:22:44 crc kubenswrapper[4895]: I1206 07:22:44.923008 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-k87l7" Dec 06 07:22:44 crc kubenswrapper[4895]: I1206 07:22:44.933021 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e181a816-272c-4eb5-8ff3-0b920d27d996-kube-api-access-7qvx5" (OuterVolumeSpecName: "kube-api-access-7qvx5") pod "e181a816-272c-4eb5-8ff3-0b920d27d996" (UID: "e181a816-272c-4eb5-8ff3-0b920d27d996"). InnerVolumeSpecName "kube-api-access-7qvx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.012787 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ccf0195-32f5-499f-95d0-ee3996f78016-operator-scripts\") pod \"4ccf0195-32f5-499f-95d0-ee3996f78016\" (UID: \"4ccf0195-32f5-499f-95d0-ee3996f78016\") " Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.013498 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ccf0195-32f5-499f-95d0-ee3996f78016-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ccf0195-32f5-499f-95d0-ee3996f78016" (UID: "4ccf0195-32f5-499f-95d0-ee3996f78016"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.012798 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vtjtg" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.013542 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad446d6d-ed17-4ce4-8cae-0d570aa84483-operator-scripts\") pod \"ad446d6d-ed17-4ce4-8cae-0d570aa84483\" (UID: \"ad446d6d-ed17-4ce4-8cae-0d570aa84483\") " Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.013641 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7l67\" (UniqueName: \"kubernetes.io/projected/4ccf0195-32f5-499f-95d0-ee3996f78016-kube-api-access-b7l67\") pod \"4ccf0195-32f5-499f-95d0-ee3996f78016\" (UID: \"4ccf0195-32f5-499f-95d0-ee3996f78016\") " Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.013813 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2cpq\" (UniqueName: \"kubernetes.io/projected/ad446d6d-ed17-4ce4-8cae-0d570aa84483-kube-api-access-f2cpq\") pod \"ad446d6d-ed17-4ce4-8cae-0d570aa84483\" (UID: \"ad446d6d-ed17-4ce4-8cae-0d570aa84483\") " Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.014451 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8k8h\" (UniqueName: \"kubernetes.io/projected/d3922fe8-d6ed-4204-90b7-e90cbde97e1b-kube-api-access-j8k8h\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.014672 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef61367b-1e67-4933-9dad-c02352f97789-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.014698 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3922fe8-d6ed-4204-90b7-e90cbde97e1b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.014713 4895 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-7qvx5\" (UniqueName: \"kubernetes.io/projected/e181a816-272c-4eb5-8ff3-0b920d27d996-kube-api-access-7qvx5\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.014727 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgqs6\" (UniqueName: \"kubernetes.io/projected/ef61367b-1e67-4933-9dad-c02352f97789-kube-api-access-xgqs6\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.014739 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e181a816-272c-4eb5-8ff3-0b920d27d996-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.014778 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ccf0195-32f5-499f-95d0-ee3996f78016-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.018283 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad446d6d-ed17-4ce4-8cae-0d570aa84483-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad446d6d-ed17-4ce4-8cae-0d570aa84483" (UID: "ad446d6d-ed17-4ce4-8cae-0d570aa84483"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.021789 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad446d6d-ed17-4ce4-8cae-0d570aa84483-kube-api-access-f2cpq" (OuterVolumeSpecName: "kube-api-access-f2cpq") pod "ad446d6d-ed17-4ce4-8cae-0d570aa84483" (UID: "ad446d6d-ed17-4ce4-8cae-0d570aa84483"). InnerVolumeSpecName "kube-api-access-f2cpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.021725 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ccf0195-32f5-499f-95d0-ee3996f78016-kube-api-access-b7l67" (OuterVolumeSpecName: "kube-api-access-b7l67") pod "4ccf0195-32f5-499f-95d0-ee3996f78016" (UID: "4ccf0195-32f5-499f-95d0-ee3996f78016"). InnerVolumeSpecName "kube-api-access-b7l67". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.022182 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db70-account-create-update-7v5lv" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.072318 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dsfwz" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.080416 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vtjtg" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.081144 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vtjtg" event={"ID":"6cdfee21-9315-4df0-9ac9-7f02483a05e3","Type":"ContainerDied","Data":"e1e2ebf2603cc858f4a1f8b56f78121972b816a834a3877c195521fe54804ebd"} Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.081429 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1e2ebf2603cc858f4a1f8b56f78121972b816a834a3877c195521fe54804ebd" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.082333 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-18b2-account-create-update-wdx9r" event={"ID":"e181a816-272c-4eb5-8ff3-0b920d27d996","Type":"ContainerDied","Data":"561d8f08340a70ead33aedac2a05fd1b87e3b94a6653027fa17fc118ba719c7b"} Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.082356 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="561d8f08340a70ead33aedac2a05fd1b87e3b94a6653027fa17fc118ba719c7b" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.082449 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-18b2-account-create-update-wdx9r" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.083927 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k87l7" event={"ID":"4ccf0195-32f5-499f-95d0-ee3996f78016","Type":"ContainerDied","Data":"28a8479f87cc6d107cea890b51870efa2a6ae69e398a1615e2a25ec003642786"} Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.084232 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28a8479f87cc6d107cea890b51870efa2a6ae69e398a1615e2a25ec003642786" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.083990 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-k87l7" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.088970 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dsfwz" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.089428 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dsfwz" event={"ID":"4b2fc6ab-79ba-4a70-905f-2b3f87437296","Type":"ContainerDied","Data":"32f09d14248b41ae393174618d1549b098a754e8b8091d638cddba60769ff44c"} Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.089494 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32f09d14248b41ae393174618d1549b098a754e8b8091d638cddba60769ff44c" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.091577 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7l2zn" event={"ID":"ad446d6d-ed17-4ce4-8cae-0d570aa84483","Type":"ContainerDied","Data":"b612ceadfc79cc011f548231ccf3915cb94aa8b3084d1846677f4d3fb31f15c6"} Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.091615 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b612ceadfc79cc011f548231ccf3915cb94aa8b3084d1846677f4d3fb31f15c6" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.091681 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7l2zn" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.135581 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sfp9\" (UniqueName: \"kubernetes.io/projected/4b2fc6ab-79ba-4a70-905f-2b3f87437296-kube-api-access-9sfp9\") pod \"4b2fc6ab-79ba-4a70-905f-2b3f87437296\" (UID: \"4b2fc6ab-79ba-4a70-905f-2b3f87437296\") " Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.135662 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6x9p\" (UniqueName: \"kubernetes.io/projected/6cdfee21-9315-4df0-9ac9-7f02483a05e3-kube-api-access-z6x9p\") pod \"6cdfee21-9315-4df0-9ac9-7f02483a05e3\" (UID: \"6cdfee21-9315-4df0-9ac9-7f02483a05e3\") " Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.135717 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf2bt\" (UniqueName: \"kubernetes.io/projected/c822196a-b31e-4f00-9f88-8749f5394fc2-kube-api-access-vf2bt\") pod \"c822196a-b31e-4f00-9f88-8749f5394fc2\" (UID: \"c822196a-b31e-4f00-9f88-8749f5394fc2\") " Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.135755 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b2fc6ab-79ba-4a70-905f-2b3f87437296-operator-scripts\") pod \"4b2fc6ab-79ba-4a70-905f-2b3f87437296\" (UID: \"4b2fc6ab-79ba-4a70-905f-2b3f87437296\") " Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.135859 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cdfee21-9315-4df0-9ac9-7f02483a05e3-operator-scripts\") pod \"6cdfee21-9315-4df0-9ac9-7f02483a05e3\" (UID: \"6cdfee21-9315-4df0-9ac9-7f02483a05e3\") " Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.135966 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c822196a-b31e-4f00-9f88-8749f5394fc2-operator-scripts\") pod \"c822196a-b31e-4f00-9f88-8749f5394fc2\" (UID: \"c822196a-b31e-4f00-9f88-8749f5394fc2\") " Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.138104 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7l67\" (UniqueName: \"kubernetes.io/projected/4ccf0195-32f5-499f-95d0-ee3996f78016-kube-api-access-b7l67\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.138136 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2cpq\" (UniqueName: \"kubernetes.io/projected/ad446d6d-ed17-4ce4-8cae-0d570aa84483-kube-api-access-f2cpq\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.138151 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad446d6d-ed17-4ce4-8cae-0d570aa84483-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.138942 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b2fc6ab-79ba-4a70-905f-2b3f87437296-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b2fc6ab-79ba-4a70-905f-2b3f87437296" (UID: "4b2fc6ab-79ba-4a70-905f-2b3f87437296"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.138993 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c822196a-b31e-4f00-9f88-8749f5394fc2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c822196a-b31e-4f00-9f88-8749f5394fc2" (UID: "c822196a-b31e-4f00-9f88-8749f5394fc2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.144776 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c822196a-b31e-4f00-9f88-8749f5394fc2-kube-api-access-vf2bt" (OuterVolumeSpecName: "kube-api-access-vf2bt") pod "c822196a-b31e-4f00-9f88-8749f5394fc2" (UID: "c822196a-b31e-4f00-9f88-8749f5394fc2"). InnerVolumeSpecName "kube-api-access-vf2bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.145416 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c074-account-create-update-925t5" event={"ID":"d3922fe8-d6ed-4204-90b7-e90cbde97e1b","Type":"ContainerDied","Data":"44e3d7565d93c31b6869c31ca1b0c7fd14dad07809d9d2032f0eb301fc5a5849"} Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.145500 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44e3d7565d93c31b6869c31ca1b0c7fd14dad07809d9d2032f0eb301fc5a5849" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.146030 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c074-account-create-update-925t5" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.148521 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db70-account-create-update-7v5lv" event={"ID":"c822196a-b31e-4f00-9f88-8749f5394fc2","Type":"ContainerDied","Data":"e0e7b338ba0a1ebcdb361679e39ebce77ee6d41ed0ac4888aaf8a563ab325cf5"} Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.148591 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0e7b338ba0a1ebcdb361679e39ebce77ee6d41ed0ac4888aaf8a563ab325cf5" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.150893 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db70-account-create-update-7v5lv" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.156345 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-55c9-account-create-update-g9g8t" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.157272 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-55c9-account-create-update-g9g8t" event={"ID":"ef61367b-1e67-4933-9dad-c02352f97789","Type":"ContainerDied","Data":"0b14e64b693c02b4b7a89c26434cf80d3eec7fa631e742cae86b566a6eece369"} Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.157347 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b14e64b693c02b4b7a89c26434cf80d3eec7fa631e742cae86b566a6eece369" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.157695 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cdfee21-9315-4df0-9ac9-7f02483a05e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6cdfee21-9315-4df0-9ac9-7f02483a05e3" (UID: "6cdfee21-9315-4df0-9ac9-7f02483a05e3"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.165342 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b2fc6ab-79ba-4a70-905f-2b3f87437296-kube-api-access-9sfp9" (OuterVolumeSpecName: "kube-api-access-9sfp9") pod "4b2fc6ab-79ba-4a70-905f-2b3f87437296" (UID: "4b2fc6ab-79ba-4a70-905f-2b3f87437296"). InnerVolumeSpecName "kube-api-access-9sfp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.173176 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cdfee21-9315-4df0-9ac9-7f02483a05e3-kube-api-access-z6x9p" (OuterVolumeSpecName: "kube-api-access-z6x9p") pod "6cdfee21-9315-4df0-9ac9-7f02483a05e3" (UID: "6cdfee21-9315-4df0-9ac9-7f02483a05e3"). InnerVolumeSpecName "kube-api-access-z6x9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.240013 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sfp9\" (UniqueName: \"kubernetes.io/projected/4b2fc6ab-79ba-4a70-905f-2b3f87437296-kube-api-access-9sfp9\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.240056 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6x9p\" (UniqueName: \"kubernetes.io/projected/6cdfee21-9315-4df0-9ac9-7f02483a05e3-kube-api-access-z6x9p\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.240068 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf2bt\" (UniqueName: \"kubernetes.io/projected/c822196a-b31e-4f00-9f88-8749f5394fc2-kube-api-access-vf2bt\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.240117 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b2fc6ab-79ba-4a70-905f-2b3f87437296-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.240126 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cdfee21-9315-4df0-9ac9-7f02483a05e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.240135 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c822196a-b31e-4f00-9f88-8749f5394fc2-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.326354 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-784d65c867-skwcf"] Dec 06 07:22:45 crc kubenswrapper[4895]: I1206 07:22:45.365377 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-8ntsw"] Dec 06 07:22:46 crc kubenswrapper[4895]: I1206 07:22:46.173812 4895 generic.go:334] "Generic (PLEG): container finished" podID="1ec3bc59-2ad7-4af0-837b-61e1254a50f7" containerID="e764a75a2cd2d52795373b799ec062204ee2d9e85907436932c36e5ddfcdf6e8" exitCode=0 Dec 06 07:22:46 crc kubenswrapper[4895]: I1206 07:22:46.174589 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784d65c867-skwcf" 
event={"ID":"1ec3bc59-2ad7-4af0-837b-61e1254a50f7","Type":"ContainerDied","Data":"e764a75a2cd2d52795373b799ec062204ee2d9e85907436932c36e5ddfcdf6e8"} Dec 06 07:22:46 crc kubenswrapper[4895]: I1206 07:22:46.174651 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784d65c867-skwcf" event={"ID":"1ec3bc59-2ad7-4af0-837b-61e1254a50f7","Type":"ContainerStarted","Data":"98eac78853b58d74f4806f07f704a670e6b774b21f6646d5f20fe14c4aa91388"} Dec 06 07:22:46 crc kubenswrapper[4895]: I1206 07:22:46.183007 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8ntsw" event={"ID":"4f394cd1-aa14-48fa-8643-30d896f0823e","Type":"ContainerStarted","Data":"d2dc089a26e4475d743a05e2db0138fe013482625cb127740a9b870c7c71c3e5"} Dec 06 07:22:46 crc kubenswrapper[4895]: I1206 07:22:46.985555 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0" Dec 06 07:22:46 crc kubenswrapper[4895]: E1206 07:22:46.985773 4895 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 06 07:22:46 crc kubenswrapper[4895]: E1206 07:22:46.986064 4895 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 06 07:22:46 crc kubenswrapper[4895]: E1206 07:22:46.986131 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift podName:43a2bfd7-f0c6-4b55-b629-2e11d6b45a42 nodeName:}" failed. No retries permitted until 2025-12-06 07:22:50.986110972 +0000 UTC m=+1533.387499852 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift") pod "swift-storage-0" (UID: "43a2bfd7-f0c6-4b55-b629-2e11d6b45a42") : configmap "swift-ring-files" not found Dec 06 07:22:47 crc kubenswrapper[4895]: I1206 07:22:47.198028 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-86j7g" event={"ID":"cac89f8a-b835-4356-af05-a02cd8f079ea","Type":"ContainerStarted","Data":"0ba8a7725e3e059cced115d45bb53a3395ed65609a2182433586613416231fdd"} Dec 06 07:22:47 crc kubenswrapper[4895]: I1206 07:22:47.200954 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784d65c867-skwcf" event={"ID":"1ec3bc59-2ad7-4af0-837b-61e1254a50f7","Type":"ContainerStarted","Data":"86f16c170b2f21bbb35557008ca31ed57aef081af845e400495674322552d2a5"} Dec 06 07:22:47 crc kubenswrapper[4895]: I1206 07:22:47.201417 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-784d65c867-skwcf" Dec 06 07:22:47 crc kubenswrapper[4895]: I1206 07:22:47.225273 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-86j7g" podStartSLOduration=2.648100008 podStartE2EDuration="12.22523953s" podCreationTimestamp="2025-12-06 07:22:35 +0000 UTC" firstStartedPulling="2025-12-06 07:22:36.420164326 +0000 UTC m=+1518.821553196" lastFinishedPulling="2025-12-06 07:22:45.997303848 +0000 UTC m=+1528.398692718" observedRunningTime="2025-12-06 07:22:47.224381247 +0000 UTC m=+1529.625770127" watchObservedRunningTime="2025-12-06 07:22:47.22523953 +0000 UTC m=+1529.626628400" Dec 06 07:22:47 crc kubenswrapper[4895]: I1206 07:22:47.260328 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-784d65c867-skwcf" podStartSLOduration=6.260306428 podStartE2EDuration="6.260306428s" podCreationTimestamp="2025-12-06 07:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:22:47.253909147 +0000 UTC m=+1529.655298017" watchObservedRunningTime="2025-12-06 07:22:47.260306428 +0000 UTC m=+1529.661695298" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.585152 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-kx92p"] Dec 06 07:22:50 crc kubenswrapper[4895]: E1206 07:22:50.586440 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e181a816-272c-4eb5-8ff3-0b920d27d996" containerName="mariadb-account-create-update" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.586460 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e181a816-272c-4eb5-8ff3-0b920d27d996" containerName="mariadb-account-create-update" Dec 06 07:22:50 crc kubenswrapper[4895]: E1206 07:22:50.586524 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cdfee21-9315-4df0-9ac9-7f02483a05e3" containerName="mariadb-database-create" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.586534 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cdfee21-9315-4df0-9ac9-7f02483a05e3" containerName="mariadb-database-create" Dec 06 07:22:50 crc kubenswrapper[4895]: E1206 07:22:50.586558 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad446d6d-ed17-4ce4-8cae-0d570aa84483" containerName="mariadb-database-create" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.586569 4895 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="ad446d6d-ed17-4ce4-8cae-0d570aa84483" containerName="mariadb-database-create" Dec 06 07:22:50 crc kubenswrapper[4895]: E1206 07:22:50.586599 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3922fe8-d6ed-4204-90b7-e90cbde97e1b" containerName="mariadb-account-create-update" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.586608 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3922fe8-d6ed-4204-90b7-e90cbde97e1b" containerName="mariadb-account-create-update" Dec 06 07:22:50 crc kubenswrapper[4895]: E1206 07:22:50.586627 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c822196a-b31e-4f00-9f88-8749f5394fc2" containerName="mariadb-account-create-update" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.586635 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c822196a-b31e-4f00-9f88-8749f5394fc2" containerName="mariadb-account-create-update" Dec 06 07:22:50 crc kubenswrapper[4895]: E1206 07:22:50.586646 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef61367b-1e67-4933-9dad-c02352f97789" containerName="mariadb-account-create-update" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.586654 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef61367b-1e67-4933-9dad-c02352f97789" containerName="mariadb-account-create-update" Dec 06 07:22:50 crc kubenswrapper[4895]: E1206 07:22:50.586667 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b2fc6ab-79ba-4a70-905f-2b3f87437296" containerName="mariadb-database-create" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.586677 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b2fc6ab-79ba-4a70-905f-2b3f87437296" containerName="mariadb-database-create" Dec 06 07:22:50 crc kubenswrapper[4895]: E1206 07:22:50.586694 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ccf0195-32f5-499f-95d0-ee3996f78016" containerName="mariadb-database-create" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.586703 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ccf0195-32f5-499f-95d0-ee3996f78016" containerName="mariadb-database-create" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.586943 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef61367b-1e67-4933-9dad-c02352f97789" containerName="mariadb-account-create-update" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.586995 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b2fc6ab-79ba-4a70-905f-2b3f87437296" containerName="mariadb-database-create" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.587022 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3922fe8-d6ed-4204-90b7-e90cbde97e1b" containerName="mariadb-account-create-update" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.587046 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c822196a-b31e-4f00-9f88-8749f5394fc2" containerName="mariadb-account-create-update" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.587059 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad446d6d-ed17-4ce4-8cae-0d570aa84483" containerName="mariadb-database-create" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.587073 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ccf0195-32f5-499f-95d0-ee3996f78016" containerName="mariadb-database-create" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.587086 4895 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e181a816-272c-4eb5-8ff3-0b920d27d996" containerName="mariadb-account-create-update" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.587101 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cdfee21-9315-4df0-9ac9-7f02483a05e3" containerName="mariadb-database-create" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.587930 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kx92p" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.590294 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wj8l7" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.594687 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.602725 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kx92p"] Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.660338 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b001b3-7a17-444d-8dd9-5e296f84770b-config-data\") pod \"glance-db-sync-kx92p\" (UID: \"34b001b3-7a17-444d-8dd9-5e296f84770b\") " pod="openstack/glance-db-sync-kx92p" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.660463 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b001b3-7a17-444d-8dd9-5e296f84770b-combined-ca-bundle\") pod \"glance-db-sync-kx92p\" (UID: \"34b001b3-7a17-444d-8dd9-5e296f84770b\") " pod="openstack/glance-db-sync-kx92p" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.660669 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34b001b3-7a17-444d-8dd9-5e296f84770b-db-sync-config-data\") pod \"glance-db-sync-kx92p\" (UID: \"34b001b3-7a17-444d-8dd9-5e296f84770b\") " pod="openstack/glance-db-sync-kx92p" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.661549 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk8wr\" (UniqueName: \"kubernetes.io/projected/34b001b3-7a17-444d-8dd9-5e296f84770b-kube-api-access-qk8wr\") pod \"glance-db-sync-kx92p\" (UID: \"34b001b3-7a17-444d-8dd9-5e296f84770b\") " pod="openstack/glance-db-sync-kx92p" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.763454 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b001b3-7a17-444d-8dd9-5e296f84770b-config-data\") pod \"glance-db-sync-kx92p\" (UID: \"34b001b3-7a17-444d-8dd9-5e296f84770b\") " pod="openstack/glance-db-sync-kx92p" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.763553 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b001b3-7a17-444d-8dd9-5e296f84770b-combined-ca-bundle\") pod \"glance-db-sync-kx92p\" (UID: \"34b001b3-7a17-444d-8dd9-5e296f84770b\") " pod="openstack/glance-db-sync-kx92p" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.763661 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/34b001b3-7a17-444d-8dd9-5e296f84770b-db-sync-config-data\") pod \"glance-db-sync-kx92p\" (UID: \"34b001b3-7a17-444d-8dd9-5e296f84770b\") " pod="openstack/glance-db-sync-kx92p" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.763754 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk8wr\" (UniqueName: \"kubernetes.io/projected/34b001b3-7a17-444d-8dd9-5e296f84770b-kube-api-access-qk8wr\") pod \"glance-db-sync-kx92p\" (UID: \"34b001b3-7a17-444d-8dd9-5e296f84770b\") " pod="openstack/glance-db-sync-kx92p" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.770817 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b001b3-7a17-444d-8dd9-5e296f84770b-combined-ca-bundle\") pod \"glance-db-sync-kx92p\" (UID: \"34b001b3-7a17-444d-8dd9-5e296f84770b\") " pod="openstack/glance-db-sync-kx92p" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.771539 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b001b3-7a17-444d-8dd9-5e296f84770b-config-data\") pod \"glance-db-sync-kx92p\" (UID: \"34b001b3-7a17-444d-8dd9-5e296f84770b\") " pod="openstack/glance-db-sync-kx92p" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.771651 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34b001b3-7a17-444d-8dd9-5e296f84770b-db-sync-config-data\") pod \"glance-db-sync-kx92p\" (UID: \"34b001b3-7a17-444d-8dd9-5e296f84770b\") " pod="openstack/glance-db-sync-kx92p" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.783131 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk8wr\" (UniqueName: \"kubernetes.io/projected/34b001b3-7a17-444d-8dd9-5e296f84770b-kube-api-access-qk8wr\") pod \"glance-db-sync-kx92p\" (UID: \"34b001b3-7a17-444d-8dd9-5e296f84770b\") " pod="openstack/glance-db-sync-kx92p" Dec 06 07:22:50 crc kubenswrapper[4895]: I1206 07:22:50.912302 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kx92p" Dec 06 07:22:51 crc kubenswrapper[4895]: I1206 07:22:51.070066 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0" Dec 06 07:22:51 crc kubenswrapper[4895]: E1206 07:22:51.070245 4895 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 06 07:22:51 crc kubenswrapper[4895]: E1206 07:22:51.070564 4895 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 06 07:22:51 crc kubenswrapper[4895]: E1206 07:22:51.070644 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift podName:43a2bfd7-f0c6-4b55-b629-2e11d6b45a42 nodeName:}" failed. No retries permitted until 2025-12-06 07:22:59.070629428 +0000 UTC m=+1541.472018298 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift") pod "swift-storage-0" (UID: "43a2bfd7-f0c6-4b55-b629-2e11d6b45a42") : configmap "swift-ring-files" not found Dec 06 07:22:51 crc kubenswrapper[4895]: W1206 07:22:51.480428 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34b001b3_7a17_444d_8dd9_5e296f84770b.slice/crio-b826f7fbb98577a044dac7a394ec25915100d01f3b9b2e29f638ef1ce769444c WatchSource:0}: Error finding container b826f7fbb98577a044dac7a394ec25915100d01f3b9b2e29f638ef1ce769444c: Status 404 returned error can't find the container with id b826f7fbb98577a044dac7a394ec25915100d01f3b9b2e29f638ef1ce769444c Dec 06 07:22:51 crc kubenswrapper[4895]: I1206 07:22:51.483289 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kx92p"] Dec 06 07:22:52 crc kubenswrapper[4895]: I1206 07:22:52.253342 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-784d65c867-skwcf" Dec 06 07:22:52 crc kubenswrapper[4895]: I1206 07:22:52.298765 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8ntsw" event={"ID":"4f394cd1-aa14-48fa-8643-30d896f0823e","Type":"ContainerStarted","Data":"af1f9c0dc7c7f332389b729b0dbd7801804e28f7a5866ffe130f4e337a03960e"} Dec 06 07:22:52 crc kubenswrapper[4895]: I1206 07:22:52.301977 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kx92p" event={"ID":"34b001b3-7a17-444d-8dd9-5e296f84770b","Type":"ContainerStarted","Data":"b826f7fbb98577a044dac7a394ec25915100d01f3b9b2e29f638ef1ce769444c"} Dec 06 07:22:52 crc kubenswrapper[4895]: I1206 07:22:52.330659 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-984c76dd7-t5x8j"] Dec 06 07:22:52 crc kubenswrapper[4895]: I1206 07:22:52.330989 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-984c76dd7-t5x8j" podUID="73b833aa-beae-480b-9299-c9ad31acafd7" containerName="dnsmasq-dns" containerID="cri-o://19957def4807e6ac19e20749d34ae5df0c79380f25a36ee5ce88247a1f7adb2d" gracePeriod=10 Dec 06 07:22:52 crc kubenswrapper[4895]: I1206 07:22:52.331885 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-8ntsw" podStartSLOduration=3.750403527 podStartE2EDuration="9.331863541s" podCreationTimestamp="2025-12-06 07:22:43 +0000 UTC" firstStartedPulling="2025-12-06 07:22:45.376629853 +0000 UTC m=+1527.778018723" lastFinishedPulling="2025-12-06 07:22:50.958089867 +0000 UTC m=+1533.359478737" observedRunningTime="2025-12-06 07:22:52.319164681 +0000 UTC m=+1534.720553551" watchObservedRunningTime="2025-12-06 07:22:52.331863541 +0000 UTC m=+1534.733252411" Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:52.878890 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-984c76dd7-t5x8j" Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:52.927173 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2rnt\" (UniqueName: \"kubernetes.io/projected/73b833aa-beae-480b-9299-c9ad31acafd7-kube-api-access-t2rnt\") pod \"73b833aa-beae-480b-9299-c9ad31acafd7\" (UID: \"73b833aa-beae-480b-9299-c9ad31acafd7\") " Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:52.927396 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-ovsdbserver-sb\") pod \"73b833aa-beae-480b-9299-c9ad31acafd7\" (UID: \"73b833aa-beae-480b-9299-c9ad31acafd7\") " Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:52.927537 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-ovsdbserver-nb\") pod \"73b833aa-beae-480b-9299-c9ad31acafd7\" (UID: \"73b833aa-beae-480b-9299-c9ad31acafd7\") " Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:52.927678 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-config\") pod \"73b833aa-beae-480b-9299-c9ad31acafd7\" (UID: \"73b833aa-beae-480b-9299-c9ad31acafd7\") " Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:52.928328 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-dns-svc\") pod \"73b833aa-beae-480b-9299-c9ad31acafd7\" (UID: \"73b833aa-beae-480b-9299-c9ad31acafd7\") " Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:52.936918 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73b833aa-beae-480b-9299-c9ad31acafd7-kube-api-access-t2rnt" (OuterVolumeSpecName: "kube-api-access-t2rnt") pod "73b833aa-beae-480b-9299-c9ad31acafd7" (UID: "73b833aa-beae-480b-9299-c9ad31acafd7"). InnerVolumeSpecName "kube-api-access-t2rnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:52.974889 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "73b833aa-beae-480b-9299-c9ad31acafd7" (UID: "73b833aa-beae-480b-9299-c9ad31acafd7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:52.978873 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-config" (OuterVolumeSpecName: "config") pod "73b833aa-beae-480b-9299-c9ad31acafd7" (UID: "73b833aa-beae-480b-9299-c9ad31acafd7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:52.986711 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "73b833aa-beae-480b-9299-c9ad31acafd7" (UID: "73b833aa-beae-480b-9299-c9ad31acafd7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:53.005643 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "73b833aa-beae-480b-9299-c9ad31acafd7" (UID: "73b833aa-beae-480b-9299-c9ad31acafd7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:53.030940 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:53.030997 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:53.031009 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:53.031023 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2rnt\" (UniqueName: \"kubernetes.io/projected/73b833aa-beae-480b-9299-c9ad31acafd7-kube-api-access-t2rnt\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:53.031037 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73b833aa-beae-480b-9299-c9ad31acafd7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:53.316913 4895 generic.go:334] "Generic (PLEG): container finished" podID="73b833aa-beae-480b-9299-c9ad31acafd7" containerID="19957def4807e6ac19e20749d34ae5df0c79380f25a36ee5ce88247a1f7adb2d" exitCode=0 Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:53.318264 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-984c76dd7-t5x8j" Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:53.320653 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-984c76dd7-t5x8j" event={"ID":"73b833aa-beae-480b-9299-c9ad31acafd7","Type":"ContainerDied","Data":"19957def4807e6ac19e20749d34ae5df0c79380f25a36ee5ce88247a1f7adb2d"} Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:53.320713 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-984c76dd7-t5x8j" event={"ID":"73b833aa-beae-480b-9299-c9ad31acafd7","Type":"ContainerDied","Data":"d4b75b73795fad501c33e3a0151cb059417e352753c2e044d74c9c0495fc6c04"} Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:53.320737 4895 scope.go:117] "RemoveContainer" containerID="19957def4807e6ac19e20749d34ae5df0c79380f25a36ee5ce88247a1f7adb2d" Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:53.355216 4895 scope.go:117] "RemoveContainer" containerID="a3affc340d318ede7e6229edcf6f9ff19a0bd8f75c661aaf13b9651ef9c1fa94" Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:53.378987 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-984c76dd7-t5x8j"] Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:53.386612 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-984c76dd7-t5x8j"] Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:53.403705 4895 scope.go:117] "RemoveContainer" containerID="19957def4807e6ac19e20749d34ae5df0c79380f25a36ee5ce88247a1f7adb2d" Dec 06 07:22:56 crc kubenswrapper[4895]: E1206 07:22:53.404149 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19957def4807e6ac19e20749d34ae5df0c79380f25a36ee5ce88247a1f7adb2d\": container with ID starting with 19957def4807e6ac19e20749d34ae5df0c79380f25a36ee5ce88247a1f7adb2d not found: ID does not exist" containerID="19957def4807e6ac19e20749d34ae5df0c79380f25a36ee5ce88247a1f7adb2d" Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:53.404193 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19957def4807e6ac19e20749d34ae5df0c79380f25a36ee5ce88247a1f7adb2d"} err="failed to get container status \"19957def4807e6ac19e20749d34ae5df0c79380f25a36ee5ce88247a1f7adb2d\": rpc error: code = NotFound desc = could not find container \"19957def4807e6ac19e20749d34ae5df0c79380f25a36ee5ce88247a1f7adb2d\": container with ID starting with 19957def4807e6ac19e20749d34ae5df0c79380f25a36ee5ce88247a1f7adb2d not found: ID does not exist" Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:53.404222 4895 scope.go:117] "RemoveContainer" containerID="a3affc340d318ede7e6229edcf6f9ff19a0bd8f75c661aaf13b9651ef9c1fa94" Dec 06 07:22:56 crc kubenswrapper[4895]: E1206 07:22:53.404651 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3affc340d318ede7e6229edcf6f9ff19a0bd8f75c661aaf13b9651ef9c1fa94\": container with ID starting with a3affc340d318ede7e6229edcf6f9ff19a0bd8f75c661aaf13b9651ef9c1fa94 not found: ID does not exist" containerID="a3affc340d318ede7e6229edcf6f9ff19a0bd8f75c661aaf13b9651ef9c1fa94" Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:53.404667 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3affc340d318ede7e6229edcf6f9ff19a0bd8f75c661aaf13b9651ef9c1fa94"} err="failed to get container status 
\"a3affc340d318ede7e6229edcf6f9ff19a0bd8f75c661aaf13b9651ef9c1fa94\": rpc error: code = NotFound desc = could not find container \"a3affc340d318ede7e6229edcf6f9ff19a0bd8f75c661aaf13b9651ef9c1fa94\": container with ID starting with a3affc340d318ede7e6229edcf6f9ff19a0bd8f75c661aaf13b9651ef9c1fa94 not found: ID does not exist" Dec 06 07:22:56 crc kubenswrapper[4895]: I1206 07:22:54.068065 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73b833aa-beae-480b-9299-c9ad31acafd7" path="/var/lib/kubelet/pods/73b833aa-beae-480b-9299-c9ad31acafd7/volumes" Dec 06 07:22:57 crc kubenswrapper[4895]: I1206 07:22:57.376323 4895 generic.go:334] "Generic (PLEG): container finished" podID="cac89f8a-b835-4356-af05-a02cd8f079ea" containerID="0ba8a7725e3e059cced115d45bb53a3395ed65609a2182433586613416231fdd" exitCode=0 Dec 06 07:22:57 crc kubenswrapper[4895]: I1206 07:22:57.376419 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-86j7g" event={"ID":"cac89f8a-b835-4356-af05-a02cd8f079ea","Type":"ContainerDied","Data":"0ba8a7725e3e059cced115d45bb53a3395ed65609a2182433586613416231fdd"} Dec 06 07:22:59 crc kubenswrapper[4895]: I1206 07:22:59.167592 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0" Dec 06 07:22:59 crc kubenswrapper[4895]: E1206 07:22:59.167808 4895 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 06 07:22:59 crc kubenswrapper[4895]: E1206 07:22:59.168050 4895 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 06 07:22:59 crc kubenswrapper[4895]: E1206 07:22:59.168108 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift podName:43a2bfd7-f0c6-4b55-b629-2e11d6b45a42 nodeName:}" failed. No retries permitted until 2025-12-06 07:23:15.168087936 +0000 UTC m=+1557.569476806 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift") pod "swift-storage-0" (UID: "43a2bfd7-f0c6-4b55-b629-2e11d6b45a42") : configmap "swift-ring-files" not found Dec 06 07:22:59 crc kubenswrapper[4895]: I1206 07:22:59.695408 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:22:59 crc kubenswrapper[4895]: I1206 07:22:59.695732 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:23:10 crc kubenswrapper[4895]: I1206 07:23:10.362356 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-86j7g" Dec 06 07:23:10 crc kubenswrapper[4895]: E1206 07:23:10.414782 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:481073ac9deefb38bbd989aaa8dd7aedb4e0af26017f4883f85fce433380bf63" Dec 06 07:23:10 crc kubenswrapper[4895]: E1206 07:23:10.415027 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:481073ac9deefb38bbd989aaa8dd7aedb4e0af26017f4883f85fce433380bf63,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qk8wr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-kx92p_openstack(34b001b3-7a17-444d-8dd9-5e296f84770b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:23:10 crc kubenswrapper[4895]: E1206 07:23:10.416197 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-kx92p" podUID="34b001b3-7a17-444d-8dd9-5e296f84770b" Dec 06 07:23:10 crc kubenswrapper[4895]: I1206 07:23:10.498466 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-86j7g" Dec 06 07:23:10 crc kubenswrapper[4895]: I1206 07:23:10.498519 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-86j7g" event={"ID":"cac89f8a-b835-4356-af05-a02cd8f079ea","Type":"ContainerDied","Data":"f60021497d5fe1df2e20d6cd83beed203a2c477037df4e5e0af1db315f47524e"} Dec 06 07:23:10 crc kubenswrapper[4895]: I1206 07:23:10.498560 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f60021497d5fe1df2e20d6cd83beed203a2c477037df4e5e0af1db315f47524e" Dec 06 07:23:10 crc kubenswrapper[4895]: E1206 07:23:10.499828 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:481073ac9deefb38bbd989aaa8dd7aedb4e0af26017f4883f85fce433380bf63\\\"\"" pod="openstack/glance-db-sync-kx92p" podUID="34b001b3-7a17-444d-8dd9-5e296f84770b" Dec 06 07:23:10 crc kubenswrapper[4895]: I1206 07:23:10.517210 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac89f8a-b835-4356-af05-a02cd8f079ea-config-data\") pod \"cac89f8a-b835-4356-af05-a02cd8f079ea\" (UID: \"cac89f8a-b835-4356-af05-a02cd8f079ea\") " Dec 06 07:23:10 crc kubenswrapper[4895]: I1206 07:23:10.517405 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrkff\" (UniqueName: \"kubernetes.io/projected/cac89f8a-b835-4356-af05-a02cd8f079ea-kube-api-access-xrkff\") pod \"cac89f8a-b835-4356-af05-a02cd8f079ea\" (UID: \"cac89f8a-b835-4356-af05-a02cd8f079ea\") " Dec 06 07:23:10 crc kubenswrapper[4895]: I1206 07:23:10.517577 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac89f8a-b835-4356-af05-a02cd8f079ea-combined-ca-bundle\") pod \"cac89f8a-b835-4356-af05-a02cd8f079ea\" (UID: \"cac89f8a-b835-4356-af05-a02cd8f079ea\") " Dec 06 07:23:10 crc kubenswrapper[4895]: I1206 07:23:10.524334 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cac89f8a-b835-4356-af05-a02cd8f079ea-kube-api-access-xrkff" (OuterVolumeSpecName: "kube-api-access-xrkff") pod "cac89f8a-b835-4356-af05-a02cd8f079ea" (UID: "cac89f8a-b835-4356-af05-a02cd8f079ea"). InnerVolumeSpecName "kube-api-access-xrkff". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:23:10 crc kubenswrapper[4895]: I1206 07:23:10.551345 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac89f8a-b835-4356-af05-a02cd8f079ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cac89f8a-b835-4356-af05-a02cd8f079ea" (UID: "cac89f8a-b835-4356-af05-a02cd8f079ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:23:10 crc kubenswrapper[4895]: I1206 07:23:10.574489 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac89f8a-b835-4356-af05-a02cd8f079ea-config-data" (OuterVolumeSpecName: "config-data") pod "cac89f8a-b835-4356-af05-a02cd8f079ea" (UID: "cac89f8a-b835-4356-af05-a02cd8f079ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:23:10 crc kubenswrapper[4895]: I1206 07:23:10.620437 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrkff\" (UniqueName: \"kubernetes.io/projected/cac89f8a-b835-4356-af05-a02cd8f079ea-kube-api-access-xrkff\") on node \"crc\" DevicePath \"\"" Dec 06 07:23:10 crc kubenswrapper[4895]: I1206 07:23:10.620490 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac89f8a-b835-4356-af05-a02cd8f079ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:23:10 crc kubenswrapper[4895]: I1206 07:23:10.620502 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac89f8a-b835-4356-af05-a02cd8f079ea-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.733616 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5966d87587-m8dxc"] Dec 06 07:23:11 crc kubenswrapper[4895]: E1206 07:23:11.734388 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b833aa-beae-480b-9299-c9ad31acafd7" containerName="dnsmasq-dns" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.734406 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b833aa-beae-480b-9299-c9ad31acafd7" containerName="dnsmasq-dns" Dec 06 07:23:11 crc kubenswrapper[4895]: E1206 07:23:11.734417 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac89f8a-b835-4356-af05-a02cd8f079ea" containerName="keystone-db-sync" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.734423 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac89f8a-b835-4356-af05-a02cd8f079ea" containerName="keystone-db-sync" Dec 06 07:23:11 crc kubenswrapper[4895]: E1206 07:23:11.734439 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b833aa-beae-480b-9299-c9ad31acafd7" containerName="init" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.734447 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b833aa-beae-480b-9299-c9ad31acafd7" containerName="init" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.734695 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="cac89f8a-b835-4356-af05-a02cd8f079ea" containerName="keystone-db-sync" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.734729 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="73b833aa-beae-480b-9299-c9ad31acafd7" containerName="dnsmasq-dns" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.735649 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5966d87587-m8dxc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.745300 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-ovsdbserver-nb\") pod \"dnsmasq-dns-5966d87587-m8dxc\" (UID: \"8f3a5505-0117-40ea-821a-f129153f05bb\") " pod="openstack/dnsmasq-dns-5966d87587-m8dxc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.745385 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmpvv\" (UniqueName: \"kubernetes.io/projected/8f3a5505-0117-40ea-821a-f129153f05bb-kube-api-access-zmpvv\") pod \"dnsmasq-dns-5966d87587-m8dxc\" (UID: \"8f3a5505-0117-40ea-821a-f129153f05bb\") " pod="openstack/dnsmasq-dns-5966d87587-m8dxc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.745537 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-ovsdbserver-sb\") pod \"dnsmasq-dns-5966d87587-m8dxc\" (UID: \"8f3a5505-0117-40ea-821a-f129153f05bb\") " pod="openstack/dnsmasq-dns-5966d87587-m8dxc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.745570 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-config\") pod \"dnsmasq-dns-5966d87587-m8dxc\" (UID: \"8f3a5505-0117-40ea-821a-f129153f05bb\") " pod="openstack/dnsmasq-dns-5966d87587-m8dxc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.745612 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-dns-svc\") pod \"dnsmasq-dns-5966d87587-m8dxc\" (UID: \"8f3a5505-0117-40ea-821a-f129153f05bb\") " pod="openstack/dnsmasq-dns-5966d87587-m8dxc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.759738 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zbkmc"] Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.761502 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zbkmc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.765863 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-s6vzn" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.766221 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.766449 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.766639 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.766793 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.767602 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5966d87587-m8dxc"] Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.792670 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zbkmc"] Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.847830 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-ovsdbserver-sb\") pod \"dnsmasq-dns-5966d87587-m8dxc\" (UID: \"8f3a5505-0117-40ea-821a-f129153f05bb\") " pod="openstack/dnsmasq-dns-5966d87587-m8dxc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.847888 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-config\") pod \"dnsmasq-dns-5966d87587-m8dxc\" (UID: \"8f3a5505-0117-40ea-821a-f129153f05bb\") " pod="openstack/dnsmasq-dns-5966d87587-m8dxc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.847932 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-dns-svc\") pod \"dnsmasq-dns-5966d87587-m8dxc\" (UID: \"8f3a5505-0117-40ea-821a-f129153f05bb\") " pod="openstack/dnsmasq-dns-5966d87587-m8dxc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.847957 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-ovsdbserver-nb\") pod \"dnsmasq-dns-5966d87587-m8dxc\" (UID: \"8f3a5505-0117-40ea-821a-f129153f05bb\") " pod="openstack/dnsmasq-dns-5966d87587-m8dxc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.848000 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmpvv\" (UniqueName: \"kubernetes.io/projected/8f3a5505-0117-40ea-821a-f129153f05bb-kube-api-access-zmpvv\") pod \"dnsmasq-dns-5966d87587-m8dxc\" (UID: \"8f3a5505-0117-40ea-821a-f129153f05bb\") " pod="openstack/dnsmasq-dns-5966d87587-m8dxc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.850183 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-ovsdbserver-sb\") pod \"dnsmasq-dns-5966d87587-m8dxc\" (UID: \"8f3a5505-0117-40ea-821a-f129153f05bb\") " pod="openstack/dnsmasq-dns-5966d87587-m8dxc" Dec 06 07:23:11 crc kubenswrapper[4895]: 
I1206 07:23:11.850686 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-dns-svc\") pod \"dnsmasq-dns-5966d87587-m8dxc\" (UID: \"8f3a5505-0117-40ea-821a-f129153f05bb\") " pod="openstack/dnsmasq-dns-5966d87587-m8dxc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.850797 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-config\") pod \"dnsmasq-dns-5966d87587-m8dxc\" (UID: \"8f3a5505-0117-40ea-821a-f129153f05bb\") " pod="openstack/dnsmasq-dns-5966d87587-m8dxc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.851564 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-ovsdbserver-nb\") pod \"dnsmasq-dns-5966d87587-m8dxc\" (UID: \"8f3a5505-0117-40ea-821a-f129153f05bb\") " pod="openstack/dnsmasq-dns-5966d87587-m8dxc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.891616 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmpvv\" (UniqueName: \"kubernetes.io/projected/8f3a5505-0117-40ea-821a-f129153f05bb-kube-api-access-zmpvv\") pod \"dnsmasq-dns-5966d87587-m8dxc\" (UID: \"8f3a5505-0117-40ea-821a-f129153f05bb\") " pod="openstack/dnsmasq-dns-5966d87587-m8dxc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.941116 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-4cfbl"] Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.948249 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4cfbl" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.951379 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbj7t\" (UniqueName: \"kubernetes.io/projected/d69672fa-5b02-4df8-b1e7-e552d31f7465-kube-api-access-pbj7t\") pod \"keystone-bootstrap-zbkmc\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") " pod="openstack/keystone-bootstrap-zbkmc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.951488 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-fernet-keys\") pod \"keystone-bootstrap-zbkmc\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") " pod="openstack/keystone-bootstrap-zbkmc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.951517 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/85a8313b-5768-450d-bf40-3a3197e9b03f-config\") pod \"neutron-db-sync-4cfbl\" (UID: \"85a8313b-5768-450d-bf40-3a3197e9b03f\") " pod="openstack/neutron-db-sync-4cfbl" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.955517 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.955807 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-sb28w" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.956003 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.957014 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rmqb\" (UniqueName: \"kubernetes.io/projected/85a8313b-5768-450d-bf40-3a3197e9b03f-kube-api-access-8rmqb\") pod \"neutron-db-sync-4cfbl\" (UID: \"85a8313b-5768-450d-bf40-3a3197e9b03f\") " pod="openstack/neutron-db-sync-4cfbl" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.957361 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-config-data\") pod \"keystone-bootstrap-zbkmc\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") " pod="openstack/keystone-bootstrap-zbkmc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.957437 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-scripts\") pod \"keystone-bootstrap-zbkmc\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") " pod="openstack/keystone-bootstrap-zbkmc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.957511 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-credential-keys\") pod \"keystone-bootstrap-zbkmc\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") " pod="openstack/keystone-bootstrap-zbkmc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.957615 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-combined-ca-bundle\") pod \"keystone-bootstrap-zbkmc\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") " pod="openstack/keystone-bootstrap-zbkmc" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.957753 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a8313b-5768-450d-bf40-3a3197e9b03f-combined-ca-bundle\") pod \"neutron-db-sync-4cfbl\" (UID: \"85a8313b-5768-450d-bf40-3a3197e9b03f\") " pod="openstack/neutron-db-sync-4cfbl" Dec 06 07:23:11 crc kubenswrapper[4895]: I1206 07:23:11.975673 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4cfbl"] Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.087950 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbj7t\" (UniqueName: \"kubernetes.io/projected/d69672fa-5b02-4df8-b1e7-e552d31f7465-kube-api-access-pbj7t\") pod \"keystone-bootstrap-zbkmc\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") " pod="openstack/keystone-bootstrap-zbkmc" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.088094 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-fernet-keys\") pod \"keystone-bootstrap-zbkmc\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") " pod="openstack/keystone-bootstrap-zbkmc" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.088122 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/85a8313b-5768-450d-bf40-3a3197e9b03f-config\") pod \"neutron-db-sync-4cfbl\" (UID: 
\"85a8313b-5768-450d-bf40-3a3197e9b03f\") " pod="openstack/neutron-db-sync-4cfbl" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.088180 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rmqb\" (UniqueName: \"kubernetes.io/projected/85a8313b-5768-450d-bf40-3a3197e9b03f-kube-api-access-8rmqb\") pod \"neutron-db-sync-4cfbl\" (UID: \"85a8313b-5768-450d-bf40-3a3197e9b03f\") " pod="openstack/neutron-db-sync-4cfbl" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.088306 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-config-data\") pod \"keystone-bootstrap-zbkmc\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") " pod="openstack/keystone-bootstrap-zbkmc" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.088353 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-scripts\") pod \"keystone-bootstrap-zbkmc\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") " pod="openstack/keystone-bootstrap-zbkmc" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.088386 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-credential-keys\") pod \"keystone-bootstrap-zbkmc\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") " pod="openstack/keystone-bootstrap-zbkmc" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.088451 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-combined-ca-bundle\") pod \"keystone-bootstrap-zbkmc\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") " pod="openstack/keystone-bootstrap-zbkmc" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.088571 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a8313b-5768-450d-bf40-3a3197e9b03f-combined-ca-bundle\") pod \"neutron-db-sync-4cfbl\" (UID: \"85a8313b-5768-450d-bf40-3a3197e9b03f\") " pod="openstack/neutron-db-sync-4cfbl" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.091879 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5966d87587-m8dxc" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.104258 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-scripts\") pod \"keystone-bootstrap-zbkmc\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") " pod="openstack/keystone-bootstrap-zbkmc" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.109926 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/85a8313b-5768-450d-bf40-3a3197e9b03f-config\") pod \"neutron-db-sync-4cfbl\" (UID: \"85a8313b-5768-450d-bf40-3a3197e9b03f\") " pod="openstack/neutron-db-sync-4cfbl" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.116170 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-combined-ca-bundle\") pod \"keystone-bootstrap-zbkmc\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") " pod="openstack/keystone-bootstrap-zbkmc" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.117161 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-credential-keys\") pod \"keystone-bootstrap-zbkmc\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") " pod="openstack/keystone-bootstrap-zbkmc" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.118136 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-config-data\") pod \"keystone-bootstrap-zbkmc\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") " pod="openstack/keystone-bootstrap-zbkmc" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.120221 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-fernet-keys\") pod \"keystone-bootstrap-zbkmc\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") " pod="openstack/keystone-bootstrap-zbkmc" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.135309 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a8313b-5768-450d-bf40-3a3197e9b03f-combined-ca-bundle\") pod \"neutron-db-sync-4cfbl\" (UID: \"85a8313b-5768-450d-bf40-3a3197e9b03f\") " pod="openstack/neutron-db-sync-4cfbl" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.156355 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rmqb\" (UniqueName: \"kubernetes.io/projected/85a8313b-5768-450d-bf40-3a3197e9b03f-kube-api-access-8rmqb\") pod \"neutron-db-sync-4cfbl\" (UID: \"85a8313b-5768-450d-bf40-3a3197e9b03f\") " pod="openstack/neutron-db-sync-4cfbl" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.166123 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbj7t\" (UniqueName: \"kubernetes.io/projected/d69672fa-5b02-4df8-b1e7-e552d31f7465-kube-api-access-pbj7t\") pod \"keystone-bootstrap-zbkmc\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") " pod="openstack/keystone-bootstrap-zbkmc" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.212123 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-dq2gw"] Dec 06 
07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.213705 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.219896 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cr22b" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.220169 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.238360 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.301553 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-etc-machine-id\") pod \"cinder-db-sync-dq2gw\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.301911 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-config-data\") pod \"cinder-db-sync-dq2gw\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.301988 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-db-sync-config-data\") pod \"cinder-db-sync-dq2gw\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.302039 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-scripts\") pod \"cinder-db-sync-dq2gw\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.302097 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-combined-ca-bundle\") pod \"cinder-db-sync-dq2gw\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.302148 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpdrh\" (UniqueName: \"kubernetes.io/projected/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-kube-api-access-bpdrh\") pod \"cinder-db-sync-dq2gw\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.327185 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4cfbl" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.332535 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-dtbpj"] Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.351956 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dtbpj" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.360968 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-44m6r" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.385935 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.396827 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zbkmc" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.399417 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dq2gw"] Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.405826 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-etc-machine-id\") pod \"cinder-db-sync-dq2gw\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.405963 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-config-data\") pod \"cinder-db-sync-dq2gw\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.406011 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-db-sync-config-data\") pod \"cinder-db-sync-dq2gw\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.406050 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-scripts\") pod \"cinder-db-sync-dq2gw\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.406084 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-combined-ca-bundle\") pod \"cinder-db-sync-dq2gw\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.406120 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpdrh\" (UniqueName: \"kubernetes.io/projected/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-kube-api-access-bpdrh\") pod \"cinder-db-sync-dq2gw\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.408413 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-etc-machine-id\") pod \"cinder-db-sync-dq2gw\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.425586 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-scripts\") pod \"cinder-db-sync-dq2gw\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.426526 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-config-data\") pod \"cinder-db-sync-dq2gw\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.438891 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-combined-ca-bundle\") pod \"cinder-db-sync-dq2gw\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.448378 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpdrh\" (UniqueName: \"kubernetes.io/projected/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-kube-api-access-bpdrh\") pod \"cinder-db-sync-dq2gw\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.448788 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dtbpj"] Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.454268 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-db-sync-config-data\") pod \"cinder-db-sync-dq2gw\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.486732 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5966d87587-m8dxc"] Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.507781 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/584b2782-7ea9-4697-8862-ae2090bc918c-db-sync-config-data\") pod \"barbican-db-sync-dtbpj\" (UID: \"584b2782-7ea9-4697-8862-ae2090bc918c\") " pod="openstack/barbican-db-sync-dtbpj" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.508225 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkb4w\" (UniqueName: \"kubernetes.io/projected/584b2782-7ea9-4697-8862-ae2090bc918c-kube-api-access-bkb4w\") pod \"barbican-db-sync-dtbpj\" (UID: \"584b2782-7ea9-4697-8862-ae2090bc918c\") " pod="openstack/barbican-db-sync-dtbpj" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.508255 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584b2782-7ea9-4697-8862-ae2090bc918c-combined-ca-bundle\") pod \"barbican-db-sync-dtbpj\" (UID: \"584b2782-7ea9-4697-8862-ae2090bc918c\") " pod="openstack/barbican-db-sync-dtbpj" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.508367 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.511452 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.519769 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.519775 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.526987 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d74777d4c-9gds5"] Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.528909 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d74777d4c-9gds5" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.536736 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-w9qg6"] Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.538945 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w9qg6" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.541333 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.541549 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.541883 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jld8h" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.548252 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.590659 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.598129 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-w9qg6"] Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.611483 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-scripts\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.611523 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8612fe14-b4a6-4626-b49c-fa9ea22367ae-run-httpd\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.611559 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-dns-svc\") pod \"dnsmasq-dns-d74777d4c-9gds5\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " pod="openstack/dnsmasq-dns-d74777d4c-9gds5" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.611588 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-config-data\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.611602 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8612fe14-b4a6-4626-b49c-fa9ea22367ae-log-httpd\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.611635 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkb4w\" (UniqueName: \"kubernetes.io/projected/584b2782-7ea9-4697-8862-ae2090bc918c-kube-api-access-bkb4w\") pod \"barbican-db-sync-dtbpj\" (UID: \"584b2782-7ea9-4697-8862-ae2090bc918c\") " pod="openstack/barbican-db-sync-dtbpj" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.611659 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584b2782-7ea9-4697-8862-ae2090bc918c-combined-ca-bundle\") pod \"barbican-db-sync-dtbpj\" (UID: \"584b2782-7ea9-4697-8862-ae2090bc918c\") " pod="openstack/barbican-db-sync-dtbpj" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.611690 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-ovsdbserver-nb\") pod \"dnsmasq-dns-d74777d4c-9gds5\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " pod="openstack/dnsmasq-dns-d74777d4c-9gds5" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.611730 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rbbw\" (UniqueName: \"kubernetes.io/projected/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-kube-api-access-5rbbw\") pod 
\"dnsmasq-dns-d74777d4c-9gds5\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " pod="openstack/dnsmasq-dns-d74777d4c-9gds5" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.611754 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-config\") pod \"dnsmasq-dns-d74777d4c-9gds5\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " pod="openstack/dnsmasq-dns-d74777d4c-9gds5" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.611786 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/584b2782-7ea9-4697-8862-ae2090bc918c-db-sync-config-data\") pod \"barbican-db-sync-dtbpj\" (UID: \"584b2782-7ea9-4697-8862-ae2090bc918c\") " pod="openstack/barbican-db-sync-dtbpj" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.611807 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-ovsdbserver-sb\") pod \"dnsmasq-dns-d74777d4c-9gds5\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " pod="openstack/dnsmasq-dns-d74777d4c-9gds5" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.611846 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.611870 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl8mg\" (UniqueName: \"kubernetes.io/projected/8612fe14-b4a6-4626-b49c-fa9ea22367ae-kube-api-access-nl8mg\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.611893 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.625159 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/584b2782-7ea9-4697-8862-ae2090bc918c-db-sync-config-data\") pod \"barbican-db-sync-dtbpj\" (UID: \"584b2782-7ea9-4697-8862-ae2090bc918c\") " pod="openstack/barbican-db-sync-dtbpj" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.629338 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584b2782-7ea9-4697-8862-ae2090bc918c-combined-ca-bundle\") pod \"barbican-db-sync-dtbpj\" (UID: \"584b2782-7ea9-4697-8862-ae2090bc918c\") " pod="openstack/barbican-db-sync-dtbpj" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.648943 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkb4w\" (UniqueName: \"kubernetes.io/projected/584b2782-7ea9-4697-8862-ae2090bc918c-kube-api-access-bkb4w\") pod \"barbican-db-sync-dtbpj\" (UID: \"584b2782-7ea9-4697-8862-ae2090bc918c\") " 
pod="openstack/barbican-db-sync-dtbpj" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.654049 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d74777d4c-9gds5"] Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.713617 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.713664 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl8mg\" (UniqueName: \"kubernetes.io/projected/8612fe14-b4a6-4626-b49c-fa9ea22367ae-kube-api-access-nl8mg\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.713717 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.713745 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-scripts\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.713805 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8612fe14-b4a6-4626-b49c-fa9ea22367ae-run-httpd\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.713859 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb6a838b-c173-4455-b8b5-b152aeee463a-logs\") pod \"placement-db-sync-w9qg6\" (UID: \"fb6a838b-c173-4455-b8b5-b152aeee463a\") " pod="openstack/placement-db-sync-w9qg6" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.713878 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-dns-svc\") pod \"dnsmasq-dns-d74777d4c-9gds5\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " pod="openstack/dnsmasq-dns-d74777d4c-9gds5" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.713896 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6a838b-c173-4455-b8b5-b152aeee463a-combined-ca-bundle\") pod \"placement-db-sync-w9qg6\" (UID: \"fb6a838b-c173-4455-b8b5-b152aeee463a\") " pod="openstack/placement-db-sync-w9qg6" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.713916 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6a838b-c173-4455-b8b5-b152aeee463a-config-data\") pod \"placement-db-sync-w9qg6\" (UID: \"fb6a838b-c173-4455-b8b5-b152aeee463a\") " pod="openstack/placement-db-sync-w9qg6" Dec 06 07:23:12 crc 
kubenswrapper[4895]: I1206 07:23:12.713937 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-config-data\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.713952 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8612fe14-b4a6-4626-b49c-fa9ea22367ae-log-httpd\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.713996 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-ovsdbserver-nb\") pod \"dnsmasq-dns-d74777d4c-9gds5\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " pod="openstack/dnsmasq-dns-d74777d4c-9gds5" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.714012 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb6a838b-c173-4455-b8b5-b152aeee463a-scripts\") pod \"placement-db-sync-w9qg6\" (UID: \"fb6a838b-c173-4455-b8b5-b152aeee463a\") " pod="openstack/placement-db-sync-w9qg6" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.714039 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rbbw\" (UniqueName: \"kubernetes.io/projected/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-kube-api-access-5rbbw\") pod \"dnsmasq-dns-d74777d4c-9gds5\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " pod="openstack/dnsmasq-dns-d74777d4c-9gds5" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.714058 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-config\") pod \"dnsmasq-dns-d74777d4c-9gds5\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " pod="openstack/dnsmasq-dns-d74777d4c-9gds5" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.714080 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87ftz\" (UniqueName: \"kubernetes.io/projected/fb6a838b-c173-4455-b8b5-b152aeee463a-kube-api-access-87ftz\") pod \"placement-db-sync-w9qg6\" (UID: \"fb6a838b-c173-4455-b8b5-b152aeee463a\") " pod="openstack/placement-db-sync-w9qg6" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.714111 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-ovsdbserver-sb\") pod \"dnsmasq-dns-d74777d4c-9gds5\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " pod="openstack/dnsmasq-dns-d74777d4c-9gds5" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.715091 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8612fe14-b4a6-4626-b49c-fa9ea22367ae-run-httpd\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.715338 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-dns-svc\") pod \"dnsmasq-dns-d74777d4c-9gds5\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " pod="openstack/dnsmasq-dns-d74777d4c-9gds5" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.715679 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8612fe14-b4a6-4626-b49c-fa9ea22367ae-log-httpd\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.716076 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-ovsdbserver-nb\") pod \"dnsmasq-dns-d74777d4c-9gds5\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " pod="openstack/dnsmasq-dns-d74777d4c-9gds5" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.716946 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-config\") pod \"dnsmasq-dns-d74777d4c-9gds5\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " pod="openstack/dnsmasq-dns-d74777d4c-9gds5" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.721617 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.723068 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-scripts\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.725840 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-config-data\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.726395 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-ovsdbserver-sb\") pod \"dnsmasq-dns-d74777d4c-9gds5\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " pod="openstack/dnsmasq-dns-d74777d4c-9gds5" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.734407 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.738299 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rbbw\" (UniqueName: \"kubernetes.io/projected/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-kube-api-access-5rbbw\") pod \"dnsmasq-dns-d74777d4c-9gds5\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " pod="openstack/dnsmasq-dns-d74777d4c-9gds5" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.741539 4895 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nl8mg\" (UniqueName: \"kubernetes.io/projected/8612fe14-b4a6-4626-b49c-fa9ea22367ae-kube-api-access-nl8mg\") pod \"ceilometer-0\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.807917 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dtbpj" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.823955 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb6a838b-c173-4455-b8b5-b152aeee463a-logs\") pod \"placement-db-sync-w9qg6\" (UID: \"fb6a838b-c173-4455-b8b5-b152aeee463a\") " pod="openstack/placement-db-sync-w9qg6" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.824010 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6a838b-c173-4455-b8b5-b152aeee463a-combined-ca-bundle\") pod \"placement-db-sync-w9qg6\" (UID: \"fb6a838b-c173-4455-b8b5-b152aeee463a\") " pod="openstack/placement-db-sync-w9qg6" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.824045 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6a838b-c173-4455-b8b5-b152aeee463a-config-data\") pod \"placement-db-sync-w9qg6\" (UID: \"fb6a838b-c173-4455-b8b5-b152aeee463a\") " pod="openstack/placement-db-sync-w9qg6" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.824131 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb6a838b-c173-4455-b8b5-b152aeee463a-scripts\") pod \"placement-db-sync-w9qg6\" (UID: \"fb6a838b-c173-4455-b8b5-b152aeee463a\") " pod="openstack/placement-db-sync-w9qg6" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.824199 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87ftz\" (UniqueName: \"kubernetes.io/projected/fb6a838b-c173-4455-b8b5-b152aeee463a-kube-api-access-87ftz\") pod \"placement-db-sync-w9qg6\" (UID: \"fb6a838b-c173-4455-b8b5-b152aeee463a\") " pod="openstack/placement-db-sync-w9qg6" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.824878 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb6a838b-c173-4455-b8b5-b152aeee463a-logs\") pod \"placement-db-sync-w9qg6\" (UID: \"fb6a838b-c173-4455-b8b5-b152aeee463a\") " pod="openstack/placement-db-sync-w9qg6" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.872308 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb6a838b-c173-4455-b8b5-b152aeee463a-scripts\") pod \"placement-db-sync-w9qg6\" (UID: \"fb6a838b-c173-4455-b8b5-b152aeee463a\") " pod="openstack/placement-db-sync-w9qg6" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.872506 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6a838b-c173-4455-b8b5-b152aeee463a-config-data\") pod \"placement-db-sync-w9qg6\" (UID: \"fb6a838b-c173-4455-b8b5-b152aeee463a\") " pod="openstack/placement-db-sync-w9qg6" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.873332 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fb6a838b-c173-4455-b8b5-b152aeee463a-combined-ca-bundle\") pod \"placement-db-sync-w9qg6\" (UID: \"fb6a838b-c173-4455-b8b5-b152aeee463a\") " pod="openstack/placement-db-sync-w9qg6" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.892234 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.905813 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87ftz\" (UniqueName: \"kubernetes.io/projected/fb6a838b-c173-4455-b8b5-b152aeee463a-kube-api-access-87ftz\") pod \"placement-db-sync-w9qg6\" (UID: \"fb6a838b-c173-4455-b8b5-b152aeee463a\") " pod="openstack/placement-db-sync-w9qg6" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.931935 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d74777d4c-9gds5" Dec 06 07:23:12 crc kubenswrapper[4895]: I1206 07:23:12.943072 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w9qg6" Dec 06 07:23:13 crc kubenswrapper[4895]: I1206 07:23:13.035912 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5966d87587-m8dxc"] Dec 06 07:23:13 crc kubenswrapper[4895]: I1206 07:23:13.055045 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4cfbl"] Dec 06 07:23:13 crc kubenswrapper[4895]: I1206 07:23:13.223405 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dq2gw"] Dec 06 07:23:13 crc kubenswrapper[4895]: I1206 07:23:13.237570 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zbkmc"] Dec 06 07:23:13 crc kubenswrapper[4895]: W1206 07:23:13.310495 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd69672fa_5b02_4df8_b1e7_e552d31f7465.slice/crio-f5086f6c8e7037d6988231003b5ce65cecde8f6cbb51917ca3a21edbe09631de WatchSource:0}: Error finding container f5086f6c8e7037d6988231003b5ce65cecde8f6cbb51917ca3a21edbe09631de: Status 404 returned error can't find the container with id f5086f6c8e7037d6988231003b5ce65cecde8f6cbb51917ca3a21edbe09631de Dec 06 07:23:13 crc kubenswrapper[4895]: I1206 07:23:13.647680 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zbkmc" event={"ID":"d69672fa-5b02-4df8-b1e7-e552d31f7465","Type":"ContainerStarted","Data":"4b9c441e33f352f90714fb491f5e1a626a0ddfe9b8f6d5fc8e38bf0b4cad0cbb"} Dec 06 07:23:13 crc kubenswrapper[4895]: I1206 07:23:13.648018 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zbkmc" event={"ID":"d69672fa-5b02-4df8-b1e7-e552d31f7465","Type":"ContainerStarted","Data":"f5086f6c8e7037d6988231003b5ce65cecde8f6cbb51917ca3a21edbe09631de"} Dec 06 07:23:13 crc kubenswrapper[4895]: I1206 07:23:13.670740 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5966d87587-m8dxc" event={"ID":"8f3a5505-0117-40ea-821a-f129153f05bb","Type":"ContainerStarted","Data":"89443cac557e77fc4bc3ff7c2ab374abcf1a76e666f3712ae6cea2372f891630"} Dec 06 07:23:13 crc kubenswrapper[4895]: I1206 07:23:13.670810 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5966d87587-m8dxc" event={"ID":"8f3a5505-0117-40ea-821a-f129153f05bb","Type":"ContainerStarted","Data":"7a5e1957f5d11a4a5b8b6c82abbdaca8df1a1604fadb436e3048d135eca146a3"} Dec 
06 07:23:13 crc kubenswrapper[4895]: I1206 07:23:13.670994 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5966d87587-m8dxc" podUID="8f3a5505-0117-40ea-821a-f129153f05bb" containerName="init" containerID="cri-o://89443cac557e77fc4bc3ff7c2ab374abcf1a76e666f3712ae6cea2372f891630" gracePeriod=10 Dec 06 07:23:13 crc kubenswrapper[4895]: I1206 07:23:13.684948 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4cfbl" event={"ID":"85a8313b-5768-450d-bf40-3a3197e9b03f","Type":"ContainerStarted","Data":"43dc6067180e3f65623c69b4994dd075ce8e1c72869263fa6d55a6b7dce89050"} Dec 06 07:23:13 crc kubenswrapper[4895]: I1206 07:23:13.684992 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4cfbl" event={"ID":"85a8313b-5768-450d-bf40-3a3197e9b03f","Type":"ContainerStarted","Data":"654fc6c9c58e364e1bdc58968671032749fe4b925e9226fddbd2c491d84b8d5c"} Dec 06 07:23:13 crc kubenswrapper[4895]: I1206 07:23:13.685982 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dtbpj"] Dec 06 07:23:13 crc kubenswrapper[4895]: I1206 07:23:13.687031 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zbkmc" podStartSLOduration=2.6870176470000002 podStartE2EDuration="2.687017647s" podCreationTimestamp="2025-12-06 07:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:23:13.681015646 +0000 UTC m=+1556.082404526" watchObservedRunningTime="2025-12-06 07:23:13.687017647 +0000 UTC m=+1556.088406517" Dec 06 07:23:13 crc kubenswrapper[4895]: I1206 07:23:13.687665 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dq2gw" event={"ID":"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4","Type":"ContainerStarted","Data":"c23d7a83e413f68f3cc994f343d9d4e9347ba4aa670aea532b3e091451728191"} Dec 06 07:23:13 crc kubenswrapper[4895]: I1206 07:23:13.749494 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-w9qg6"] Dec 06 07:23:13 crc kubenswrapper[4895]: I1206 07:23:13.759013 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-4cfbl" podStartSLOduration=2.758992582 podStartE2EDuration="2.758992582s" podCreationTimestamp="2025-12-06 07:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:23:13.731058971 +0000 UTC m=+1556.132447841" watchObservedRunningTime="2025-12-06 07:23:13.758992582 +0000 UTC m=+1556.160381452" Dec 06 07:23:13 crc kubenswrapper[4895]: I1206 07:23:13.782073 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d74777d4c-9gds5"] Dec 06 07:23:13 crc kubenswrapper[4895]: I1206 07:23:13.791732 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.117602 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5966d87587-m8dxc" Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.197531 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-ovsdbserver-nb\") pod \"8f3a5505-0117-40ea-821a-f129153f05bb\" (UID: \"8f3a5505-0117-40ea-821a-f129153f05bb\") " Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.197633 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmpvv\" (UniqueName: \"kubernetes.io/projected/8f3a5505-0117-40ea-821a-f129153f05bb-kube-api-access-zmpvv\") pod \"8f3a5505-0117-40ea-821a-f129153f05bb\" (UID: \"8f3a5505-0117-40ea-821a-f129153f05bb\") " Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.197694 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-config\") pod \"8f3a5505-0117-40ea-821a-f129153f05bb\" (UID: \"8f3a5505-0117-40ea-821a-f129153f05bb\") " Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.197752 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-ovsdbserver-sb\") pod \"8f3a5505-0117-40ea-821a-f129153f05bb\" (UID: \"8f3a5505-0117-40ea-821a-f129153f05bb\") " Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.197828 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-dns-svc\") pod \"8f3a5505-0117-40ea-821a-f129153f05bb\" (UID: \"8f3a5505-0117-40ea-821a-f129153f05bb\") " Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.212732 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f3a5505-0117-40ea-821a-f129153f05bb-kube-api-access-zmpvv" (OuterVolumeSpecName: "kube-api-access-zmpvv") pod "8f3a5505-0117-40ea-821a-f129153f05bb" (UID: "8f3a5505-0117-40ea-821a-f129153f05bb"). InnerVolumeSpecName "kube-api-access-zmpvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.231140 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8f3a5505-0117-40ea-821a-f129153f05bb" (UID: "8f3a5505-0117-40ea-821a-f129153f05bb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.242800 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8f3a5505-0117-40ea-821a-f129153f05bb" (UID: "8f3a5505-0117-40ea-821a-f129153f05bb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.244893 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-config" (OuterVolumeSpecName: "config") pod "8f3a5505-0117-40ea-821a-f129153f05bb" (UID: "8f3a5505-0117-40ea-821a-f129153f05bb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.254954 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8f3a5505-0117-40ea-821a-f129153f05bb" (UID: "8f3a5505-0117-40ea-821a-f129153f05bb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.300784 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.300830 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmpvv\" (UniqueName: \"kubernetes.io/projected/8f3a5505-0117-40ea-821a-f129153f05bb-kube-api-access-zmpvv\") on node \"crc\" DevicePath \"\"" Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.300845 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.300855 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.300866 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f3a5505-0117-40ea-821a-f129153f05bb-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.435819 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.703851 4895 util.go:48] "No ready sandbox for pod can be found. 
Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.703906 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5966d87587-m8dxc" event={"ID":"8f3a5505-0117-40ea-821a-f129153f05bb","Type":"ContainerDied","Data":"89443cac557e77fc4bc3ff7c2ab374abcf1a76e666f3712ae6cea2372f891630"}
Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.703997 4895 scope.go:117] "RemoveContainer" containerID="89443cac557e77fc4bc3ff7c2ab374abcf1a76e666f3712ae6cea2372f891630"
Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.703755 4895 generic.go:334] "Generic (PLEG): container finished" podID="8f3a5505-0117-40ea-821a-f129153f05bb" containerID="89443cac557e77fc4bc3ff7c2ab374abcf1a76e666f3712ae6cea2372f891630" exitCode=0
Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.709145 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5966d87587-m8dxc" event={"ID":"8f3a5505-0117-40ea-821a-f129153f05bb","Type":"ContainerDied","Data":"7a5e1957f5d11a4a5b8b6c82abbdaca8df1a1604fadb436e3048d135eca146a3"}
Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.711826 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dtbpj" event={"ID":"584b2782-7ea9-4697-8862-ae2090bc918c","Type":"ContainerStarted","Data":"c9b38b48ae9c51cf88e0646e2e2bb19b6b9810f9f9c256599972b77e8299b260"}
Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.721918 4895 generic.go:334] "Generic (PLEG): container finished" podID="ceac5b30-b9f1-41ff-b5a5-509f785d7cac" containerID="7162eed14345b5896e71c640a1a0647360a04a84a279e78a3bd9c4bc7cfddece" exitCode=0
Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.723461 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d74777d4c-9gds5" event={"ID":"ceac5b30-b9f1-41ff-b5a5-509f785d7cac","Type":"ContainerDied","Data":"7162eed14345b5896e71c640a1a0647360a04a84a279e78a3bd9c4bc7cfddece"}
Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.723587 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d74777d4c-9gds5" event={"ID":"ceac5b30-b9f1-41ff-b5a5-509f785d7cac","Type":"ContainerStarted","Data":"ea0b1d53bfc4796ad81642a790591bf4dc53fd32922e3b471518588e7baf6a0d"}
Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.728493 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8612fe14-b4a6-4626-b49c-fa9ea22367ae","Type":"ContainerStarted","Data":"02cd6f9559a6fbd98638a7c4582ce572a0eb36c01f7f8c575487d4fd247d00f8"}
Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.730897 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w9qg6" event={"ID":"fb6a838b-c173-4455-b8b5-b152aeee463a","Type":"ContainerStarted","Data":"9402c416985496da561615671194275a5640eb7d893ec8f646f9c4e51274095c"}
Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.733086 4895 generic.go:334] "Generic (PLEG): container finished" podID="4f394cd1-aa14-48fa-8643-30d896f0823e" containerID="af1f9c0dc7c7f332389b729b0dbd7801804e28f7a5866ffe130f4e337a03960e" exitCode=0
Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.734556 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8ntsw" event={"ID":"4f394cd1-aa14-48fa-8643-30d896f0823e","Type":"ContainerDied","Data":"af1f9c0dc7c7f332389b729b0dbd7801804e28f7a5866ffe130f4e337a03960e"}
Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.736668 4895 scope.go:117] "RemoveContainer" containerID="89443cac557e77fc4bc3ff7c2ab374abcf1a76e666f3712ae6cea2372f891630"
Dec 06 07:23:14 crc kubenswrapper[4895]: E1206 07:23:14.737148 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89443cac557e77fc4bc3ff7c2ab374abcf1a76e666f3712ae6cea2372f891630\": container with ID starting with 89443cac557e77fc4bc3ff7c2ab374abcf1a76e666f3712ae6cea2372f891630 not found: ID does not exist" containerID="89443cac557e77fc4bc3ff7c2ab374abcf1a76e666f3712ae6cea2372f891630"
Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.737215 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89443cac557e77fc4bc3ff7c2ab374abcf1a76e666f3712ae6cea2372f891630"} err="failed to get container status \"89443cac557e77fc4bc3ff7c2ab374abcf1a76e666f3712ae6cea2372f891630\": rpc error: code = NotFound desc = could not find container \"89443cac557e77fc4bc3ff7c2ab374abcf1a76e666f3712ae6cea2372f891630\": container with ID starting with 89443cac557e77fc4bc3ff7c2ab374abcf1a76e666f3712ae6cea2372f891630 not found: ID does not exist"
Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.798588 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5966d87587-m8dxc"]
Dec 06 07:23:14 crc kubenswrapper[4895]: I1206 07:23:14.827577 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5966d87587-m8dxc"]
Dec 06 07:23:15 crc kubenswrapper[4895]: I1206 07:23:15.233938 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0"
Dec 06 07:23:15 crc kubenswrapper[4895]: I1206 07:23:15.247435 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift\") pod \"swift-storage-0\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " pod="openstack/swift-storage-0"
Dec 06 07:23:15 crc kubenswrapper[4895]: I1206 07:23:15.258367 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Dec 06 07:23:15 crc kubenswrapper[4895]: I1206 07:23:15.746987 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d74777d4c-9gds5" event={"ID":"ceac5b30-b9f1-41ff-b5a5-509f785d7cac","Type":"ContainerStarted","Data":"4acb2e21647a253e335fb76927f109836b46ab75fcf4699fe7bfc3cbaa847494"}
Dec 06 07:23:15 crc kubenswrapper[4895]: I1206 07:23:15.747178 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d74777d4c-9gds5"
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.089379 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f3a5505-0117-40ea-821a-f129153f05bb" path="/var/lib/kubelet/pods/8f3a5505-0117-40ea-821a-f129153f05bb/volumes"
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.127526 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d74777d4c-9gds5" podStartSLOduration=4.1274563220000005 podStartE2EDuration="4.127456322s" podCreationTimestamp="2025-12-06 07:23:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:23:15.775603168 +0000 UTC m=+1558.176992038" watchObservedRunningTime="2025-12-06 07:23:16.127456322 +0000 UTC m=+1558.528845192"
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.138219 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Dec 06 07:23:16 crc kubenswrapper[4895]: W1206 07:23:16.149757 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43a2bfd7_f0c6_4b55_b629_2e11d6b45a42.slice/crio-ac8270e5055231150a31c4b897240a38a8753257ad1e267cf066326afe6f8ce5 WatchSource:0}: Error finding container ac8270e5055231150a31c4b897240a38a8753257ad1e267cf066326afe6f8ce5: Status 404 returned error can't find the container with id ac8270e5055231150a31c4b897240a38a8753257ad1e267cf066326afe6f8ce5
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.232943 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8ntsw"
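The "SyncLoop (PLEG)" entries above carry their payload as a small JSON object after event=, which makes them easy to machine-read. A sketch (same hypothetical kubelet.log file as before) that tallies ContainerDied/ContainerStarted events per pod, a quick way to spot restart churn in a dump like this:

import json
import re
from collections import Counter

# PLEG entries look like: pod="ns/name" event={"ID":...,"Type":...,"Data":...}
PLEG = re.compile(r'pod="([^"]+)" event=(\{.*?\})')

def pleg_events(lines):
    for line in lines:
        if "SyncLoop (PLEG)" not in line:
            continue
        m = PLEG.search(line)
        if m:
            yield m.group(1), json.loads(m.group(2))

counts = Counter()
with open("kubelet.log") as f:
    for pod, ev in pleg_events(f):
        counts[(pod, ev["Type"])] += 1
for (pod, typ), n in counts.most_common():
    print(f"{n:3d}  {typ:16s}  {pod}")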
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.315973 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f394cd1-aa14-48fa-8643-30d896f0823e-scripts\") pod \"4f394cd1-aa14-48fa-8643-30d896f0823e\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") "
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.316223 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-dispersionconf\") pod \"4f394cd1-aa14-48fa-8643-30d896f0823e\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") "
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.316295 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-swiftconf\") pod \"4f394cd1-aa14-48fa-8643-30d896f0823e\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") "
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.316492 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4f394cd1-aa14-48fa-8643-30d896f0823e-etc-swift\") pod \"4f394cd1-aa14-48fa-8643-30d896f0823e\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") "
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.316539 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4f394cd1-aa14-48fa-8643-30d896f0823e-ring-data-devices\") pod \"4f394cd1-aa14-48fa-8643-30d896f0823e\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") "
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.316610 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-626gg\" (UniqueName: \"kubernetes.io/projected/4f394cd1-aa14-48fa-8643-30d896f0823e-kube-api-access-626gg\") pod \"4f394cd1-aa14-48fa-8643-30d896f0823e\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") "
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.316631 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-combined-ca-bundle\") pod \"4f394cd1-aa14-48fa-8643-30d896f0823e\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") "
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.317691 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f394cd1-aa14-48fa-8643-30d896f0823e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4f394cd1-aa14-48fa-8643-30d896f0823e" (UID: "4f394cd1-aa14-48fa-8643-30d896f0823e"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.318300 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f394cd1-aa14-48fa-8643-30d896f0823e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4f394cd1-aa14-48fa-8643-30d896f0823e" (UID: "4f394cd1-aa14-48fa-8643-30d896f0823e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.324701 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f394cd1-aa14-48fa-8643-30d896f0823e-kube-api-access-626gg" (OuterVolumeSpecName: "kube-api-access-626gg") pod "4f394cd1-aa14-48fa-8643-30d896f0823e" (UID: "4f394cd1-aa14-48fa-8643-30d896f0823e"). InnerVolumeSpecName "kube-api-access-626gg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.330430 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4f394cd1-aa14-48fa-8643-30d896f0823e" (UID: "4f394cd1-aa14-48fa-8643-30d896f0823e"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.428389 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f394cd1-aa14-48fa-8643-30d896f0823e" (UID: "4f394cd1-aa14-48fa-8643-30d896f0823e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.442658 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-combined-ca-bundle\") pod \"4f394cd1-aa14-48fa-8643-30d896f0823e\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") "
Dec 06 07:23:16 crc kubenswrapper[4895]: W1206 07:23:16.443307 4895 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4f394cd1-aa14-48fa-8643-30d896f0823e/volumes/kubernetes.io~secret/combined-ca-bundle
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.443344 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f394cd1-aa14-48fa-8643-30d896f0823e" (UID: "4f394cd1-aa14-48fa-8643-30d896f0823e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.443580 4895 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-dispersionconf\") on node \"crc\" DevicePath \"\""
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.443616 4895 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4f394cd1-aa14-48fa-8643-30d896f0823e-etc-swift\") on node \"crc\" DevicePath \"\""
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.443628 4895 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4f394cd1-aa14-48fa-8643-30d896f0823e-ring-data-devices\") on node \"crc\" DevicePath \"\""
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.443640 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-626gg\" (UniqueName: \"kubernetes.io/projected/4f394cd1-aa14-48fa-8643-30d896f0823e-kube-api-access-626gg\") on node \"crc\" DevicePath \"\""
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.443652 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 07:23:16 crc kubenswrapper[4895]: E1206 07:23:16.476607 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-swiftconf podName:4f394cd1-aa14-48fa-8643-30d896f0823e nodeName:}" failed. No retries permitted until 2025-12-06 07:23:16.976575803 +0000 UTC m=+1559.377964673 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "swiftconf" (UniqueName: "kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-swiftconf") pod "4f394cd1-aa14-48fa-8643-30d896f0823e" (UID: "4f394cd1-aa14-48fa-8643-30d896f0823e") : error deleting /var/lib/kubelet/pods/4f394cd1-aa14-48fa-8643-30d896f0823e/volume-subpaths: remove /var/lib/kubelet/pods/4f394cd1-aa14-48fa-8643-30d896f0823e/volume-subpaths: no such file or directory
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.477092 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f394cd1-aa14-48fa-8643-30d896f0823e-scripts" (OuterVolumeSpecName: "scripts") pod "4f394cd1-aa14-48fa-8643-30d896f0823e" (UID: "4f394cd1-aa14-48fa-8643-30d896f0823e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.547941 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f394cd1-aa14-48fa-8643-30d896f0823e-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.768319 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8ntsw"
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.768452 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8ntsw" event={"ID":"4f394cd1-aa14-48fa-8643-30d896f0823e","Type":"ContainerDied","Data":"d2dc089a26e4475d743a05e2db0138fe013482625cb127740a9b870c7c71c3e5"}
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.768551 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2dc089a26e4475d743a05e2db0138fe013482625cb127740a9b870c7c71c3e5"
Dec 06 07:23:16 crc kubenswrapper[4895]: I1206 07:23:16.771316 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerStarted","Data":"ac8270e5055231150a31c4b897240a38a8753257ad1e267cf066326afe6f8ce5"}
Dec 06 07:23:17 crc kubenswrapper[4895]: I1206 07:23:17.063791 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-swiftconf\") pod \"4f394cd1-aa14-48fa-8643-30d896f0823e\" (UID: \"4f394cd1-aa14-48fa-8643-30d896f0823e\") "
Dec 06 07:23:17 crc kubenswrapper[4895]: I1206 07:23:17.469958 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4f394cd1-aa14-48fa-8643-30d896f0823e" (UID: "4f394cd1-aa14-48fa-8643-30d896f0823e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:23:17 crc kubenswrapper[4895]: I1206 07:23:17.484272 4895 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4f394cd1-aa14-48fa-8643-30d896f0823e-swiftconf\") on node \"crc\" DevicePath \"\""
Dec 06 07:23:19 crc kubenswrapper[4895]: E1206 07:23:19.860110 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd69672fa_5b02_4df8_b1e7_e552d31f7465.slice/crio-4b9c441e33f352f90714fb491f5e1a626a0ddfe9b8f6d5fc8e38bf0b4cad0cbb.scope\": RecentStats: unable to find data in memory cache]"
Dec 06 07:23:20 crc kubenswrapper[4895]: I1206 07:23:20.837146 4895 generic.go:334] "Generic (PLEG): container finished" podID="d69672fa-5b02-4df8-b1e7-e552d31f7465" containerID="4b9c441e33f352f90714fb491f5e1a626a0ddfe9b8f6d5fc8e38bf0b4cad0cbb" exitCode=0
Dec 06 07:23:20 crc kubenswrapper[4895]: I1206 07:23:20.837441 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zbkmc" event={"ID":"d69672fa-5b02-4df8-b1e7-e552d31f7465","Type":"ContainerDied","Data":"4b9c441e33f352f90714fb491f5e1a626a0ddfe9b8f6d5fc8e38bf0b4cad0cbb"}
Dec 06 07:23:22 crc kubenswrapper[4895]: I1206 07:23:22.933751 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d74777d4c-9gds5"
Dec 06 07:23:22 crc kubenswrapper[4895]: I1206 07:23:22.992309 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784d65c867-skwcf"]
Dec 06 07:23:22 crc kubenswrapper[4895]: I1206 07:23:22.992586 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-784d65c867-skwcf" podUID="1ec3bc59-2ad7-4af0-837b-61e1254a50f7" containerName="dnsmasq-dns" containerID="cri-o://86f16c170b2f21bbb35557008ca31ed57aef081af845e400495674322552d2a5" gracePeriod=10
Dec 06 07:23:24 crc kubenswrapper[4895]: I1206 07:23:24.882224 4895 generic.go:334] "Generic (PLEG): container finished" podID="1ec3bc59-2ad7-4af0-837b-61e1254a50f7" containerID="86f16c170b2f21bbb35557008ca31ed57aef081af845e400495674322552d2a5" exitCode=0
Dec 06 07:23:24 crc kubenswrapper[4895]: I1206 07:23:24.882327 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784d65c867-skwcf" event={"ID":"1ec3bc59-2ad7-4af0-837b-61e1254a50f7","Type":"ContainerDied","Data":"86f16c170b2f21bbb35557008ca31ed57aef081af845e400495674322552d2a5"}
Dec 06 07:23:27 crc kubenswrapper[4895]: I1206 07:23:27.252813 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-784d65c867-skwcf" podUID="1ec3bc59-2ad7-4af0-837b-61e1254a50f7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: connect: connection refused"
Dec 06 07:23:29 crc kubenswrapper[4895]: I1206 07:23:29.696977 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 07:23:29 crc kubenswrapper[4895]: I1206 07:23:29.697324 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 07:23:29 crc kubenswrapper[4895]: I1206 07:23:29.697372 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2"
Dec 06 07:23:29 crc kubenswrapper[4895]: I1206 07:23:29.698578 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 07:23:29 crc kubenswrapper[4895]: I1206 07:23:29.698903 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" gracePeriod=600
Dec 06 07:23:31 crc kubenswrapper[4895]: I1206 07:23:31.947393 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" exitCode=0
Dec 06 07:23:31 crc kubenswrapper[4895]: I1206 07:23:31.947789 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115"}
Dec 06 07:23:31 crc kubenswrapper[4895]: I1206 07:23:31.947861 4895 scope.go:117] "RemoveContainer" containerID="6ccc9113d0ff0776606793bc5166b14a5fc6157da50c2a82c90bf46796771601"
Dec 06 07:23:32 crc kubenswrapper[4895]: I1206 07:23:32.251639 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-784d65c867-skwcf" podUID="1ec3bc59-2ad7-4af0-837b-61e1254a50f7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: connect: connection refused"
Dec 06 07:23:37 crc kubenswrapper[4895]: I1206 07:23:37.252151 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-784d65c867-skwcf" podUID="1ec3bc59-2ad7-4af0-837b-61e1254a50f7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: connect: connection refused"
Dec 06 07:23:37 crc kubenswrapper[4895]: I1206 07:23:37.252888 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-784d65c867-skwcf"
Dec 06 07:23:41 crc kubenswrapper[4895]: I1206 07:23:41.834768 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zbkmc"
Dec 06 07:23:41 crc kubenswrapper[4895]: I1206 07:23:41.998405 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbj7t\" (UniqueName: \"kubernetes.io/projected/d69672fa-5b02-4df8-b1e7-e552d31f7465-kube-api-access-pbj7t\") pod \"d69672fa-5b02-4df8-b1e7-e552d31f7465\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") "
Dec 06 07:23:41 crc kubenswrapper[4895]: I1206 07:23:41.998491 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-scripts\") pod \"d69672fa-5b02-4df8-b1e7-e552d31f7465\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") "
Dec 06 07:23:41 crc kubenswrapper[4895]: I1206 07:23:41.998623 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-credential-keys\") pod \"d69672fa-5b02-4df8-b1e7-e552d31f7465\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") "
Dec 06 07:23:41 crc kubenswrapper[4895]: I1206 07:23:41.998694 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-combined-ca-bundle\") pod \"d69672fa-5b02-4df8-b1e7-e552d31f7465\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") "
Dec 06 07:23:41 crc kubenswrapper[4895]: I1206 07:23:41.998731 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-fernet-keys\") pod \"d69672fa-5b02-4df8-b1e7-e552d31f7465\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") "
Dec 06 07:23:41 crc kubenswrapper[4895]: I1206 07:23:41.998765 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-config-data\") pod \"d69672fa-5b02-4df8-b1e7-e552d31f7465\" (UID: \"d69672fa-5b02-4df8-b1e7-e552d31f7465\") "
Dec 06 07:23:42 crc kubenswrapper[4895]: I1206 07:23:42.009673 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d69672fa-5b02-4df8-b1e7-e552d31f7465" (UID: "d69672fa-5b02-4df8-b1e7-e552d31f7465"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:23:42 crc kubenswrapper[4895]: I1206 07:23:42.021004 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-scripts" (OuterVolumeSpecName: "scripts") pod "d69672fa-5b02-4df8-b1e7-e552d31f7465" (UID: "d69672fa-5b02-4df8-b1e7-e552d31f7465"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:23:42 crc kubenswrapper[4895]: I1206 07:23:42.029352 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d69672fa-5b02-4df8-b1e7-e552d31f7465" (UID: "d69672fa-5b02-4df8-b1e7-e552d31f7465"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:23:42 crc kubenswrapper[4895]: I1206 07:23:42.030606 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d69672fa-5b02-4df8-b1e7-e552d31f7465-kube-api-access-pbj7t" (OuterVolumeSpecName: "kube-api-access-pbj7t") pod "d69672fa-5b02-4df8-b1e7-e552d31f7465" (UID: "d69672fa-5b02-4df8-b1e7-e552d31f7465"). InnerVolumeSpecName "kube-api-access-pbj7t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:23:42 crc kubenswrapper[4895]: I1206 07:23:42.032184 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-config-data" (OuterVolumeSpecName: "config-data") pod "d69672fa-5b02-4df8-b1e7-e552d31f7465" (UID: "d69672fa-5b02-4df8-b1e7-e552d31f7465"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:23:42 crc kubenswrapper[4895]: I1206 07:23:42.033920 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d69672fa-5b02-4df8-b1e7-e552d31f7465" (UID: "d69672fa-5b02-4df8-b1e7-e552d31f7465"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:23:42 crc kubenswrapper[4895]: E1206 07:23:42.044108 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:7a2056615520e272bae43ec3f34e2ba7a92c1d364b8d9106b53bd694619fc9c2"
Dec 06 07:23:42 crc kubenswrapper[4895]: E1206 07:23:42.044318 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:7a2056615520e272bae43ec3f34e2ba7a92c1d364b8d9106b53bd694619fc9c2,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpdrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-dq2gw_openstack(f4b37f8f-5d15-4d1b-aab9-c4852295dcd4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 06 07:23:42 crc kubenswrapper[4895]: E1206 07:23:42.046960 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-dq2gw" podUID="f4b37f8f-5d15-4d1b-aab9-c4852295dcd4"
Dec 06 07:23:42 crc kubenswrapper[4895]: I1206 07:23:42.074530 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zbkmc"
Dec 06 07:23:42 crc kubenswrapper[4895]: I1206 07:23:42.081358 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zbkmc" event={"ID":"d69672fa-5b02-4df8-b1e7-e552d31f7465","Type":"ContainerDied","Data":"f5086f6c8e7037d6988231003b5ce65cecde8f6cbb51917ca3a21edbe09631de"}
Dec 06 07:23:42 crc kubenswrapper[4895]: I1206 07:23:42.081415 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5086f6c8e7037d6988231003b5ce65cecde8f6cbb51917ca3a21edbe09631de"
Dec 06 07:23:42 crc kubenswrapper[4895]: I1206 07:23:42.101639 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbj7t\" (UniqueName: \"kubernetes.io/projected/d69672fa-5b02-4df8-b1e7-e552d31f7465-kube-api-access-pbj7t\") on node \"crc\" DevicePath \"\""
Dec 06 07:23:42 crc kubenswrapper[4895]: I1206 07:23:42.101676 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 07:23:42 crc kubenswrapper[4895]: I1206 07:23:42.101686 4895 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-credential-keys\") on node \"crc\" DevicePath \"\""
Dec 06 07:23:42 crc kubenswrapper[4895]: I1206 07:23:42.101695 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 07:23:42 crc kubenswrapper[4895]: I1206 07:23:42.101703 4895 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 06 07:23:42 crc kubenswrapper[4895]: I1206 07:23:42.101713 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d69672fa-5b02-4df8-b1e7-e552d31f7465-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 07:23:42 crc kubenswrapper[4895]: I1206 07:23:42.940582 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zbkmc"]
Dec 06 07:23:42 crc kubenswrapper[4895]: I1206 07:23:42.950377 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zbkmc"]
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.028167 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mbg9p"]
Dec 06 07:23:43 crc kubenswrapper[4895]: E1206 07:23:43.028645 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f394cd1-aa14-48fa-8643-30d896f0823e" containerName="swift-ring-rebalance"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.028662 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f394cd1-aa14-48fa-8643-30d896f0823e" containerName="swift-ring-rebalance"
Dec 06 07:23:43 crc kubenswrapper[4895]: E1206 07:23:43.028673 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d69672fa-5b02-4df8-b1e7-e552d31f7465" containerName="keystone-bootstrap"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.028679 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d69672fa-5b02-4df8-b1e7-e552d31f7465" containerName="keystone-bootstrap"
Dec 06 07:23:43 crc kubenswrapper[4895]: E1206 07:23:43.028694 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f3a5505-0117-40ea-821a-f129153f05bb" containerName="init"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.028707 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f3a5505-0117-40ea-821a-f129153f05bb" containerName="init"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.028883 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f3a5505-0117-40ea-821a-f129153f05bb" containerName="init"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.028898 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d69672fa-5b02-4df8-b1e7-e552d31f7465" containerName="keystone-bootstrap"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.028907 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f394cd1-aa14-48fa-8643-30d896f0823e" containerName="swift-ring-rebalance"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.029605 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mbg9p"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.031768 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.032197 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.032274 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.032654 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.035652 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-s6vzn"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.039409 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mbg9p"]
Dec 06 07:23:43 crc kubenswrapper[4895]: E1206 07:23:43.083180 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:7a2056615520e272bae43ec3f34e2ba7a92c1d364b8d9106b53bd694619fc9c2\\\"\"" pod="openstack/cinder-db-sync-dq2gw" podUID="f4b37f8f-5d15-4d1b-aab9-c4852295dcd4"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.130910 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-combined-ca-bundle\") pod \"keystone-bootstrap-mbg9p\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " pod="openstack/keystone-bootstrap-mbg9p"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.130981 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-config-data\") pod \"keystone-bootstrap-mbg9p\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " pod="openstack/keystone-bootstrap-mbg9p"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.131816 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jstx\" (UniqueName: \"kubernetes.io/projected/ea52d6be-93a0-445d-86a9-1061722b36b1-kube-api-access-5jstx\") pod \"keystone-bootstrap-mbg9p\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " pod="openstack/keystone-bootstrap-mbg9p"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.132000 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-fernet-keys\") pod \"keystone-bootstrap-mbg9p\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " pod="openstack/keystone-bootstrap-mbg9p"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.132104 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-credential-keys\") pod \"keystone-bootstrap-mbg9p\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " pod="openstack/keystone-bootstrap-mbg9p"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.132183 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-scripts\") pod \"keystone-bootstrap-mbg9p\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " pod="openstack/keystone-bootstrap-mbg9p"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.233761 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-scripts\") pod \"keystone-bootstrap-mbg9p\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " pod="openstack/keystone-bootstrap-mbg9p"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.233829 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-combined-ca-bundle\") pod \"keystone-bootstrap-mbg9p\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " pod="openstack/keystone-bootstrap-mbg9p"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.233862 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-config-data\") pod \"keystone-bootstrap-mbg9p\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " pod="openstack/keystone-bootstrap-mbg9p"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.234105 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jstx\" (UniqueName: \"kubernetes.io/projected/ea52d6be-93a0-445d-86a9-1061722b36b1-kube-api-access-5jstx\") pod \"keystone-bootstrap-mbg9p\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " pod="openstack/keystone-bootstrap-mbg9p"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.234140 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-fernet-keys\") pod \"keystone-bootstrap-mbg9p\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " pod="openstack/keystone-bootstrap-mbg9p"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.234173 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-credential-keys\") pod \"keystone-bootstrap-mbg9p\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " pod="openstack/keystone-bootstrap-mbg9p"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.239498 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-scripts\") pod \"keystone-bootstrap-mbg9p\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " pod="openstack/keystone-bootstrap-mbg9p"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.239927 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-fernet-keys\") pod \"keystone-bootstrap-mbg9p\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " pod="openstack/keystone-bootstrap-mbg9p"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.240410 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-combined-ca-bundle\") pod \"keystone-bootstrap-mbg9p\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " pod="openstack/keystone-bootstrap-mbg9p"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.241810 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-credential-keys\") pod \"keystone-bootstrap-mbg9p\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " pod="openstack/keystone-bootstrap-mbg9p"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.242421 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-config-data\") pod \"keystone-bootstrap-mbg9p\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " pod="openstack/keystone-bootstrap-mbg9p"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.255693 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jstx\" (UniqueName: \"kubernetes.io/projected/ea52d6be-93a0-445d-86a9-1061722b36b1-kube-api-access-5jstx\") pod \"keystone-bootstrap-mbg9p\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " pod="openstack/keystone-bootstrap-mbg9p"
Dec 06 07:23:43 crc kubenswrapper[4895]: I1206 07:23:43.358415 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mbg9p"
Dec 06 07:23:44 crc kubenswrapper[4895]: I1206 07:23:44.060825 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d69672fa-5b02-4df8-b1e7-e552d31f7465" path="/var/lib/kubelet/pods/d69672fa-5b02-4df8-b1e7-e552d31f7465/volumes"
Dec 06 07:23:44 crc kubenswrapper[4895]: E1206 07:23:44.454347 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api@sha256:f24234939afca841e46ea4d17bec959b63705ab0e75476465e777d44905c5f1b"
Dec 06 07:23:44 crc kubenswrapper[4895]: E1206 07:23:44.454571 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:f24234939afca841e46ea4d17bec959b63705ab0e75476465e777d44905c5f1b,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-87ftz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-w9qg6_openstack(fb6a838b-c173-4455-b8b5-b152aeee463a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 06 07:23:44 crc kubenswrapper[4895]: E1206 07:23:44.456612 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-w9qg6" podUID="fb6a838b-c173-4455-b8b5-b152aeee463a"
Dec 06 07:23:45 crc kubenswrapper[4895]: E1206 07:23:45.098387 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api@sha256:f24234939afca841e46ea4d17bec959b63705ab0e75476465e777d44905c5f1b\\\"\"" pod="openstack/placement-db-sync-w9qg6" podUID="fb6a838b-c173-4455-b8b5-b152aeee463a"
Dec 06 07:23:47 crc kubenswrapper[4895]: I1206 07:23:47.252378 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-784d65c867-skwcf" podUID="1ec3bc59-2ad7-4af0-837b-61e1254a50f7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: i/o timeout"
Dec 06 07:23:49 crc kubenswrapper[4895]: E1206 07:23:49.157765 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 07:23:49 crc kubenswrapper[4895]: I1206 07:23:49.271245 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784d65c867-skwcf"
Dec 06 07:23:49 crc kubenswrapper[4895]: I1206 07:23:49.455886 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-ovsdbserver-nb\") pod \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\" (UID: \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\") "
Dec 06 07:23:49 crc kubenswrapper[4895]: I1206 07:23:49.456008 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-dns-svc\") pod \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\" (UID: \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\") "
Dec 06 07:23:49 crc kubenswrapper[4895]: I1206 07:23:49.456130 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-config\") pod \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\" (UID: \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\") "
Dec 06 07:23:49 crc kubenswrapper[4895]: I1206 07:23:49.456199 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-ovsdbserver-sb\") pod \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\" (UID: \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\") "
Dec 06 07:23:49 crc kubenswrapper[4895]: I1206 07:23:49.456234 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w68x\" (UniqueName: \"kubernetes.io/projected/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-kube-api-access-8w68x\") pod \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\" (UID: \"1ec3bc59-2ad7-4af0-837b-61e1254a50f7\") "
Dec 06 07:23:49 crc kubenswrapper[4895]: I1206 07:23:49.467769 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-kube-api-access-8w68x" (OuterVolumeSpecName: "kube-api-access-8w68x") pod "1ec3bc59-2ad7-4af0-837b-61e1254a50f7" (UID: "1ec3bc59-2ad7-4af0-837b-61e1254a50f7"). InnerVolumeSpecName "kube-api-access-8w68x". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:23:49 crc kubenswrapper[4895]: I1206 07:23:49.504025 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1ec3bc59-2ad7-4af0-837b-61e1254a50f7" (UID: "1ec3bc59-2ad7-4af0-837b-61e1254a50f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:23:49 crc kubenswrapper[4895]: I1206 07:23:49.508644 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1ec3bc59-2ad7-4af0-837b-61e1254a50f7" (UID: "1ec3bc59-2ad7-4af0-837b-61e1254a50f7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:23:49 crc kubenswrapper[4895]: I1206 07:23:49.510117 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-config" (OuterVolumeSpecName: "config") pod "1ec3bc59-2ad7-4af0-837b-61e1254a50f7" (UID: "1ec3bc59-2ad7-4af0-837b-61e1254a50f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:23:49 crc kubenswrapper[4895]: I1206 07:23:49.511850 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1ec3bc59-2ad7-4af0-837b-61e1254a50f7" (UID: "1ec3bc59-2ad7-4af0-837b-61e1254a50f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:23:49 crc kubenswrapper[4895]: I1206 07:23:49.560011 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:23:49 crc kubenswrapper[4895]: I1206 07:23:49.560090 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 07:23:49 crc kubenswrapper[4895]: I1206 07:23:49.560111 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w68x\" (UniqueName: \"kubernetes.io/projected/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-kube-api-access-8w68x\") on node \"crc\" DevicePath \"\"" Dec 06 07:23:49 crc kubenswrapper[4895]: I1206 07:23:49.560124 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 07:23:49 crc kubenswrapper[4895]: I1206 07:23:49.560136 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ec3bc59-2ad7-4af0-837b-61e1254a50f7-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 07:23:49 crc kubenswrapper[4895]: E1206 07:23:49.660797 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-swift-account@sha256:4eb3a9c95f57df34ab88b952d8ad2057d60ac0aa4526a51070bea5d64e3aeeee" Dec 06 07:23:49 crc kubenswrapper[4895]: E1206 07:23:49.660985 4895 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:account-server,Image:quay.io/podified-antelope-centos9/openstack-swift-account@sha256:4eb3a9c95f57df34ab88b952d8ad2057d60ac0aa4526a51070bea5d64e3aeeee,Command:[/usr/bin/swift-account-server /etc/swift/account-server.conf.d -v],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:account,HostPort:0,ContainerPort:6202,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b7h56h9dh94h67bh697h95h55hbh555h556h675h5fdh57dh579h5fbh64fh5c9h687hb6h678h5d4h549h54h98h8ch564h5bh5bch55dhc8hf8q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:swift,ReadOnly:false,MountPath:/srv/node/pv,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-swift,ReadOnly:false,MountPath:/etc/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cache,ReadOnly:false,MountPath:/var/cache/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lock,ReadOnly:false,MountPath:/var/lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pcs95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42445,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-storage-0_openstack(43a2bfd7-f0c6-4b55-b629-2e11d6b45a42): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:23:50 crc kubenswrapper[4895]: I1206 07:23:50.143229 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-784d65c867-skwcf" Dec 06 07:23:50 crc kubenswrapper[4895]: I1206 07:23:50.143410 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:23:50 crc kubenswrapper[4895]: E1206 07:23:50.143717 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:23:50 crc kubenswrapper[4895]: I1206 07:23:50.143855 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784d65c867-skwcf" event={"ID":"1ec3bc59-2ad7-4af0-837b-61e1254a50f7","Type":"ContainerDied","Data":"98eac78853b58d74f4806f07f704a670e6b774b21f6646d5f20fe14c4aa91388"} Dec 06 07:23:50 crc kubenswrapper[4895]: I1206 07:23:50.227763 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784d65c867-skwcf"] Dec 06 07:23:50 crc kubenswrapper[4895]: I1206 07:23:50.236211 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-784d65c867-skwcf"] Dec 06 07:23:50 crc kubenswrapper[4895]: E1206 07:23:50.887145 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:43a24796dabde68270dbfefa107205e173fdd6a0dc701502858cadbede69da31" Dec 06 07:23:50 crc kubenswrapper[4895]: E1206 07:23:50.887371 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:43a24796dabde68270dbfefa107205e173fdd6a0dc701502858cadbede69da31,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n578h74h96h5f7h687hfbh647hb9h9fh674hb8hffh5dch97h5cch5dfh579h557h577h5d7h596hdbh99h57hbdhcbh5dfh59ch675h58bh644h548q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nl8mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
Dec 06 07:23:50 crc kubenswrapper[4895]: E1206 07:23:50.887371 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:43a24796dabde68270dbfefa107205e173fdd6a0dc701502858cadbede69da31,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n578h74h96h5f7h687hfbh647hb9h9fh674hb8hffh5dch97h5cch5dfh579h557h577h5d7h596hdbh99h57hbdhcbh5dfh59ch675h58bh644h548q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nl8mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8612fe14-b4a6-4626-b49c-fa9ea22367ae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 06 07:23:50 crc kubenswrapper[4895]: I1206 07:23:50.983368 4895 scope.go:117] "RemoveContainer" containerID="86f16c170b2f21bbb35557008ca31ed57aef081af845e400495674322552d2a5"
Dec 06 07:23:51 crc kubenswrapper[4895]: I1206 07:23:51.226718 4895 scope.go:117] "RemoveContainer" containerID="e764a75a2cd2d52795373b799ec062204ee2d9e85907436932c36e5ddfcdf6e8"
Dec 06 07:23:51 crc kubenswrapper[4895]: I1206 07:23:51.492364 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mbg9p"]
Dec 06 07:23:51 crc kubenswrapper[4895]: W1206 07:23:51.495799 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea52d6be_93a0_445d_86a9_1061722b36b1.slice/crio-7b0df8ff397af77cf0b6a11ca50c7b64650986f7ce1612f1a30cfb5084b0e756 WatchSource:0}: Error finding container 7b0df8ff397af77cf0b6a11ca50c7b64650986f7ce1612f1a30cfb5084b0e756: Status 404 returned error can't find the container with id 7b0df8ff397af77cf0b6a11ca50c7b64650986f7ce1612f1a30cfb5084b0e756
Dec 06 07:23:52 crc kubenswrapper[4895]: I1206 07:23:52.064702 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ec3bc59-2ad7-4af0-837b-61e1254a50f7" path="/var/lib/kubelet/pods/1ec3bc59-2ad7-4af0-837b-61e1254a50f7/volumes"
Dec 06 07:23:52 crc kubenswrapper[4895]: I1206 07:23:52.176746 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kx92p" event={"ID":"34b001b3-7a17-444d-8dd9-5e296f84770b","Type":"ContainerStarted","Data":"7322288de69173a46c9c5d01fd459b6bd7190e029716431816a2e04cfcdda2fe"}
Dec 06 07:23:52 crc kubenswrapper[4895]: I1206 07:23:52.180214 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mbg9p" event={"ID":"ea52d6be-93a0-445d-86a9-1061722b36b1","Type":"ContainerStarted","Data":"10652523d65e949d4742cc50fe660d3d9ed6a9316cf28e1771a8aca093c7773a"}
Dec 06 07:23:52 crc kubenswrapper[4895]: I1206 07:23:52.180302 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mbg9p" event={"ID":"ea52d6be-93a0-445d-86a9-1061722b36b1","Type":"ContainerStarted","Data":"7b0df8ff397af77cf0b6a11ca50c7b64650986f7ce1612f1a30cfb5084b0e756"}
Dec 06 07:23:52 crc kubenswrapper[4895]: I1206 07:23:52.184318 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dtbpj" event={"ID":"584b2782-7ea9-4697-8862-ae2090bc918c","Type":"ContainerStarted","Data":"e0eab4e7e98f956fc203ece89fc3349c56301eb127b1a7da0e17f47ea8ecc398"}
Dec 06 07:23:52 crc kubenswrapper[4895]: I1206 07:23:52.203092 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-kx92p" podStartSLOduration=2.687719982 podStartE2EDuration="1m2.203070081s" podCreationTimestamp="2025-12-06 07:22:50 +0000 UTC" firstStartedPulling="2025-12-06 07:22:51.487640405 +0000 UTC m=+1533.889029275" lastFinishedPulling="2025-12-06 07:23:51.002990504 +0000 UTC m=+1593.404379374" observedRunningTime="2025-12-06 07:23:52.193413001 +0000 UTC m=+1594.594801871" watchObservedRunningTime="2025-12-06 07:23:52.203070081 +0000 UTC m=+1594.604458941"
Dec 06 07:23:52 crc kubenswrapper[4895]: I1206 07:23:52.216953 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-dtbpj" podStartSLOduration=2.953605371 podStartE2EDuration="40.216936013s" podCreationTimestamp="2025-12-06 07:23:12 +0000 UTC" firstStartedPulling="2025-12-06 07:23:13.735898941 +0000 UTC m=+1556.137287821" lastFinishedPulling="2025-12-06 07:23:50.999229593 +0000 UTC m=+1593.400618463" observedRunningTime="2025-12-06 07:23:52.214279532 +0000 UTC m=+1594.615668412" watchObservedRunningTime="2025-12-06 07:23:52.216936013 +0000 UTC m=+1594.618324883"
Dec 06 07:23:52 crc kubenswrapper[4895]: I1206 07:23:52.243322 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mbg9p" podStartSLOduration=9.243299222 podStartE2EDuration="9.243299222s" podCreationTimestamp="2025-12-06 07:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:23:52.236736715 +0000 UTC m=+1594.638125585" watchObservedRunningTime="2025-12-06 07:23:52.243299222 +0000 UTC m=+1594.644688092"
Dec 06 07:23:52 crc kubenswrapper[4895]: I1206 07:23:52.253267 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-784d65c867-skwcf" podUID="1ec3bc59-2ad7-4af0-837b-61e1254a50f7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: i/o timeout"
Dec 06 07:23:53 crc kubenswrapper[4895]: I1206 07:23:53.208791 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8612fe14-b4a6-4626-b49c-fa9ea22367ae","Type":"ContainerStarted","Data":"5e1bc792a597ee008ec861a9d107b8d35c98cd9fec3c5dd4c3164f524d6b1bc0"}
Dec 06 07:23:53 crc kubenswrapper[4895]: I1206 07:23:53.247511 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerStarted","Data":"d88fdff0da3bb24a30ae1253952bff8962b2fd7e5173dd829fee80c77dc2670f"}
Dec 06 07:23:53 crc kubenswrapper[4895]: I1206 07:23:53.247590 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerStarted","Data":"e031534957b9fccda0363a790f71a039ee246b5fbf68723177270eb631a9658b"}
Dec 06 07:23:54 crc kubenswrapper[4895]: I1206 07:23:54.259248 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerStarted","Data":"3ef3301fb5b94d56ebbbf77fe821db08595a72ce6dc8b57263d0355011539f31"}
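The pod_startup_latency_tracker entries above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Here is a Go sketch reproducing the glance-db-sync-kx92p numbers; the formula is inferred from the values in the log, not taken from the tracker's source:

```go
package main

import (
	"fmt"
	"time"
)

// Reproduces the startup-duration arithmetic for openstack/glance-db-sync-kx92p
// using the timestamps logged above.
func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-12-06 07:22:50 +0000 UTC")
	firstPull := parse("2025-12-06 07:22:51.487640405 +0000 UTC")
	lastPull := parse("2025-12-06 07:23:51.002990504 +0000 UTC")
	observed := parse("2025-12-06 07:23:52.203070081 +0000 UTC") // watchObservedRunningTime

	e2e := observed.Sub(created)          // total time from pod creation to observed running
	slo := e2e - lastPull.Sub(firstPull)  // same, minus the image-pull window
	fmt.Println("podStartE2EDuration =", e2e) // 1m2.203070081s
	fmt.Println("podStartSLOduration =", slo) // 2.687719982s
}
```

Run as-is it prints 1m2.203070081s and 2.687719982s, matching the glance entry; the keystone-bootstrap entry shows the degenerate case where no image was pulled, the pull timestamps are the zero time 0001-01-01, and both durations are equal.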
event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerStarted","Data":"5390e87e60eff5498d3563e5dccff27ede47a6a293471f0a7d9c2ca23354855c"} Dec 06 07:24:04 crc kubenswrapper[4895]: I1206 07:24:04.050853 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:24:04 crc kubenswrapper[4895]: E1206 07:24:04.051731 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:24:05 crc kubenswrapper[4895]: I1206 07:24:05.371410 4895 generic.go:334] "Generic (PLEG): container finished" podID="ea52d6be-93a0-445d-86a9-1061722b36b1" containerID="10652523d65e949d4742cc50fe660d3d9ed6a9316cf28e1771a8aca093c7773a" exitCode=0 Dec 06 07:24:05 crc kubenswrapper[4895]: I1206 07:24:05.371638 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mbg9p" event={"ID":"ea52d6be-93a0-445d-86a9-1061722b36b1","Type":"ContainerDied","Data":"10652523d65e949d4742cc50fe660d3d9ed6a9316cf28e1771a8aca093c7773a"} Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.031657 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mbg9p" Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.177953 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-combined-ca-bundle\") pod \"ea52d6be-93a0-445d-86a9-1061722b36b1\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.178038 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jstx\" (UniqueName: \"kubernetes.io/projected/ea52d6be-93a0-445d-86a9-1061722b36b1-kube-api-access-5jstx\") pod \"ea52d6be-93a0-445d-86a9-1061722b36b1\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.178124 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-scripts\") pod \"ea52d6be-93a0-445d-86a9-1061722b36b1\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.178209 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-fernet-keys\") pod \"ea52d6be-93a0-445d-86a9-1061722b36b1\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.178309 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-config-data\") pod \"ea52d6be-93a0-445d-86a9-1061722b36b1\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.178338 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-credential-keys\") pod \"ea52d6be-93a0-445d-86a9-1061722b36b1\" (UID: \"ea52d6be-93a0-445d-86a9-1061722b36b1\") " Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.185721 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-scripts" (OuterVolumeSpecName: "scripts") pod "ea52d6be-93a0-445d-86a9-1061722b36b1" (UID: "ea52d6be-93a0-445d-86a9-1061722b36b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.186229 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ea52d6be-93a0-445d-86a9-1061722b36b1" (UID: "ea52d6be-93a0-445d-86a9-1061722b36b1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.186920 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ea52d6be-93a0-445d-86a9-1061722b36b1" (UID: "ea52d6be-93a0-445d-86a9-1061722b36b1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.187374 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea52d6be-93a0-445d-86a9-1061722b36b1-kube-api-access-5jstx" (OuterVolumeSpecName: "kube-api-access-5jstx") pod "ea52d6be-93a0-445d-86a9-1061722b36b1" (UID: "ea52d6be-93a0-445d-86a9-1061722b36b1"). InnerVolumeSpecName "kube-api-access-5jstx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.218750 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-config-data" (OuterVolumeSpecName: "config-data") pod "ea52d6be-93a0-445d-86a9-1061722b36b1" (UID: "ea52d6be-93a0-445d-86a9-1061722b36b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.219239 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea52d6be-93a0-445d-86a9-1061722b36b1" (UID: "ea52d6be-93a0-445d-86a9-1061722b36b1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.281265 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.281345 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jstx\" (UniqueName: \"kubernetes.io/projected/ea52d6be-93a0-445d-86a9-1061722b36b1-kube-api-access-5jstx\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.281362 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.281429 4895 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.281447 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.281462 4895 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ea52d6be-93a0-445d-86a9-1061722b36b1-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.413286 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mbg9p" event={"ID":"ea52d6be-93a0-445d-86a9-1061722b36b1","Type":"ContainerDied","Data":"7b0df8ff397af77cf0b6a11ca50c7b64650986f7ce1612f1a30cfb5084b0e756"} Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.413370 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mbg9p" Dec 06 07:24:10 crc kubenswrapper[4895]: I1206 07:24:10.413373 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b0df8ff397af77cf0b6a11ca50c7b64650986f7ce1612f1a30cfb5084b0e756" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.143435 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5b7499d868-f7bk5"] Dec 06 07:24:11 crc kubenswrapper[4895]: E1206 07:24:11.145214 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec3bc59-2ad7-4af0-837b-61e1254a50f7" containerName="init" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.145317 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec3bc59-2ad7-4af0-837b-61e1254a50f7" containerName="init" Dec 06 07:24:11 crc kubenswrapper[4895]: E1206 07:24:11.145404 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea52d6be-93a0-445d-86a9-1061722b36b1" containerName="keystone-bootstrap" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.145459 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea52d6be-93a0-445d-86a9-1061722b36b1" containerName="keystone-bootstrap" Dec 06 07:24:11 crc kubenswrapper[4895]: E1206 07:24:11.145552 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec3bc59-2ad7-4af0-837b-61e1254a50f7" containerName="dnsmasq-dns" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.145634 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec3bc59-2ad7-4af0-837b-61e1254a50f7" containerName="dnsmasq-dns" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.154727 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ec3bc59-2ad7-4af0-837b-61e1254a50f7" containerName="dnsmasq-dns" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.155062 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea52d6be-93a0-445d-86a9-1061722b36b1" containerName="keystone-bootstrap" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.156109 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.176104 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.176644 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.177081 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.177199 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.177416 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.177639 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-s6vzn" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.221525 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5b7499d868-f7bk5"] Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.302161 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-config-data\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.302507 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-public-tls-certs\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.302604 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njqb5\" (UniqueName: \"kubernetes.io/projected/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-kube-api-access-njqb5\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.302683 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-fernet-keys\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.302809 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-internal-tls-certs\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.302965 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-combined-ca-bundle\") pod \"keystone-5b7499d868-f7bk5\" 
(UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.303066 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-scripts\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.303145 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-credential-keys\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.404529 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-public-tls-certs\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.404611 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njqb5\" (UniqueName: \"kubernetes.io/projected/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-kube-api-access-njqb5\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.404650 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-fernet-keys\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.404721 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-internal-tls-certs\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.404796 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-combined-ca-bundle\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.404824 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-scripts\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.404848 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-credential-keys\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " 
pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.404910 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-config-data\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.410739 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-scripts\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.410996 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-credential-keys\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.412076 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-public-tls-certs\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.413239 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-config-data\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.413779 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-internal-tls-certs\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.415602 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-fernet-keys\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.428774 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerStarted","Data":"95dadf2ac42ccfd2a7bccc9ed9a272bfdd8c736e08c731df6f9d73b086d6a880"} Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.431109 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-combined-ca-bundle\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.431432 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njqb5\" (UniqueName: 
\"kubernetes.io/projected/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-kube-api-access-njqb5\") pod \"keystone-5b7499d868-f7bk5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:11 crc kubenswrapper[4895]: I1206 07:24:11.490515 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:13 crc kubenswrapper[4895]: I1206 07:24:13.592066 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5b7499d868-f7bk5"] Dec 06 07:24:13 crc kubenswrapper[4895]: W1206 07:24:13.622197 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ac7118f_27ae_4b40_bf45_56fb3f3b60e5.slice/crio-66e1afc5306a7cbe67eb069657aafbd32d1a3854985b14c26af11a8b2a51fd9e WatchSource:0}: Error finding container 66e1afc5306a7cbe67eb069657aafbd32d1a3854985b14c26af11a8b2a51fd9e: Status 404 returned error can't find the container with id 66e1afc5306a7cbe67eb069657aafbd32d1a3854985b14c26af11a8b2a51fd9e Dec 06 07:24:14 crc kubenswrapper[4895]: I1206 07:24:14.461626 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dq2gw" event={"ID":"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4","Type":"ContainerStarted","Data":"f86240a4c5102f3ed5bdfa5a65cd0b3f6262f647bcf567c98c4795854511e25d"} Dec 06 07:24:14 crc kubenswrapper[4895]: I1206 07:24:14.465240 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w9qg6" event={"ID":"fb6a838b-c173-4455-b8b5-b152aeee463a","Type":"ContainerStarted","Data":"bdd2cdbec2e42c278fef4643e911a6317453915ea279a580dff8d34441275ca6"} Dec 06 07:24:14 crc kubenswrapper[4895]: I1206 07:24:14.469901 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerStarted","Data":"468c36ddfa04c0375e91b38b9d03a4849fff2aae471e8fb65d8b36405a987438"} Dec 06 07:24:14 crc kubenswrapper[4895]: I1206 07:24:14.469935 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerStarted","Data":"fa9d83bbd1bfb2f7ebd0b8374526974f2e013372bc11e48787e237b85890d529"} Dec 06 07:24:14 crc kubenswrapper[4895]: I1206 07:24:14.469945 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerStarted","Data":"9cbd0224b85c8a430882c76ef6c4ea96f23027717b65a3fd1800ebeee11b9ea6"} Dec 06 07:24:14 crc kubenswrapper[4895]: I1206 07:24:14.469955 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerStarted","Data":"a45b30a9ac61253c7662e8944033d9348f16b13301f55a8a8a2040cd78bdd894"} Dec 06 07:24:14 crc kubenswrapper[4895]: I1206 07:24:14.471932 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b7499d868-f7bk5" event={"ID":"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5","Type":"ContainerStarted","Data":"b02b1557f296cc772b516824934a552a670fbd91ea02b44e19950ebf807e862a"} Dec 06 07:24:14 crc kubenswrapper[4895]: I1206 07:24:14.471962 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b7499d868-f7bk5" event={"ID":"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5","Type":"ContainerStarted","Data":"66e1afc5306a7cbe67eb069657aafbd32d1a3854985b14c26af11a8b2a51fd9e"} Dec 06 
07:24:14 crc kubenswrapper[4895]: I1206 07:24:14.472632 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:14 crc kubenswrapper[4895]: I1206 07:24:14.477706 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8612fe14-b4a6-4626-b49c-fa9ea22367ae","Type":"ContainerStarted","Data":"15e8365280a7a1f07af5533d0bf13b6b5d9b403c21c0a4ba00f07480e1290ed9"} Dec 06 07:24:14 crc kubenswrapper[4895]: I1206 07:24:14.478856 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-dq2gw" podStartSLOduration=2.587232996 podStartE2EDuration="1m2.478836784s" podCreationTimestamp="2025-12-06 07:23:12 +0000 UTC" firstStartedPulling="2025-12-06 07:23:13.227679435 +0000 UTC m=+1555.629068305" lastFinishedPulling="2025-12-06 07:24:13.119283223 +0000 UTC m=+1615.520672093" observedRunningTime="2025-12-06 07:24:14.476226215 +0000 UTC m=+1616.877615085" watchObservedRunningTime="2025-12-06 07:24:14.478836784 +0000 UTC m=+1616.880225654" Dec 06 07:24:14 crc kubenswrapper[4895]: I1206 07:24:14.506742 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5b7499d868-f7bk5" podStartSLOduration=3.506721814 podStartE2EDuration="3.506721814s" podCreationTimestamp="2025-12-06 07:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:24:14.494045863 +0000 UTC m=+1616.895434753" watchObservedRunningTime="2025-12-06 07:24:14.506721814 +0000 UTC m=+1616.908110684" Dec 06 07:24:14 crc kubenswrapper[4895]: I1206 07:24:14.519647 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-w9qg6" podStartSLOduration=3.19170091 podStartE2EDuration="1m2.519621051s" podCreationTimestamp="2025-12-06 07:23:12 +0000 UTC" firstStartedPulling="2025-12-06 07:23:13.791450714 +0000 UTC m=+1556.192839594" lastFinishedPulling="2025-12-06 07:24:13.119370865 +0000 UTC m=+1615.520759735" observedRunningTime="2025-12-06 07:24:14.509925651 +0000 UTC m=+1616.911314531" watchObservedRunningTime="2025-12-06 07:24:14.519621051 +0000 UTC m=+1616.921009921" Dec 06 07:24:15 crc kubenswrapper[4895]: I1206 07:24:15.502049 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerStarted","Data":"12eb91bc2a51f4766807671be1ba08375fb07f1bfcf2b4debe6b359d5cf1ad3a"} Dec 06 07:24:18 crc kubenswrapper[4895]: I1206 07:24:18.061551 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:24:18 crc kubenswrapper[4895]: E1206 07:24:18.062536 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:24:19 crc kubenswrapper[4895]: E1206 07:24:19.462734 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for 
\"account-replicator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-account@sha256:4eb3a9c95f57df34ab88b952d8ad2057d60ac0aa4526a51070bea5d64e3aeeee\\\"\", failed to \"StartContainer\" for \"account-auditor\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-account@sha256:4eb3a9c95f57df34ab88b952d8ad2057d60ac0aa4526a51070bea5d64e3aeeee\\\"\", failed to \"StartContainer\" for \"account-reaper\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-account@sha256:4eb3a9c95f57df34ab88b952d8ad2057d60ac0aa4526a51070bea5d64e3aeeee\\\"\"]" pod="openstack/swift-storage-0" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" Dec 06 07:24:19 crc kubenswrapper[4895]: I1206 07:24:19.561849 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerStarted","Data":"180fd4a822736c6399b31cb1a67003fc90408e00dbbaac62ec926a3d268825ec"} Dec 06 07:24:20 crc kubenswrapper[4895]: I1206 07:24:20.865394 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d856j"] Dec 06 07:24:20 crc kubenswrapper[4895]: I1206 07:24:20.868982 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d856j" Dec 06 07:24:20 crc kubenswrapper[4895]: I1206 07:24:20.879016 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d856j"] Dec 06 07:24:21 crc kubenswrapper[4895]: I1206 07:24:21.035660 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f475253d-8773-4b65-bd71-64b349bbc141-catalog-content\") pod \"redhat-operators-d856j\" (UID: \"f475253d-8773-4b65-bd71-64b349bbc141\") " pod="openshift-marketplace/redhat-operators-d856j" Dec 06 07:24:21 crc kubenswrapper[4895]: I1206 07:24:21.035726 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9kbh\" (UniqueName: \"kubernetes.io/projected/f475253d-8773-4b65-bd71-64b349bbc141-kube-api-access-g9kbh\") pod \"redhat-operators-d856j\" (UID: \"f475253d-8773-4b65-bd71-64b349bbc141\") " pod="openshift-marketplace/redhat-operators-d856j" Dec 06 07:24:21 crc kubenswrapper[4895]: I1206 07:24:21.035802 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f475253d-8773-4b65-bd71-64b349bbc141-utilities\") pod \"redhat-operators-d856j\" (UID: \"f475253d-8773-4b65-bd71-64b349bbc141\") " pod="openshift-marketplace/redhat-operators-d856j" Dec 06 07:24:21 crc kubenswrapper[4895]: I1206 07:24:21.137657 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f475253d-8773-4b65-bd71-64b349bbc141-catalog-content\") pod \"redhat-operators-d856j\" (UID: \"f475253d-8773-4b65-bd71-64b349bbc141\") " pod="openshift-marketplace/redhat-operators-d856j" Dec 06 07:24:21 crc kubenswrapper[4895]: I1206 07:24:21.137704 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9kbh\" (UniqueName: \"kubernetes.io/projected/f475253d-8773-4b65-bd71-64b349bbc141-kube-api-access-g9kbh\") pod \"redhat-operators-d856j\" (UID: 
\"f475253d-8773-4b65-bd71-64b349bbc141\") " pod="openshift-marketplace/redhat-operators-d856j" Dec 06 07:24:21 crc kubenswrapper[4895]: I1206 07:24:21.137799 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f475253d-8773-4b65-bd71-64b349bbc141-utilities\") pod \"redhat-operators-d856j\" (UID: \"f475253d-8773-4b65-bd71-64b349bbc141\") " pod="openshift-marketplace/redhat-operators-d856j" Dec 06 07:24:21 crc kubenswrapper[4895]: I1206 07:24:21.138223 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f475253d-8773-4b65-bd71-64b349bbc141-utilities\") pod \"redhat-operators-d856j\" (UID: \"f475253d-8773-4b65-bd71-64b349bbc141\") " pod="openshift-marketplace/redhat-operators-d856j" Dec 06 07:24:21 crc kubenswrapper[4895]: I1206 07:24:21.138261 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f475253d-8773-4b65-bd71-64b349bbc141-catalog-content\") pod \"redhat-operators-d856j\" (UID: \"f475253d-8773-4b65-bd71-64b349bbc141\") " pod="openshift-marketplace/redhat-operators-d856j" Dec 06 07:24:21 crc kubenswrapper[4895]: I1206 07:24:21.169071 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9kbh\" (UniqueName: \"kubernetes.io/projected/f475253d-8773-4b65-bd71-64b349bbc141-kube-api-access-g9kbh\") pod \"redhat-operators-d856j\" (UID: \"f475253d-8773-4b65-bd71-64b349bbc141\") " pod="openshift-marketplace/redhat-operators-d856j" Dec 06 07:24:21 crc kubenswrapper[4895]: I1206 07:24:21.201216 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d856j" Dec 06 07:24:25 crc kubenswrapper[4895]: I1206 07:24:25.803268 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d856j"] Dec 06 07:24:28 crc kubenswrapper[4895]: I1206 07:24:28.642815 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d856j" event={"ID":"f475253d-8773-4b65-bd71-64b349bbc141","Type":"ContainerStarted","Data":"1b318d2386aadd6d59acceeab991cdfd130bbabe271e3f93771d2748e0e44a36"} Dec 06 07:24:29 crc kubenswrapper[4895]: I1206 07:24:29.659068 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerStarted","Data":"2951f946e28728b5afab411ba269775aef6e893b3da37ae09e4c6ad9b2e2cd1d"} Dec 06 07:24:29 crc kubenswrapper[4895]: I1206 07:24:29.662019 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8612fe14-b4a6-4626-b49c-fa9ea22367ae","Type":"ContainerStarted","Data":"a42733a61046d474ddb9592bd02c22344fd0895fa4e8fab838656f380bf4ca8a"} Dec 06 07:24:29 crc kubenswrapper[4895]: I1206 07:24:29.663970 4895 generic.go:334] "Generic (PLEG): container finished" podID="f475253d-8773-4b65-bd71-64b349bbc141" containerID="fd748ac767e0132e5e0889a8f5b6992e4af40ef30101a2514874f14fe30f865a" exitCode=0 Dec 06 07:24:29 crc kubenswrapper[4895]: I1206 07:24:29.664009 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d856j" event={"ID":"f475253d-8773-4b65-bd71-64b349bbc141","Type":"ContainerDied","Data":"fd748ac767e0132e5e0889a8f5b6992e4af40ef30101a2514874f14fe30f865a"} Dec 06 07:24:30 crc kubenswrapper[4895]: E1206 07:24:30.601109 4895 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="8612fe14-b4a6-4626-b49c-fa9ea22367ae" Dec 06 07:24:30 crc kubenswrapper[4895]: I1206 07:24:30.672136 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8612fe14-b4a6-4626-b49c-fa9ea22367ae" containerName="ceilometer-notification-agent" containerID="cri-o://5e1bc792a597ee008ec861a9d107b8d35c98cd9fec3c5dd4c3164f524d6b1bc0" gracePeriod=30 Dec 06 07:24:30 crc kubenswrapper[4895]: I1206 07:24:30.672194 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 07:24:30 crc kubenswrapper[4895]: I1206 07:24:30.672221 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8612fe14-b4a6-4626-b49c-fa9ea22367ae" containerName="proxy-httpd" containerID="cri-o://a42733a61046d474ddb9592bd02c22344fd0895fa4e8fab838656f380bf4ca8a" gracePeriod=30 Dec 06 07:24:30 crc kubenswrapper[4895]: I1206 07:24:30.672275 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8612fe14-b4a6-4626-b49c-fa9ea22367ae" containerName="sg-core" containerID="cri-o://15e8365280a7a1f07af5533d0bf13b6b5d9b403c21c0a4ba00f07480e1290ed9" gracePeriod=30 Dec 06 07:24:32 crc kubenswrapper[4895]: I1206 07:24:32.051309 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:24:32 crc kubenswrapper[4895]: E1206 07:24:32.052235 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:24:32 crc kubenswrapper[4895]: I1206 07:24:32.700309 4895 generic.go:334] "Generic (PLEG): container finished" podID="8612fe14-b4a6-4626-b49c-fa9ea22367ae" containerID="a42733a61046d474ddb9592bd02c22344fd0895fa4e8fab838656f380bf4ca8a" exitCode=0 Dec 06 07:24:32 crc kubenswrapper[4895]: I1206 07:24:32.700353 4895 generic.go:334] "Generic (PLEG): container finished" podID="8612fe14-b4a6-4626-b49c-fa9ea22367ae" containerID="15e8365280a7a1f07af5533d0bf13b6b5d9b403c21c0a4ba00f07480e1290ed9" exitCode=2 Dec 06 07:24:32 crc kubenswrapper[4895]: I1206 07:24:32.700376 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8612fe14-b4a6-4626-b49c-fa9ea22367ae","Type":"ContainerDied","Data":"a42733a61046d474ddb9592bd02c22344fd0895fa4e8fab838656f380bf4ca8a"} Dec 06 07:24:32 crc kubenswrapper[4895]: I1206 07:24:32.700423 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8612fe14-b4a6-4626-b49c-fa9ea22367ae","Type":"ContainerDied","Data":"15e8365280a7a1f07af5533d0bf13b6b5d9b403c21c0a4ba00f07480e1290ed9"} Dec 06 07:24:32 crc kubenswrapper[4895]: I1206 07:24:32.709245 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
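"Killing container with a grace period ... gracePeriod=30" followed by ContainerDied events with exit codes 0 and 2 is the usual TERM-then-KILL shutdown: the runtime asks each process to stop, waits out the grace period, and only then forces it. A sketch of the same sequence against a local child process (assumes a Unix host and uses sleep as a stand-in for a container; the 2s grace period is shortened from the pod's 30s):

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for the container process
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	gracePeriod := 2 * time.Second
	cmd.Process.Signal(syscall.SIGTERM) // polite stop request, like the runtime's graceful kill
	select {
	case err := <-done:
		// sleep's default SIGTERM action is to exit, so we land here
		fmt.Println("exited within grace period:", err)
	case <-time.After(gracePeriod):
		cmd.Process.Kill() // hard stop once the grace period lapses
		fmt.Println("grace period expired; sent SIGKILL:", <-done)
	}
}
```

A process that handles SIGTERM and exits cleanly reports exit code 0 (as proxy-httpd does above); one cut down mid-flight reports a nonzero code, like sg-core's exitCode=2.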
event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerStarted","Data":"6c0595c85ab20846664ac79d1f96e53f167ef4c98a6f2705bd28abc3d10e0b7d"} Dec 06 07:24:32 crc kubenswrapper[4895]: I1206 07:24:32.709290 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerStarted","Data":"b330db62e15f40daaac157cd4c49b8c144883337b31335984bf5592ae231a59c"} Dec 06 07:24:33 crc kubenswrapper[4895]: I1206 07:24:33.724173 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerStarted","Data":"016b3b4ddfb130737f5910cdc3627785db79e92d28ff53c479c8a05d88f0d4bb"} Dec 06 07:24:33 crc kubenswrapper[4895]: I1206 07:24:33.726179 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d856j" event={"ID":"f475253d-8773-4b65-bd71-64b349bbc141","Type":"ContainerStarted","Data":"8ffab875effe79b4719b305976c638a52f175102d3c5b5a3d24d0dbab326a722"} Dec 06 07:24:35 crc kubenswrapper[4895]: I1206 07:24:35.790629 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=41.876854519 podStartE2EDuration="1m53.790601556s" podCreationTimestamp="2025-12-06 07:22:42 +0000 UTC" firstStartedPulling="2025-12-06 07:23:16.180659731 +0000 UTC m=+1558.582048601" lastFinishedPulling="2025-12-06 07:24:28.094406768 +0000 UTC m=+1630.495795638" observedRunningTime="2025-12-06 07:24:35.785890809 +0000 UTC m=+1638.187279689" watchObservedRunningTime="2025-12-06 07:24:35.790601556 +0000 UTC m=+1638.191990426" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.095125 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59bfd87765-ptgqn"] Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.097018 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.103706 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.122507 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59bfd87765-ptgqn"] Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.159429 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-ovsdbserver-nb\") pod \"dnsmasq-dns-59bfd87765-ptgqn\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.159508 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-config\") pod \"dnsmasq-dns-59bfd87765-ptgqn\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.159542 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-dns-swift-storage-0\") pod \"dnsmasq-dns-59bfd87765-ptgqn\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.159578 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xdd6\" (UniqueName: \"kubernetes.io/projected/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-kube-api-access-5xdd6\") pod \"dnsmasq-dns-59bfd87765-ptgqn\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.159618 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-dns-svc\") pod \"dnsmasq-dns-59bfd87765-ptgqn\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.159645 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-ovsdbserver-sb\") pod \"dnsmasq-dns-59bfd87765-ptgqn\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.260626 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-ovsdbserver-nb\") pod \"dnsmasq-dns-59bfd87765-ptgqn\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.260698 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-config\") pod \"dnsmasq-dns-59bfd87765-ptgqn\" (UID: 
\"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.260746 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-dns-swift-storage-0\") pod \"dnsmasq-dns-59bfd87765-ptgqn\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.260790 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xdd6\" (UniqueName: \"kubernetes.io/projected/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-kube-api-access-5xdd6\") pod \"dnsmasq-dns-59bfd87765-ptgqn\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.260835 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-dns-svc\") pod \"dnsmasq-dns-59bfd87765-ptgqn\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.260873 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-ovsdbserver-sb\") pod \"dnsmasq-dns-59bfd87765-ptgqn\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.261574 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-ovsdbserver-nb\") pod \"dnsmasq-dns-59bfd87765-ptgqn\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.261622 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-config\") pod \"dnsmasq-dns-59bfd87765-ptgqn\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.261677 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-dns-swift-storage-0\") pod \"dnsmasq-dns-59bfd87765-ptgqn\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.261845 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-ovsdbserver-sb\") pod \"dnsmasq-dns-59bfd87765-ptgqn\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.261897 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-dns-svc\") pod \"dnsmasq-dns-59bfd87765-ptgqn\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 
07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.279608 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xdd6\" (UniqueName: \"kubernetes.io/projected/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-kube-api-access-5xdd6\") pod \"dnsmasq-dns-59bfd87765-ptgqn\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.420284 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:36 crc kubenswrapper[4895]: I1206 07:24:36.883356 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59bfd87765-ptgqn"] Dec 06 07:24:37 crc kubenswrapper[4895]: I1206 07:24:37.763979 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" event={"ID":"d2ec0f50-582c-4fba-8d4f-c4da28576e2a","Type":"ContainerStarted","Data":"003ba2782720c555cec71a8e5f2923ce17efba7d5597974759f1aaa63d1c6ecd"} Dec 06 07:24:38 crc kubenswrapper[4895]: I1206 07:24:38.775957 4895 generic.go:334] "Generic (PLEG): container finished" podID="f475253d-8773-4b65-bd71-64b349bbc141" containerID="8ffab875effe79b4719b305976c638a52f175102d3c5b5a3d24d0dbab326a722" exitCode=0 Dec 06 07:24:38 crc kubenswrapper[4895]: I1206 07:24:38.776004 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d856j" event={"ID":"f475253d-8773-4b65-bd71-64b349bbc141","Type":"ContainerDied","Data":"8ffab875effe79b4719b305976c638a52f175102d3c5b5a3d24d0dbab326a722"} Dec 06 07:24:39 crc kubenswrapper[4895]: I1206 07:24:39.733679 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-zfwsz" podUID="69aac7da-152a-4314-92fd-1f4aea0140be" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 07:24:42 crc kubenswrapper[4895]: I1206 07:24:42.896647 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8612fe14-b4a6-4626-b49c-fa9ea22367ae" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.139:3000/\": dial tcp 10.217.0.139:3000: connect: connection refused" Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.029736 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.050890 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:24:44 crc kubenswrapper[4895]: E1206 07:24:44.051192 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.212277 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.318228 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8612fe14-b4a6-4626-b49c-fa9ea22367ae-run-httpd\") pod \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.318300 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-config-data\") pod \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.318359 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-scripts\") pod \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.318420 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-combined-ca-bundle\") pod \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.318456 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-sg-core-conf-yaml\") pod \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.318517 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8612fe14-b4a6-4626-b49c-fa9ea22367ae-log-httpd\") pod \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.318540 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl8mg\" (UniqueName: \"kubernetes.io/projected/8612fe14-b4a6-4626-b49c-fa9ea22367ae-kube-api-access-nl8mg\") pod \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\" (UID: \"8612fe14-b4a6-4626-b49c-fa9ea22367ae\") " Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.320682 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8612fe14-b4a6-4626-b49c-fa9ea22367ae-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8612fe14-b4a6-4626-b49c-fa9ea22367ae" (UID: "8612fe14-b4a6-4626-b49c-fa9ea22367ae"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.320759 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8612fe14-b4a6-4626-b49c-fa9ea22367ae-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8612fe14-b4a6-4626-b49c-fa9ea22367ae" (UID: "8612fe14-b4a6-4626-b49c-fa9ea22367ae"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.324494 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-scripts" (OuterVolumeSpecName: "scripts") pod "8612fe14-b4a6-4626-b49c-fa9ea22367ae" (UID: "8612fe14-b4a6-4626-b49c-fa9ea22367ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.339704 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8612fe14-b4a6-4626-b49c-fa9ea22367ae-kube-api-access-nl8mg" (OuterVolumeSpecName: "kube-api-access-nl8mg") pod "8612fe14-b4a6-4626-b49c-fa9ea22367ae" (UID: "8612fe14-b4a6-4626-b49c-fa9ea22367ae"). InnerVolumeSpecName "kube-api-access-nl8mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.344700 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8612fe14-b4a6-4626-b49c-fa9ea22367ae" (UID: "8612fe14-b4a6-4626-b49c-fa9ea22367ae"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.378873 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8612fe14-b4a6-4626-b49c-fa9ea22367ae" (UID: "8612fe14-b4a6-4626-b49c-fa9ea22367ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.420518 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8612fe14-b4a6-4626-b49c-fa9ea22367ae-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.420700 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.420780 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.420837 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.420888 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8612fe14-b4a6-4626-b49c-fa9ea22367ae-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.420944 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl8mg\" (UniqueName: \"kubernetes.io/projected/8612fe14-b4a6-4626-b49c-fa9ea22367ae-kube-api-access-nl8mg\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.440367 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-config-data" (OuterVolumeSpecName: "config-data") pod "8612fe14-b4a6-4626-b49c-fa9ea22367ae" (UID: "8612fe14-b4a6-4626-b49c-fa9ea22367ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.522725 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8612fe14-b4a6-4626-b49c-fa9ea22367ae-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.524177 4895 generic.go:334] "Generic (PLEG): container finished" podID="8612fe14-b4a6-4626-b49c-fa9ea22367ae" containerID="5e1bc792a597ee008ec861a9d107b8d35c98cd9fec3c5dd4c3164f524d6b1bc0" exitCode=0 Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.524217 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8612fe14-b4a6-4626-b49c-fa9ea22367ae","Type":"ContainerDied","Data":"5e1bc792a597ee008ec861a9d107b8d35c98cd9fec3c5dd4c3164f524d6b1bc0"} Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.524257 4895 scope.go:117] "RemoveContainer" containerID="a42733a61046d474ddb9592bd02c22344fd0895fa4e8fab838656f380bf4ca8a" Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.880410 4895 scope.go:117] "RemoveContainer" containerID="15e8365280a7a1f07af5533d0bf13b6b5d9b403c21c0a4ba00f07480e1290ed9" Dec 06 07:24:44 crc kubenswrapper[4895]: I1206 07:24:44.906142 4895 scope.go:117] "RemoveContainer" containerID="5e1bc792a597ee008ec861a9d107b8d35c98cd9fec3c5dd4c3164f524d6b1bc0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.535977 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8612fe14-b4a6-4626-b49c-fa9ea22367ae","Type":"ContainerDied","Data":"02cd6f9559a6fbd98638a7c4582ce572a0eb36c01f7f8c575487d4fd247d00f8"} Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.536065 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.538913 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" event={"ID":"d2ec0f50-582c-4fba-8d4f-c4da28576e2a","Type":"ContainerStarted","Data":"6affea60218b6a7b9f9fa4785befa681cf3a9799671151ccabc535560a0fa386"} Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.596539 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.610607 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.617961 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:24:45 crc kubenswrapper[4895]: E1206 07:24:45.618399 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8612fe14-b4a6-4626-b49c-fa9ea22367ae" containerName="proxy-httpd" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.618422 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8612fe14-b4a6-4626-b49c-fa9ea22367ae" containerName="proxy-httpd" Dec 06 07:24:45 crc kubenswrapper[4895]: E1206 07:24:45.618440 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8612fe14-b4a6-4626-b49c-fa9ea22367ae" containerName="ceilometer-notification-agent" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.618450 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8612fe14-b4a6-4626-b49c-fa9ea22367ae" containerName="ceilometer-notification-agent" Dec 06 07:24:45 crc kubenswrapper[4895]: E1206 07:24:45.618496 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8612fe14-b4a6-4626-b49c-fa9ea22367ae" containerName="sg-core" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.618506 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8612fe14-b4a6-4626-b49c-fa9ea22367ae" containerName="sg-core" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.618751 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8612fe14-b4a6-4626-b49c-fa9ea22367ae" containerName="sg-core" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.618780 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8612fe14-b4a6-4626-b49c-fa9ea22367ae" containerName="ceilometer-notification-agent" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.618800 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8612fe14-b4a6-4626-b49c-fa9ea22367ae" containerName="proxy-httpd" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.623100 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.626380 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.626994 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.634965 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.701543 4895 scope.go:117] "RemoveContainer" containerID="5e1bc792a597ee008ec861a9d107b8d35c98cd9fec3c5dd4c3164f524d6b1bc0" Dec 06 07:24:45 crc kubenswrapper[4895]: E1206 07:24:45.702292 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e1bc792a597ee008ec861a9d107b8d35c98cd9fec3c5dd4c3164f524d6b1bc0\": container with ID starting with 5e1bc792a597ee008ec861a9d107b8d35c98cd9fec3c5dd4c3164f524d6b1bc0 not found: ID does not exist" containerID="5e1bc792a597ee008ec861a9d107b8d35c98cd9fec3c5dd4c3164f524d6b1bc0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.702344 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e1bc792a597ee008ec861a9d107b8d35c98cd9fec3c5dd4c3164f524d6b1bc0"} err="failed to get container status \"5e1bc792a597ee008ec861a9d107b8d35c98cd9fec3c5dd4c3164f524d6b1bc0\": rpc error: code = NotFound desc = could not find container \"5e1bc792a597ee008ec861a9d107b8d35c98cd9fec3c5dd4c3164f524d6b1bc0\": container with ID starting with 5e1bc792a597ee008ec861a9d107b8d35c98cd9fec3c5dd4c3164f524d6b1bc0 not found: ID does not exist" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.747283 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.747520 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.747606 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-log-httpd\") pod \"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.747654 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-run-httpd\") pod \"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.747724 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcvcg\" (UniqueName: \"kubernetes.io/projected/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-kube-api-access-tcvcg\") pod 
\"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.747867 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-scripts\") pod \"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.748112 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-config-data\") pod \"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.850787 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-log-httpd\") pod \"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.850887 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-run-httpd\") pod \"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.850942 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcvcg\" (UniqueName: \"kubernetes.io/projected/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-kube-api-access-tcvcg\") pod \"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.851034 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-scripts\") pod \"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.851100 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-config-data\") pod \"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.851184 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.851255 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.851920 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-run-httpd\") pod 
\"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.851972 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-log-httpd\") pod \"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.857825 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.859383 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-config-data\") pod \"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.871435 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-scripts\") pod \"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.872497 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.872834 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcvcg\" (UniqueName: \"kubernetes.io/projected/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-kube-api-access-tcvcg\") pod \"ceilometer-0\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " pod="openstack/ceilometer-0" Dec 06 07:24:45 crc kubenswrapper[4895]: I1206 07:24:45.976953 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.075667 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8612fe14-b4a6-4626-b49c-fa9ea22367ae" path="/var/lib/kubelet/pods/8612fe14-b4a6-4626-b49c-fa9ea22367ae/volumes" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.288038 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.289557 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.295661 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.295938 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-45shd" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.299659 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.301362 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.362747 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-openstack-config-secret\") pod \"openstackclient\" (UID: \"ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e\") " pod="openstack/openstackclient" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.362874 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e\") " pod="openstack/openstackclient" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.362924 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76rlf\" (UniqueName: \"kubernetes.io/projected/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-kube-api-access-76rlf\") pod \"openstackclient\" (UID: \"ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e\") " pod="openstack/openstackclient" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.362977 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-openstack-config\") pod \"openstackclient\" (UID: \"ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e\") " pod="openstack/openstackclient" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.464860 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-openstack-config-secret\") pod \"openstackclient\" (UID: \"ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e\") " pod="openstack/openstackclient" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.465029 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e\") " pod="openstack/openstackclient" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.466002 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76rlf\" (UniqueName: \"kubernetes.io/projected/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-kube-api-access-76rlf\") pod \"openstackclient\" (UID: \"ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e\") " pod="openstack/openstackclient" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.466071 4895 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-openstack-config\") pod \"openstackclient\" (UID: \"ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e\") " pod="openstack/openstackclient" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.467064 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-openstack-config\") pod \"openstackclient\" (UID: \"ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e\") " pod="openstack/openstackclient" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.470973 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-openstack-config-secret\") pod \"openstackclient\" (UID: \"ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e\") " pod="openstack/openstackclient" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.481041 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e\") " pod="openstack/openstackclient" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.495592 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76rlf\" (UniqueName: \"kubernetes.io/projected/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-kube-api-access-76rlf\") pod \"openstackclient\" (UID: \"ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e\") " pod="openstack/openstackclient" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.497285 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:24:46 crc kubenswrapper[4895]: W1206 07:24:46.497683 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4b3f48d_e115_4ed9_86a2_1a9209c95e8a.slice/crio-6ea7ff4329b92e145f8bed14024aa006af4eac3ae7a10b0655e7e2e4fc4e3fb8 WatchSource:0}: Error finding container 6ea7ff4329b92e145f8bed14024aa006af4eac3ae7a10b0655e7e2e4fc4e3fb8: Status 404 returned error can't find the container with id 6ea7ff4329b92e145f8bed14024aa006af4eac3ae7a10b0655e7e2e4fc4e3fb8 Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.561598 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a","Type":"ContainerStarted","Data":"6ea7ff4329b92e145f8bed14024aa006af4eac3ae7a10b0655e7e2e4fc4e3fb8"} Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.562903 4895 generic.go:334] "Generic (PLEG): container finished" podID="d2ec0f50-582c-4fba-8d4f-c4da28576e2a" containerID="6affea60218b6a7b9f9fa4785befa681cf3a9799671151ccabc535560a0fa386" exitCode=0 Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.562930 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" event={"ID":"d2ec0f50-582c-4fba-8d4f-c4da28576e2a","Type":"ContainerDied","Data":"6affea60218b6a7b9f9fa4785befa681cf3a9799671151ccabc535560a0fa386"} Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.619671 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.747197 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.755239 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.823436 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.828060 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.838684 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.976361 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf257472-30d7-4719-9b36-47c30f1db7ec-combined-ca-bundle\") pod \"openstackclient\" (UID: \"cf257472-30d7-4719-9b36-47c30f1db7ec\") " pod="openstack/openstackclient" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.976407 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrnxm\" (UniqueName: \"kubernetes.io/projected/cf257472-30d7-4719-9b36-47c30f1db7ec-kube-api-access-qrnxm\") pod \"openstackclient\" (UID: \"cf257472-30d7-4719-9b36-47c30f1db7ec\") " pod="openstack/openstackclient" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.976678 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf257472-30d7-4719-9b36-47c30f1db7ec-openstack-config\") pod \"openstackclient\" (UID: \"cf257472-30d7-4719-9b36-47c30f1db7ec\") " pod="openstack/openstackclient" Dec 06 07:24:46 crc kubenswrapper[4895]: I1206 07:24:46.976823 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf257472-30d7-4719-9b36-47c30f1db7ec-openstack-config-secret\") pod \"openstackclient\" (UID: \"cf257472-30d7-4719-9b36-47c30f1db7ec\") " pod="openstack/openstackclient" Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.078574 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf257472-30d7-4719-9b36-47c30f1db7ec-combined-ca-bundle\") pod \"openstackclient\" (UID: \"cf257472-30d7-4719-9b36-47c30f1db7ec\") " pod="openstack/openstackclient" Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.079987 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrnxm\" (UniqueName: \"kubernetes.io/projected/cf257472-30d7-4719-9b36-47c30f1db7ec-kube-api-access-qrnxm\") pod \"openstackclient\" (UID: \"cf257472-30d7-4719-9b36-47c30f1db7ec\") " pod="openstack/openstackclient" Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.080505 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf257472-30d7-4719-9b36-47c30f1db7ec-openstack-config\") pod \"openstackclient\" (UID: \"cf257472-30d7-4719-9b36-47c30f1db7ec\") " pod="openstack/openstackclient" Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 
07:24:47.081105 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf257472-30d7-4719-9b36-47c30f1db7ec-openstack-config-secret\") pod \"openstackclient\" (UID: \"cf257472-30d7-4719-9b36-47c30f1db7ec\") " pod="openstack/openstackclient" Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.081958 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf257472-30d7-4719-9b36-47c30f1db7ec-openstack-config\") pod \"openstackclient\" (UID: \"cf257472-30d7-4719-9b36-47c30f1db7ec\") " pod="openstack/openstackclient" Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.084827 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf257472-30d7-4719-9b36-47c30f1db7ec-openstack-config-secret\") pod \"openstackclient\" (UID: \"cf257472-30d7-4719-9b36-47c30f1db7ec\") " pod="openstack/openstackclient" Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.085693 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf257472-30d7-4719-9b36-47c30f1db7ec-combined-ca-bundle\") pod \"openstackclient\" (UID: \"cf257472-30d7-4719-9b36-47c30f1db7ec\") " pod="openstack/openstackclient" Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.097148 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrnxm\" (UniqueName: \"kubernetes.io/projected/cf257472-30d7-4719-9b36-47c30f1db7ec-kube-api-access-qrnxm\") pod \"openstackclient\" (UID: \"cf257472-30d7-4719-9b36-47c30f1db7ec\") " pod="openstack/openstackclient" Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.170853 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 06 07:24:47 crc kubenswrapper[4895]: E1206 07:24:47.234537 4895 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 06 07:24:47 crc kubenswrapper[4895]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e_0(3199b34195f262d95d0d4d02717e04b47bf57864f4cf0d293f33e7d3ca877ac3): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3199b34195f262d95d0d4d02717e04b47bf57864f4cf0d293f33e7d3ca877ac3" Netns:"/var/run/netns/cd22b6c6-3d5b-4ac6-a01c-9f55c8be3995" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=3199b34195f262d95d0d4d02717e04b47bf57864f4cf0d293f33e7d3ca877ac3;K8S_POD_UID=ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: [openstack/openstackclient/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openstack/openstackclient 3199b34195f262d95d0d4d02717e04b47bf57864f4cf0d293f33e7d3ca877ac3 network default NAD default] [openstack/openstackclient 3199b34195f262d95d0d4d02717e04b47bf57864f4cf0d293f33e7d3ca877ac3 network default NAD default] failed to configure pod interface: canceled old pod sandbox waiting for OVS port binding for 0a:58:0a:d9:00:93 [10.217.0.147/23] Dec 06 07:24:47 crc kubenswrapper[4895]: ' Dec 06 07:24:47 crc kubenswrapper[4895]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 06 07:24:47 crc kubenswrapper[4895]: > Dec 06 07:24:47 crc kubenswrapper[4895]: E1206 07:24:47.234628 4895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 06 07:24:47 crc kubenswrapper[4895]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e_0(3199b34195f262d95d0d4d02717e04b47bf57864f4cf0d293f33e7d3ca877ac3): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3199b34195f262d95d0d4d02717e04b47bf57864f4cf0d293f33e7d3ca877ac3" Netns:"/var/run/netns/cd22b6c6-3d5b-4ac6-a01c-9f55c8be3995" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=3199b34195f262d95d0d4d02717e04b47bf57864f4cf0d293f33e7d3ca877ac3;K8S_POD_UID=ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: [openstack/openstackclient/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openstack/openstackclient 3199b34195f262d95d0d4d02717e04b47bf57864f4cf0d293f33e7d3ca877ac3 network default NAD default] [openstack/openstackclient 3199b34195f262d95d0d4d02717e04b47bf57864f4cf0d293f33e7d3ca877ac3 
network default NAD default] failed to configure pod interface: canceled old pod sandbox waiting for OVS port binding for 0a:58:0a:d9:00:93 [10.217.0.147/23] Dec 06 07:24:47 crc kubenswrapper[4895]: ' Dec 06 07:24:47 crc kubenswrapper[4895]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 06 07:24:47 crc kubenswrapper[4895]: > pod="openstack/openstackclient" Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.460187 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 06 07:24:47 crc kubenswrapper[4895]: W1206 07:24:47.464084 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf257472_30d7_4719_9b36_47c30f1db7ec.slice/crio-c5b8606b1add2e847d10d6bf9a3cd1707184f4c1d925bbf3044423eae3b03546 WatchSource:0}: Error finding container c5b8606b1add2e847d10d6bf9a3cd1707184f4c1d925bbf3044423eae3b03546: Status 404 returned error can't find the container with id c5b8606b1add2e847d10d6bf9a3cd1707184f4c1d925bbf3044423eae3b03546 Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.574798 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"cf257472-30d7-4719-9b36-47c30f1db7ec","Type":"ContainerStarted","Data":"c5b8606b1add2e847d10d6bf9a3cd1707184f4c1d925bbf3044423eae3b03546"} Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.577336 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.577324 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" event={"ID":"d2ec0f50-582c-4fba-8d4f-c4da28576e2a","Type":"ContainerStarted","Data":"9a0f5ecc550d27cbfb83305ca5d2198b4247293591f4afde235b092cfe7429b5"} Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.580145 4895 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e" podUID="cf257472-30d7-4719-9b36-47c30f1db7ec" Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.588020 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.692689 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-openstack-config\") pod \"ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e\" (UID: \"ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e\") " Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.692813 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-combined-ca-bundle\") pod \"ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e\" (UID: \"ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e\") " Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.692851 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-openstack-config-secret\") pod \"ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e\" (UID: \"ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e\") " Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.692935 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76rlf\" (UniqueName: \"kubernetes.io/projected/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-kube-api-access-76rlf\") pod \"ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e\" (UID: \"ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e\") " Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.693597 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e" (UID: "ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.698681 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-kube-api-access-76rlf" (OuterVolumeSpecName: "kube-api-access-76rlf") pod "ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e" (UID: "ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e"). InnerVolumeSpecName "kube-api-access-76rlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.702588 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e" (UID: "ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.702672 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e" (UID: "ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.794895 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.794941 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76rlf\" (UniqueName: \"kubernetes.io/projected/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-kube-api-access-76rlf\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.794953 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:47 crc kubenswrapper[4895]: I1206 07:24:47.794962 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:48 crc kubenswrapper[4895]: I1206 07:24:48.065043 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e" path="/var/lib/kubelet/pods/ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e/volumes" Dec 06 07:24:48 crc kubenswrapper[4895]: I1206 07:24:48.588156 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 06 07:24:48 crc kubenswrapper[4895]: I1206 07:24:48.588350 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:48 crc kubenswrapper[4895]: I1206 07:24:48.618505 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" podStartSLOduration=12.618463041 podStartE2EDuration="12.618463041s" podCreationTimestamp="2025-12-06 07:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:24:48.61541571 +0000 UTC m=+1651.016804590" watchObservedRunningTime="2025-12-06 07:24:48.618463041 +0000 UTC m=+1651.019851911" Dec 06 07:24:48 crc kubenswrapper[4895]: I1206 07:24:48.619138 4895 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="ff6c2a31-1ed6-4225-aba3-e86fd5a7ee1e" podUID="cf257472-30d7-4719-9b36-47c30f1db7ec" Dec 06 07:24:49 crc kubenswrapper[4895]: I1206 07:24:49.599443 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d856j" event={"ID":"f475253d-8773-4b65-bd71-64b349bbc141","Type":"ContainerStarted","Data":"ac26994279e6f50760492903b6fc29fd978368507e383ae68640de53ab64efe5"} Dec 06 07:24:49 crc kubenswrapper[4895]: I1206 07:24:49.605585 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a","Type":"ContainerStarted","Data":"78f3a29d45d3d925d78072c3abeb5f2cf8d565c52d83bc7190619c62f136417e"} Dec 06 07:24:51 crc kubenswrapper[4895]: I1206 07:24:51.202611 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d856j" Dec 06 07:24:51 crc kubenswrapper[4895]: I1206 07:24:51.202948 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-d856j" Dec 06 07:24:52 crc kubenswrapper[4895]: I1206 07:24:52.252344 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d856j" podUID="f475253d-8773-4b65-bd71-64b349bbc141" containerName="registry-server" probeResult="failure" output=< Dec 06 07:24:52 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Dec 06 07:24:52 crc kubenswrapper[4895]: > Dec 06 07:24:52 crc kubenswrapper[4895]: I1206 07:24:52.633300 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a","Type":"ContainerStarted","Data":"6d62dd7c6dd08e80e39c5109c5985e815de052be26e82159b042f00ee70fce9c"} Dec 06 07:24:56 crc kubenswrapper[4895]: I1206 07:24:56.424608 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:24:56 crc kubenswrapper[4895]: I1206 07:24:56.448158 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d856j" podStartSLOduration=17.445552184 podStartE2EDuration="36.448136446s" podCreationTimestamp="2025-12-06 07:24:20 +0000 UTC" firstStartedPulling="2025-12-06 07:24:29.666218322 +0000 UTC m=+1632.067607212" lastFinishedPulling="2025-12-06 07:24:48.668802604 +0000 UTC m=+1651.070191474" observedRunningTime="2025-12-06 07:24:49.628544103 +0000 UTC m=+1652.029932993" watchObservedRunningTime="2025-12-06 07:24:56.448136446 +0000 UTC m=+1658.849525316" Dec 06 07:24:56 crc kubenswrapper[4895]: I1206 07:24:56.496919 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d74777d4c-9gds5"] Dec 06 07:24:56 crc kubenswrapper[4895]: I1206 07:24:56.497175 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d74777d4c-9gds5" podUID="ceac5b30-b9f1-41ff-b5a5-509f785d7cac" containerName="dnsmasq-dns" containerID="cri-o://4acb2e21647a253e335fb76927f109836b46ab75fcf4699fe7bfc3cbaa847494" gracePeriod=10 Dec 06 07:24:57 crc kubenswrapper[4895]: I1206 07:24:57.933089 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-d74777d4c-9gds5" podUID="ceac5b30-b9f1-41ff-b5a5-509f785d7cac" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused" Dec 06 07:24:58 crc kubenswrapper[4895]: I1206 07:24:58.056648 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:24:58 crc kubenswrapper[4895]: E1206 07:24:58.056908 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:25:01 crc kubenswrapper[4895]: I1206 07:25:01.713454 4895 generic.go:334] "Generic (PLEG): container finished" podID="ceac5b30-b9f1-41ff-b5a5-509f785d7cac" containerID="4acb2e21647a253e335fb76927f109836b46ab75fcf4699fe7bfc3cbaa847494" exitCode=0 Dec 06 07:25:01 crc kubenswrapper[4895]: I1206 07:25:01.713540 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d74777d4c-9gds5" 
event={"ID":"ceac5b30-b9f1-41ff-b5a5-509f785d7cac","Type":"ContainerDied","Data":"4acb2e21647a253e335fb76927f109836b46ab75fcf4699fe7bfc3cbaa847494"} Dec 06 07:25:02 crc kubenswrapper[4895]: I1206 07:25:02.253438 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d856j" podUID="f475253d-8773-4b65-bd71-64b349bbc141" containerName="registry-server" probeResult="failure" output=< Dec 06 07:25:02 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Dec 06 07:25:02 crc kubenswrapper[4895]: > Dec 06 07:25:02 crc kubenswrapper[4895]: I1206 07:25:02.829664 4895 patch_prober.go:28] interesting pod/route-controller-manager-758c4dfd64-2rbgp container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 07:25:02 crc kubenswrapper[4895]: I1206 07:25:02.829737 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-758c4dfd64-2rbgp" podUID="b5802056-e992-49ec-aba2-728af99f18b6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 07:25:02 crc kubenswrapper[4895]: I1206 07:25:02.933716 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-d74777d4c-9gds5" podUID="ceac5b30-b9f1-41ff-b5a5-509f785d7cac" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused" Dec 06 07:25:04 crc kubenswrapper[4895]: E1206 07:25:04.964292 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:c1b8da8298ec8be0ca22c7d8ba48da103e72dfe7ed5e9427b971d31eac3a8b33" Dec 06 07:25:04 crc kubenswrapper[4895]: E1206 07:25:04.965194 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:c1b8da8298ec8be0ca22c7d8ba48da103e72dfe7ed5e9427b971d31eac3a8b33,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67ch549h5cch5ffh56h565hf6hdch67ch665hf9h5c8h86h544h55h58bhd7h694h658h575hc8h674h67h5b7h59dh56fh67dhd5hc6h569h9dhbbq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qrnxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(cf257472-30d7-4719-9b36-47c30f1db7ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:25:04 crc kubenswrapper[4895]: E1206 07:25:04.966812 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="cf257472-30d7-4719-9b36-47c30f1db7ec" Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.046184 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d74777d4c-9gds5" Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.131851 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rbbw\" (UniqueName: \"kubernetes.io/projected/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-kube-api-access-5rbbw\") pod \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.132040 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-dns-svc\") pod \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.132147 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-ovsdbserver-sb\") pod \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.132184 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-config\") pod \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.132235 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-ovsdbserver-nb\") pod \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.140907 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-kube-api-access-5rbbw" (OuterVolumeSpecName: "kube-api-access-5rbbw") pod "ceac5b30-b9f1-41ff-b5a5-509f785d7cac" (UID: "ceac5b30-b9f1-41ff-b5a5-509f785d7cac"). InnerVolumeSpecName "kube-api-access-5rbbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.186433 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-config" (OuterVolumeSpecName: "config") pod "ceac5b30-b9f1-41ff-b5a5-509f785d7cac" (UID: "ceac5b30-b9f1-41ff-b5a5-509f785d7cac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.197705 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ceac5b30-b9f1-41ff-b5a5-509f785d7cac" (UID: "ceac5b30-b9f1-41ff-b5a5-509f785d7cac"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:25:05 crc kubenswrapper[4895]: E1206 07:25:05.198764 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-ovsdbserver-sb podName:ceac5b30-b9f1-41ff-b5a5-509f785d7cac nodeName:}" failed. No retries permitted until 2025-12-06 07:25:05.698740076 +0000 UTC m=+1668.100128946 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "ovsdbserver-sb" (UniqueName: "kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-ovsdbserver-sb") pod "ceac5b30-b9f1-41ff-b5a5-509f785d7cac" (UID: "ceac5b30-b9f1-41ff-b5a5-509f785d7cac") : error deleting /var/lib/kubelet/pods/ceac5b30-b9f1-41ff-b5a5-509f785d7cac/volume-subpaths: remove /var/lib/kubelet/pods/ceac5b30-b9f1-41ff-b5a5-509f785d7cac/volume-subpaths: no such file or directory Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.199176 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ceac5b30-b9f1-41ff-b5a5-509f785d7cac" (UID: "ceac5b30-b9f1-41ff-b5a5-509f785d7cac"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.234674 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.234710 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.234722 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rbbw\" (UniqueName: \"kubernetes.io/projected/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-kube-api-access-5rbbw\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.234733 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.743170 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-ovsdbserver-sb\") pod \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\" (UID: \"ceac5b30-b9f1-41ff-b5a5-509f785d7cac\") " Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.744152 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ceac5b30-b9f1-41ff-b5a5-509f785d7cac" (UID: "ceac5b30-b9f1-41ff-b5a5-509f785d7cac"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.752210 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d74777d4c-9gds5" event={"ID":"ceac5b30-b9f1-41ff-b5a5-509f785d7cac","Type":"ContainerDied","Data":"ea0b1d53bfc4796ad81642a790591bf4dc53fd32922e3b471518588e7baf6a0d"} Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.752258 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d74777d4c-9gds5" Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.752270 4895 scope.go:117] "RemoveContainer" containerID="4acb2e21647a253e335fb76927f109836b46ab75fcf4699fe7bfc3cbaa847494" Dec 06 07:25:05 crc kubenswrapper[4895]: E1206 07:25:05.754713 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:c1b8da8298ec8be0ca22c7d8ba48da103e72dfe7ed5e9427b971d31eac3a8b33\\\"\"" pod="openstack/openstackclient" podUID="cf257472-30d7-4719-9b36-47c30f1db7ec" Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.780179 4895 scope.go:117] "RemoveContainer" containerID="7162eed14345b5896e71c640a1a0647360a04a84a279e78a3bd9c4bc7cfddece" Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.803526 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d74777d4c-9gds5"] Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.813441 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d74777d4c-9gds5"] Dec 06 07:25:05 crc kubenswrapper[4895]: I1206 07:25:05.845321 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceac5b30-b9f1-41ff-b5a5-509f785d7cac-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:06 crc kubenswrapper[4895]: I1206 07:25:06.066149 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceac5b30-b9f1-41ff-b5a5-509f785d7cac" path="/var/lib/kubelet/pods/ceac5b30-b9f1-41ff-b5a5-509f785d7cac/volumes" Dec 06 07:25:06 crc kubenswrapper[4895]: I1206 07:25:06.764016 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a","Type":"ContainerStarted","Data":"2415954ecad81457802f015db6daee7be33dc9246e9d07d3767c276cd679221c"} Dec 06 07:25:09 crc kubenswrapper[4895]: I1206 07:25:09.792136 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a","Type":"ContainerStarted","Data":"0e998093a5744fc963d027f5d3fd2e260009a092673c70fc61ce0e4946af1f83"} Dec 06 07:25:10 crc kubenswrapper[4895]: I1206 07:25:10.801891 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 07:25:10 crc kubenswrapper[4895]: I1206 07:25:10.835892 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.5744668 podStartE2EDuration="25.835872317s" podCreationTimestamp="2025-12-06 07:24:45 +0000 UTC" firstStartedPulling="2025-12-06 07:24:46.500573204 +0000 UTC m=+1648.901962074" lastFinishedPulling="2025-12-06 07:25:08.761978721 +0000 UTC m=+1671.163367591" observedRunningTime="2025-12-06 07:25:10.826230208 +0000 UTC m=+1673.227619108" watchObservedRunningTime="2025-12-06 07:25:10.835872317 +0000 UTC m=+1673.237261177" Dec 06 07:25:12 crc kubenswrapper[4895]: I1206 07:25:12.051571 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:25:12 crc kubenswrapper[4895]: E1206 07:25:12.051871 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:25:12 crc kubenswrapper[4895]: I1206 07:25:12.251994 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d856j" podUID="f475253d-8773-4b65-bd71-64b349bbc141" containerName="registry-server" probeResult="failure" output=< Dec 06 07:25:12 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Dec 06 07:25:12 crc kubenswrapper[4895]: > Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.313877 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-584b845d6f-78cbh"] Dec 06 07:25:14 crc kubenswrapper[4895]: E1206 07:25:14.314606 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceac5b30-b9f1-41ff-b5a5-509f785d7cac" containerName="init" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.314622 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceac5b30-b9f1-41ff-b5a5-509f785d7cac" containerName="init" Dec 06 07:25:14 crc kubenswrapper[4895]: E1206 07:25:14.314662 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceac5b30-b9f1-41ff-b5a5-509f785d7cac" containerName="dnsmasq-dns" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.314670 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceac5b30-b9f1-41ff-b5a5-509f785d7cac" containerName="dnsmasq-dns" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.314891 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceac5b30-b9f1-41ff-b5a5-509f785d7cac" containerName="dnsmasq-dns" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.315999 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.321077 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.321080 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.321604 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.334546 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-584b845d6f-78cbh"] Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.403741 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqvs8\" (UniqueName: \"kubernetes.io/projected/09e95af5-a2ad-42ee-83a9-25cef915d0dc-kube-api-access-hqvs8\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.403787 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-config-data\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.403819 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09e95af5-a2ad-42ee-83a9-25cef915d0dc-log-httpd\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.403899 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-internal-tls-certs\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.403951 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09e95af5-a2ad-42ee-83a9-25cef915d0dc-run-httpd\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.404023 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/09e95af5-a2ad-42ee-83a9-25cef915d0dc-etc-swift\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.404073 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-public-tls-certs\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " 
pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.404107 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-combined-ca-bundle\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.506618 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/09e95af5-a2ad-42ee-83a9-25cef915d0dc-etc-swift\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.506849 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-public-tls-certs\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.508359 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-combined-ca-bundle\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.508445 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqvs8\" (UniqueName: \"kubernetes.io/projected/09e95af5-a2ad-42ee-83a9-25cef915d0dc-kube-api-access-hqvs8\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.508526 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-config-data\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.508579 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09e95af5-a2ad-42ee-83a9-25cef915d0dc-log-httpd\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.508785 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-internal-tls-certs\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.508846 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09e95af5-a2ad-42ee-83a9-25cef915d0dc-run-httpd\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " 
pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.509447 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09e95af5-a2ad-42ee-83a9-25cef915d0dc-log-httpd\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.509571 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09e95af5-a2ad-42ee-83a9-25cef915d0dc-run-httpd\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.519145 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-config-data\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.519871 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/09e95af5-a2ad-42ee-83a9-25cef915d0dc-etc-swift\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.519392 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-internal-tls-certs\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.521197 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-public-tls-certs\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.522970 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-combined-ca-bundle\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.527272 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqvs8\" (UniqueName: \"kubernetes.io/projected/09e95af5-a2ad-42ee-83a9-25cef915d0dc-kube-api-access-hqvs8\") pod \"swift-proxy-584b845d6f-78cbh\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:14 crc kubenswrapper[4895]: I1206 07:25:14.752396 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:15 crc kubenswrapper[4895]: I1206 07:25:15.719685 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-584b845d6f-78cbh"] Dec 06 07:25:15 crc kubenswrapper[4895]: I1206 07:25:15.853262 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-584b845d6f-78cbh" event={"ID":"09e95af5-a2ad-42ee-83a9-25cef915d0dc","Type":"ContainerStarted","Data":"6f2278c32b741a48d123950d9602d019b22c91d2122a0e9157e0ceb65ac69979"} Dec 06 07:25:17 crc kubenswrapper[4895]: I1206 07:25:17.880057 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-584b845d6f-78cbh" event={"ID":"09e95af5-a2ad-42ee-83a9-25cef915d0dc","Type":"ContainerStarted","Data":"01b7e75885151f8a28d8cb5829b199e20877666375bdf87e612aef852c6f6c46"} Dec 06 07:25:18 crc kubenswrapper[4895]: I1206 07:25:18.895243 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-584b845d6f-78cbh" event={"ID":"09e95af5-a2ad-42ee-83a9-25cef915d0dc","Type":"ContainerStarted","Data":"38bc1a720f1bec95645eb9c53e763a74280470a9f127df6f30b1613abb5aaad5"} Dec 06 07:25:20 crc kubenswrapper[4895]: I1206 07:25:20.913000 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:20 crc kubenswrapper[4895]: I1206 07:25:20.913080 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:20 crc kubenswrapper[4895]: I1206 07:25:20.957231 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-584b845d6f-78cbh" podStartSLOduration=6.9572066790000004 podStartE2EDuration="6.957206679s" podCreationTimestamp="2025-12-06 07:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:25:20.935165287 +0000 UTC m=+1683.336554157" watchObservedRunningTime="2025-12-06 07:25:20.957206679 +0000 UTC m=+1683.358595589" Dec 06 07:25:21 crc kubenswrapper[4895]: I1206 07:25:21.247365 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d856j" Dec 06 07:25:21 crc kubenswrapper[4895]: I1206 07:25:21.298227 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d856j" Dec 06 07:25:22 crc kubenswrapper[4895]: I1206 07:25:22.089051 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d856j"] Dec 06 07:25:22 crc kubenswrapper[4895]: I1206 07:25:22.928462 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d856j" podUID="f475253d-8773-4b65-bd71-64b349bbc141" containerName="registry-server" containerID="cri-o://ac26994279e6f50760492903b6fc29fd978368507e383ae68640de53ab64efe5" gracePeriod=2 Dec 06 07:25:23 crc kubenswrapper[4895]: I1206 07:25:23.940024 4895 generic.go:334] "Generic (PLEG): container finished" podID="f475253d-8773-4b65-bd71-64b349bbc141" containerID="ac26994279e6f50760492903b6fc29fd978368507e383ae68640de53ab64efe5" exitCode=0 Dec 06 07:25:23 crc kubenswrapper[4895]: I1206 07:25:23.940089 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d856j" 
event={"ID":"f475253d-8773-4b65-bd71-64b349bbc141","Type":"ContainerDied","Data":"ac26994279e6f50760492903b6fc29fd978368507e383ae68640de53ab64efe5"} Dec 06 07:25:24 crc kubenswrapper[4895]: I1206 07:25:24.052178 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:25:24 crc kubenswrapper[4895]: E1206 07:25:24.053005 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:25:24 crc kubenswrapper[4895]: I1206 07:25:24.434143 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d856j" Dec 06 07:25:24 crc kubenswrapper[4895]: I1206 07:25:24.544561 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f475253d-8773-4b65-bd71-64b349bbc141-catalog-content\") pod \"f475253d-8773-4b65-bd71-64b349bbc141\" (UID: \"f475253d-8773-4b65-bd71-64b349bbc141\") " Dec 06 07:25:24 crc kubenswrapper[4895]: I1206 07:25:24.544627 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f475253d-8773-4b65-bd71-64b349bbc141-utilities\") pod \"f475253d-8773-4b65-bd71-64b349bbc141\" (UID: \"f475253d-8773-4b65-bd71-64b349bbc141\") " Dec 06 07:25:24 crc kubenswrapper[4895]: I1206 07:25:24.544658 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9kbh\" (UniqueName: \"kubernetes.io/projected/f475253d-8773-4b65-bd71-64b349bbc141-kube-api-access-g9kbh\") pod \"f475253d-8773-4b65-bd71-64b349bbc141\" (UID: \"f475253d-8773-4b65-bd71-64b349bbc141\") " Dec 06 07:25:24 crc kubenswrapper[4895]: I1206 07:25:24.545196 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f475253d-8773-4b65-bd71-64b349bbc141-utilities" (OuterVolumeSpecName: "utilities") pod "f475253d-8773-4b65-bd71-64b349bbc141" (UID: "f475253d-8773-4b65-bd71-64b349bbc141"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:25:24 crc kubenswrapper[4895]: I1206 07:25:24.549338 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f475253d-8773-4b65-bd71-64b349bbc141-kube-api-access-g9kbh" (OuterVolumeSpecName: "kube-api-access-g9kbh") pod "f475253d-8773-4b65-bd71-64b349bbc141" (UID: "f475253d-8773-4b65-bd71-64b349bbc141"). InnerVolumeSpecName "kube-api-access-g9kbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:25:24 crc kubenswrapper[4895]: I1206 07:25:24.646884 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f475253d-8773-4b65-bd71-64b349bbc141-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:24 crc kubenswrapper[4895]: I1206 07:25:24.646925 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9kbh\" (UniqueName: \"kubernetes.io/projected/f475253d-8773-4b65-bd71-64b349bbc141-kube-api-access-g9kbh\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:24 crc kubenswrapper[4895]: I1206 07:25:24.668911 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f475253d-8773-4b65-bd71-64b349bbc141-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f475253d-8773-4b65-bd71-64b349bbc141" (UID: "f475253d-8773-4b65-bd71-64b349bbc141"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:25:24 crc kubenswrapper[4895]: I1206 07:25:24.748791 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f475253d-8773-4b65-bd71-64b349bbc141-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:24 crc kubenswrapper[4895]: I1206 07:25:24.757804 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:24 crc kubenswrapper[4895]: I1206 07:25:24.759568 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:25:24 crc kubenswrapper[4895]: I1206 07:25:24.951923 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d856j" event={"ID":"f475253d-8773-4b65-bd71-64b349bbc141","Type":"ContainerDied","Data":"1b318d2386aadd6d59acceeab991cdfd130bbabe271e3f93771d2748e0e44a36"} Dec 06 07:25:24 crc kubenswrapper[4895]: I1206 07:25:24.951984 4895 scope.go:117] "RemoveContainer" containerID="ac26994279e6f50760492903b6fc29fd978368507e383ae68640de53ab64efe5" Dec 06 07:25:24 crc kubenswrapper[4895]: I1206 07:25:24.951951 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d856j" Dec 06 07:25:24 crc kubenswrapper[4895]: I1206 07:25:24.955578 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"cf257472-30d7-4719-9b36-47c30f1db7ec","Type":"ContainerStarted","Data":"592289ccf63d36f99de0c41ad4893019feb1c8238b8005599a976ecd7e6fd991"} Dec 06 07:25:24 crc kubenswrapper[4895]: I1206 07:25:24.981564 4895 scope.go:117] "RemoveContainer" containerID="8ffab875effe79b4719b305976c638a52f175102d3c5b5a3d24d0dbab326a722" Dec 06 07:25:24 crc kubenswrapper[4895]: I1206 07:25:24.989901 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.304891995 podStartE2EDuration="38.989865557s" podCreationTimestamp="2025-12-06 07:24:46 +0000 UTC" firstStartedPulling="2025-12-06 07:24:47.471287457 +0000 UTC m=+1649.872676327" lastFinishedPulling="2025-12-06 07:25:24.156261009 +0000 UTC m=+1686.557649889" observedRunningTime="2025-12-06 07:25:24.980984129 +0000 UTC m=+1687.382373009" watchObservedRunningTime="2025-12-06 07:25:24.989865557 +0000 UTC m=+1687.391254427" Dec 06 07:25:25 crc kubenswrapper[4895]: I1206 07:25:25.006102 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d856j"] Dec 06 07:25:25 crc kubenswrapper[4895]: I1206 07:25:25.011210 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d856j"] Dec 06 07:25:25 crc kubenswrapper[4895]: I1206 07:25:25.014564 4895 scope.go:117] "RemoveContainer" containerID="fd748ac767e0132e5e0889a8f5b6992e4af40ef30101a2514874f14fe30f865a" Dec 06 07:25:25 crc kubenswrapper[4895]: I1206 07:25:25.967222 4895 generic.go:334] "Generic (PLEG): container finished" podID="fb6a838b-c173-4455-b8b5-b152aeee463a" containerID="bdd2cdbec2e42c278fef4643e911a6317453915ea279a580dff8d34441275ca6" exitCode=0 Dec 06 07:25:25 crc kubenswrapper[4895]: I1206 07:25:25.967314 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w9qg6" event={"ID":"fb6a838b-c173-4455-b8b5-b152aeee463a","Type":"ContainerDied","Data":"bdd2cdbec2e42c278fef4643e911a6317453915ea279a580dff8d34441275ca6"} Dec 06 07:25:26 crc kubenswrapper[4895]: I1206 07:25:26.060843 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f475253d-8773-4b65-bd71-64b349bbc141" path="/var/lib/kubelet/pods/f475253d-8773-4b65-bd71-64b349bbc141/volumes" Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.291812 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-w9qg6" Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.403638 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6a838b-c173-4455-b8b5-b152aeee463a-config-data\") pod \"fb6a838b-c173-4455-b8b5-b152aeee463a\" (UID: \"fb6a838b-c173-4455-b8b5-b152aeee463a\") " Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.403706 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87ftz\" (UniqueName: \"kubernetes.io/projected/fb6a838b-c173-4455-b8b5-b152aeee463a-kube-api-access-87ftz\") pod \"fb6a838b-c173-4455-b8b5-b152aeee463a\" (UID: \"fb6a838b-c173-4455-b8b5-b152aeee463a\") " Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.403727 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6a838b-c173-4455-b8b5-b152aeee463a-combined-ca-bundle\") pod \"fb6a838b-c173-4455-b8b5-b152aeee463a\" (UID: \"fb6a838b-c173-4455-b8b5-b152aeee463a\") " Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.403808 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb6a838b-c173-4455-b8b5-b152aeee463a-logs\") pod \"fb6a838b-c173-4455-b8b5-b152aeee463a\" (UID: \"fb6a838b-c173-4455-b8b5-b152aeee463a\") " Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.403940 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb6a838b-c173-4455-b8b5-b152aeee463a-scripts\") pod \"fb6a838b-c173-4455-b8b5-b152aeee463a\" (UID: \"fb6a838b-c173-4455-b8b5-b152aeee463a\") " Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.404183 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb6a838b-c173-4455-b8b5-b152aeee463a-logs" (OuterVolumeSpecName: "logs") pod "fb6a838b-c173-4455-b8b5-b152aeee463a" (UID: "fb6a838b-c173-4455-b8b5-b152aeee463a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.404742 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb6a838b-c173-4455-b8b5-b152aeee463a-logs\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.409670 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb6a838b-c173-4455-b8b5-b152aeee463a-kube-api-access-87ftz" (OuterVolumeSpecName: "kube-api-access-87ftz") pod "fb6a838b-c173-4455-b8b5-b152aeee463a" (UID: "fb6a838b-c173-4455-b8b5-b152aeee463a"). InnerVolumeSpecName "kube-api-access-87ftz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.409879 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb6a838b-c173-4455-b8b5-b152aeee463a-scripts" (OuterVolumeSpecName: "scripts") pod "fb6a838b-c173-4455-b8b5-b152aeee463a" (UID: "fb6a838b-c173-4455-b8b5-b152aeee463a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.430833 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb6a838b-c173-4455-b8b5-b152aeee463a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb6a838b-c173-4455-b8b5-b152aeee463a" (UID: "fb6a838b-c173-4455-b8b5-b152aeee463a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.440319 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb6a838b-c173-4455-b8b5-b152aeee463a-config-data" (OuterVolumeSpecName: "config-data") pod "fb6a838b-c173-4455-b8b5-b152aeee463a" (UID: "fb6a838b-c173-4455-b8b5-b152aeee463a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.506378 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb6a838b-c173-4455-b8b5-b152aeee463a-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.506416 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6a838b-c173-4455-b8b5-b152aeee463a-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.506432 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87ftz\" (UniqueName: \"kubernetes.io/projected/fb6a838b-c173-4455-b8b5-b152aeee463a-kube-api-access-87ftz\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.506443 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6a838b-c173-4455-b8b5-b152aeee463a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.992258 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w9qg6" event={"ID":"fb6a838b-c173-4455-b8b5-b152aeee463a","Type":"ContainerDied","Data":"9402c416985496da561615671194275a5640eb7d893ec8f646f9c4e51274095c"} Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.992307 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9402c416985496da561615671194275a5640eb7d893ec8f646f9c4e51274095c" Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.992308 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-w9qg6" Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.994628 4895 generic.go:334] "Generic (PLEG): container finished" podID="584b2782-7ea9-4697-8862-ae2090bc918c" containerID="e0eab4e7e98f956fc203ece89fc3349c56301eb127b1a7da0e17f47ea8ecc398" exitCode=0 Dec 06 07:25:27 crc kubenswrapper[4895]: I1206 07:25:27.994676 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dtbpj" event={"ID":"584b2782-7ea9-4697-8862-ae2090bc918c","Type":"ContainerDied","Data":"e0eab4e7e98f956fc203ece89fc3349c56301eb127b1a7da0e17f47ea8ecc398"} Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.199795 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7f74647bf4-9dcq9"] Dec 06 07:25:28 crc kubenswrapper[4895]: E1206 07:25:28.200154 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb6a838b-c173-4455-b8b5-b152aeee463a" containerName="placement-db-sync" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.200171 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb6a838b-c173-4455-b8b5-b152aeee463a" containerName="placement-db-sync" Dec 06 07:25:28 crc kubenswrapper[4895]: E1206 07:25:28.200191 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f475253d-8773-4b65-bd71-64b349bbc141" containerName="extract-utilities" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.200197 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f475253d-8773-4b65-bd71-64b349bbc141" containerName="extract-utilities" Dec 06 07:25:28 crc kubenswrapper[4895]: E1206 07:25:28.200208 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f475253d-8773-4b65-bd71-64b349bbc141" containerName="registry-server" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.200213 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f475253d-8773-4b65-bd71-64b349bbc141" containerName="registry-server" Dec 06 07:25:28 crc kubenswrapper[4895]: E1206 07:25:28.200231 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f475253d-8773-4b65-bd71-64b349bbc141" containerName="extract-content" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.200237 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f475253d-8773-4b65-bd71-64b349bbc141" containerName="extract-content" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.200422 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb6a838b-c173-4455-b8b5-b152aeee463a" containerName="placement-db-sync" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.200439 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f475253d-8773-4b65-bd71-64b349bbc141" containerName="registry-server" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.201430 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.205157 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.205616 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.207965 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jld8h" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.208640 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.208799 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.224765 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f74647bf4-9dcq9"] Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.321842 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2e55858-e444-489b-b573-aae00aa71f9b-logs\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.323544 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-config-data\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.323705 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-internal-tls-certs\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.323979 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh7f6\" (UniqueName: \"kubernetes.io/projected/a2e55858-e444-489b-b573-aae00aa71f9b-kube-api-access-zh7f6\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.324241 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-scripts\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.324447 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-public-tls-certs\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.324646 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-combined-ca-bundle\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.426769 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-scripts\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.426841 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-public-tls-certs\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.426874 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-combined-ca-bundle\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.426899 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2e55858-e444-489b-b573-aae00aa71f9b-logs\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.426925 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-config-data\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.426944 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-internal-tls-certs\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.426991 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh7f6\" (UniqueName: \"kubernetes.io/projected/a2e55858-e444-489b-b573-aae00aa71f9b-kube-api-access-zh7f6\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.427711 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2e55858-e444-489b-b573-aae00aa71f9b-logs\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.432568 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-internal-tls-certs\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.432853 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-public-tls-certs\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.433081 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-combined-ca-bundle\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.433293 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-scripts\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.433727 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-config-data\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.443163 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh7f6\" (UniqueName: \"kubernetes.io/projected/a2e55858-e444-489b-b573-aae00aa71f9b-kube-api-access-zh7f6\") pod \"placement-7f74647bf4-9dcq9\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") " pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.528965 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:28 crc kubenswrapper[4895]: I1206 07:25:28.980701 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f74647bf4-9dcq9"] Dec 06 07:25:28 crc kubenswrapper[4895]: W1206 07:25:28.985679 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2e55858_e444_489b_b573_aae00aa71f9b.slice/crio-339faa25e0fb858105c5c9e060e0c57850eba2eeb8824b7cc565400eb544287b WatchSource:0}: Error finding container 339faa25e0fb858105c5c9e060e0c57850eba2eeb8824b7cc565400eb544287b: Status 404 returned error can't find the container with id 339faa25e0fb858105c5c9e060e0c57850eba2eeb8824b7cc565400eb544287b Dec 06 07:25:29 crc kubenswrapper[4895]: I1206 07:25:29.005578 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f74647bf4-9dcq9" event={"ID":"a2e55858-e444-489b-b573-aae00aa71f9b","Type":"ContainerStarted","Data":"339faa25e0fb858105c5c9e060e0c57850eba2eeb8824b7cc565400eb544287b"} Dec 06 07:25:29 crc kubenswrapper[4895]: I1206 07:25:29.258786 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dtbpj" Dec 06 07:25:29 crc kubenswrapper[4895]: I1206 07:25:29.442761 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584b2782-7ea9-4697-8862-ae2090bc918c-combined-ca-bundle\") pod \"584b2782-7ea9-4697-8862-ae2090bc918c\" (UID: \"584b2782-7ea9-4697-8862-ae2090bc918c\") " Dec 06 07:25:29 crc kubenswrapper[4895]: I1206 07:25:29.442859 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkb4w\" (UniqueName: \"kubernetes.io/projected/584b2782-7ea9-4697-8862-ae2090bc918c-kube-api-access-bkb4w\") pod \"584b2782-7ea9-4697-8862-ae2090bc918c\" (UID: \"584b2782-7ea9-4697-8862-ae2090bc918c\") " Dec 06 07:25:29 crc kubenswrapper[4895]: I1206 07:25:29.442929 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/584b2782-7ea9-4697-8862-ae2090bc918c-db-sync-config-data\") pod \"584b2782-7ea9-4697-8862-ae2090bc918c\" (UID: \"584b2782-7ea9-4697-8862-ae2090bc918c\") " Dec 06 07:25:29 crc kubenswrapper[4895]: I1206 07:25:29.448829 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584b2782-7ea9-4697-8862-ae2090bc918c-kube-api-access-bkb4w" (OuterVolumeSpecName: "kube-api-access-bkb4w") pod "584b2782-7ea9-4697-8862-ae2090bc918c" (UID: "584b2782-7ea9-4697-8862-ae2090bc918c"). InnerVolumeSpecName "kube-api-access-bkb4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:25:29 crc kubenswrapper[4895]: I1206 07:25:29.450791 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/584b2782-7ea9-4697-8862-ae2090bc918c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "584b2782-7ea9-4697-8862-ae2090bc918c" (UID: "584b2782-7ea9-4697-8862-ae2090bc918c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:25:29 crc kubenswrapper[4895]: I1206 07:25:29.474079 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/584b2782-7ea9-4697-8862-ae2090bc918c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "584b2782-7ea9-4697-8862-ae2090bc918c" (UID: "584b2782-7ea9-4697-8862-ae2090bc918c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:25:29 crc kubenswrapper[4895]: I1206 07:25:29.544729 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584b2782-7ea9-4697-8862-ae2090bc918c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:29 crc kubenswrapper[4895]: I1206 07:25:29.544764 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkb4w\" (UniqueName: \"kubernetes.io/projected/584b2782-7ea9-4697-8862-ae2090bc918c-kube-api-access-bkb4w\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:29 crc kubenswrapper[4895]: I1206 07:25:29.544778 4895 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/584b2782-7ea9-4697-8862-ae2090bc918c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.016101 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dtbpj" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.016121 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dtbpj" event={"ID":"584b2782-7ea9-4697-8862-ae2090bc918c","Type":"ContainerDied","Data":"c9b38b48ae9c51cf88e0646e2e2bb19b6b9810f9f9c256599972b77e8299b260"} Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.016781 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9b38b48ae9c51cf88e0646e2e2bb19b6b9810f9f9c256599972b77e8299b260" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.018349 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f74647bf4-9dcq9" event={"ID":"a2e55858-e444-489b-b573-aae00aa71f9b","Type":"ContainerStarted","Data":"8673bbafab775ecc7fd8889b8d5591b54abb229821a4e6d6b5e7d9f3b443717d"} Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.018388 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f74647bf4-9dcq9" event={"ID":"a2e55858-e444-489b-b573-aae00aa71f9b","Type":"ContainerStarted","Data":"63e32ce9cc5b3c386ddc984f3d5e3485a878384f7b60dac8c6d438b52980f3be"} Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.018506 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.060096 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7f74647bf4-9dcq9" podStartSLOduration=2.060072976 podStartE2EDuration="2.060072976s" podCreationTimestamp="2025-12-06 07:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:25:30.058930876 +0000 UTC m=+1692.460319756" watchObservedRunningTime="2025-12-06 07:25:30.060072976 +0000 UTC m=+1692.461461846" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.305973 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-698445c967-xk6g2"] Dec 06 07:25:30 crc kubenswrapper[4895]: E1206 07:25:30.306363 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584b2782-7ea9-4697-8862-ae2090bc918c" containerName="barbican-db-sync" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.306375 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="584b2782-7ea9-4697-8862-ae2090bc918c" containerName="barbican-db-sync" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.306552 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="584b2782-7ea9-4697-8862-ae2090bc918c" containerName="barbican-db-sync" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.307367 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-698445c967-xk6g2" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.309208 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-44m6r" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.315806 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.315809 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.352955 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-698445c967-xk6g2"] Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.458820 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmcsb\" (UniqueName: \"kubernetes.io/projected/46664967-bc44-4dd5-8fa7-419d1f7741fd-kube-api-access-hmcsb\") pod \"barbican-worker-698445c967-xk6g2\" (UID: \"46664967-bc44-4dd5-8fa7-419d1f7741fd\") " pod="openstack/barbican-worker-698445c967-xk6g2" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.458892 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46664967-bc44-4dd5-8fa7-419d1f7741fd-config-data\") pod \"barbican-worker-698445c967-xk6g2\" (UID: \"46664967-bc44-4dd5-8fa7-419d1f7741fd\") " pod="openstack/barbican-worker-698445c967-xk6g2" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.458918 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46664967-bc44-4dd5-8fa7-419d1f7741fd-logs\") pod \"barbican-worker-698445c967-xk6g2\" (UID: \"46664967-bc44-4dd5-8fa7-419d1f7741fd\") " pod="openstack/barbican-worker-698445c967-xk6g2" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.458999 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46664967-bc44-4dd5-8fa7-419d1f7741fd-config-data-custom\") pod \"barbican-worker-698445c967-xk6g2\" (UID: \"46664967-bc44-4dd5-8fa7-419d1f7741fd\") " pod="openstack/barbican-worker-698445c967-xk6g2" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.459033 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46664967-bc44-4dd5-8fa7-419d1f7741fd-combined-ca-bundle\") pod \"barbican-worker-698445c967-xk6g2\" (UID: \"46664967-bc44-4dd5-8fa7-419d1f7741fd\") " pod="openstack/barbican-worker-698445c967-xk6g2" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.472851 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-859c997494-lcf5z"] Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.477885 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-859c997494-lcf5z" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.480719 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.492830 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-859c997494-lcf5z"] Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.561542 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46664967-bc44-4dd5-8fa7-419d1f7741fd-config-data-custom\") pod \"barbican-worker-698445c967-xk6g2\" (UID: \"46664967-bc44-4dd5-8fa7-419d1f7741fd\") " pod="openstack/barbican-worker-698445c967-xk6g2" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.561923 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46664967-bc44-4dd5-8fa7-419d1f7741fd-combined-ca-bundle\") pod \"barbican-worker-698445c967-xk6g2\" (UID: \"46664967-bc44-4dd5-8fa7-419d1f7741fd\") " pod="openstack/barbican-worker-698445c967-xk6g2" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.562091 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmcsb\" (UniqueName: \"kubernetes.io/projected/46664967-bc44-4dd5-8fa7-419d1f7741fd-kube-api-access-hmcsb\") pod \"barbican-worker-698445c967-xk6g2\" (UID: \"46664967-bc44-4dd5-8fa7-419d1f7741fd\") " pod="openstack/barbican-worker-698445c967-xk6g2" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.562241 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46664967-bc44-4dd5-8fa7-419d1f7741fd-config-data\") pod \"barbican-worker-698445c967-xk6g2\" (UID: \"46664967-bc44-4dd5-8fa7-419d1f7741fd\") " pod="openstack/barbican-worker-698445c967-xk6g2" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.562346 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46664967-bc44-4dd5-8fa7-419d1f7741fd-logs\") pod \"barbican-worker-698445c967-xk6g2\" (UID: \"46664967-bc44-4dd5-8fa7-419d1f7741fd\") " pod="openstack/barbican-worker-698445c967-xk6g2" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.563056 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46664967-bc44-4dd5-8fa7-419d1f7741fd-logs\") pod \"barbican-worker-698445c967-xk6g2\" (UID: \"46664967-bc44-4dd5-8fa7-419d1f7741fd\") " pod="openstack/barbican-worker-698445c967-xk6g2" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.581194 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f6d99f797-zszb6"] Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.582702 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.578598 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46664967-bc44-4dd5-8fa7-419d1f7741fd-config-data-custom\") pod \"barbican-worker-698445c967-xk6g2\" (UID: \"46664967-bc44-4dd5-8fa7-419d1f7741fd\") " pod="openstack/barbican-worker-698445c967-xk6g2" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.585272 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46664967-bc44-4dd5-8fa7-419d1f7741fd-combined-ca-bundle\") pod \"barbican-worker-698445c967-xk6g2\" (UID: \"46664967-bc44-4dd5-8fa7-419d1f7741fd\") " pod="openstack/barbican-worker-698445c967-xk6g2" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.603767 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46664967-bc44-4dd5-8fa7-419d1f7741fd-config-data\") pod \"barbican-worker-698445c967-xk6g2\" (UID: \"46664967-bc44-4dd5-8fa7-419d1f7741fd\") " pod="openstack/barbican-worker-698445c967-xk6g2" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.615508 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f6d99f797-zszb6"] Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.631280 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmcsb\" (UniqueName: \"kubernetes.io/projected/46664967-bc44-4dd5-8fa7-419d1f7741fd-kube-api-access-hmcsb\") pod \"barbican-worker-698445c967-xk6g2\" (UID: \"46664967-bc44-4dd5-8fa7-419d1f7741fd\") " pod="openstack/barbican-worker-698445c967-xk6g2" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.672813 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-ovsdbserver-sb\") pod \"dnsmasq-dns-5f6d99f797-zszb6\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.673084 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n65q\" (UniqueName: \"kubernetes.io/projected/0921ccd3-f346-46b9-88af-e165de8ff32b-kube-api-access-6n65q\") pod \"barbican-keystone-listener-859c997494-lcf5z\" (UID: \"0921ccd3-f346-46b9-88af-e165de8ff32b\") " pod="openstack/barbican-keystone-listener-859c997494-lcf5z" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.673211 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-dns-svc\") pod \"dnsmasq-dns-5f6d99f797-zszb6\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.673304 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-ovsdbserver-nb\") pod \"dnsmasq-dns-5f6d99f797-zszb6\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.673387 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-dns-swift-storage-0\") pod \"dnsmasq-dns-5f6d99f797-zszb6\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.673544 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0921ccd3-f346-46b9-88af-e165de8ff32b-combined-ca-bundle\") pod \"barbican-keystone-listener-859c997494-lcf5z\" (UID: \"0921ccd3-f346-46b9-88af-e165de8ff32b\") " pod="openstack/barbican-keystone-listener-859c997494-lcf5z" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.673620 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-config\") pod \"dnsmasq-dns-5f6d99f797-zszb6\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.673707 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0921ccd3-f346-46b9-88af-e165de8ff32b-config-data\") pod \"barbican-keystone-listener-859c997494-lcf5z\" (UID: \"0921ccd3-f346-46b9-88af-e165de8ff32b\") " pod="openstack/barbican-keystone-listener-859c997494-lcf5z" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.673780 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0921ccd3-f346-46b9-88af-e165de8ff32b-logs\") pod \"barbican-keystone-listener-859c997494-lcf5z\" (UID: \"0921ccd3-f346-46b9-88af-e165de8ff32b\") " pod="openstack/barbican-keystone-listener-859c997494-lcf5z" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.673882 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv248\" (UniqueName: \"kubernetes.io/projected/a53968c6-e872-491e-802f-bf76d49b1126-kube-api-access-dv248\") pod \"dnsmasq-dns-5f6d99f797-zszb6\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.673955 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0921ccd3-f346-46b9-88af-e165de8ff32b-config-data-custom\") pod \"barbican-keystone-listener-859c997494-lcf5z\" (UID: \"0921ccd3-f346-46b9-88af-e165de8ff32b\") " pod="openstack/barbican-keystone-listener-859c997494-lcf5z" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.700458 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-698445c967-xk6g2" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.755904 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-74c7b6dbd-5dc68"] Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.757853 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.767206 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.770531 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74c7b6dbd-5dc68"] Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.776747 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-dns-svc\") pod \"dnsmasq-dns-5f6d99f797-zszb6\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.776886 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-ovsdbserver-nb\") pod \"dnsmasq-dns-5f6d99f797-zszb6\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.776928 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-dns-swift-storage-0\") pod \"dnsmasq-dns-5f6d99f797-zszb6\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.776970 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0921ccd3-f346-46b9-88af-e165de8ff32b-combined-ca-bundle\") pod \"barbican-keystone-listener-859c997494-lcf5z\" (UID: \"0921ccd3-f346-46b9-88af-e165de8ff32b\") " pod="openstack/barbican-keystone-listener-859c997494-lcf5z" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.776994 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-config\") pod \"dnsmasq-dns-5f6d99f797-zszb6\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.777032 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0921ccd3-f346-46b9-88af-e165de8ff32b-config-data\") pod \"barbican-keystone-listener-859c997494-lcf5z\" (UID: \"0921ccd3-f346-46b9-88af-e165de8ff32b\") " pod="openstack/barbican-keystone-listener-859c997494-lcf5z" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.777057 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0921ccd3-f346-46b9-88af-e165de8ff32b-logs\") pod \"barbican-keystone-listener-859c997494-lcf5z\" (UID: \"0921ccd3-f346-46b9-88af-e165de8ff32b\") " pod="openstack/barbican-keystone-listener-859c997494-lcf5z" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.777120 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv248\" (UniqueName: \"kubernetes.io/projected/a53968c6-e872-491e-802f-bf76d49b1126-kube-api-access-dv248\") pod \"dnsmasq-dns-5f6d99f797-zszb6\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " 
pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.777169 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0921ccd3-f346-46b9-88af-e165de8ff32b-config-data-custom\") pod \"barbican-keystone-listener-859c997494-lcf5z\" (UID: \"0921ccd3-f346-46b9-88af-e165de8ff32b\") " pod="openstack/barbican-keystone-listener-859c997494-lcf5z" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.777217 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-ovsdbserver-sb\") pod \"dnsmasq-dns-5f6d99f797-zszb6\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.777255 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n65q\" (UniqueName: \"kubernetes.io/projected/0921ccd3-f346-46b9-88af-e165de8ff32b-kube-api-access-6n65q\") pod \"barbican-keystone-listener-859c997494-lcf5z\" (UID: \"0921ccd3-f346-46b9-88af-e165de8ff32b\") " pod="openstack/barbican-keystone-listener-859c997494-lcf5z" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.778153 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0921ccd3-f346-46b9-88af-e165de8ff32b-logs\") pod \"barbican-keystone-listener-859c997494-lcf5z\" (UID: \"0921ccd3-f346-46b9-88af-e165de8ff32b\") " pod="openstack/barbican-keystone-listener-859c997494-lcf5z" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.779073 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-dns-svc\") pod \"dnsmasq-dns-5f6d99f797-zszb6\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.779153 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-ovsdbserver-sb\") pod \"dnsmasq-dns-5f6d99f797-zszb6\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.782449 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-dns-swift-storage-0\") pod \"dnsmasq-dns-5f6d99f797-zszb6\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.784899 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-config\") pod \"dnsmasq-dns-5f6d99f797-zszb6\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.785386 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-ovsdbserver-nb\") pod \"dnsmasq-dns-5f6d99f797-zszb6\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " 
pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.785905 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0921ccd3-f346-46b9-88af-e165de8ff32b-config-data-custom\") pod \"barbican-keystone-listener-859c997494-lcf5z\" (UID: \"0921ccd3-f346-46b9-88af-e165de8ff32b\") " pod="openstack/barbican-keystone-listener-859c997494-lcf5z" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.793824 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0921ccd3-f346-46b9-88af-e165de8ff32b-combined-ca-bundle\") pod \"barbican-keystone-listener-859c997494-lcf5z\" (UID: \"0921ccd3-f346-46b9-88af-e165de8ff32b\") " pod="openstack/barbican-keystone-listener-859c997494-lcf5z" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.796735 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv248\" (UniqueName: \"kubernetes.io/projected/a53968c6-e872-491e-802f-bf76d49b1126-kube-api-access-dv248\") pod \"dnsmasq-dns-5f6d99f797-zszb6\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.802212 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n65q\" (UniqueName: \"kubernetes.io/projected/0921ccd3-f346-46b9-88af-e165de8ff32b-kube-api-access-6n65q\") pod \"barbican-keystone-listener-859c997494-lcf5z\" (UID: \"0921ccd3-f346-46b9-88af-e165de8ff32b\") " pod="openstack/barbican-keystone-listener-859c997494-lcf5z" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.822309 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0921ccd3-f346-46b9-88af-e165de8ff32b-config-data\") pod \"barbican-keystone-listener-859c997494-lcf5z\" (UID: \"0921ccd3-f346-46b9-88af-e165de8ff32b\") " pod="openstack/barbican-keystone-listener-859c997494-lcf5z" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.879557 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krwlz\" (UniqueName: \"kubernetes.io/projected/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-kube-api-access-krwlz\") pod \"barbican-api-74c7b6dbd-5dc68\" (UID: \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\") " pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.879753 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-config-data\") pod \"barbican-api-74c7b6dbd-5dc68\" (UID: \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\") " pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.880071 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-config-data-custom\") pod \"barbican-api-74c7b6dbd-5dc68\" (UID: \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\") " pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.880276 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-logs\") pod \"barbican-api-74c7b6dbd-5dc68\" (UID: \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\") " pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.880674 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-combined-ca-bundle\") pod \"barbican-api-74c7b6dbd-5dc68\" (UID: \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\") " pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.982120 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-config-data\") pod \"barbican-api-74c7b6dbd-5dc68\" (UID: \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\") " pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.982523 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-config-data-custom\") pod \"barbican-api-74c7b6dbd-5dc68\" (UID: \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\") " pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.982544 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-logs\") pod \"barbican-api-74c7b6dbd-5dc68\" (UID: \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\") " pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.982567 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-combined-ca-bundle\") pod \"barbican-api-74c7b6dbd-5dc68\" (UID: \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\") " pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.982647 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krwlz\" (UniqueName: \"kubernetes.io/projected/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-kube-api-access-krwlz\") pod \"barbican-api-74c7b6dbd-5dc68\" (UID: \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\") " pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.983829 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-logs\") pod \"barbican-api-74c7b6dbd-5dc68\" (UID: \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\") " pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.988119 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-combined-ca-bundle\") pod \"barbican-api-74c7b6dbd-5dc68\" (UID: \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\") " pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.989598 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-config-data\") pod 
\"barbican-api-74c7b6dbd-5dc68\" (UID: \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\") " pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:30 crc kubenswrapper[4895]: I1206 07:25:30.993035 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-config-data-custom\") pod \"barbican-api-74c7b6dbd-5dc68\" (UID: \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\") " pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:31 crc kubenswrapper[4895]: I1206 07:25:31.005325 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krwlz\" (UniqueName: \"kubernetes.io/projected/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-kube-api-access-krwlz\") pod \"barbican-api-74c7b6dbd-5dc68\" (UID: \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\") " pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:31 crc kubenswrapper[4895]: I1206 07:25:31.032760 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:25:31 crc kubenswrapper[4895]: I1206 07:25:31.034485 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:31 crc kubenswrapper[4895]: I1206 07:25:31.099022 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-859c997494-lcf5z" Dec 06 07:25:31 crc kubenswrapper[4895]: I1206 07:25:31.160002 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:31 crc kubenswrapper[4895]: I1206 07:25:31.403205 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-698445c967-xk6g2"] Dec 06 07:25:31 crc kubenswrapper[4895]: I1206 07:25:31.673908 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-859c997494-lcf5z"] Dec 06 07:25:31 crc kubenswrapper[4895]: I1206 07:25:31.765679 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74c7b6dbd-5dc68"] Dec 06 07:25:31 crc kubenswrapper[4895]: I1206 07:25:31.815130 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f6d99f797-zszb6"] Dec 06 07:25:32 crc kubenswrapper[4895]: I1206 07:25:32.041052 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-859c997494-lcf5z" event={"ID":"0921ccd3-f346-46b9-88af-e165de8ff32b","Type":"ContainerStarted","Data":"d081981a6580c0efb5613bcfab20f4197943160d8b01ee30f1f22f1cebb943c2"} Dec 06 07:25:32 crc kubenswrapper[4895]: I1206 07:25:32.042765 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74c7b6dbd-5dc68" event={"ID":"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2","Type":"ContainerStarted","Data":"26fa1dbb83e9ed273f553e6963bd5241148d37bf6741210179603303a00229b6"} Dec 06 07:25:32 crc kubenswrapper[4895]: I1206 07:25:32.044011 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" event={"ID":"a53968c6-e872-491e-802f-bf76d49b1126","Type":"ContainerStarted","Data":"672f2b1debd3f15d176af49cdb591f48f4bafb746ddafb7bc4a81220bc1bdb48"} Dec 06 07:25:32 crc kubenswrapper[4895]: I1206 07:25:32.046045 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-698445c967-xk6g2" 
event={"ID":"46664967-bc44-4dd5-8fa7-419d1f7741fd","Type":"ContainerStarted","Data":"df2c2628c8c415822da1079ad92f9bc8f4d64f7fc0fe08c1cdb4c7031f860004"} Dec 06 07:25:33 crc kubenswrapper[4895]: I1206 07:25:33.061266 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" event={"ID":"a53968c6-e872-491e-802f-bf76d49b1126","Type":"ContainerStarted","Data":"cb563ecf4e6db2a22626a40362a0ffe8a93490846248596c8cd44e9e0f54831b"} Dec 06 07:25:33 crc kubenswrapper[4895]: I1206 07:25:33.065889 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74c7b6dbd-5dc68" event={"ID":"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2","Type":"ContainerStarted","Data":"89e285a35193e92fee8f1c099edc790ab0dabd33a64da76146996b644f40fd9e"} Dec 06 07:25:33 crc kubenswrapper[4895]: I1206 07:25:33.844849 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-55c5754c9b-2g2cg"] Dec 06 07:25:33 crc kubenswrapper[4895]: I1206 07:25:33.848599 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:33 crc kubenswrapper[4895]: I1206 07:25:33.850864 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 06 07:25:33 crc kubenswrapper[4895]: I1206 07:25:33.852735 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 06 07:25:33 crc kubenswrapper[4895]: I1206 07:25:33.865008 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55c5754c9b-2g2cg"] Dec 06 07:25:33 crc kubenswrapper[4895]: I1206 07:25:33.954341 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-logs\") pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:33 crc kubenswrapper[4895]: I1206 07:25:33.954587 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-internal-tls-certs\") pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:33 crc kubenswrapper[4895]: I1206 07:25:33.954649 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9lph\" (UniqueName: \"kubernetes.io/projected/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-kube-api-access-h9lph\") pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:33 crc kubenswrapper[4895]: I1206 07:25:33.954756 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-public-tls-certs\") pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:33 crc kubenswrapper[4895]: I1206 07:25:33.954826 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-config-data-custom\") 
pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:33 crc kubenswrapper[4895]: I1206 07:25:33.954874 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-combined-ca-bundle\") pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:33 crc kubenswrapper[4895]: I1206 07:25:33.954915 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-config-data\") pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:34 crc kubenswrapper[4895]: I1206 07:25:34.057125 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-logs\") pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:34 crc kubenswrapper[4895]: I1206 07:25:34.057190 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-internal-tls-certs\") pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:34 crc kubenswrapper[4895]: I1206 07:25:34.057217 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9lph\" (UniqueName: \"kubernetes.io/projected/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-kube-api-access-h9lph\") pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:34 crc kubenswrapper[4895]: I1206 07:25:34.057267 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-public-tls-certs\") pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:34 crc kubenswrapper[4895]: I1206 07:25:34.057300 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-config-data-custom\") pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:34 crc kubenswrapper[4895]: I1206 07:25:34.057329 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-combined-ca-bundle\") pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:34 crc kubenswrapper[4895]: I1206 07:25:34.057359 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-config-data\") 
pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:34 crc kubenswrapper[4895]: I1206 07:25:34.057901 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-logs\") pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:34 crc kubenswrapper[4895]: I1206 07:25:34.070459 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-config-data-custom\") pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:34 crc kubenswrapper[4895]: I1206 07:25:34.070887 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-config-data\") pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:34 crc kubenswrapper[4895]: I1206 07:25:34.071609 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-combined-ca-bundle\") pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:34 crc kubenswrapper[4895]: I1206 07:25:34.073194 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-public-tls-certs\") pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:34 crc kubenswrapper[4895]: I1206 07:25:34.074647 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9lph\" (UniqueName: \"kubernetes.io/projected/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-kube-api-access-h9lph\") pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:34 crc kubenswrapper[4895]: I1206 07:25:34.077065 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-internal-tls-certs\") pod \"barbican-api-55c5754c9b-2g2cg\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:34 crc kubenswrapper[4895]: I1206 07:25:34.079396 4895 generic.go:334] "Generic (PLEG): container finished" podID="a53968c6-e872-491e-802f-bf76d49b1126" containerID="cb563ecf4e6db2a22626a40362a0ffe8a93490846248596c8cd44e9e0f54831b" exitCode=0 Dec 06 07:25:34 crc kubenswrapper[4895]: I1206 07:25:34.079463 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" event={"ID":"a53968c6-e872-491e-802f-bf76d49b1126","Type":"ContainerDied","Data":"cb563ecf4e6db2a22626a40362a0ffe8a93490846248596c8cd44e9e0f54831b"} Dec 06 07:25:34 crc kubenswrapper[4895]: I1206 07:25:34.087406 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74c7b6dbd-5dc68" 
event={"ID":"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2","Type":"ContainerStarted","Data":"c773e587cb726ea1655a3226b4131be088b2cde0029ca6d67703eade89fe0caf"} Dec 06 07:25:34 crc kubenswrapper[4895]: I1206 07:25:34.168234 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:34 crc kubenswrapper[4895]: W1206 07:25:34.652939 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24216683_0ca8_44dd_8bfa_a7d0a84cf3cc.slice/crio-b078dcddcc26910fa28559e80886b0745b977897d5c5be7e16dbd09f5ce2e0cc WatchSource:0}: Error finding container b078dcddcc26910fa28559e80886b0745b977897d5c5be7e16dbd09f5ce2e0cc: Status 404 returned error can't find the container with id b078dcddcc26910fa28559e80886b0745b977897d5c5be7e16dbd09f5ce2e0cc Dec 06 07:25:34 crc kubenswrapper[4895]: I1206 07:25:34.654448 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55c5754c9b-2g2cg"] Dec 06 07:25:35 crc kubenswrapper[4895]: I1206 07:25:35.123779 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" event={"ID":"a53968c6-e872-491e-802f-bf76d49b1126","Type":"ContainerStarted","Data":"fac7bdcf48744c09031f4220e3ad8e731371ab796eab33852a78df133cedfcae"} Dec 06 07:25:35 crc kubenswrapper[4895]: I1206 07:25:35.125336 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55c5754c9b-2g2cg" event={"ID":"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc","Type":"ContainerStarted","Data":"b078dcddcc26910fa28559e80886b0745b977897d5c5be7e16dbd09f5ce2e0cc"} Dec 06 07:25:35 crc kubenswrapper[4895]: I1206 07:25:35.125381 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:35 crc kubenswrapper[4895]: I1206 07:25:35.125400 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:35 crc kubenswrapper[4895]: I1206 07:25:35.150720 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-74c7b6dbd-5dc68" podStartSLOduration=5.150695292 podStartE2EDuration="5.150695292s" podCreationTimestamp="2025-12-06 07:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:25:35.140543149 +0000 UTC m=+1697.541932019" watchObservedRunningTime="2025-12-06 07:25:35.150695292 +0000 UTC m=+1697.552084162" Dec 06 07:25:35 crc kubenswrapper[4895]: I1206 07:25:35.188190 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:25:35 crc kubenswrapper[4895]: I1206 07:25:35.188453 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerName="ceilometer-central-agent" containerID="cri-o://78f3a29d45d3d925d78072c3abeb5f2cf8d565c52d83bc7190619c62f136417e" gracePeriod=30 Dec 06 07:25:35 crc kubenswrapper[4895]: I1206 07:25:35.188588 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerName="sg-core" containerID="cri-o://2415954ecad81457802f015db6daee7be33dc9246e9d07d3767c276cd679221c" gracePeriod=30 Dec 06 07:25:35 crc kubenswrapper[4895]: I1206 07:25:35.188616 4895 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerName="ceilometer-notification-agent" containerID="cri-o://6d62dd7c6dd08e80e39c5109c5985e815de052be26e82159b042f00ee70fce9c" gracePeriod=30 Dec 06 07:25:35 crc kubenswrapper[4895]: I1206 07:25:35.188728 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerName="proxy-httpd" containerID="cri-o://0e998093a5744fc963d027f5d3fd2e260009a092673c70fc61ce0e4946af1f83" gracePeriod=30 Dec 06 07:25:35 crc kubenswrapper[4895]: I1206 07:25:35.289466 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.146:3000/\": read tcp 10.217.0.2:57432->10.217.0.146:3000: read: connection reset by peer" Dec 06 07:25:36 crc kubenswrapper[4895]: I1206 07:25:36.140407 4895 generic.go:334] "Generic (PLEG): container finished" podID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerID="2415954ecad81457802f015db6daee7be33dc9246e9d07d3767c276cd679221c" exitCode=2 Dec 06 07:25:36 crc kubenswrapper[4895]: I1206 07:25:36.140485 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a","Type":"ContainerDied","Data":"2415954ecad81457802f015db6daee7be33dc9246e9d07d3767c276cd679221c"} Dec 06 07:25:38 crc kubenswrapper[4895]: I1206 07:25:38.053020 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:25:38 crc kubenswrapper[4895]: E1206 07:25:38.053495 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:25:38 crc kubenswrapper[4895]: I1206 07:25:38.158962 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55c5754c9b-2g2cg" event={"ID":"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc","Type":"ContainerStarted","Data":"f6e1772fc1e3b4a1e20cfa0b70cf73a68a084a2daea75a81ee655fc0fe7abc9e"} Dec 06 07:25:38 crc kubenswrapper[4895]: I1206 07:25:38.161594 4895 generic.go:334] "Generic (PLEG): container finished" podID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerID="0e998093a5744fc963d027f5d3fd2e260009a092673c70fc61ce0e4946af1f83" exitCode=0 Dec 06 07:25:38 crc kubenswrapper[4895]: I1206 07:25:38.161639 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a","Type":"ContainerDied","Data":"0e998093a5744fc963d027f5d3fd2e260009a092673c70fc61ce0e4946af1f83"} Dec 06 07:25:39 crc kubenswrapper[4895]: I1206 07:25:39.178095 4895 generic.go:334] "Generic (PLEG): container finished" podID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerID="78f3a29d45d3d925d78072c3abeb5f2cf8d565c52d83bc7190619c62f136417e" exitCode=0 Dec 06 07:25:39 crc kubenswrapper[4895]: I1206 07:25:39.178158 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a","Type":"ContainerDied","Data":"78f3a29d45d3d925d78072c3abeb5f2cf8d565c52d83bc7190619c62f136417e"} Dec 06 07:25:39 crc kubenswrapper[4895]: I1206 07:25:39.178661 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:39 crc kubenswrapper[4895]: I1206 07:25:39.182645 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:25:39 crc kubenswrapper[4895]: I1206 07:25:39.245090 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" podStartSLOduration=9.245063698 podStartE2EDuration="9.245063698s" podCreationTimestamp="2025-12-06 07:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:25:39.205521886 +0000 UTC m=+1701.606910756" watchObservedRunningTime="2025-12-06 07:25:39.245063698 +0000 UTC m=+1701.646452568" Dec 06 07:25:39 crc kubenswrapper[4895]: I1206 07:25:39.301561 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59bfd87765-ptgqn"] Dec 06 07:25:39 crc kubenswrapper[4895]: I1206 07:25:39.302416 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" podUID="d2ec0f50-582c-4fba-8d4f-c4da28576e2a" containerName="dnsmasq-dns" containerID="cri-o://9a0f5ecc550d27cbfb83305ca5d2198b4247293591f4afde235b092cfe7429b5" gracePeriod=10 Dec 06 07:25:40 crc kubenswrapper[4895]: I1206 07:25:40.201394 4895 generic.go:334] "Generic (PLEG): container finished" podID="d2ec0f50-582c-4fba-8d4f-c4da28576e2a" containerID="9a0f5ecc550d27cbfb83305ca5d2198b4247293591f4afde235b092cfe7429b5" exitCode=0 Dec 06 07:25:40 crc kubenswrapper[4895]: I1206 07:25:40.202046 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" event={"ID":"d2ec0f50-582c-4fba-8d4f-c4da28576e2a","Type":"ContainerDied","Data":"9a0f5ecc550d27cbfb83305ca5d2198b4247293591f4afde235b092cfe7429b5"} Dec 06 07:25:40 crc kubenswrapper[4895]: I1206 07:25:40.212916 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55c5754c9b-2g2cg" event={"ID":"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc","Type":"ContainerStarted","Data":"22b5d520fb5b105ce641543b32b2d0e8817cc0873901f3804e0a3eecf575919f"} Dec 06 07:25:40 crc kubenswrapper[4895]: I1206 07:25:40.214188 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:40 crc kubenswrapper[4895]: I1206 07:25:40.214230 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:40 crc kubenswrapper[4895]: I1206 07:25:40.225822 4895 generic.go:334] "Generic (PLEG): container finished" podID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerID="6d62dd7c6dd08e80e39c5109c5985e815de052be26e82159b042f00ee70fce9c" exitCode=0 Dec 06 07:25:40 crc kubenswrapper[4895]: I1206 07:25:40.226886 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a","Type":"ContainerDied","Data":"6d62dd7c6dd08e80e39c5109c5985e815de052be26e82159b042f00ee70fce9c"} Dec 06 07:25:40 crc kubenswrapper[4895]: I1206 07:25:40.251771 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-api-55c5754c9b-2g2cg" podStartSLOduration=7.251748328 podStartE2EDuration="7.251748328s" podCreationTimestamp="2025-12-06 07:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:25:40.237409423 +0000 UTC m=+1702.638798303" watchObservedRunningTime="2025-12-06 07:25:40.251748328 +0000 UTC m=+1702.653137198" Dec 06 07:25:40 crc kubenswrapper[4895]: I1206 07:25:40.977157 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.095611 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-ovsdbserver-sb\") pod \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.096303 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xdd6\" (UniqueName: \"kubernetes.io/projected/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-kube-api-access-5xdd6\") pod \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.096379 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-dns-svc\") pod \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.096466 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-dns-swift-storage-0\") pod \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.096543 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-config\") pod \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.096587 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-ovsdbserver-nb\") pod \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\" (UID: \"d2ec0f50-582c-4fba-8d4f-c4da28576e2a\") " Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.101038 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-kube-api-access-5xdd6" (OuterVolumeSpecName: "kube-api-access-5xdd6") pod "d2ec0f50-582c-4fba-8d4f-c4da28576e2a" (UID: "d2ec0f50-582c-4fba-8d4f-c4da28576e2a"). InnerVolumeSpecName "kube-api-access-5xdd6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.141057 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d2ec0f50-582c-4fba-8d4f-c4da28576e2a" (UID: "d2ec0f50-582c-4fba-8d4f-c4da28576e2a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.143064 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d2ec0f50-582c-4fba-8d4f-c4da28576e2a" (UID: "d2ec0f50-582c-4fba-8d4f-c4da28576e2a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.146713 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2ec0f50-582c-4fba-8d4f-c4da28576e2a" (UID: "d2ec0f50-582c-4fba-8d4f-c4da28576e2a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.146839 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d2ec0f50-582c-4fba-8d4f-c4da28576e2a" (UID: "d2ec0f50-582c-4fba-8d4f-c4da28576e2a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.167255 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-config" (OuterVolumeSpecName: "config") pod "d2ec0f50-582c-4fba-8d4f-c4da28576e2a" (UID: "d2ec0f50-582c-4fba-8d4f-c4da28576e2a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.198894 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.198930 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xdd6\" (UniqueName: \"kubernetes.io/projected/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-kube-api-access-5xdd6\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.198941 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.198950 4895 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.198958 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.198968 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2ec0f50-582c-4fba-8d4f-c4da28576e2a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.241647 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.241707 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59bfd87765-ptgqn" event={"ID":"d2ec0f50-582c-4fba-8d4f-c4da28576e2a","Type":"ContainerDied","Data":"003ba2782720c555cec71a8e5f2923ce17efba7d5597974759f1aaa63d1c6ecd"} Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.250609 4895 scope.go:117] "RemoveContainer" containerID="9a0f5ecc550d27cbfb83305ca5d2198b4247293591f4afde235b092cfe7429b5" Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.280207 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59bfd87765-ptgqn"] Dec 06 07:25:41 crc kubenswrapper[4895]: I1206 07:25:41.289443 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59bfd87765-ptgqn"] Dec 06 07:25:42 crc kubenswrapper[4895]: I1206 07:25:42.063409 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2ec0f50-582c-4fba-8d4f-c4da28576e2a" path="/var/lib/kubelet/pods/d2ec0f50-582c-4fba-8d4f-c4da28576e2a/volumes" Dec 06 07:25:42 crc kubenswrapper[4895]: I1206 07:25:42.201689 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-74c7b6dbd-5dc68" podUID="ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 07:25:44 crc kubenswrapper[4895]: E1206 07:25:44.839591 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-barbican-worker@sha256:d2fbe075d21195b746fd27a073dbd249d38b3c4f81c30d162770a338fb87e338" Dec 06 07:25:44 crc kubenswrapper[4895]: E1206 07:25:44.839767 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-worker-log,Image:quay.io/podified-antelope-centos9/openstack-barbican-worker@sha256:d2fbe075d21195b746fd27a073dbd249d38b3c4f81c30d162770a338fb87e338,Command:[/usr/bin/dumb-init],Args:[--single-child -- /usr/bin/tail -n+1 -F /var/log/barbican/barbican-worker.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h58dhfdh89h6h86h75h5bdh666h5d5hdfh589h9fh55hf6h68dh69h8fh575hfbh5f7h5c4hcchf4hdfh5bdh65ch668h594h5f9h59fh94q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/barbican,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmcsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-worker-698445c967-xk6g2_openstack(46664967-bc44-4dd5-8fa7-419d1f7741fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:25:44 crc kubenswrapper[4895]: E1206 07:25:44.842285 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"barbican-worker-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"barbican-worker\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-worker@sha256:d2fbe075d21195b746fd27a073dbd249d38b3c4f81c30d162770a338fb87e338\\\"\"]" pod="openstack/barbican-worker-698445c967-xk6g2" podUID="46664967-bc44-4dd5-8fa7-419d1f7741fd" Dec 06 07:25:45 crc kubenswrapper[4895]: E1206 07:25:45.285105 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"barbican-worker-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-worker@sha256:d2fbe075d21195b746fd27a073dbd249d38b3c4f81c30d162770a338fb87e338\\\"\", failed to \"StartContainer\" for \"barbican-worker\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-worker@sha256:d2fbe075d21195b746fd27a073dbd249d38b3c4f81c30d162770a338fb87e338\\\"\"]" pod="openstack/barbican-worker-698445c967-xk6g2" podUID="46664967-bc44-4dd5-8fa7-419d1f7741fd" Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.671607 4895 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.678192 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.681200 4895 scope.go:117] "RemoveContainer" containerID="6affea60218b6a7b9f9fa4785befa681cf3a9799671151ccabc535560a0fa386" Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.769437 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.835782 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.892178 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcvcg\" (UniqueName: \"kubernetes.io/projected/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-kube-api-access-tcvcg\") pod \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.892251 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-sg-core-conf-yaml\") pod \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.892291 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-combined-ca-bundle\") pod \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.892331 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-scripts\") pod \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.892376 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-log-httpd\") pod \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.892410 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-config-data\") pod \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.892439 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-run-httpd\") pod \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\" (UID: \"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a\") " Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.896002 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" (UID: 
"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.898223 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" (UID: "d4b3f48d-e115-4ed9-86a2-1a9209c95e8a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.901772 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-kube-api-access-tcvcg" (OuterVolumeSpecName: "kube-api-access-tcvcg") pod "d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" (UID: "d4b3f48d-e115-4ed9-86a2-1a9209c95e8a"). InnerVolumeSpecName "kube-api-access-tcvcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.933665 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-scripts" (OuterVolumeSpecName: "scripts") pod "d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" (UID: "d4b3f48d-e115-4ed9-86a2-1a9209c95e8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.939944 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" (UID: "d4b3f48d-e115-4ed9-86a2-1a9209c95e8a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.995675 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.995704 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcvcg\" (UniqueName: \"kubernetes.io/projected/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-kube-api-access-tcvcg\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.995718 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.995727 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:45 crc kubenswrapper[4895]: I1206 07:25:45.995737 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.013412 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-config-data" (OuterVolumeSpecName: "config-data") pod "d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" (UID: "d4b3f48d-e115-4ed9-86a2-1a9209c95e8a"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.112940 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.295255 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4b3f48d-e115-4ed9-86a2-1a9209c95e8a","Type":"ContainerDied","Data":"6ea7ff4329b92e145f8bed14024aa006af4eac3ae7a10b0655e7e2e4fc4e3fb8"} Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.295293 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.295670 4895 scope.go:117] "RemoveContainer" containerID="0e998093a5744fc963d027f5d3fd2e260009a092673c70fc61ce0e4946af1f83" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.331863 4895 scope.go:117] "RemoveContainer" containerID="2415954ecad81457802f015db6daee7be33dc9246e9d07d3767c276cd679221c" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.335815 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" (UID: "d4b3f48d-e115-4ed9-86a2-1a9209c95e8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.356839 4895 scope.go:117] "RemoveContainer" containerID="6d62dd7c6dd08e80e39c5109c5985e815de052be26e82159b042f00ee70fce9c" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.380422 4895 scope.go:117] "RemoveContainer" containerID="78f3a29d45d3d925d78072c3abeb5f2cf8d565c52d83bc7190619c62f136417e" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.418256 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.650620 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.683897 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.729798 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:25:46 crc kubenswrapper[4895]: E1206 07:25:46.733069 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerName="sg-core" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.733105 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerName="sg-core" Dec 06 07:25:46 crc kubenswrapper[4895]: E1206 07:25:46.733118 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerName="ceilometer-central-agent" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.733125 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerName="ceilometer-central-agent" Dec 06 07:25:46 crc kubenswrapper[4895]: E1206 07:25:46.733147 
4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerName="proxy-httpd" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.733153 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerName="proxy-httpd" Dec 06 07:25:46 crc kubenswrapper[4895]: E1206 07:25:46.733163 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ec0f50-582c-4fba-8d4f-c4da28576e2a" containerName="dnsmasq-dns" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.733169 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ec0f50-582c-4fba-8d4f-c4da28576e2a" containerName="dnsmasq-dns" Dec 06 07:25:46 crc kubenswrapper[4895]: E1206 07:25:46.733193 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerName="ceilometer-notification-agent" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.733199 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerName="ceilometer-notification-agent" Dec 06 07:25:46 crc kubenswrapper[4895]: E1206 07:25:46.733206 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ec0f50-582c-4fba-8d4f-c4da28576e2a" containerName="init" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.733211 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ec0f50-582c-4fba-8d4f-c4da28576e2a" containerName="init" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.733535 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2ec0f50-582c-4fba-8d4f-c4da28576e2a" containerName="dnsmasq-dns" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.733559 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerName="sg-core" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.733569 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerName="proxy-httpd" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.733584 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerName="ceilometer-notification-agent" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.733593 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" containerName="ceilometer-central-agent" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.735200 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.738947 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.740167 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.742959 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.832742 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3545654-a687-45f8-baf6-d3930df1545f-run-httpd\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.832826 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.832865 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3545654-a687-45f8-baf6-d3930df1545f-log-httpd\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.832964 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dnmw\" (UniqueName: \"kubernetes.io/projected/d3545654-a687-45f8-baf6-d3930df1545f-kube-api-access-9dnmw\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.833010 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.833219 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-scripts\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.833296 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-config-data\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.916156 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.935791 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.935863 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3545654-a687-45f8-baf6-d3930df1545f-log-httpd\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.935929 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dnmw\" (UniqueName: \"kubernetes.io/projected/d3545654-a687-45f8-baf6-d3930df1545f-kube-api-access-9dnmw\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.935975 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.936015 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-scripts\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.936041 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-config-data\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.936112 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3545654-a687-45f8-baf6-d3930df1545f-run-httpd\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.937113 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3545654-a687-45f8-baf6-d3930df1545f-run-httpd\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.937311 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3545654-a687-45f8-baf6-d3930df1545f-log-httpd\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.940958 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.946296 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-scripts\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.959659 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.961007 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-config-data\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.969225 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dnmw\" (UniqueName: \"kubernetes.io/projected/d3545654-a687-45f8-baf6-d3930df1545f-kube-api-access-9dnmw\") pod \"ceilometer-0\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " pod="openstack/ceilometer-0" Dec 06 07:25:46 crc kubenswrapper[4895]: I1206 07:25:46.996567 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74c7b6dbd-5dc68"] Dec 06 07:25:47 crc kubenswrapper[4895]: I1206 07:25:47.058572 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:25:47 crc kubenswrapper[4895]: I1206 07:25:47.320578 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74c7b6dbd-5dc68" podUID="ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" containerName="barbican-api-log" containerID="cri-o://89e285a35193e92fee8f1c099edc790ab0dabd33a64da76146996b644f40fd9e" gracePeriod=30 Dec 06 07:25:47 crc kubenswrapper[4895]: I1206 07:25:47.322776 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74c7b6dbd-5dc68" podUID="ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" containerName="barbican-api" containerID="cri-o://c773e587cb726ea1655a3226b4131be088b2cde0029ca6d67703eade89fe0caf" gracePeriod=30 Dec 06 07:25:47 crc kubenswrapper[4895]: I1206 07:25:47.331605 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-74c7b6dbd-5dc68" podUID="ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": EOF" Dec 06 07:25:47 crc kubenswrapper[4895]: I1206 07:25:47.332237 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-74c7b6dbd-5dc68" podUID="ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": EOF" Dec 06 07:25:47 crc kubenswrapper[4895]: I1206 07:25:47.736611 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:25:48 crc kubenswrapper[4895]: I1206 07:25:48.066525 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4b3f48d-e115-4ed9-86a2-1a9209c95e8a" path="/var/lib/kubelet/pods/d4b3f48d-e115-4ed9-86a2-1a9209c95e8a/volumes" Dec 06 07:25:48 crc kubenswrapper[4895]: I1206 07:25:48.331307 4895 generic.go:334] "Generic (PLEG): container finished" podID="ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" 
containerID="89e285a35193e92fee8f1c099edc790ab0dabd33a64da76146996b644f40fd9e" exitCode=143 Dec 06 07:25:48 crc kubenswrapper[4895]: I1206 07:25:48.331359 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74c7b6dbd-5dc68" event={"ID":"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2","Type":"ContainerDied","Data":"89e285a35193e92fee8f1c099edc790ab0dabd33a64da76146996b644f40fd9e"} Dec 06 07:25:48 crc kubenswrapper[4895]: W1206 07:25:48.369019 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3545654_a687_45f8_baf6_d3930df1545f.slice/crio-51ce0f7d8c019ce8ee4640c24e5bc358faaaff2c5a5d1d642ebb7611d3982609 WatchSource:0}: Error finding container 51ce0f7d8c019ce8ee4640c24e5bc358faaaff2c5a5d1d642ebb7611d3982609: Status 404 returned error can't find the container with id 51ce0f7d8c019ce8ee4640c24e5bc358faaaff2c5a5d1d642ebb7611d3982609 Dec 06 07:25:49 crc kubenswrapper[4895]: I1206 07:25:49.050575 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:25:49 crc kubenswrapper[4895]: E1206 07:25:49.051158 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:25:49 crc kubenswrapper[4895]: I1206 07:25:49.345021 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-859c997494-lcf5z" event={"ID":"0921ccd3-f346-46b9-88af-e165de8ff32b","Type":"ContainerStarted","Data":"f772918d22085feff7f986c973ad071936dcff2e69c1f8d298ba9eba11341e18"} Dec 06 07:25:49 crc kubenswrapper[4895]: I1206 07:25:49.345901 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3545654-a687-45f8-baf6-d3930df1545f","Type":"ContainerStarted","Data":"51ce0f7d8c019ce8ee4640c24e5bc358faaaff2c5a5d1d642ebb7611d3982609"} Dec 06 07:25:50 crc kubenswrapper[4895]: I1206 07:25:50.359189 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-859c997494-lcf5z" event={"ID":"0921ccd3-f346-46b9-88af-e165de8ff32b","Type":"ContainerStarted","Data":"9eef442bb2d8f458e303a110fade0f3f49397f1f834b6ae2ba2186b103907a9d"} Dec 06 07:25:51 crc kubenswrapper[4895]: I1206 07:25:51.761683 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-74c7b6dbd-5dc68" podUID="ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": read tcp 10.217.0.2:52342->10.217.0.154:9311: read: connection reset by peer" Dec 06 07:25:51 crc kubenswrapper[4895]: I1206 07:25:51.761693 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-74c7b6dbd-5dc68" podUID="ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": read tcp 10.217.0.2:52366->10.217.0.154:9311: read: connection reset by peer" Dec 06 07:25:51 crc kubenswrapper[4895]: I1206 07:25:51.761804 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-74c7b6dbd-5dc68" 
podUID="ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": read tcp 10.217.0.2:52352->10.217.0.154:9311: read: connection reset by peer" Dec 06 07:25:51 crc kubenswrapper[4895]: I1206 07:25:51.762628 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-74c7b6dbd-5dc68" podUID="ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": dial tcp 10.217.0.154:9311: connect: connection refused" Dec 06 07:25:51 crc kubenswrapper[4895]: I1206 07:25:51.762768 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:51 crc kubenswrapper[4895]: I1206 07:25:51.763052 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-74c7b6dbd-5dc68" podUID="ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": dial tcp 10.217.0.154:9311: connect: connection refused" Dec 06 07:25:51 crc kubenswrapper[4895]: I1206 07:25:51.763177 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:51 crc kubenswrapper[4895]: I1206 07:25:51.785063 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-859c997494-lcf5z" podStartSLOduration=4.870354959 podStartE2EDuration="21.785036079s" podCreationTimestamp="2025-12-06 07:25:30 +0000 UTC" firstStartedPulling="2025-12-06 07:25:31.679750257 +0000 UTC m=+1694.081139127" lastFinishedPulling="2025-12-06 07:25:48.594431377 +0000 UTC m=+1710.995820247" observedRunningTime="2025-12-06 07:25:50.408118501 +0000 UTC m=+1712.809507371" watchObservedRunningTime="2025-12-06 07:25:51.785036079 +0000 UTC m=+1714.186424949" Dec 06 07:25:52 crc kubenswrapper[4895]: I1206 07:25:52.178675 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:25:52 crc kubenswrapper[4895]: I1206 07:25:52.376550 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3545654-a687-45f8-baf6-d3930df1545f","Type":"ContainerStarted","Data":"16eefc5d837bb7747b6800d39d9c6c13c4aeda660f37ddea7ac90708f74e6e41"} Dec 06 07:25:52 crc kubenswrapper[4895]: I1206 07:25:52.378654 4895 generic.go:334] "Generic (PLEG): container finished" podID="ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" containerID="c773e587cb726ea1655a3226b4131be088b2cde0029ca6d67703eade89fe0caf" exitCode=0 Dec 06 07:25:52 crc kubenswrapper[4895]: I1206 07:25:52.378693 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74c7b6dbd-5dc68" event={"ID":"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2","Type":"ContainerDied","Data":"c773e587cb726ea1655a3226b4131be088b2cde0029ca6d67703eade89fe0caf"} Dec 06 07:25:52 crc kubenswrapper[4895]: I1206 07:25:52.935348 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.052255 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-combined-ca-bundle\") pod \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\" (UID: \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\") " Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.052735 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-config-data\") pod \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\" (UID: \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\") " Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.053009 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-logs\") pod \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\" (UID: \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\") " Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.053136 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krwlz\" (UniqueName: \"kubernetes.io/projected/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-kube-api-access-krwlz\") pod \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\" (UID: \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\") " Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.053324 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-config-data-custom\") pod \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\" (UID: \"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2\") " Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.053464 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-logs" (OuterVolumeSpecName: "logs") pod "ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" (UID: "ad96eeff-3f40-4567-9a88-0d7ed6fc13c2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.053991 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-logs\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.058953 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" (UID: "ad96eeff-3f40-4567-9a88-0d7ed6fc13c2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.059687 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-kube-api-access-krwlz" (OuterVolumeSpecName: "kube-api-access-krwlz") pod "ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" (UID: "ad96eeff-3f40-4567-9a88-0d7ed6fc13c2"). InnerVolumeSpecName "kube-api-access-krwlz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.086493 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" (UID: "ad96eeff-3f40-4567-9a88-0d7ed6fc13c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.106650 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-config-data" (OuterVolumeSpecName: "config-data") pod "ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" (UID: "ad96eeff-3f40-4567-9a88-0d7ed6fc13c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.155595 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krwlz\" (UniqueName: \"kubernetes.io/projected/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-kube-api-access-krwlz\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.155833 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.155904 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.155974 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.393699 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74c7b6dbd-5dc68" event={"ID":"ad96eeff-3f40-4567-9a88-0d7ed6fc13c2","Type":"ContainerDied","Data":"26fa1dbb83e9ed273f553e6963bd5241148d37bf6741210179603303a00229b6"} Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.393758 4895 scope.go:117] "RemoveContainer" containerID="c773e587cb726ea1655a3226b4131be088b2cde0029ca6d67703eade89fe0caf" Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.393840 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-74c7b6dbd-5dc68" Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.430755 4895 scope.go:117] "RemoveContainer" containerID="89e285a35193e92fee8f1c099edc790ab0dabd33a64da76146996b644f40fd9e" Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.451385 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74c7b6dbd-5dc68"] Dec 06 07:25:53 crc kubenswrapper[4895]: I1206 07:25:53.463482 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-74c7b6dbd-5dc68"] Dec 06 07:25:53 crc kubenswrapper[4895]: E1206 07:25:53.589310 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad96eeff_3f40_4567_9a88_0d7ed6fc13c2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4b37f8f_5d15_4d1b_aab9_c4852295dcd4.slice/crio-conmon-f86240a4c5102f3ed5bdfa5a65cd0b3f6262f647bcf567c98c4795854511e25d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad96eeff_3f40_4567_9a88_0d7ed6fc13c2.slice/crio-26fa1dbb83e9ed273f553e6963bd5241148d37bf6741210179603303a00229b6\": RecentStats: unable to find data in memory cache]" Dec 06 07:25:54 crc kubenswrapper[4895]: I1206 07:25:54.061947 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" path="/var/lib/kubelet/pods/ad96eeff-3f40-4567-9a88-0d7ed6fc13c2/volumes" Dec 06 07:25:54 crc kubenswrapper[4895]: I1206 07:25:54.408814 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b37f8f-5d15-4d1b-aab9-c4852295dcd4" containerID="f86240a4c5102f3ed5bdfa5a65cd0b3f6262f647bcf567c98c4795854511e25d" exitCode=0 Dec 06 07:25:54 crc kubenswrapper[4895]: I1206 07:25:54.408883 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dq2gw" event={"ID":"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4","Type":"ContainerDied","Data":"f86240a4c5102f3ed5bdfa5a65cd0b3f6262f647bcf567c98c4795854511e25d"} Dec 06 07:25:55 crc kubenswrapper[4895]: I1206 07:25:55.734897 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:25:55 crc kubenswrapper[4895]: I1206 07:25:55.804974 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-etc-machine-id\") pod \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " Dec 06 07:25:55 crc kubenswrapper[4895]: I1206 07:25:55.805052 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f4b37f8f-5d15-4d1b-aab9-c4852295dcd4" (UID: "f4b37f8f-5d15-4d1b-aab9-c4852295dcd4"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:25:55 crc kubenswrapper[4895]: I1206 07:25:55.805141 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-db-sync-config-data\") pod \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " Dec 06 07:25:55 crc kubenswrapper[4895]: I1206 07:25:55.805232 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-config-data\") pod \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " Dec 06 07:25:55 crc kubenswrapper[4895]: I1206 07:25:55.805258 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-combined-ca-bundle\") pod \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " Dec 06 07:25:55 crc kubenswrapper[4895]: I1206 07:25:55.805279 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-scripts\") pod \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " Dec 06 07:25:55 crc kubenswrapper[4895]: I1206 07:25:55.805360 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpdrh\" (UniqueName: \"kubernetes.io/projected/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-kube-api-access-bpdrh\") pod \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\" (UID: \"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4\") " Dec 06 07:25:55 crc kubenswrapper[4895]: I1206 07:25:55.805811 4895 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:55 crc kubenswrapper[4895]: I1206 07:25:55.810244 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-kube-api-access-bpdrh" (OuterVolumeSpecName: "kube-api-access-bpdrh") pod "f4b37f8f-5d15-4d1b-aab9-c4852295dcd4" (UID: "f4b37f8f-5d15-4d1b-aab9-c4852295dcd4"). InnerVolumeSpecName "kube-api-access-bpdrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:25:55 crc kubenswrapper[4895]: I1206 07:25:55.810707 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f4b37f8f-5d15-4d1b-aab9-c4852295dcd4" (UID: "f4b37f8f-5d15-4d1b-aab9-c4852295dcd4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:25:55 crc kubenswrapper[4895]: I1206 07:25:55.812303 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-scripts" (OuterVolumeSpecName: "scripts") pod "f4b37f8f-5d15-4d1b-aab9-c4852295dcd4" (UID: "f4b37f8f-5d15-4d1b-aab9-c4852295dcd4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:25:55 crc kubenswrapper[4895]: I1206 07:25:55.833003 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4b37f8f-5d15-4d1b-aab9-c4852295dcd4" (UID: "f4b37f8f-5d15-4d1b-aab9-c4852295dcd4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:25:55 crc kubenswrapper[4895]: I1206 07:25:55.851747 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-config-data" (OuterVolumeSpecName: "config-data") pod "f4b37f8f-5d15-4d1b-aab9-c4852295dcd4" (UID: "f4b37f8f-5d15-4d1b-aab9-c4852295dcd4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:25:55 crc kubenswrapper[4895]: I1206 07:25:55.907404 4895 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:55 crc kubenswrapper[4895]: I1206 07:25:55.907441 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:55 crc kubenswrapper[4895]: I1206 07:25:55.907452 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:55 crc kubenswrapper[4895]: I1206 07:25:55.907462 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:55 crc kubenswrapper[4895]: I1206 07:25:55.907486 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpdrh\" (UniqueName: \"kubernetes.io/projected/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4-kube-api-access-bpdrh\") on node \"crc\" DevicePath \"\"" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.429413 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dq2gw" event={"ID":"f4b37f8f-5d15-4d1b-aab9-c4852295dcd4","Type":"ContainerDied","Data":"c23d7a83e413f68f3cc994f343d9d4e9347ba4aa670aea532b3e091451728191"} Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.429451 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-dq2gw" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.429501 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c23d7a83e413f68f3cc994f343d9d4e9347ba4aa670aea532b3e091451728191" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.733935 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 07:25:56 crc kubenswrapper[4895]: E1206 07:25:56.734595 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" containerName="barbican-api" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.734615 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" containerName="barbican-api" Dec 06 07:25:56 crc kubenswrapper[4895]: E1206 07:25:56.734643 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b37f8f-5d15-4d1b-aab9-c4852295dcd4" containerName="cinder-db-sync" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.734649 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b37f8f-5d15-4d1b-aab9-c4852295dcd4" containerName="cinder-db-sync" Dec 06 07:25:56 crc kubenswrapper[4895]: E1206 07:25:56.734666 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" containerName="barbican-api-log" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.734673 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" containerName="barbican-api-log" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.734841 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" containerName="barbican-api-log" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.734856 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b37f8f-5d15-4d1b-aab9-c4852295dcd4" containerName="cinder-db-sync" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.734874 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad96eeff-3f40-4567-9a88-0d7ed6fc13c2" containerName="barbican-api" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.735848 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.738226 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cr22b" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.738573 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.738738 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.745107 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.771215 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8495c879d5-xlttw"] Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.772752 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.783674 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.797543 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8495c879d5-xlttw"] Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.829545 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-dns-swift-storage-0\") pod \"dnsmasq-dns-8495c879d5-xlttw\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") " pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.829585 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-ovsdbserver-sb\") pod \"dnsmasq-dns-8495c879d5-xlttw\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") " pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.829622 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " pod="openstack/cinder-scheduler-0" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.829659 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-ovsdbserver-nb\") pod \"dnsmasq-dns-8495c879d5-xlttw\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") " pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.829701 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-scripts\") pod \"cinder-scheduler-0\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " pod="openstack/cinder-scheduler-0" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.829734 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45qb8\" (UniqueName: \"kubernetes.io/projected/64e4925c-937e-47d3-8aab-bfc524875263-kube-api-access-45qb8\") pod \"cinder-scheduler-0\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " pod="openstack/cinder-scheduler-0" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.829753 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64e4925c-937e-47d3-8aab-bfc524875263-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " pod="openstack/cinder-scheduler-0" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.829801 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " 
pod="openstack/cinder-scheduler-0" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.829836 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-config\") pod \"dnsmasq-dns-8495c879d5-xlttw\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") " pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.829904 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-dns-svc\") pod \"dnsmasq-dns-8495c879d5-xlttw\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") " pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.829939 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kvqc\" (UniqueName: \"kubernetes.io/projected/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-kube-api-access-9kvqc\") pod \"dnsmasq-dns-8495c879d5-xlttw\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") " pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.829954 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-config-data\") pod \"cinder-scheduler-0\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " pod="openstack/cinder-scheduler-0" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.931382 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-ovsdbserver-nb\") pod \"dnsmasq-dns-8495c879d5-xlttw\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") " pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.931500 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-scripts\") pod \"cinder-scheduler-0\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " pod="openstack/cinder-scheduler-0" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.931546 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45qb8\" (UniqueName: \"kubernetes.io/projected/64e4925c-937e-47d3-8aab-bfc524875263-kube-api-access-45qb8\") pod \"cinder-scheduler-0\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " pod="openstack/cinder-scheduler-0" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.931574 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64e4925c-937e-47d3-8aab-bfc524875263-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " pod="openstack/cinder-scheduler-0" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.931610 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " pod="openstack/cinder-scheduler-0" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.931641 
4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-config\") pod \"dnsmasq-dns-8495c879d5-xlttw\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") " pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.931706 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-dns-svc\") pod \"dnsmasq-dns-8495c879d5-xlttw\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") " pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.931750 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kvqc\" (UniqueName: \"kubernetes.io/projected/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-kube-api-access-9kvqc\") pod \"dnsmasq-dns-8495c879d5-xlttw\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") " pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.931772 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-config-data\") pod \"cinder-scheduler-0\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " pod="openstack/cinder-scheduler-0" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.931815 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-dns-swift-storage-0\") pod \"dnsmasq-dns-8495c879d5-xlttw\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") " pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.931837 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-ovsdbserver-sb\") pod \"dnsmasq-dns-8495c879d5-xlttw\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") " pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.931870 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " pod="openstack/cinder-scheduler-0" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.932283 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64e4925c-937e-47d3-8aab-bfc524875263-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " pod="openstack/cinder-scheduler-0" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.933457 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-ovsdbserver-sb\") pod \"dnsmasq-dns-8495c879d5-xlttw\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") " pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.933678 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-dns-swift-storage-0\") pod \"dnsmasq-dns-8495c879d5-xlttw\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") " pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.934830 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-config\") pod \"dnsmasq-dns-8495c879d5-xlttw\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") " pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.934977 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-dns-svc\") pod \"dnsmasq-dns-8495c879d5-xlttw\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") " pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.935028 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-ovsdbserver-nb\") pod \"dnsmasq-dns-8495c879d5-xlttw\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") " pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.939759 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " pod="openstack/cinder-scheduler-0" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.939758 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-scripts\") pod \"cinder-scheduler-0\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " pod="openstack/cinder-scheduler-0" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.951105 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-config-data\") pod \"cinder-scheduler-0\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " pod="openstack/cinder-scheduler-0" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.951359 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45qb8\" (UniqueName: \"kubernetes.io/projected/64e4925c-937e-47d3-8aab-bfc524875263-kube-api-access-45qb8\") pod \"cinder-scheduler-0\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " pod="openstack/cinder-scheduler-0" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.953622 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kvqc\" (UniqueName: \"kubernetes.io/projected/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-kube-api-access-9kvqc\") pod \"dnsmasq-dns-8495c879d5-xlttw\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") " pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.959002 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " pod="openstack/cinder-scheduler-0" Dec 06 07:25:56 crc 
kubenswrapper[4895]: I1206 07:25:56.995729 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 06 07:25:56 crc kubenswrapper[4895]: I1206 07:25:56.997876 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.001911 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.011101 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.036796 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-etc-machine-id\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.036904 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-config-data\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.037000 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2mth\" (UniqueName: \"kubernetes.io/projected/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-kube-api-access-r2mth\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.037066 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.037106 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-scripts\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.037147 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-logs\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.037218 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-config-data-custom\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.065215 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.103700 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.138524 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-etc-machine-id\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.138581 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-config-data\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.138648 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2mth\" (UniqueName: \"kubernetes.io/projected/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-kube-api-access-r2mth\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.138688 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.138721 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-scripts\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.138743 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-logs\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.138816 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-config-data-custom\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.140089 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-logs\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.141150 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-etc-machine-id\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.143237 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-scripts\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " 
pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.145214 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.145455 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-config-data-custom\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.149569 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-config-data\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.158691 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2mth\" (UniqueName: \"kubernetes.io/projected/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-kube-api-access-r2mth\") pod \"cinder-api-0\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.343599 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.458198 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3545654-a687-45f8-baf6-d3930df1545f","Type":"ContainerStarted","Data":"6c4be519f3a2c4d38ac94febdc45c7d1d7ff2038f59d6448fc206bd8e235b42d"} Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.603866 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.736348 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8495c879d5-xlttw"] Dec 06 07:25:57 crc kubenswrapper[4895]: W1206 07:25:57.746260 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod494c9693_fb7c_468c_8a34_a6fcfbd35fd7.slice/crio-4d8fc2b37c87225ce7a9941c3d5f63a613e13800fb1f6b6ee019ce06da3a610e WatchSource:0}: Error finding container 4d8fc2b37c87225ce7a9941c3d5f63a613e13800fb1f6b6ee019ce06da3a610e: Status 404 returned error can't find the container with id 4d8fc2b37c87225ce7a9941c3d5f63a613e13800fb1f6b6ee019ce06da3a610e Dec 06 07:25:57 crc kubenswrapper[4895]: I1206 07:25:57.862119 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 06 07:25:57 crc kubenswrapper[4895]: W1206 07:25:57.868377 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13f63fc7_ebd6_4e41_ba4f_c5ea93fc9c31.slice/crio-934bd17c3e86ba7e61c82c222ae4413bf1742e01ba84336e7fa9cbee49f7f2f7 WatchSource:0}: Error finding container 934bd17c3e86ba7e61c82c222ae4413bf1742e01ba84336e7fa9cbee49f7f2f7: Status 404 returned error can't find the container with id 934bd17c3e86ba7e61c82c222ae4413bf1742e01ba84336e7fa9cbee49f7f2f7 Dec 06 07:25:58 crc kubenswrapper[4895]: I1206 07:25:58.482040 4895 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31","Type":"ContainerStarted","Data":"934bd17c3e86ba7e61c82c222ae4413bf1742e01ba84336e7fa9cbee49f7f2f7"} Dec 06 07:25:58 crc kubenswrapper[4895]: I1206 07:25:58.487209 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"64e4925c-937e-47d3-8aab-bfc524875263","Type":"ContainerStarted","Data":"0520cd2f7198765f1dfb7d6aa9e5355c5ff72ad7e8f416de075f3d895de97032"} Dec 06 07:25:58 crc kubenswrapper[4895]: I1206 07:25:58.489173 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8495c879d5-xlttw" event={"ID":"494c9693-fb7c-468c-8a34-a6fcfbd35fd7","Type":"ContainerStarted","Data":"4d8fc2b37c87225ce7a9941c3d5f63a613e13800fb1f6b6ee019ce06da3a610e"} Dec 06 07:25:58 crc kubenswrapper[4895]: I1206 07:25:58.758598 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mll22"] Dec 06 07:25:58 crc kubenswrapper[4895]: I1206 07:25:58.762366 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mll22" Dec 06 07:25:58 crc kubenswrapper[4895]: I1206 07:25:58.768971 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mll22"] Dec 06 07:25:58 crc kubenswrapper[4895]: I1206 07:25:58.898517 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15ddd748-caf4-44ee-a574-50249d2ac07d-utilities\") pod \"community-operators-mll22\" (UID: \"15ddd748-caf4-44ee-a574-50249d2ac07d\") " pod="openshift-marketplace/community-operators-mll22" Dec 06 07:25:58 crc kubenswrapper[4895]: I1206 07:25:58.898579 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsnq5\" (UniqueName: \"kubernetes.io/projected/15ddd748-caf4-44ee-a574-50249d2ac07d-kube-api-access-fsnq5\") pod \"community-operators-mll22\" (UID: \"15ddd748-caf4-44ee-a574-50249d2ac07d\") " pod="openshift-marketplace/community-operators-mll22" Dec 06 07:25:58 crc kubenswrapper[4895]: I1206 07:25:58.898634 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15ddd748-caf4-44ee-a574-50249d2ac07d-catalog-content\") pod \"community-operators-mll22\" (UID: \"15ddd748-caf4-44ee-a574-50249d2ac07d\") " pod="openshift-marketplace/community-operators-mll22" Dec 06 07:25:58 crc kubenswrapper[4895]: I1206 07:25:58.984600 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 06 07:25:59 crc kubenswrapper[4895]: I1206 07:25:59.001405 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15ddd748-caf4-44ee-a574-50249d2ac07d-catalog-content\") pod \"community-operators-mll22\" (UID: \"15ddd748-caf4-44ee-a574-50249d2ac07d\") " pod="openshift-marketplace/community-operators-mll22" Dec 06 07:25:59 crc kubenswrapper[4895]: I1206 07:25:59.001800 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15ddd748-caf4-44ee-a574-50249d2ac07d-utilities\") pod \"community-operators-mll22\" (UID: \"15ddd748-caf4-44ee-a574-50249d2ac07d\") " pod="openshift-marketplace/community-operators-mll22" Dec 06 07:25:59 
crc kubenswrapper[4895]: I1206 07:25:59.001828 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsnq5\" (UniqueName: \"kubernetes.io/projected/15ddd748-caf4-44ee-a574-50249d2ac07d-kube-api-access-fsnq5\") pod \"community-operators-mll22\" (UID: \"15ddd748-caf4-44ee-a574-50249d2ac07d\") " pod="openshift-marketplace/community-operators-mll22" Dec 06 07:25:59 crc kubenswrapper[4895]: I1206 07:25:59.002026 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15ddd748-caf4-44ee-a574-50249d2ac07d-catalog-content\") pod \"community-operators-mll22\" (UID: \"15ddd748-caf4-44ee-a574-50249d2ac07d\") " pod="openshift-marketplace/community-operators-mll22" Dec 06 07:25:59 crc kubenswrapper[4895]: I1206 07:25:59.002139 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15ddd748-caf4-44ee-a574-50249d2ac07d-utilities\") pod \"community-operators-mll22\" (UID: \"15ddd748-caf4-44ee-a574-50249d2ac07d\") " pod="openshift-marketplace/community-operators-mll22" Dec 06 07:25:59 crc kubenswrapper[4895]: I1206 07:25:59.057192 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsnq5\" (UniqueName: \"kubernetes.io/projected/15ddd748-caf4-44ee-a574-50249d2ac07d-kube-api-access-fsnq5\") pod \"community-operators-mll22\" (UID: \"15ddd748-caf4-44ee-a574-50249d2ac07d\") " pod="openshift-marketplace/community-operators-mll22" Dec 06 07:25:59 crc kubenswrapper[4895]: I1206 07:25:59.129434 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mll22" Dec 06 07:25:59 crc kubenswrapper[4895]: I1206 07:25:59.501198 4895 generic.go:334] "Generic (PLEG): container finished" podID="494c9693-fb7c-468c-8a34-a6fcfbd35fd7" containerID="c465e1d35ba88deeabcbb44d314e3d968ac137c77b3b91f2c1758a076ac60ad2" exitCode=0 Dec 06 07:25:59 crc kubenswrapper[4895]: I1206 07:25:59.501298 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8495c879d5-xlttw" event={"ID":"494c9693-fb7c-468c-8a34-a6fcfbd35fd7","Type":"ContainerDied","Data":"c465e1d35ba88deeabcbb44d314e3d968ac137c77b3b91f2c1758a076ac60ad2"} Dec 06 07:25:59 crc kubenswrapper[4895]: I1206 07:25:59.503791 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31","Type":"ContainerStarted","Data":"41ddddc319b560db6f6bea93456866cc6aae3bfbc40b7414243e2abadd854873"} Dec 06 07:26:00 crc kubenswrapper[4895]: I1206 07:26:00.881783 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mll22"] Dec 06 07:26:00 crc kubenswrapper[4895]: W1206 07:26:00.890324 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15ddd748_caf4_44ee_a574_50249d2ac07d.slice/crio-c7a9846586180bb76c9e150e1c84b9715099e765e4c978fc70d45a4403e49f1b WatchSource:0}: Error finding container c7a9846586180bb76c9e150e1c84b9715099e765e4c978fc70d45a4403e49f1b: Status 404 returned error can't find the container with id c7a9846586180bb76c9e150e1c84b9715099e765e4c978fc70d45a4403e49f1b Dec 06 07:26:01 crc kubenswrapper[4895]: I1206 07:26:01.535513 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mll22" 
event={"ID":"15ddd748-caf4-44ee-a574-50249d2ac07d","Type":"ContainerStarted","Data":"955717330e0497b0a0471b98a63f4d18d0de081897d8774d5532b38850410879"} Dec 06 07:26:01 crc kubenswrapper[4895]: I1206 07:26:01.535946 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mll22" event={"ID":"15ddd748-caf4-44ee-a574-50249d2ac07d","Type":"ContainerStarted","Data":"c7a9846586180bb76c9e150e1c84b9715099e765e4c978fc70d45a4403e49f1b"} Dec 06 07:26:01 crc kubenswrapper[4895]: I1206 07:26:01.539778 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31","Type":"ContainerStarted","Data":"fd449be64614c70888b6fa8a0ff14d5780043c2884af3f8f625d345193c1ed59"} Dec 06 07:26:01 crc kubenswrapper[4895]: I1206 07:26:01.541838 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3545654-a687-45f8-baf6-d3930df1545f","Type":"ContainerStarted","Data":"fe9d94a79e0a47d4ffea57f0003e6e12f7992fee3bd5d13db8570d53e0190a82"} Dec 06 07:26:01 crc kubenswrapper[4895]: I1206 07:26:01.543689 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8495c879d5-xlttw" event={"ID":"494c9693-fb7c-468c-8a34-a6fcfbd35fd7","Type":"ContainerStarted","Data":"82bdfb3f961fdffe26381a5f1e2adb73bd2436e9138bcd3e2b8d88b1ed8a8c66"} Dec 06 07:26:01 crc kubenswrapper[4895]: I1206 07:26:01.575344 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8495c879d5-xlttw" podStartSLOduration=5.574575774 podStartE2EDuration="5.574575774s" podCreationTimestamp="2025-12-06 07:25:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:26:01.561229766 +0000 UTC m=+1723.962618656" watchObservedRunningTime="2025-12-06 07:26:01.574575774 +0000 UTC m=+1723.975964644" Dec 06 07:26:01 crc kubenswrapper[4895]: I1206 07:26:01.738870 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:26:01 crc kubenswrapper[4895]: I1206 07:26:01.796692 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f74647bf4-9dcq9" Dec 06 07:26:02 crc kubenswrapper[4895]: I1206 07:26:02.104960 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:26:02 crc kubenswrapper[4895]: I1206 07:26:02.558931 4895 generic.go:334] "Generic (PLEG): container finished" podID="15ddd748-caf4-44ee-a574-50249d2ac07d" containerID="955717330e0497b0a0471b98a63f4d18d0de081897d8774d5532b38850410879" exitCode=0 Dec 06 07:26:02 crc kubenswrapper[4895]: I1206 07:26:02.560545 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mll22" event={"ID":"15ddd748-caf4-44ee-a574-50249d2ac07d","Type":"ContainerDied","Data":"955717330e0497b0a0471b98a63f4d18d0de081897d8774d5532b38850410879"} Dec 06 07:26:02 crc kubenswrapper[4895]: I1206 07:26:02.560679 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31" containerName="cinder-api-log" containerID="cri-o://41ddddc319b560db6f6bea93456866cc6aae3bfbc40b7414243e2abadd854873" gracePeriod=30 Dec 06 07:26:02 crc kubenswrapper[4895]: I1206 07:26:02.561734 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/cinder-api-0" Dec 06 07:26:02 crc kubenswrapper[4895]: I1206 07:26:02.561769 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31" containerName="cinder-api" containerID="cri-o://fd449be64614c70888b6fa8a0ff14d5780043c2884af3f8f625d345193c1ed59" gracePeriod=30 Dec 06 07:26:02 crc kubenswrapper[4895]: I1206 07:26:02.588406 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.588384146 podStartE2EDuration="6.588384146s" podCreationTimestamp="2025-12-06 07:25:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:26:02.583362971 +0000 UTC m=+1724.984751851" watchObservedRunningTime="2025-12-06 07:26:02.588384146 +0000 UTC m=+1724.989773026" Dec 06 07:26:03 crc kubenswrapper[4895]: I1206 07:26:03.052206 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:26:03 crc kubenswrapper[4895]: E1206 07:26:03.052593 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:26:03 crc kubenswrapper[4895]: I1206 07:26:03.581817 4895 generic.go:334] "Generic (PLEG): container finished" podID="13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31" containerID="fd449be64614c70888b6fa8a0ff14d5780043c2884af3f8f625d345193c1ed59" exitCode=0 Dec 06 07:26:03 crc kubenswrapper[4895]: I1206 07:26:03.581853 4895 generic.go:334] "Generic (PLEG): container finished" podID="13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31" containerID="41ddddc319b560db6f6bea93456866cc6aae3bfbc40b7414243e2abadd854873" exitCode=143 Dec 06 07:26:03 crc kubenswrapper[4895]: I1206 07:26:03.582867 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31","Type":"ContainerDied","Data":"fd449be64614c70888b6fa8a0ff14d5780043c2884af3f8f625d345193c1ed59"} Dec 06 07:26:03 crc kubenswrapper[4895]: I1206 07:26:03.582905 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31","Type":"ContainerDied","Data":"41ddddc319b560db6f6bea93456866cc6aae3bfbc40b7414243e2abadd854873"} Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.414740 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.536611 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-config-data\") pod \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.537094 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-logs\") pod \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.537164 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-combined-ca-bundle\") pod \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.537329 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-scripts\") pod \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.537372 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2mth\" (UniqueName: \"kubernetes.io/projected/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-kube-api-access-r2mth\") pod \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.537405 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-config-data-custom\") pod \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.537772 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-etc-machine-id\") pod \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\" (UID: \"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31\") " Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.538007 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-logs" (OuterVolumeSpecName: "logs") pod "13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31" (UID: "13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.538292 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-logs\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.538368 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31" (UID: "13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31"). 
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.541647 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-scripts" (OuterVolumeSpecName: "scripts") pod "13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31" (UID: "13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.545526 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31" (UID: "13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.561998 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-kube-api-access-r2mth" (OuterVolumeSpecName: "kube-api-access-r2mth") pod "13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31" (UID: "13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31"). InnerVolumeSpecName "kube-api-access-r2mth". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.569785 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31" (UID: "13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.604892 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31","Type":"ContainerDied","Data":"934bd17c3e86ba7e61c82c222ae4413bf1742e01ba84336e7fa9cbee49f7f2f7"} Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.604955 4895 scope.go:117] "RemoveContainer" containerID="fd449be64614c70888b6fa8a0ff14d5780043c2884af3f8f625d345193c1ed59" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.605179 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.630400 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-config-data" (OuterVolumeSpecName: "config-data") pod "13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31" (UID: "13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.639538 4895 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.639587 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.639604 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.639614 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.639628 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2mth\" (UniqueName: \"kubernetes.io/projected/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-kube-api-access-r2mth\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.639637 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.685960 4895 scope.go:117] "RemoveContainer" containerID="41ddddc319b560db6f6bea93456866cc6aae3bfbc40b7414243e2abadd854873" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.949585 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.964051 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.981710 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 06 07:26:04 crc kubenswrapper[4895]: E1206 07:26:04.982159 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31" containerName="cinder-api-log" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.982183 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31" containerName="cinder-api-log" Dec 06 07:26:04 crc kubenswrapper[4895]: E1206 07:26:04.982225 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31" containerName="cinder-api" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.982236 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31" containerName="cinder-api" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.982465 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31" containerName="cinder-api-log" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.982505 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31" containerName="cinder-api" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.983814 4895 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.987793 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.987947 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.988020 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 06 07:26:04 crc kubenswrapper[4895]: I1206 07:26:04.994375 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.051953 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.052238 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-public-tls-certs\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.052394 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-logs\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.052488 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.052654 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.052784 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-scripts\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.052892 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.052960 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data-custom\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.053027 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgmm9\" (UniqueName: \"kubernetes.io/projected/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-kube-api-access-lgmm9\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.155380 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-scripts\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.155917 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.156007 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data-custom\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.156078 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgmm9\" (UniqueName: \"kubernetes.io/projected/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-kube-api-access-lgmm9\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.156216 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.156295 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-public-tls-certs\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.156417 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.156441 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-logs\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.157003 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.157106 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.157763 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-logs\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.160407 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-scripts\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.161329 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-public-tls-certs\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.166261 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data-custom\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.175211 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.180341 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.171565 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.193426 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgmm9\" (UniqueName: \"kubernetes.io/projected/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-kube-api-access-lgmm9\") pod \"cinder-api-0\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.315708 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 06 07:26:05 crc kubenswrapper[4895]: I1206 07:26:05.632424 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-698445c967-xk6g2" event={"ID":"46664967-bc44-4dd5-8fa7-419d1f7741fd","Type":"ContainerStarted","Data":"9df6c3348da2931d78ecc446fbd4746175039dc755b08b1089a9a7913d9218b1"} Dec 06 07:26:06 crc kubenswrapper[4895]: I1206 07:26:06.061827 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31" path="/var/lib/kubelet/pods/13f63fc7-ebd6-4e41-ba4f-c5ea93fc9c31/volumes" Dec 06 07:26:06 crc kubenswrapper[4895]: I1206 07:26:06.655279 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-698445c967-xk6g2" event={"ID":"46664967-bc44-4dd5-8fa7-419d1f7741fd","Type":"ContainerStarted","Data":"b02063e5faf708de34c2034eb08a56c2da17f968cfa6ea952d0d97dba87a4a30"} Dec 06 07:26:06 crc kubenswrapper[4895]: I1206 07:26:06.659402 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"64e4925c-937e-47d3-8aab-bfc524875263","Type":"ContainerStarted","Data":"c6c565e6ef537443dd1e30b00338bc8dffca188e5578c660d61309692af2470c"} Dec 06 07:26:06 crc kubenswrapper[4895]: I1206 07:26:06.690896 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-698445c967-xk6g2" podStartSLOduration=3.60095825 podStartE2EDuration="36.69087312s" podCreationTimestamp="2025-12-06 07:25:30 +0000 UTC" firstStartedPulling="2025-12-06 07:25:31.404917682 +0000 UTC m=+1693.806306552" lastFinishedPulling="2025-12-06 07:26:04.494832552 +0000 UTC m=+1726.896221422" observedRunningTime="2025-12-06 07:26:06.673319498 +0000 UTC m=+1729.074708368" watchObservedRunningTime="2025-12-06 07:26:06.69087312 +0000 UTC m=+1729.092261990" Dec 06 07:26:07 crc kubenswrapper[4895]: I1206 07:26:07.106549 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8495c879d5-xlttw" Dec 06 07:26:07 crc kubenswrapper[4895]: I1206 07:26:07.172520 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f6d99f797-zszb6"] Dec 06 07:26:07 crc kubenswrapper[4895]: I1206 07:26:07.173135 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" podUID="a53968c6-e872-491e-802f-bf76d49b1126" containerName="dnsmasq-dns" containerID="cri-o://fac7bdcf48744c09031f4220e3ad8e731371ab796eab33852a78df133cedfcae" gracePeriod=10 Dec 06 07:26:07 crc kubenswrapper[4895]: I1206 07:26:07.720769 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 06 07:26:08 crc kubenswrapper[4895]: I1206 07:26:08.456133 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:26:08 crc kubenswrapper[4895]: I1206 07:26:08.546257 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-dns-swift-storage-0\") pod \"a53968c6-e872-491e-802f-bf76d49b1126\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " Dec 06 07:26:08 crc kubenswrapper[4895]: I1206 07:26:08.546334 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-config\") pod \"a53968c6-e872-491e-802f-bf76d49b1126\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " Dec 06 07:26:08 crc kubenswrapper[4895]: I1206 07:26:08.546438 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-dns-svc\") pod \"a53968c6-e872-491e-802f-bf76d49b1126\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " Dec 06 07:26:08 crc kubenswrapper[4895]: I1206 07:26:08.546456 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv248\" (UniqueName: \"kubernetes.io/projected/a53968c6-e872-491e-802f-bf76d49b1126-kube-api-access-dv248\") pod \"a53968c6-e872-491e-802f-bf76d49b1126\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " Dec 06 07:26:08 crc kubenswrapper[4895]: I1206 07:26:08.546536 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-ovsdbserver-nb\") pod \"a53968c6-e872-491e-802f-bf76d49b1126\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " Dec 06 07:26:08 crc kubenswrapper[4895]: I1206 07:26:08.546591 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-ovsdbserver-sb\") pod \"a53968c6-e872-491e-802f-bf76d49b1126\" (UID: \"a53968c6-e872-491e-802f-bf76d49b1126\") " Dec 06 07:26:08 crc kubenswrapper[4895]: I1206 07:26:08.554619 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a53968c6-e872-491e-802f-bf76d49b1126-kube-api-access-dv248" (OuterVolumeSpecName: "kube-api-access-dv248") pod "a53968c6-e872-491e-802f-bf76d49b1126" (UID: "a53968c6-e872-491e-802f-bf76d49b1126"). InnerVolumeSpecName "kube-api-access-dv248". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:26:08 crc kubenswrapper[4895]: I1206 07:26:08.649641 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv248\" (UniqueName: \"kubernetes.io/projected/a53968c6-e872-491e-802f-bf76d49b1126-kube-api-access-dv248\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:08 crc kubenswrapper[4895]: I1206 07:26:08.689346 4895 generic.go:334] "Generic (PLEG): container finished" podID="a53968c6-e872-491e-802f-bf76d49b1126" containerID="fac7bdcf48744c09031f4220e3ad8e731371ab796eab33852a78df133cedfcae" exitCode=0 Dec 06 07:26:08 crc kubenswrapper[4895]: I1206 07:26:08.689419 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" event={"ID":"a53968c6-e872-491e-802f-bf76d49b1126","Type":"ContainerDied","Data":"fac7bdcf48744c09031f4220e3ad8e731371ab796eab33852a78df133cedfcae"} Dec 06 07:26:08 crc kubenswrapper[4895]: I1206 07:26:08.689445 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" event={"ID":"a53968c6-e872-491e-802f-bf76d49b1126","Type":"ContainerDied","Data":"672f2b1debd3f15d176af49cdb591f48f4bafb746ddafb7bc4a81220bc1bdb48"} Dec 06 07:26:08 crc kubenswrapper[4895]: I1206 07:26:08.689454 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f6d99f797-zszb6" Dec 06 07:26:08 crc kubenswrapper[4895]: I1206 07:26:08.689462 4895 scope.go:117] "RemoveContainer" containerID="fac7bdcf48744c09031f4220e3ad8e731371ab796eab33852a78df133cedfcae" Dec 06 07:26:08 crc kubenswrapper[4895]: I1206 07:26:08.705428 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"52d7b5bf-eae3-4832-b13b-be5f0734e4bb","Type":"ContainerStarted","Data":"bf66d834e339ee12b0e2afb0490cb60ad70c9104a2ffc19c5925aac61e6835ac"} Dec 06 07:26:09 crc kubenswrapper[4895]: I1206 07:26:09.231324 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a53968c6-e872-491e-802f-bf76d49b1126" (UID: "a53968c6-e872-491e-802f-bf76d49b1126"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:26:09 crc kubenswrapper[4895]: I1206 07:26:09.231515 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a53968c6-e872-491e-802f-bf76d49b1126" (UID: "a53968c6-e872-491e-802f-bf76d49b1126"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:26:09 crc kubenswrapper[4895]: I1206 07:26:09.231515 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a53968c6-e872-491e-802f-bf76d49b1126" (UID: "a53968c6-e872-491e-802f-bf76d49b1126"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:26:09 crc kubenswrapper[4895]: I1206 07:26:09.231538 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-config" (OuterVolumeSpecName: "config") pod "a53968c6-e872-491e-802f-bf76d49b1126" (UID: "a53968c6-e872-491e-802f-bf76d49b1126"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:26:09 crc kubenswrapper[4895]: I1206 07:26:09.239323 4895 scope.go:117] "RemoveContainer" containerID="cb563ecf4e6db2a22626a40362a0ffe8a93490846248596c8cd44e9e0f54831b" Dec 06 07:26:09 crc kubenswrapper[4895]: I1206 07:26:09.259333 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a53968c6-e872-491e-802f-bf76d49b1126" (UID: "a53968c6-e872-491e-802f-bf76d49b1126"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:26:09 crc kubenswrapper[4895]: I1206 07:26:09.260850 4895 scope.go:117] "RemoveContainer" containerID="fac7bdcf48744c09031f4220e3ad8e731371ab796eab33852a78df133cedfcae" Dec 06 07:26:09 crc kubenswrapper[4895]: E1206 07:26:09.261501 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fac7bdcf48744c09031f4220e3ad8e731371ab796eab33852a78df133cedfcae\": container with ID starting with fac7bdcf48744c09031f4220e3ad8e731371ab796eab33852a78df133cedfcae not found: ID does not exist" containerID="fac7bdcf48744c09031f4220e3ad8e731371ab796eab33852a78df133cedfcae" Dec 06 07:26:09 crc kubenswrapper[4895]: I1206 07:26:09.261572 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fac7bdcf48744c09031f4220e3ad8e731371ab796eab33852a78df133cedfcae"} err="failed to get container status \"fac7bdcf48744c09031f4220e3ad8e731371ab796eab33852a78df133cedfcae\": rpc error: code = NotFound desc = could not find container \"fac7bdcf48744c09031f4220e3ad8e731371ab796eab33852a78df133cedfcae\": container with ID starting with fac7bdcf48744c09031f4220e3ad8e731371ab796eab33852a78df133cedfcae not found: ID does not exist" Dec 06 07:26:09 crc kubenswrapper[4895]: I1206 07:26:09.261623 4895 scope.go:117] "RemoveContainer" containerID="cb563ecf4e6db2a22626a40362a0ffe8a93490846248596c8cd44e9e0f54831b" Dec 06 07:26:09 crc kubenswrapper[4895]: I1206 07:26:09.261513 4895 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:09 crc kubenswrapper[4895]: I1206 07:26:09.261678 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:09 crc kubenswrapper[4895]: I1206 07:26:09.261692 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:09 crc kubenswrapper[4895]: I1206 07:26:09.261702 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:09 crc kubenswrapper[4895]: I1206 07:26:09.261713 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a53968c6-e872-491e-802f-bf76d49b1126-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:09 crc kubenswrapper[4895]: E1206 07:26:09.262102 4895 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"cb563ecf4e6db2a22626a40362a0ffe8a93490846248596c8cd44e9e0f54831b\": container with ID starting with cb563ecf4e6db2a22626a40362a0ffe8a93490846248596c8cd44e9e0f54831b not found: ID does not exist" containerID="cb563ecf4e6db2a22626a40362a0ffe8a93490846248596c8cd44e9e0f54831b" Dec 06 07:26:09 crc kubenswrapper[4895]: I1206 07:26:09.262168 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb563ecf4e6db2a22626a40362a0ffe8a93490846248596c8cd44e9e0f54831b"} err="failed to get container status \"cb563ecf4e6db2a22626a40362a0ffe8a93490846248596c8cd44e9e0f54831b\": rpc error: code = NotFound desc = could not find container \"cb563ecf4e6db2a22626a40362a0ffe8a93490846248596c8cd44e9e0f54831b\": container with ID starting with cb563ecf4e6db2a22626a40362a0ffe8a93490846248596c8cd44e9e0f54831b not found: ID does not exist" Dec 06 07:26:09 crc kubenswrapper[4895]: I1206 07:26:09.363810 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f6d99f797-zszb6"] Dec 06 07:26:09 crc kubenswrapper[4895]: I1206 07:26:09.375610 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f6d99f797-zszb6"] Dec 06 07:26:09 crc kubenswrapper[4895]: I1206 07:26:09.718681 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3545654-a687-45f8-baf6-d3930df1545f","Type":"ContainerStarted","Data":"85ca400944c87517f5da784ed8da594b1915fac4fd7fefe6d522baa83f6b9b5c"} Dec 06 07:26:09 crc kubenswrapper[4895]: I1206 07:26:09.722040 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"52d7b5bf-eae3-4832-b13b-be5f0734e4bb","Type":"ContainerStarted","Data":"87e10027541f9994e9a662ec95d1dc56f5cd26543c9ae06d62f955e26582390b"} Dec 06 07:26:10 crc kubenswrapper[4895]: I1206 07:26:10.085742 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a53968c6-e872-491e-802f-bf76d49b1126" path="/var/lib/kubelet/pods/a53968c6-e872-491e-802f-bf76d49b1126/volumes" Dec 06 07:26:10 crc kubenswrapper[4895]: I1206 07:26:10.736648 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"64e4925c-937e-47d3-8aab-bfc524875263","Type":"ContainerStarted","Data":"b95fa3b7cc14cba53cd87eef4fb1ebfb56ed80632f677713863e5dec554e2139"} Dec 06 07:26:10 crc kubenswrapper[4895]: I1206 07:26:10.746709 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d3545654-a687-45f8-baf6-d3930df1545f" containerName="ceilometer-central-agent" containerID="cri-o://16eefc5d837bb7747b6800d39d9c6c13c4aeda660f37ddea7ac90708f74e6e41" gracePeriod=30 Dec 06 07:26:10 crc kubenswrapper[4895]: I1206 07:26:10.747036 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mll22" event={"ID":"15ddd748-caf4-44ee-a574-50249d2ac07d","Type":"ContainerStarted","Data":"6425a64adedc2bf8823128f0015852f900441d87563bb74e30c96e5d5329a675"} Dec 06 07:26:10 crc kubenswrapper[4895]: I1206 07:26:10.747113 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 07:26:10 crc kubenswrapper[4895]: I1206 07:26:10.747454 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d3545654-a687-45f8-baf6-d3930df1545f" containerName="proxy-httpd" 
containerID="cri-o://85ca400944c87517f5da784ed8da594b1915fac4fd7fefe6d522baa83f6b9b5c" gracePeriod=30 Dec 06 07:26:10 crc kubenswrapper[4895]: I1206 07:26:10.747540 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d3545654-a687-45f8-baf6-d3930df1545f" containerName="sg-core" containerID="cri-o://fe9d94a79e0a47d4ffea57f0003e6e12f7992fee3bd5d13db8570d53e0190a82" gracePeriod=30 Dec 06 07:26:10 crc kubenswrapper[4895]: I1206 07:26:10.747592 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d3545654-a687-45f8-baf6-d3930df1545f" containerName="ceilometer-notification-agent" containerID="cri-o://6c4be519f3a2c4d38ac94febdc45c7d1d7ff2038f59d6448fc206bd8e235b42d" gracePeriod=30 Dec 06 07:26:10 crc kubenswrapper[4895]: I1206 07:26:10.775769 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.276798543 podStartE2EDuration="24.775749351s" podCreationTimestamp="2025-12-06 07:25:46 +0000 UTC" firstStartedPulling="2025-12-06 07:25:48.588921949 +0000 UTC m=+1710.990310819" lastFinishedPulling="2025-12-06 07:26:08.087872757 +0000 UTC m=+1730.489261627" observedRunningTime="2025-12-06 07:26:10.768172488 +0000 UTC m=+1733.169561348" watchObservedRunningTime="2025-12-06 07:26:10.775749351 +0000 UTC m=+1733.177138221" Dec 06 07:26:11 crc kubenswrapper[4895]: I1206 07:26:11.757814 4895 generic.go:334] "Generic (PLEG): container finished" podID="d3545654-a687-45f8-baf6-d3930df1545f" containerID="fe9d94a79e0a47d4ffea57f0003e6e12f7992fee3bd5d13db8570d53e0190a82" exitCode=2 Dec 06 07:26:11 crc kubenswrapper[4895]: I1206 07:26:11.757900 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3545654-a687-45f8-baf6-d3930df1545f","Type":"ContainerDied","Data":"fe9d94a79e0a47d4ffea57f0003e6e12f7992fee3bd5d13db8570d53e0190a82"} Dec 06 07:26:11 crc kubenswrapper[4895]: I1206 07:26:11.760916 4895 generic.go:334] "Generic (PLEG): container finished" podID="15ddd748-caf4-44ee-a574-50249d2ac07d" containerID="6425a64adedc2bf8823128f0015852f900441d87563bb74e30c96e5d5329a675" exitCode=0 Dec 06 07:26:11 crc kubenswrapper[4895]: I1206 07:26:11.761000 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mll22" event={"ID":"15ddd748-caf4-44ee-a574-50249d2ac07d","Type":"ContainerDied","Data":"6425a64adedc2bf8823128f0015852f900441d87563bb74e30c96e5d5329a675"} Dec 06 07:26:11 crc kubenswrapper[4895]: I1206 07:26:11.787082 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=8.911908549 podStartE2EDuration="15.787058195s" podCreationTimestamp="2025-12-06 07:25:56 +0000 UTC" firstStartedPulling="2025-12-06 07:25:57.616712537 +0000 UTC m=+1720.018101407" lastFinishedPulling="2025-12-06 07:26:04.491862183 +0000 UTC m=+1726.893251053" observedRunningTime="2025-12-06 07:26:11.779951644 +0000 UTC m=+1734.181340524" watchObservedRunningTime="2025-12-06 07:26:11.787058195 +0000 UTC m=+1734.188447065" Dec 06 07:26:12 crc kubenswrapper[4895]: I1206 07:26:12.065512 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 06 07:26:12 crc kubenswrapper[4895]: I1206 07:26:12.308201 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 06 07:26:12 crc kubenswrapper[4895]: I1206 
07:26:12.810988 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 07:26:13 crc kubenswrapper[4895]: I1206 07:26:13.787816 4895 generic.go:334] "Generic (PLEG): container finished" podID="d3545654-a687-45f8-baf6-d3930df1545f" containerID="85ca400944c87517f5da784ed8da594b1915fac4fd7fefe6d522baa83f6b9b5c" exitCode=0 Dec 06 07:26:13 crc kubenswrapper[4895]: I1206 07:26:13.787883 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3545654-a687-45f8-baf6-d3930df1545f","Type":"ContainerDied","Data":"85ca400944c87517f5da784ed8da594b1915fac4fd7fefe6d522baa83f6b9b5c"} Dec 06 07:26:14 crc kubenswrapper[4895]: I1206 07:26:14.806014 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="64e4925c-937e-47d3-8aab-bfc524875263" containerName="cinder-scheduler" containerID="cri-o://c6c565e6ef537443dd1e30b00338bc8dffca188e5578c660d61309692af2470c" gracePeriod=30 Dec 06 07:26:14 crc kubenswrapper[4895]: I1206 07:26:14.806120 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="64e4925c-937e-47d3-8aab-bfc524875263" containerName="probe" containerID="cri-o://b95fa3b7cc14cba53cd87eef4fb1ebfb56ed80632f677713863e5dec554e2139" gracePeriod=30 Dec 06 07:26:17 crc kubenswrapper[4895]: I1206 07:26:17.050728 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:26:17 crc kubenswrapper[4895]: E1206 07:26:17.051220 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.662631 4895 generic.go:334] "Generic (PLEG): container finished" podID="d3545654-a687-45f8-baf6-d3930df1545f" containerID="6c4be519f3a2c4d38ac94febdc45c7d1d7ff2038f59d6448fc206bd8e235b42d" exitCode=-1 Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.663169 4895 generic.go:334] "Generic (PLEG): container finished" podID="d3545654-a687-45f8-baf6-d3930df1545f" containerID="16eefc5d837bb7747b6800d39d9c6c13c4aeda660f37ddea7ac90708f74e6e41" exitCode=0 Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.662680 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3545654-a687-45f8-baf6-d3930df1545f","Type":"ContainerDied","Data":"6c4be519f3a2c4d38ac94febdc45c7d1d7ff2038f59d6448fc206bd8e235b42d"} Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.663208 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3545654-a687-45f8-baf6-d3930df1545f","Type":"ContainerDied","Data":"16eefc5d837bb7747b6800d39d9c6c13c4aeda660f37ddea7ac90708f74e6e41"} Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.708820 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.819929 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-combined-ca-bundle\") pod \"d3545654-a687-45f8-baf6-d3930df1545f\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.820032 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-config-data\") pod \"d3545654-a687-45f8-baf6-d3930df1545f\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.820069 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dnmw\" (UniqueName: \"kubernetes.io/projected/d3545654-a687-45f8-baf6-d3930df1545f-kube-api-access-9dnmw\") pod \"d3545654-a687-45f8-baf6-d3930df1545f\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.820847 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3545654-a687-45f8-baf6-d3930df1545f-run-httpd\") pod \"d3545654-a687-45f8-baf6-d3930df1545f\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.820882 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-sg-core-conf-yaml\") pod \"d3545654-a687-45f8-baf6-d3930df1545f\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.820941 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3545654-a687-45f8-baf6-d3930df1545f-log-httpd\") pod \"d3545654-a687-45f8-baf6-d3930df1545f\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.821078 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-scripts\") pod \"d3545654-a687-45f8-baf6-d3930df1545f\" (UID: \"d3545654-a687-45f8-baf6-d3930df1545f\") " Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.821145 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3545654-a687-45f8-baf6-d3930df1545f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d3545654-a687-45f8-baf6-d3930df1545f" (UID: "d3545654-a687-45f8-baf6-d3930df1545f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.821378 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3545654-a687-45f8-baf6-d3930df1545f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d3545654-a687-45f8-baf6-d3930df1545f" (UID: "d3545654-a687-45f8-baf6-d3930df1545f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.821789 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3545654-a687-45f8-baf6-d3930df1545f-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.821916 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3545654-a687-45f8-baf6-d3930df1545f-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.827026 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-scripts" (OuterVolumeSpecName: "scripts") pod "d3545654-a687-45f8-baf6-d3930df1545f" (UID: "d3545654-a687-45f8-baf6-d3930df1545f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.831812 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3545654-a687-45f8-baf6-d3930df1545f-kube-api-access-9dnmw" (OuterVolumeSpecName: "kube-api-access-9dnmw") pod "d3545654-a687-45f8-baf6-d3930df1545f" (UID: "d3545654-a687-45f8-baf6-d3930df1545f"). InnerVolumeSpecName "kube-api-access-9dnmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.859612 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d3545654-a687-45f8-baf6-d3930df1545f" (UID: "d3545654-a687-45f8-baf6-d3930df1545f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.902302 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3545654-a687-45f8-baf6-d3930df1545f" (UID: "d3545654-a687-45f8-baf6-d3930df1545f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.923364 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dnmw\" (UniqueName: \"kubernetes.io/projected/d3545654-a687-45f8-baf6-d3930df1545f-kube-api-access-9dnmw\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.923640 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.923725 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.923815 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:20 crc kubenswrapper[4895]: I1206 07:26:20.934934 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-config-data" (OuterVolumeSpecName: "config-data") pod "d3545654-a687-45f8-baf6-d3930df1545f" (UID: "d3545654-a687-45f8-baf6-d3930df1545f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.025657 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3545654-a687-45f8-baf6-d3930df1545f-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.674036 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.673990 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3545654-a687-45f8-baf6-d3930df1545f","Type":"ContainerDied","Data":"51ce0f7d8c019ce8ee4640c24e5bc358faaaff2c5a5d1d642ebb7611d3982609"} Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.674213 4895 scope.go:117] "RemoveContainer" containerID="85ca400944c87517f5da784ed8da594b1915fac4fd7fefe6d522baa83f6b9b5c" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.677813 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"52d7b5bf-eae3-4832-b13b-be5f0734e4bb","Type":"ContainerStarted","Data":"32263a683bb152fcade8d6ea711b2647b1cdd056c745a933576b1f81801ceca7"} Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.678936 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.682538 4895 generic.go:334] "Generic (PLEG): container finished" podID="64e4925c-937e-47d3-8aab-bfc524875263" containerID="b95fa3b7cc14cba53cd87eef4fb1ebfb56ed80632f677713863e5dec554e2139" exitCode=0 Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.682607 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"64e4925c-937e-47d3-8aab-bfc524875263","Type":"ContainerDied","Data":"b95fa3b7cc14cba53cd87eef4fb1ebfb56ed80632f677713863e5dec554e2139"} Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.702774 4895 scope.go:117] "RemoveContainer" containerID="fe9d94a79e0a47d4ffea57f0003e6e12f7992fee3bd5d13db8570d53e0190a82" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.713832 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=17.713810149 podStartE2EDuration="17.713810149s" podCreationTimestamp="2025-12-06 07:26:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:26:21.702787703 +0000 UTC m=+1744.104176573" watchObservedRunningTime="2025-12-06 07:26:21.713810149 +0000 UTC m=+1744.115199019" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.729608 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.735527 4895 scope.go:117] "RemoveContainer" containerID="6c4be519f3a2c4d38ac94febdc45c7d1d7ff2038f59d6448fc206bd8e235b42d" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.739058 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.760620 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:26:21 crc kubenswrapper[4895]: E1206 07:26:21.761247 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3545654-a687-45f8-baf6-d3930df1545f" containerName="sg-core" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.761275 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3545654-a687-45f8-baf6-d3930df1545f" containerName="sg-core" Dec 06 07:26:21 crc kubenswrapper[4895]: E1206 07:26:21.761300 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3545654-a687-45f8-baf6-d3930df1545f" containerName="ceilometer-notification-agent" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 
07:26:21.761307 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3545654-a687-45f8-baf6-d3930df1545f" containerName="ceilometer-notification-agent" Dec 06 07:26:21 crc kubenswrapper[4895]: E1206 07:26:21.761323 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a53968c6-e872-491e-802f-bf76d49b1126" containerName="dnsmasq-dns" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.761329 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a53968c6-e872-491e-802f-bf76d49b1126" containerName="dnsmasq-dns" Dec 06 07:26:21 crc kubenswrapper[4895]: E1206 07:26:21.761351 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a53968c6-e872-491e-802f-bf76d49b1126" containerName="init" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.761357 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a53968c6-e872-491e-802f-bf76d49b1126" containerName="init" Dec 06 07:26:21 crc kubenswrapper[4895]: E1206 07:26:21.761368 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3545654-a687-45f8-baf6-d3930df1545f" containerName="ceilometer-central-agent" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.761375 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3545654-a687-45f8-baf6-d3930df1545f" containerName="ceilometer-central-agent" Dec 06 07:26:21 crc kubenswrapper[4895]: E1206 07:26:21.761405 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3545654-a687-45f8-baf6-d3930df1545f" containerName="proxy-httpd" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.761411 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3545654-a687-45f8-baf6-d3930df1545f" containerName="proxy-httpd" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.761617 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3545654-a687-45f8-baf6-d3930df1545f" containerName="sg-core" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.761661 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3545654-a687-45f8-baf6-d3930df1545f" containerName="ceilometer-notification-agent" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.761680 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3545654-a687-45f8-baf6-d3930df1545f" containerName="ceilometer-central-agent" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.761691 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a53968c6-e872-491e-802f-bf76d49b1126" containerName="dnsmasq-dns" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.761706 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3545654-a687-45f8-baf6-d3930df1545f" containerName="proxy-httpd" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.768329 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.771415 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.773002 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.779173 4895 scope.go:117] "RemoveContainer" containerID="16eefc5d837bb7747b6800d39d9c6c13c4aeda660f37ddea7ac90708f74e6e41" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.781623 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.941385 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.941444 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-config-data\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.941522 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.941561 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86c6eb7b-1a53-4a0a-a777-30d62c457740-log-httpd\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.941613 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bv5f\" (UniqueName: \"kubernetes.io/projected/86c6eb7b-1a53-4a0a-a777-30d62c457740-kube-api-access-2bv5f\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.941634 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86c6eb7b-1a53-4a0a-a777-30d62c457740-run-httpd\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:21 crc kubenswrapper[4895]: I1206 07:26:21.941652 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-scripts\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:22 crc kubenswrapper[4895]: I1206 07:26:22.043033 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:22 crc kubenswrapper[4895]: I1206 07:26:22.043099 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86c6eb7b-1a53-4a0a-a777-30d62c457740-log-httpd\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:22 crc kubenswrapper[4895]: I1206 07:26:22.043159 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bv5f\" (UniqueName: \"kubernetes.io/projected/86c6eb7b-1a53-4a0a-a777-30d62c457740-kube-api-access-2bv5f\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:22 crc kubenswrapper[4895]: I1206 07:26:22.043183 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86c6eb7b-1a53-4a0a-a777-30d62c457740-run-httpd\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:22 crc kubenswrapper[4895]: I1206 07:26:22.043202 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-scripts\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:22 crc kubenswrapper[4895]: I1206 07:26:22.043256 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:22 crc kubenswrapper[4895]: I1206 07:26:22.043279 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-config-data\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:22 crc kubenswrapper[4895]: I1206 07:26:22.043774 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86c6eb7b-1a53-4a0a-a777-30d62c457740-run-httpd\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:22 crc kubenswrapper[4895]: I1206 07:26:22.043790 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86c6eb7b-1a53-4a0a-a777-30d62c457740-log-httpd\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:22 crc kubenswrapper[4895]: I1206 07:26:22.048319 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:22 crc kubenswrapper[4895]: I1206 07:26:22.048715 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-config-data\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:22 crc kubenswrapper[4895]: I1206 07:26:22.048853 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-scripts\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:22 crc kubenswrapper[4895]: I1206 07:26:22.062587 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bv5f\" (UniqueName: \"kubernetes.io/projected/86c6eb7b-1a53-4a0a-a777-30d62c457740-kube-api-access-2bv5f\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:22 crc kubenswrapper[4895]: I1206 07:26:22.063401 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3545654-a687-45f8-baf6-d3930df1545f" path="/var/lib/kubelet/pods/d3545654-a687-45f8-baf6-d3930df1545f/volumes" Dec 06 07:26:22 crc kubenswrapper[4895]: I1206 07:26:22.063493 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " pod="openstack/ceilometer-0" Dec 06 07:26:22 crc kubenswrapper[4895]: I1206 07:26:22.093987 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:26:22 crc kubenswrapper[4895]: I1206 07:26:22.598725 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:26:22 crc kubenswrapper[4895]: W1206 07:26:22.603013 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86c6eb7b_1a53_4a0a_a777_30d62c457740.slice/crio-a69d3dbbde05ffcf816984009b9697aeb4e8ee32716b573bd943e5a755bcc840 WatchSource:0}: Error finding container a69d3dbbde05ffcf816984009b9697aeb4e8ee32716b573bd943e5a755bcc840: Status 404 returned error can't find the container with id a69d3dbbde05ffcf816984009b9697aeb4e8ee32716b573bd943e5a755bcc840 Dec 06 07:26:22 crc kubenswrapper[4895]: I1206 07:26:22.605927 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 07:26:22 crc kubenswrapper[4895]: I1206 07:26:22.703520 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86c6eb7b-1a53-4a0a-a777-30d62c457740","Type":"ContainerStarted","Data":"a69d3dbbde05ffcf816984009b9697aeb4e8ee32716b573bd943e5a755bcc840"} Dec 06 07:26:25 crc kubenswrapper[4895]: I1206 07:26:25.734380 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mll22" event={"ID":"15ddd748-caf4-44ee-a574-50249d2ac07d","Type":"ContainerStarted","Data":"c4e6400012fd41c000a1a000d17b60f3ab22d2556dad691bfea0f6714b729585"} Dec 06 07:26:25 crc kubenswrapper[4895]: I1206 07:26:25.755963 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mll22" podStartSLOduration=5.039745425 podStartE2EDuration="27.755943092s" podCreationTimestamp="2025-12-06 07:25:58 +0000 UTC" firstStartedPulling="2025-12-06 07:26:02.563170209 +0000 UTC m=+1724.964559079" lastFinishedPulling="2025-12-06 
07:26:25.279367876 +0000 UTC m=+1747.680756746" observedRunningTime="2025-12-06 07:26:25.755777127 +0000 UTC m=+1748.157166017" watchObservedRunningTime="2025-12-06 07:26:25.755943092 +0000 UTC m=+1748.157331962" Dec 06 07:26:25 crc kubenswrapper[4895]: I1206 07:26:25.864535 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.398835 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-vphq2"] Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.400662 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vphq2" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.433290 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vphq2"] Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.477764 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2a14678-48ea-424e-9a50-dd28f69b82a3-operator-scripts\") pod \"nova-api-db-create-vphq2\" (UID: \"e2a14678-48ea-424e-9a50-dd28f69b82a3\") " pod="openstack/nova-api-db-create-vphq2" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.477919 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx7fb\" (UniqueName: \"kubernetes.io/projected/e2a14678-48ea-424e-9a50-dd28f69b82a3-kube-api-access-xx7fb\") pod \"nova-api-db-create-vphq2\" (UID: \"e2a14678-48ea-424e-9a50-dd28f69b82a3\") " pod="openstack/nova-api-db-create-vphq2" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.579642 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx7fb\" (UniqueName: \"kubernetes.io/projected/e2a14678-48ea-424e-9a50-dd28f69b82a3-kube-api-access-xx7fb\") pod \"nova-api-db-create-vphq2\" (UID: \"e2a14678-48ea-424e-9a50-dd28f69b82a3\") " pod="openstack/nova-api-db-create-vphq2" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.579779 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2a14678-48ea-424e-9a50-dd28f69b82a3-operator-scripts\") pod \"nova-api-db-create-vphq2\" (UID: \"e2a14678-48ea-424e-9a50-dd28f69b82a3\") " pod="openstack/nova-api-db-create-vphq2" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.580627 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2a14678-48ea-424e-9a50-dd28f69b82a3-operator-scripts\") pod \"nova-api-db-create-vphq2\" (UID: \"e2a14678-48ea-424e-9a50-dd28f69b82a3\") " pod="openstack/nova-api-db-create-vphq2" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.598141 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx7fb\" (UniqueName: \"kubernetes.io/projected/e2a14678-48ea-424e-9a50-dd28f69b82a3-kube-api-access-xx7fb\") pod \"nova-api-db-create-vphq2\" (UID: \"e2a14678-48ea-424e-9a50-dd28f69b82a3\") " pod="openstack/nova-api-db-create-vphq2" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.607063 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-8xgdx"] Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.608187 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8xgdx" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.630644 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8xgdx"] Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.682954 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c958c44b-3580-4d17-9b18-65c93cd7d0bf-operator-scripts\") pod \"nova-cell0-db-create-8xgdx\" (UID: \"c958c44b-3580-4d17-9b18-65c93cd7d0bf\") " pod="openstack/nova-cell0-db-create-8xgdx" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.683334 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnczk\" (UniqueName: \"kubernetes.io/projected/c958c44b-3580-4d17-9b18-65c93cd7d0bf-kube-api-access-cnczk\") pod \"nova-cell0-db-create-8xgdx\" (UID: \"c958c44b-3580-4d17-9b18-65c93cd7d0bf\") " pod="openstack/nova-cell0-db-create-8xgdx" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.701343 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-bz7dn"] Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.702646 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bz7dn" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.711714 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-f7f2-account-create-update-jvgrz"] Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.713009 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f7f2-account-create-update-jvgrz" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.721987 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.727522 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-vphq2" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.746222 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bz7dn"] Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.760544 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f7f2-account-create-update-jvgrz"] Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.785783 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvcz5\" (UniqueName: \"kubernetes.io/projected/04872869-17a2-4cb6-9222-3b265dddf350-kube-api-access-vvcz5\") pod \"nova-cell1-db-create-bz7dn\" (UID: \"04872869-17a2-4cb6-9222-3b265dddf350\") " pod="openstack/nova-cell1-db-create-bz7dn" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.785901 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c958c44b-3580-4d17-9b18-65c93cd7d0bf-operator-scripts\") pod \"nova-cell0-db-create-8xgdx\" (UID: \"c958c44b-3580-4d17-9b18-65c93cd7d0bf\") " pod="openstack/nova-cell0-db-create-8xgdx" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.785962 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnczk\" (UniqueName: \"kubernetes.io/projected/c958c44b-3580-4d17-9b18-65c93cd7d0bf-kube-api-access-cnczk\") pod \"nova-cell0-db-create-8xgdx\" (UID: \"c958c44b-3580-4d17-9b18-65c93cd7d0bf\") " pod="openstack/nova-cell0-db-create-8xgdx" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.786016 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04872869-17a2-4cb6-9222-3b265dddf350-operator-scripts\") pod \"nova-cell1-db-create-bz7dn\" (UID: \"04872869-17a2-4cb6-9222-3b265dddf350\") " pod="openstack/nova-cell1-db-create-bz7dn" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.787058 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c958c44b-3580-4d17-9b18-65c93cd7d0bf-operator-scripts\") pod \"nova-cell0-db-create-8xgdx\" (UID: \"c958c44b-3580-4d17-9b18-65c93cd7d0bf\") " pod="openstack/nova-cell0-db-create-8xgdx" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.808176 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnczk\" (UniqueName: \"kubernetes.io/projected/c958c44b-3580-4d17-9b18-65c93cd7d0bf-kube-api-access-cnczk\") pod \"nova-cell0-db-create-8xgdx\" (UID: \"c958c44b-3580-4d17-9b18-65c93cd7d0bf\") " pod="openstack/nova-cell0-db-create-8xgdx" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.887664 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04872869-17a2-4cb6-9222-3b265dddf350-operator-scripts\") pod \"nova-cell1-db-create-bz7dn\" (UID: \"04872869-17a2-4cb6-9222-3b265dddf350\") " pod="openstack/nova-cell1-db-create-bz7dn" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.887763 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5067344e-572d-4b94-af89-552ce31e0f1f-operator-scripts\") pod \"nova-api-f7f2-account-create-update-jvgrz\" (UID: 
\"5067344e-572d-4b94-af89-552ce31e0f1f\") " pod="openstack/nova-api-f7f2-account-create-update-jvgrz" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.887799 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b5l4\" (UniqueName: \"kubernetes.io/projected/5067344e-572d-4b94-af89-552ce31e0f1f-kube-api-access-8b5l4\") pod \"nova-api-f7f2-account-create-update-jvgrz\" (UID: \"5067344e-572d-4b94-af89-552ce31e0f1f\") " pod="openstack/nova-api-f7f2-account-create-update-jvgrz" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.887822 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvcz5\" (UniqueName: \"kubernetes.io/projected/04872869-17a2-4cb6-9222-3b265dddf350-kube-api-access-vvcz5\") pod \"nova-cell1-db-create-bz7dn\" (UID: \"04872869-17a2-4cb6-9222-3b265dddf350\") " pod="openstack/nova-cell1-db-create-bz7dn" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.888671 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04872869-17a2-4cb6-9222-3b265dddf350-operator-scripts\") pod \"nova-cell1-db-create-bz7dn\" (UID: \"04872869-17a2-4cb6-9222-3b265dddf350\") " pod="openstack/nova-cell1-db-create-bz7dn" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.913312 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvcz5\" (UniqueName: \"kubernetes.io/projected/04872869-17a2-4cb6-9222-3b265dddf350-kube-api-access-vvcz5\") pod \"nova-cell1-db-create-bz7dn\" (UID: \"04872869-17a2-4cb6-9222-3b265dddf350\") " pod="openstack/nova-cell1-db-create-bz7dn" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.919391 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1ea0-account-create-update-mgbzn"] Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.920843 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1ea0-account-create-update-mgbzn" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.925618 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.960627 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1ea0-account-create-update-mgbzn"] Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.976253 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8xgdx" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.996059 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5067344e-572d-4b94-af89-552ce31e0f1f-operator-scripts\") pod \"nova-api-f7f2-account-create-update-jvgrz\" (UID: \"5067344e-572d-4b94-af89-552ce31e0f1f\") " pod="openstack/nova-api-f7f2-account-create-update-jvgrz" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.996141 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b5l4\" (UniqueName: \"kubernetes.io/projected/5067344e-572d-4b94-af89-552ce31e0f1f-kube-api-access-8b5l4\") pod \"nova-api-f7f2-account-create-update-jvgrz\" (UID: \"5067344e-572d-4b94-af89-552ce31e0f1f\") " pod="openstack/nova-api-f7f2-account-create-update-jvgrz" Dec 06 07:26:27 crc kubenswrapper[4895]: I1206 07:26:27.997352 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5067344e-572d-4b94-af89-552ce31e0f1f-operator-scripts\") pod \"nova-api-f7f2-account-create-update-jvgrz\" (UID: \"5067344e-572d-4b94-af89-552ce31e0f1f\") " pod="openstack/nova-api-f7f2-account-create-update-jvgrz" Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.021978 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bz7dn" Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.037647 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b5l4\" (UniqueName: \"kubernetes.io/projected/5067344e-572d-4b94-af89-552ce31e0f1f-kube-api-access-8b5l4\") pod \"nova-api-f7f2-account-create-update-jvgrz\" (UID: \"5067344e-572d-4b94-af89-552ce31e0f1f\") " pod="openstack/nova-api-f7f2-account-create-update-jvgrz" Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.043123 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-f7f2-account-create-update-jvgrz" Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.071583 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:26:28 crc kubenswrapper[4895]: E1206 07:26:28.071885 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.098662 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wclxh\" (UniqueName: \"kubernetes.io/projected/fb0479a8-2861-4d5e-a0ae-e7629dede891-kube-api-access-wclxh\") pod \"nova-cell0-1ea0-account-create-update-mgbzn\" (UID: \"fb0479a8-2861-4d5e-a0ae-e7629dede891\") " pod="openstack/nova-cell0-1ea0-account-create-update-mgbzn" Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.098781 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb0479a8-2861-4d5e-a0ae-e7629dede891-operator-scripts\") pod \"nova-cell0-1ea0-account-create-update-mgbzn\" (UID: \"fb0479a8-2861-4d5e-a0ae-e7629dede891\") " pod="openstack/nova-cell0-1ea0-account-create-update-mgbzn" Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.158001 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-8afe-account-create-update-6m7xh"] Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.159264 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8afe-account-create-update-6m7xh" Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.169645 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.178762 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8afe-account-create-update-6m7xh"] Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.202829 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wclxh\" (UniqueName: \"kubernetes.io/projected/fb0479a8-2861-4d5e-a0ae-e7629dede891-kube-api-access-wclxh\") pod \"nova-cell0-1ea0-account-create-update-mgbzn\" (UID: \"fb0479a8-2861-4d5e-a0ae-e7629dede891\") " pod="openstack/nova-cell0-1ea0-account-create-update-mgbzn" Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.202994 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb0479a8-2861-4d5e-a0ae-e7629dede891-operator-scripts\") pod \"nova-cell0-1ea0-account-create-update-mgbzn\" (UID: \"fb0479a8-2861-4d5e-a0ae-e7629dede891\") " pod="openstack/nova-cell0-1ea0-account-create-update-mgbzn" Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.207564 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb0479a8-2861-4d5e-a0ae-e7629dede891-operator-scripts\") pod \"nova-cell0-1ea0-account-create-update-mgbzn\" (UID: \"fb0479a8-2861-4d5e-a0ae-e7629dede891\") " pod="openstack/nova-cell0-1ea0-account-create-update-mgbzn" Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.269130 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wclxh\" (UniqueName: \"kubernetes.io/projected/fb0479a8-2861-4d5e-a0ae-e7629dede891-kube-api-access-wclxh\") pod \"nova-cell0-1ea0-account-create-update-mgbzn\" (UID: \"fb0479a8-2861-4d5e-a0ae-e7629dede891\") " pod="openstack/nova-cell0-1ea0-account-create-update-mgbzn" Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.304317 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db74w\" (UniqueName: \"kubernetes.io/projected/4337be07-550a-4421-917f-5969980e230d-kube-api-access-db74w\") pod \"nova-cell1-8afe-account-create-update-6m7xh\" (UID: \"4337be07-550a-4421-917f-5969980e230d\") " pod="openstack/nova-cell1-8afe-account-create-update-6m7xh" Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.304523 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4337be07-550a-4421-917f-5969980e230d-operator-scripts\") pod \"nova-cell1-8afe-account-create-update-6m7xh\" (UID: \"4337be07-550a-4421-917f-5969980e230d\") " pod="openstack/nova-cell1-8afe-account-create-update-6m7xh" Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.318341 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1ea0-account-create-update-mgbzn" Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.406367 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db74w\" (UniqueName: \"kubernetes.io/projected/4337be07-550a-4421-917f-5969980e230d-kube-api-access-db74w\") pod \"nova-cell1-8afe-account-create-update-6m7xh\" (UID: \"4337be07-550a-4421-917f-5969980e230d\") " pod="openstack/nova-cell1-8afe-account-create-update-6m7xh" Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.406520 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4337be07-550a-4421-917f-5969980e230d-operator-scripts\") pod \"nova-cell1-8afe-account-create-update-6m7xh\" (UID: \"4337be07-550a-4421-917f-5969980e230d\") " pod="openstack/nova-cell1-8afe-account-create-update-6m7xh" Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.407222 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4337be07-550a-4421-917f-5969980e230d-operator-scripts\") pod \"nova-cell1-8afe-account-create-update-6m7xh\" (UID: \"4337be07-550a-4421-917f-5969980e230d\") " pod="openstack/nova-cell1-8afe-account-create-update-6m7xh" Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.426253 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db74w\" (UniqueName: \"kubernetes.io/projected/4337be07-550a-4421-917f-5969980e230d-kube-api-access-db74w\") pod \"nova-cell1-8afe-account-create-update-6m7xh\" (UID: \"4337be07-550a-4421-917f-5969980e230d\") " pod="openstack/nova-cell1-8afe-account-create-update-6m7xh" Dec 06 07:26:28 crc kubenswrapper[4895]: I1206 07:26:28.485758 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8afe-account-create-update-6m7xh" Dec 06 07:26:29 crc kubenswrapper[4895]: I1206 07:26:29.130267 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mll22" Dec 06 07:26:29 crc kubenswrapper[4895]: I1206 07:26:29.130825 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mll22" Dec 06 07:26:29 crc kubenswrapper[4895]: I1206 07:26:29.248319 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mll22" Dec 06 07:26:29 crc kubenswrapper[4895]: I1206 07:26:29.477829 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1ea0-account-create-update-mgbzn"] Dec 06 07:26:29 crc kubenswrapper[4895]: I1206 07:26:29.607519 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8afe-account-create-update-6m7xh"] Dec 06 07:26:29 crc kubenswrapper[4895]: I1206 07:26:29.640533 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vphq2"] Dec 06 07:26:29 crc kubenswrapper[4895]: I1206 07:26:29.651095 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bz7dn"] Dec 06 07:26:29 crc kubenswrapper[4895]: W1206 07:26:29.663409 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2a14678_48ea_424e_9a50_dd28f69b82a3.slice/crio-cbcf2eccae4a311e715e12d3fda1f4f8bf5cf237ae383f2d760685128c5af884 WatchSource:0}: Error finding container cbcf2eccae4a311e715e12d3fda1f4f8bf5cf237ae383f2d760685128c5af884: Status 404 returned error can't find the container with id cbcf2eccae4a311e715e12d3fda1f4f8bf5cf237ae383f2d760685128c5af884 Dec 06 07:26:29 crc kubenswrapper[4895]: I1206 07:26:29.756095 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8xgdx"] Dec 06 07:26:29 crc kubenswrapper[4895]: W1206 07:26:29.760045 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc958c44b_3580_4d17_9b18_65c93cd7d0bf.slice/crio-c4d9e75da66406d8bde2d2e249492c8609c4ef49e380e3ac61a8fdaf0b8c387c WatchSource:0}: Error finding container c4d9e75da66406d8bde2d2e249492c8609c4ef49e380e3ac61a8fdaf0b8c387c: Status 404 returned error can't find the container with id c4d9e75da66406d8bde2d2e249492c8609c4ef49e380e3ac61a8fdaf0b8c387c Dec 06 07:26:29 crc kubenswrapper[4895]: I1206 07:26:29.798347 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86c6eb7b-1a53-4a0a-a777-30d62c457740","Type":"ContainerStarted","Data":"29cabefc071a35c0b9ceaafd902fc62e3bcaf585e54b24edc6f4d0761ca0fa68"} Dec 06 07:26:29 crc kubenswrapper[4895]: I1206 07:26:29.798904 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f7f2-account-create-update-jvgrz"] Dec 06 07:26:29 crc kubenswrapper[4895]: I1206 07:26:29.801492 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8afe-account-create-update-6m7xh" event={"ID":"4337be07-550a-4421-917f-5969980e230d","Type":"ContainerStarted","Data":"c494ac25c89885f4c62d14d3ce86626e4ab3a2ae9cabf5cd66e8654248fa9523"} Dec 06 07:26:29 crc kubenswrapper[4895]: I1206 07:26:29.803249 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8xgdx" 
event={"ID":"c958c44b-3580-4d17-9b18-65c93cd7d0bf","Type":"ContainerStarted","Data":"c4d9e75da66406d8bde2d2e249492c8609c4ef49e380e3ac61a8fdaf0b8c387c"} Dec 06 07:26:29 crc kubenswrapper[4895]: I1206 07:26:29.805824 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bz7dn" event={"ID":"04872869-17a2-4cb6-9222-3b265dddf350","Type":"ContainerStarted","Data":"0b04f39b4ca79f8178e6b19db49e05d12803957aa2e649ceca94d02872e1667f"} Dec 06 07:26:29 crc kubenswrapper[4895]: I1206 07:26:29.810198 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1ea0-account-create-update-mgbzn" event={"ID":"fb0479a8-2861-4d5e-a0ae-e7629dede891","Type":"ContainerStarted","Data":"0dd3919b08cd504fe31d2914a9d8f01aebf2bfb314eed4564522f23bc7caa8a4"} Dec 06 07:26:29 crc kubenswrapper[4895]: I1206 07:26:29.817371 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vphq2" event={"ID":"e2a14678-48ea-424e-9a50-dd28f69b82a3","Type":"ContainerStarted","Data":"cbcf2eccae4a311e715e12d3fda1f4f8bf5cf237ae383f2d760685128c5af884"} Dec 06 07:26:30 crc kubenswrapper[4895]: I1206 07:26:30.795037 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:26:30 crc kubenswrapper[4895]: I1206 07:26:30.826189 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1ea0-account-create-update-mgbzn" event={"ID":"fb0479a8-2861-4d5e-a0ae-e7629dede891","Type":"ContainerStarted","Data":"397c7d40c84d73a555670d7ed3f53e1a02ac92739a5de17f75e1fae47255a519"} Dec 06 07:26:30 crc kubenswrapper[4895]: I1206 07:26:30.828507 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f7f2-account-create-update-jvgrz" event={"ID":"5067344e-572d-4b94-af89-552ce31e0f1f","Type":"ContainerStarted","Data":"6d71ca378a3c1e74af729fd27878d19f294c261d5242585a451075a6c54a103c"} Dec 06 07:26:31 crc kubenswrapper[4895]: I1206 07:26:31.842178 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vphq2" event={"ID":"e2a14678-48ea-424e-9a50-dd28f69b82a3","Type":"ContainerStarted","Data":"24e68144ea0002544c8fad2dbb864f3c9770ec21bd3a4a84fa07c7a03cadecb8"} Dec 06 07:26:31 crc kubenswrapper[4895]: I1206 07:26:31.849114 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8afe-account-create-update-6m7xh" event={"ID":"4337be07-550a-4421-917f-5969980e230d","Type":"ContainerStarted","Data":"206ce64428b3322b057949fbf79e35f8ecf1a3997fd19513309ae7d4151a96b1"} Dec 06 07:26:31 crc kubenswrapper[4895]: I1206 07:26:31.851602 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f7f2-account-create-update-jvgrz" event={"ID":"5067344e-572d-4b94-af89-552ce31e0f1f","Type":"ContainerStarted","Data":"a217507f1b7892189cb3a36cc06623b313c5a0733e526b91471aa615aa818384"} Dec 06 07:26:31 crc kubenswrapper[4895]: I1206 07:26:31.853848 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8xgdx" event={"ID":"c958c44b-3580-4d17-9b18-65c93cd7d0bf","Type":"ContainerStarted","Data":"bf1680a564e39a2f8114136574b32fb4a7481bf890a4653e7cd26f7fcd065e6b"} Dec 06 07:26:31 crc kubenswrapper[4895]: I1206 07:26:31.855592 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bz7dn" event={"ID":"04872869-17a2-4cb6-9222-3b265dddf350","Type":"ContainerStarted","Data":"3ac20753e54c465eb6b6f8c1c10daf7d4a84a3bfb7673fb4687c12d0f81745ae"} Dec 06 07:26:31 crc kubenswrapper[4895]: I1206 
07:26:31.871605 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-vphq2" podStartSLOduration=4.8715813489999995 podStartE2EDuration="4.871581349s" podCreationTimestamp="2025-12-06 07:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:26:31.858033695 +0000 UTC m=+1754.259422585" watchObservedRunningTime="2025-12-06 07:26:31.871581349 +0000 UTC m=+1754.272970229" Dec 06 07:26:31 crc kubenswrapper[4895]: I1206 07:26:31.890284 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-8xgdx" podStartSLOduration=4.890261591 podStartE2EDuration="4.890261591s" podCreationTimestamp="2025-12-06 07:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:26:31.881009293 +0000 UTC m=+1754.282398153" watchObservedRunningTime="2025-12-06 07:26:31.890261591 +0000 UTC m=+1754.291650461" Dec 06 07:26:31 crc kubenswrapper[4895]: I1206 07:26:31.911297 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-1ea0-account-create-update-mgbzn" podStartSLOduration=4.911223594 podStartE2EDuration="4.911223594s" podCreationTimestamp="2025-12-06 07:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:26:31.899321755 +0000 UTC m=+1754.300710635" watchObservedRunningTime="2025-12-06 07:26:31.911223594 +0000 UTC m=+1754.312612464" Dec 06 07:26:31 crc kubenswrapper[4895]: I1206 07:26:31.934435 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-8afe-account-create-update-6m7xh" podStartSLOduration=3.934409408 podStartE2EDuration="3.934409408s" podCreationTimestamp="2025-12-06 07:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:26:31.933794891 +0000 UTC m=+1754.335183781" watchObservedRunningTime="2025-12-06 07:26:31.934409408 +0000 UTC m=+1754.335798278" Dec 06 07:26:31 crc kubenswrapper[4895]: I1206 07:26:31.936940 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-f7f2-account-create-update-jvgrz" podStartSLOduration=4.936927166 podStartE2EDuration="4.936927166s" podCreationTimestamp="2025-12-06 07:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:26:31.920013161 +0000 UTC m=+1754.321402031" watchObservedRunningTime="2025-12-06 07:26:31.936927166 +0000 UTC m=+1754.338316036" Dec 06 07:26:32 crc kubenswrapper[4895]: I1206 07:26:32.002729 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-bz7dn" podStartSLOduration=5.002707563 podStartE2EDuration="5.002707563s" podCreationTimestamp="2025-12-06 07:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:26:31.997582295 +0000 UTC m=+1754.398971165" watchObservedRunningTime="2025-12-06 07:26:32.002707563 +0000 UTC m=+1754.404096433" Dec 06 07:26:32 crc kubenswrapper[4895]: I1206 07:26:32.865914 4895 generic.go:334] "Generic (PLEG): container finished" 
podID="04872869-17a2-4cb6-9222-3b265dddf350" containerID="3ac20753e54c465eb6b6f8c1c10daf7d4a84a3bfb7673fb4687c12d0f81745ae" exitCode=0 Dec 06 07:26:32 crc kubenswrapper[4895]: I1206 07:26:32.865982 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bz7dn" event={"ID":"04872869-17a2-4cb6-9222-3b265dddf350","Type":"ContainerDied","Data":"3ac20753e54c465eb6b6f8c1c10daf7d4a84a3bfb7673fb4687c12d0f81745ae"} Dec 06 07:26:32 crc kubenswrapper[4895]: I1206 07:26:32.875574 4895 generic.go:334] "Generic (PLEG): container finished" podID="64e4925c-937e-47d3-8aab-bfc524875263" containerID="c6c565e6ef537443dd1e30b00338bc8dffca188e5578c660d61309692af2470c" exitCode=0 Dec 06 07:26:32 crc kubenswrapper[4895]: I1206 07:26:32.875710 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"64e4925c-937e-47d3-8aab-bfc524875263","Type":"ContainerDied","Data":"c6c565e6ef537443dd1e30b00338bc8dffca188e5578c660d61309692af2470c"} Dec 06 07:26:35 crc kubenswrapper[4895]: I1206 07:26:35.276466 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bz7dn" Dec 06 07:26:35 crc kubenswrapper[4895]: I1206 07:26:35.360525 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvcz5\" (UniqueName: \"kubernetes.io/projected/04872869-17a2-4cb6-9222-3b265dddf350-kube-api-access-vvcz5\") pod \"04872869-17a2-4cb6-9222-3b265dddf350\" (UID: \"04872869-17a2-4cb6-9222-3b265dddf350\") " Dec 06 07:26:35 crc kubenswrapper[4895]: I1206 07:26:35.360826 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04872869-17a2-4cb6-9222-3b265dddf350-operator-scripts\") pod \"04872869-17a2-4cb6-9222-3b265dddf350\" (UID: \"04872869-17a2-4cb6-9222-3b265dddf350\") " Dec 06 07:26:35 crc kubenswrapper[4895]: I1206 07:26:35.361978 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04872869-17a2-4cb6-9222-3b265dddf350-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "04872869-17a2-4cb6-9222-3b265dddf350" (UID: "04872869-17a2-4cb6-9222-3b265dddf350"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:26:35 crc kubenswrapper[4895]: I1206 07:26:35.399734 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04872869-17a2-4cb6-9222-3b265dddf350-kube-api-access-vvcz5" (OuterVolumeSpecName: "kube-api-access-vvcz5") pod "04872869-17a2-4cb6-9222-3b265dddf350" (UID: "04872869-17a2-4cb6-9222-3b265dddf350"). InnerVolumeSpecName "kube-api-access-vvcz5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:26:35 crc kubenswrapper[4895]: I1206 07:26:35.463979 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04872869-17a2-4cb6-9222-3b265dddf350-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:35 crc kubenswrapper[4895]: I1206 07:26:35.464023 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvcz5\" (UniqueName: \"kubernetes.io/projected/04872869-17a2-4cb6-9222-3b265dddf350-kube-api-access-vvcz5\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:35 crc kubenswrapper[4895]: I1206 07:26:35.905290 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bz7dn" event={"ID":"04872869-17a2-4cb6-9222-3b265dddf350","Type":"ContainerDied","Data":"0b04f39b4ca79f8178e6b19db49e05d12803957aa2e649ceca94d02872e1667f"} Dec 06 07:26:35 crc kubenswrapper[4895]: I1206 07:26:35.905333 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b04f39b4ca79f8178e6b19db49e05d12803957aa2e649ceca94d02872e1667f" Dec 06 07:26:35 crc kubenswrapper[4895]: I1206 07:26:35.905410 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bz7dn" Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.146281 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.202728 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45qb8\" (UniqueName: \"kubernetes.io/projected/64e4925c-937e-47d3-8aab-bfc524875263-kube-api-access-45qb8\") pod \"64e4925c-937e-47d3-8aab-bfc524875263\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.202826 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-config-data-custom\") pod \"64e4925c-937e-47d3-8aab-bfc524875263\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.202978 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-config-data\") pod \"64e4925c-937e-47d3-8aab-bfc524875263\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.203017 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64e4925c-937e-47d3-8aab-bfc524875263-etc-machine-id\") pod \"64e4925c-937e-47d3-8aab-bfc524875263\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.203125 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-scripts\") pod \"64e4925c-937e-47d3-8aab-bfc524875263\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.203187 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-combined-ca-bundle\") pod 
\"64e4925c-937e-47d3-8aab-bfc524875263\" (UID: \"64e4925c-937e-47d3-8aab-bfc524875263\") " Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.205929 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64e4925c-937e-47d3-8aab-bfc524875263-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "64e4925c-937e-47d3-8aab-bfc524875263" (UID: "64e4925c-937e-47d3-8aab-bfc524875263"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.212737 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64e4925c-937e-47d3-8aab-bfc524875263-kube-api-access-45qb8" (OuterVolumeSpecName: "kube-api-access-45qb8") pod "64e4925c-937e-47d3-8aab-bfc524875263" (UID: "64e4925c-937e-47d3-8aab-bfc524875263"). InnerVolumeSpecName "kube-api-access-45qb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.214577 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "64e4925c-937e-47d3-8aab-bfc524875263" (UID: "64e4925c-937e-47d3-8aab-bfc524875263"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.231704 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-scripts" (OuterVolumeSpecName: "scripts") pod "64e4925c-937e-47d3-8aab-bfc524875263" (UID: "64e4925c-937e-47d3-8aab-bfc524875263"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.270358 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64e4925c-937e-47d3-8aab-bfc524875263" (UID: "64e4925c-937e-47d3-8aab-bfc524875263"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.305677 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.305714 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45qb8\" (UniqueName: \"kubernetes.io/projected/64e4925c-937e-47d3-8aab-bfc524875263-kube-api-access-45qb8\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.305730 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.305742 4895 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64e4925c-937e-47d3-8aab-bfc524875263-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.305754 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.330656 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-config-data" (OuterVolumeSpecName: "config-data") pod "64e4925c-937e-47d3-8aab-bfc524875263" (UID: "64e4925c-937e-47d3-8aab-bfc524875263"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.408422 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64e4925c-937e-47d3-8aab-bfc524875263-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.932407 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"64e4925c-937e-47d3-8aab-bfc524875263","Type":"ContainerDied","Data":"0520cd2f7198765f1dfb7d6aa9e5355c5ff72ad7e8f416de075f3d895de97032"} Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.932916 4895 scope.go:117] "RemoveContainer" containerID="b95fa3b7cc14cba53cd87eef4fb1ebfb56ed80632f677713863e5dec554e2139" Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.932448 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.934879 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86c6eb7b-1a53-4a0a-a777-30d62c457740","Type":"ContainerStarted","Data":"6a731fb3438a9ed734147491ca30a00e90b25d13261013d5b2e44ceffa056e6f"} Dec 06 07:26:37 crc kubenswrapper[4895]: I1206 07:26:37.960946 4895 scope.go:117] "RemoveContainer" containerID="c6c565e6ef537443dd1e30b00338bc8dffca188e5578c660d61309692af2470c" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:37.999986 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.014296 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.037831 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 07:26:38 crc kubenswrapper[4895]: E1206 07:26:38.038500 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04872869-17a2-4cb6-9222-3b265dddf350" containerName="mariadb-database-create" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.038610 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="04872869-17a2-4cb6-9222-3b265dddf350" containerName="mariadb-database-create" Dec 06 07:26:38 crc kubenswrapper[4895]: E1206 07:26:38.038685 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64e4925c-937e-47d3-8aab-bfc524875263" containerName="cinder-scheduler" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.038740 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="64e4925c-937e-47d3-8aab-bfc524875263" containerName="cinder-scheduler" Dec 06 07:26:38 crc kubenswrapper[4895]: E1206 07:26:38.038795 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64e4925c-937e-47d3-8aab-bfc524875263" containerName="probe" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.038842 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="64e4925c-937e-47d3-8aab-bfc524875263" containerName="probe" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.039063 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="64e4925c-937e-47d3-8aab-bfc524875263" containerName="probe" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.039128 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="64e4925c-937e-47d3-8aab-bfc524875263" containerName="cinder-scheduler" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.039193 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="04872869-17a2-4cb6-9222-3b265dddf350" containerName="mariadb-database-create" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.040218 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.043301 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.078924 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64e4925c-937e-47d3-8aab-bfc524875263" path="/var/lib/kubelet/pods/64e4925c-937e-47d3-8aab-bfc524875263/volumes" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.079656 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.123721 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-scripts\") pod \"cinder-scheduler-0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " pod="openstack/cinder-scheduler-0" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.123802 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-config-data\") pod \"cinder-scheduler-0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " pod="openstack/cinder-scheduler-0" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.123936 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " pod="openstack/cinder-scheduler-0" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.123967 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csmvs\" (UniqueName: \"kubernetes.io/projected/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-kube-api-access-csmvs\") pod \"cinder-scheduler-0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " pod="openstack/cinder-scheduler-0" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.124405 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " pod="openstack/cinder-scheduler-0" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.124883 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " pod="openstack/cinder-scheduler-0" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.226883 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " pod="openstack/cinder-scheduler-0" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.226940 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " pod="openstack/cinder-scheduler-0" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.226992 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-scripts\") pod \"cinder-scheduler-0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " pod="openstack/cinder-scheduler-0" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.227015 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-config-data\") pod \"cinder-scheduler-0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " pod="openstack/cinder-scheduler-0" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.227050 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " pod="openstack/cinder-scheduler-0" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.227078 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csmvs\" (UniqueName: \"kubernetes.io/projected/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-kube-api-access-csmvs\") pod \"cinder-scheduler-0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " pod="openstack/cinder-scheduler-0" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.227568 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " pod="openstack/cinder-scheduler-0" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.232042 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " pod="openstack/cinder-scheduler-0" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.232504 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-scripts\") pod \"cinder-scheduler-0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " pod="openstack/cinder-scheduler-0" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.232697 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-config-data\") pod \"cinder-scheduler-0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " pod="openstack/cinder-scheduler-0" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.234161 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " pod="openstack/cinder-scheduler-0" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.250282 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-csmvs\" (UniqueName: \"kubernetes.io/projected/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-kube-api-access-csmvs\") pod \"cinder-scheduler-0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " pod="openstack/cinder-scheduler-0" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.390203 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 07:26:38 crc kubenswrapper[4895]: I1206 07:26:38.975022 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 07:26:38 crc kubenswrapper[4895]: W1206 07:26:38.979221 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8969e2c_9cc0_40a6_8fee_65d93a9856b0.slice/crio-2534378b684637aebc6ff80110820bf74c2b365e2603446c2429642cda86afa4 WatchSource:0}: Error finding container 2534378b684637aebc6ff80110820bf74c2b365e2603446c2429642cda86afa4: Status 404 returned error can't find the container with id 2534378b684637aebc6ff80110820bf74c2b365e2603446c2429642cda86afa4 Dec 06 07:26:39 crc kubenswrapper[4895]: I1206 07:26:39.182409 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mll22" Dec 06 07:26:39 crc kubenswrapper[4895]: I1206 07:26:39.237556 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mll22"] Dec 06 07:26:39 crc kubenswrapper[4895]: I1206 07:26:39.955825 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c8969e2c-9cc0-40a6-8fee-65d93a9856b0","Type":"ContainerStarted","Data":"2534378b684637aebc6ff80110820bf74c2b365e2603446c2429642cda86afa4"} Dec 06 07:26:39 crc kubenswrapper[4895]: I1206 07:26:39.956017 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mll22" podUID="15ddd748-caf4-44ee-a574-50249d2ac07d" containerName="registry-server" containerID="cri-o://c4e6400012fd41c000a1a000d17b60f3ab22d2556dad691bfea0f6714b729585" gracePeriod=2 Dec 06 07:26:40 crc kubenswrapper[4895]: I1206 07:26:40.967887 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c8969e2c-9cc0-40a6-8fee-65d93a9856b0","Type":"ContainerStarted","Data":"81a5ab0803db27cf4248b24bac25718805b76eff190f565fd41b120d159881aa"} Dec 06 07:26:41 crc kubenswrapper[4895]: I1206 07:26:41.051236 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:26:41 crc kubenswrapper[4895]: E1206 07:26:41.051969 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:26:41 crc kubenswrapper[4895]: I1206 07:26:41.941905 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mll22" Dec 06 07:26:41 crc kubenswrapper[4895]: I1206 07:26:41.994679 4895 generic.go:334] "Generic (PLEG): container finished" podID="15ddd748-caf4-44ee-a574-50249d2ac07d" containerID="c4e6400012fd41c000a1a000d17b60f3ab22d2556dad691bfea0f6714b729585" exitCode=0 Dec 06 07:26:41 crc kubenswrapper[4895]: I1206 07:26:41.994719 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mll22" event={"ID":"15ddd748-caf4-44ee-a574-50249d2ac07d","Type":"ContainerDied","Data":"c4e6400012fd41c000a1a000d17b60f3ab22d2556dad691bfea0f6714b729585"} Dec 06 07:26:41 crc kubenswrapper[4895]: I1206 07:26:41.994746 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mll22" event={"ID":"15ddd748-caf4-44ee-a574-50249d2ac07d","Type":"ContainerDied","Data":"c7a9846586180bb76c9e150e1c84b9715099e765e4c978fc70d45a4403e49f1b"} Dec 06 07:26:41 crc kubenswrapper[4895]: I1206 07:26:41.994762 4895 scope.go:117] "RemoveContainer" containerID="c4e6400012fd41c000a1a000d17b60f3ab22d2556dad691bfea0f6714b729585" Dec 06 07:26:41 crc kubenswrapper[4895]: I1206 07:26:41.994896 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mll22" Dec 06 07:26:42 crc kubenswrapper[4895]: I1206 07:26:42.027292 4895 scope.go:117] "RemoveContainer" containerID="6425a64adedc2bf8823128f0015852f900441d87563bb74e30c96e5d5329a675" Dec 06 07:26:42 crc kubenswrapper[4895]: I1206 07:26:42.046411 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsnq5\" (UniqueName: \"kubernetes.io/projected/15ddd748-caf4-44ee-a574-50249d2ac07d-kube-api-access-fsnq5\") pod \"15ddd748-caf4-44ee-a574-50249d2ac07d\" (UID: \"15ddd748-caf4-44ee-a574-50249d2ac07d\") " Dec 06 07:26:42 crc kubenswrapper[4895]: I1206 07:26:42.046742 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15ddd748-caf4-44ee-a574-50249d2ac07d-catalog-content\") pod \"15ddd748-caf4-44ee-a574-50249d2ac07d\" (UID: \"15ddd748-caf4-44ee-a574-50249d2ac07d\") " Dec 06 07:26:42 crc kubenswrapper[4895]: I1206 07:26:42.047003 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15ddd748-caf4-44ee-a574-50249d2ac07d-utilities\") pod \"15ddd748-caf4-44ee-a574-50249d2ac07d\" (UID: \"15ddd748-caf4-44ee-a574-50249d2ac07d\") " Dec 06 07:26:42 crc kubenswrapper[4895]: I1206 07:26:42.047996 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15ddd748-caf4-44ee-a574-50249d2ac07d-utilities" (OuterVolumeSpecName: "utilities") pod "15ddd748-caf4-44ee-a574-50249d2ac07d" (UID: "15ddd748-caf4-44ee-a574-50249d2ac07d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:26:42 crc kubenswrapper[4895]: I1206 07:26:42.053743 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15ddd748-caf4-44ee-a574-50249d2ac07d-kube-api-access-fsnq5" (OuterVolumeSpecName: "kube-api-access-fsnq5") pod "15ddd748-caf4-44ee-a574-50249d2ac07d" (UID: "15ddd748-caf4-44ee-a574-50249d2ac07d"). InnerVolumeSpecName "kube-api-access-fsnq5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:26:42 crc kubenswrapper[4895]: I1206 07:26:42.054199 4895 scope.go:117] "RemoveContainer" containerID="955717330e0497b0a0471b98a63f4d18d0de081897d8774d5532b38850410879" Dec 06 07:26:42 crc kubenswrapper[4895]: I1206 07:26:42.075779 4895 scope.go:117] "RemoveContainer" containerID="c4e6400012fd41c000a1a000d17b60f3ab22d2556dad691bfea0f6714b729585" Dec 06 07:26:42 crc kubenswrapper[4895]: E1206 07:26:42.076214 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4e6400012fd41c000a1a000d17b60f3ab22d2556dad691bfea0f6714b729585\": container with ID starting with c4e6400012fd41c000a1a000d17b60f3ab22d2556dad691bfea0f6714b729585 not found: ID does not exist" containerID="c4e6400012fd41c000a1a000d17b60f3ab22d2556dad691bfea0f6714b729585" Dec 06 07:26:42 crc kubenswrapper[4895]: I1206 07:26:42.076245 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4e6400012fd41c000a1a000d17b60f3ab22d2556dad691bfea0f6714b729585"} err="failed to get container status \"c4e6400012fd41c000a1a000d17b60f3ab22d2556dad691bfea0f6714b729585\": rpc error: code = NotFound desc = could not find container \"c4e6400012fd41c000a1a000d17b60f3ab22d2556dad691bfea0f6714b729585\": container with ID starting with c4e6400012fd41c000a1a000d17b60f3ab22d2556dad691bfea0f6714b729585 not found: ID does not exist" Dec 06 07:26:42 crc kubenswrapper[4895]: I1206 07:26:42.076266 4895 scope.go:117] "RemoveContainer" containerID="6425a64adedc2bf8823128f0015852f900441d87563bb74e30c96e5d5329a675" Dec 06 07:26:42 crc kubenswrapper[4895]: E1206 07:26:42.076481 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6425a64adedc2bf8823128f0015852f900441d87563bb74e30c96e5d5329a675\": container with ID starting with 6425a64adedc2bf8823128f0015852f900441d87563bb74e30c96e5d5329a675 not found: ID does not exist" containerID="6425a64adedc2bf8823128f0015852f900441d87563bb74e30c96e5d5329a675" Dec 06 07:26:42 crc kubenswrapper[4895]: I1206 07:26:42.076509 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6425a64adedc2bf8823128f0015852f900441d87563bb74e30c96e5d5329a675"} err="failed to get container status \"6425a64adedc2bf8823128f0015852f900441d87563bb74e30c96e5d5329a675\": rpc error: code = NotFound desc = could not find container \"6425a64adedc2bf8823128f0015852f900441d87563bb74e30c96e5d5329a675\": container with ID starting with 6425a64adedc2bf8823128f0015852f900441d87563bb74e30c96e5d5329a675 not found: ID does not exist" Dec 06 07:26:42 crc kubenswrapper[4895]: I1206 07:26:42.076527 4895 scope.go:117] "RemoveContainer" containerID="955717330e0497b0a0471b98a63f4d18d0de081897d8774d5532b38850410879" Dec 06 07:26:42 crc kubenswrapper[4895]: E1206 07:26:42.076768 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"955717330e0497b0a0471b98a63f4d18d0de081897d8774d5532b38850410879\": container with ID starting with 955717330e0497b0a0471b98a63f4d18d0de081897d8774d5532b38850410879 not found: ID does not exist" containerID="955717330e0497b0a0471b98a63f4d18d0de081897d8774d5532b38850410879" Dec 06 07:26:42 crc kubenswrapper[4895]: I1206 07:26:42.076790 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"955717330e0497b0a0471b98a63f4d18d0de081897d8774d5532b38850410879"} err="failed to get container status \"955717330e0497b0a0471b98a63f4d18d0de081897d8774d5532b38850410879\": rpc error: code = NotFound desc = could not find container \"955717330e0497b0a0471b98a63f4d18d0de081897d8774d5532b38850410879\": container with ID starting with 955717330e0497b0a0471b98a63f4d18d0de081897d8774d5532b38850410879 not found: ID does not exist" Dec 06 07:26:42 crc kubenswrapper[4895]: I1206 07:26:42.096904 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15ddd748-caf4-44ee-a574-50249d2ac07d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15ddd748-caf4-44ee-a574-50249d2ac07d" (UID: "15ddd748-caf4-44ee-a574-50249d2ac07d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:26:42 crc kubenswrapper[4895]: I1206 07:26:42.148994 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15ddd748-caf4-44ee-a574-50249d2ac07d-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:42 crc kubenswrapper[4895]: I1206 07:26:42.149043 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsnq5\" (UniqueName: \"kubernetes.io/projected/15ddd748-caf4-44ee-a574-50249d2ac07d-kube-api-access-fsnq5\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:42 crc kubenswrapper[4895]: I1206 07:26:42.149060 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15ddd748-caf4-44ee-a574-50249d2ac07d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:42 crc kubenswrapper[4895]: I1206 07:26:42.340382 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mll22"] Dec 06 07:26:42 crc kubenswrapper[4895]: I1206 07:26:42.350037 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mll22"] Dec 06 07:26:43 crc kubenswrapper[4895]: I1206 07:26:43.009365 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c8969e2c-9cc0-40a6-8fee-65d93a9856b0","Type":"ContainerStarted","Data":"f8e6a3efd3e56d84034e7d038c15ea6608ff8bc466f71507822f323746904eb8"} Dec 06 07:26:43 crc kubenswrapper[4895]: I1206 07:26:43.042460 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.042440371 podStartE2EDuration="6.042440371s" podCreationTimestamp="2025-12-06 07:26:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:26:43.035391622 +0000 UTC m=+1765.436780502" watchObservedRunningTime="2025-12-06 07:26:43.042440371 +0000 UTC m=+1765.443829241" Dec 06 07:26:43 crc kubenswrapper[4895]: I1206 07:26:43.393273 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 06 07:26:44 crc kubenswrapper[4895]: I1206 07:26:44.024255 4895 generic.go:334] "Generic (PLEG): container finished" podID="e2a14678-48ea-424e-9a50-dd28f69b82a3" containerID="24e68144ea0002544c8fad2dbb864f3c9770ec21bd3a4a84fa07c7a03cadecb8" exitCode=0 Dec 06 07:26:44 crc kubenswrapper[4895]: I1206 07:26:44.024642 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vphq2" 
event={"ID":"e2a14678-48ea-424e-9a50-dd28f69b82a3","Type":"ContainerDied","Data":"24e68144ea0002544c8fad2dbb864f3c9770ec21bd3a4a84fa07c7a03cadecb8"} Dec 06 07:26:44 crc kubenswrapper[4895]: I1206 07:26:44.031230 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86c6eb7b-1a53-4a0a-a777-30d62c457740","Type":"ContainerStarted","Data":"a8b091a9ee2f1378e40b1ab92fd5fe11031bed00a979acd5b0c2faef04191c08"} Dec 06 07:26:44 crc kubenswrapper[4895]: I1206 07:26:44.035291 4895 generic.go:334] "Generic (PLEG): container finished" podID="4337be07-550a-4421-917f-5969980e230d" containerID="206ce64428b3322b057949fbf79e35f8ecf1a3997fd19513309ae7d4151a96b1" exitCode=0 Dec 06 07:26:44 crc kubenswrapper[4895]: I1206 07:26:44.035444 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8afe-account-create-update-6m7xh" event={"ID":"4337be07-550a-4421-917f-5969980e230d","Type":"ContainerDied","Data":"206ce64428b3322b057949fbf79e35f8ecf1a3997fd19513309ae7d4151a96b1"} Dec 06 07:26:44 crc kubenswrapper[4895]: I1206 07:26:44.038277 4895 generic.go:334] "Generic (PLEG): container finished" podID="c958c44b-3580-4d17-9b18-65c93cd7d0bf" containerID="bf1680a564e39a2f8114136574b32fb4a7481bf890a4653e7cd26f7fcd065e6b" exitCode=0 Dec 06 07:26:44 crc kubenswrapper[4895]: I1206 07:26:44.038463 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8xgdx" event={"ID":"c958c44b-3580-4d17-9b18-65c93cd7d0bf","Type":"ContainerDied","Data":"bf1680a564e39a2f8114136574b32fb4a7481bf890a4653e7cd26f7fcd065e6b"} Dec 06 07:26:44 crc kubenswrapper[4895]: I1206 07:26:44.072081 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15ddd748-caf4-44ee-a574-50249d2ac07d" path="/var/lib/kubelet/pods/15ddd748-caf4-44ee-a574-50249d2ac07d/volumes" Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.052981 4895 generic.go:334] "Generic (PLEG): container finished" podID="fb0479a8-2861-4d5e-a0ae-e7629dede891" containerID="397c7d40c84d73a555670d7ed3f53e1a02ac92739a5de17f75e1fae47255a519" exitCode=0 Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.053074 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1ea0-account-create-update-mgbzn" event={"ID":"fb0479a8-2861-4d5e-a0ae-e7629dede891","Type":"ContainerDied","Data":"397c7d40c84d73a555670d7ed3f53e1a02ac92739a5de17f75e1fae47255a519"} Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.057762 4895 generic.go:334] "Generic (PLEG): container finished" podID="5067344e-572d-4b94-af89-552ce31e0f1f" containerID="a217507f1b7892189cb3a36cc06623b313c5a0733e526b91471aa615aa818384" exitCode=0 Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.057976 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f7f2-account-create-update-jvgrz" event={"ID":"5067344e-572d-4b94-af89-552ce31e0f1f","Type":"ContainerDied","Data":"a217507f1b7892189cb3a36cc06623b313c5a0733e526b91471aa615aa818384"} Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.523549 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vphq2" Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.611407 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8xgdx" Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.631389 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8afe-account-create-update-6m7xh" Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.690457 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx7fb\" (UniqueName: \"kubernetes.io/projected/e2a14678-48ea-424e-9a50-dd28f69b82a3-kube-api-access-xx7fb\") pod \"e2a14678-48ea-424e-9a50-dd28f69b82a3\" (UID: \"e2a14678-48ea-424e-9a50-dd28f69b82a3\") " Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.690613 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2a14678-48ea-424e-9a50-dd28f69b82a3-operator-scripts\") pod \"e2a14678-48ea-424e-9a50-dd28f69b82a3\" (UID: \"e2a14678-48ea-424e-9a50-dd28f69b82a3\") " Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.691385 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2a14678-48ea-424e-9a50-dd28f69b82a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2a14678-48ea-424e-9a50-dd28f69b82a3" (UID: "e2a14678-48ea-424e-9a50-dd28f69b82a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.695889 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a14678-48ea-424e-9a50-dd28f69b82a3-kube-api-access-xx7fb" (OuterVolumeSpecName: "kube-api-access-xx7fb") pod "e2a14678-48ea-424e-9a50-dd28f69b82a3" (UID: "e2a14678-48ea-424e-9a50-dd28f69b82a3"). InnerVolumeSpecName "kube-api-access-xx7fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.792570 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db74w\" (UniqueName: \"kubernetes.io/projected/4337be07-550a-4421-917f-5969980e230d-kube-api-access-db74w\") pod \"4337be07-550a-4421-917f-5969980e230d\" (UID: \"4337be07-550a-4421-917f-5969980e230d\") " Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.792653 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnczk\" (UniqueName: \"kubernetes.io/projected/c958c44b-3580-4d17-9b18-65c93cd7d0bf-kube-api-access-cnczk\") pod \"c958c44b-3580-4d17-9b18-65c93cd7d0bf\" (UID: \"c958c44b-3580-4d17-9b18-65c93cd7d0bf\") " Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.792821 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4337be07-550a-4421-917f-5969980e230d-operator-scripts\") pod \"4337be07-550a-4421-917f-5969980e230d\" (UID: \"4337be07-550a-4421-917f-5969980e230d\") " Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.792956 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c958c44b-3580-4d17-9b18-65c93cd7d0bf-operator-scripts\") pod \"c958c44b-3580-4d17-9b18-65c93cd7d0bf\" (UID: \"c958c44b-3580-4d17-9b18-65c93cd7d0bf\") " Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.793609 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx7fb\" (UniqueName: \"kubernetes.io/projected/e2a14678-48ea-424e-9a50-dd28f69b82a3-kube-api-access-xx7fb\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.793655 4895 reconciler_common.go:293] 
"Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2a14678-48ea-424e-9a50-dd28f69b82a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.793605 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c958c44b-3580-4d17-9b18-65c93cd7d0bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c958c44b-3580-4d17-9b18-65c93cd7d0bf" (UID: "c958c44b-3580-4d17-9b18-65c93cd7d0bf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.793706 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4337be07-550a-4421-917f-5969980e230d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4337be07-550a-4421-917f-5969980e230d" (UID: "4337be07-550a-4421-917f-5969980e230d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.795866 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4337be07-550a-4421-917f-5969980e230d-kube-api-access-db74w" (OuterVolumeSpecName: "kube-api-access-db74w") pod "4337be07-550a-4421-917f-5969980e230d" (UID: "4337be07-550a-4421-917f-5969980e230d"). InnerVolumeSpecName "kube-api-access-db74w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.796244 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c958c44b-3580-4d17-9b18-65c93cd7d0bf-kube-api-access-cnczk" (OuterVolumeSpecName: "kube-api-access-cnczk") pod "c958c44b-3580-4d17-9b18-65c93cd7d0bf" (UID: "c958c44b-3580-4d17-9b18-65c93cd7d0bf"). InnerVolumeSpecName "kube-api-access-cnczk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.894945 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db74w\" (UniqueName: \"kubernetes.io/projected/4337be07-550a-4421-917f-5969980e230d-kube-api-access-db74w\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.895020 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnczk\" (UniqueName: \"kubernetes.io/projected/c958c44b-3580-4d17-9b18-65c93cd7d0bf-kube-api-access-cnczk\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.895060 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4337be07-550a-4421-917f-5969980e230d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:45 crc kubenswrapper[4895]: I1206 07:26:45.895076 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c958c44b-3580-4d17-9b18-65c93cd7d0bf-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.078841 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8afe-account-create-update-6m7xh" Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.078841 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8afe-account-create-update-6m7xh" event={"ID":"4337be07-550a-4421-917f-5969980e230d","Type":"ContainerDied","Data":"c494ac25c89885f4c62d14d3ce86626e4ab3a2ae9cabf5cd66e8654248fa9523"} Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.078969 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c494ac25c89885f4c62d14d3ce86626e4ab3a2ae9cabf5cd66e8654248fa9523" Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.081560 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8xgdx" Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.081557 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8xgdx" event={"ID":"c958c44b-3580-4d17-9b18-65c93cd7d0bf","Type":"ContainerDied","Data":"c4d9e75da66406d8bde2d2e249492c8609c4ef49e380e3ac61a8fdaf0b8c387c"} Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.081928 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4d9e75da66406d8bde2d2e249492c8609c4ef49e380e3ac61a8fdaf0b8c387c" Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.083643 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vphq2" Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.083669 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vphq2" event={"ID":"e2a14678-48ea-424e-9a50-dd28f69b82a3","Type":"ContainerDied","Data":"cbcf2eccae4a311e715e12d3fda1f4f8bf5cf237ae383f2d760685128c5af884"} Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.083699 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbcf2eccae4a311e715e12d3fda1f4f8bf5cf237ae383f2d760685128c5af884" Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.550107 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f7f2-account-create-update-jvgrz" Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.556044 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1ea0-account-create-update-mgbzn" Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.717564 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wclxh\" (UniqueName: \"kubernetes.io/projected/fb0479a8-2861-4d5e-a0ae-e7629dede891-kube-api-access-wclxh\") pod \"fb0479a8-2861-4d5e-a0ae-e7629dede891\" (UID: \"fb0479a8-2861-4d5e-a0ae-e7629dede891\") " Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.717681 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb0479a8-2861-4d5e-a0ae-e7629dede891-operator-scripts\") pod \"fb0479a8-2861-4d5e-a0ae-e7629dede891\" (UID: \"fb0479a8-2861-4d5e-a0ae-e7629dede891\") " Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.717808 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b5l4\" (UniqueName: \"kubernetes.io/projected/5067344e-572d-4b94-af89-552ce31e0f1f-kube-api-access-8b5l4\") pod \"5067344e-572d-4b94-af89-552ce31e0f1f\" (UID: \"5067344e-572d-4b94-af89-552ce31e0f1f\") " Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.717937 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5067344e-572d-4b94-af89-552ce31e0f1f-operator-scripts\") pod \"5067344e-572d-4b94-af89-552ce31e0f1f\" (UID: \"5067344e-572d-4b94-af89-552ce31e0f1f\") " Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.718648 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb0479a8-2861-4d5e-a0ae-e7629dede891-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb0479a8-2861-4d5e-a0ae-e7629dede891" (UID: "fb0479a8-2861-4d5e-a0ae-e7629dede891"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.718649 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5067344e-572d-4b94-af89-552ce31e0f1f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5067344e-572d-4b94-af89-552ce31e0f1f" (UID: "5067344e-572d-4b94-af89-552ce31e0f1f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.723080 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5067344e-572d-4b94-af89-552ce31e0f1f-kube-api-access-8b5l4" (OuterVolumeSpecName: "kube-api-access-8b5l4") pod "5067344e-572d-4b94-af89-552ce31e0f1f" (UID: "5067344e-572d-4b94-af89-552ce31e0f1f"). InnerVolumeSpecName "kube-api-access-8b5l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.723753 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb0479a8-2861-4d5e-a0ae-e7629dede891-kube-api-access-wclxh" (OuterVolumeSpecName: "kube-api-access-wclxh") pod "fb0479a8-2861-4d5e-a0ae-e7629dede891" (UID: "fb0479a8-2861-4d5e-a0ae-e7629dede891"). InnerVolumeSpecName "kube-api-access-wclxh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.820539 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b5l4\" (UniqueName: \"kubernetes.io/projected/5067344e-572d-4b94-af89-552ce31e0f1f-kube-api-access-8b5l4\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.820579 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5067344e-572d-4b94-af89-552ce31e0f1f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.820593 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wclxh\" (UniqueName: \"kubernetes.io/projected/fb0479a8-2861-4d5e-a0ae-e7629dede891-kube-api-access-wclxh\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:46 crc kubenswrapper[4895]: I1206 07:26:46.820606 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb0479a8-2861-4d5e-a0ae-e7629dede891-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:47 crc kubenswrapper[4895]: I1206 07:26:47.092912 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1ea0-account-create-update-mgbzn" event={"ID":"fb0479a8-2861-4d5e-a0ae-e7629dede891","Type":"ContainerDied","Data":"0dd3919b08cd504fe31d2914a9d8f01aebf2bfb314eed4564522f23bc7caa8a4"} Dec 06 07:26:47 crc kubenswrapper[4895]: I1206 07:26:47.093724 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dd3919b08cd504fe31d2914a9d8f01aebf2bfb314eed4564522f23bc7caa8a4" Dec 06 07:26:47 crc kubenswrapper[4895]: I1206 07:26:47.093817 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1ea0-account-create-update-mgbzn" Dec 06 07:26:47 crc kubenswrapper[4895]: I1206 07:26:47.095310 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f7f2-account-create-update-jvgrz" event={"ID":"5067344e-572d-4b94-af89-552ce31e0f1f","Type":"ContainerDied","Data":"6d71ca378a3c1e74af729fd27878d19f294c261d5242585a451075a6c54a103c"} Dec 06 07:26:47 crc kubenswrapper[4895]: I1206 07:26:47.095336 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-f7f2-account-create-update-jvgrz" Dec 06 07:26:47 crc kubenswrapper[4895]: I1206 07:26:47.095353 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d71ca378a3c1e74af729fd27878d19f294c261d5242585a451075a6c54a103c" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.269190 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zt8v2"] Dec 06 07:26:48 crc kubenswrapper[4895]: E1206 07:26:48.270940 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a14678-48ea-424e-9a50-dd28f69b82a3" containerName="mariadb-database-create" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.271043 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a14678-48ea-424e-9a50-dd28f69b82a3" containerName="mariadb-database-create" Dec 06 07:26:48 crc kubenswrapper[4895]: E1206 07:26:48.271122 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c958c44b-3580-4d17-9b18-65c93cd7d0bf" containerName="mariadb-database-create" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.271193 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c958c44b-3580-4d17-9b18-65c93cd7d0bf" containerName="mariadb-database-create" Dec 06 07:26:48 crc kubenswrapper[4895]: E1206 07:26:48.271267 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5067344e-572d-4b94-af89-552ce31e0f1f" containerName="mariadb-account-create-update" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.271368 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5067344e-572d-4b94-af89-552ce31e0f1f" containerName="mariadb-account-create-update" Dec 06 07:26:48 crc kubenswrapper[4895]: E1206 07:26:48.271463 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ddd748-caf4-44ee-a574-50249d2ac07d" containerName="extract-utilities" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.271567 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ddd748-caf4-44ee-a574-50249d2ac07d" containerName="extract-utilities" Dec 06 07:26:48 crc kubenswrapper[4895]: E1206 07:26:48.271647 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ddd748-caf4-44ee-a574-50249d2ac07d" containerName="registry-server" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.271720 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ddd748-caf4-44ee-a574-50249d2ac07d" containerName="registry-server" Dec 06 07:26:48 crc kubenswrapper[4895]: E1206 07:26:48.271796 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ddd748-caf4-44ee-a574-50249d2ac07d" containerName="extract-content" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.271867 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ddd748-caf4-44ee-a574-50249d2ac07d" containerName="extract-content" Dec 06 07:26:48 crc kubenswrapper[4895]: E1206 07:26:48.271938 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4337be07-550a-4421-917f-5969980e230d" containerName="mariadb-account-create-update" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.272015 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4337be07-550a-4421-917f-5969980e230d" containerName="mariadb-account-create-update" Dec 06 07:26:48 crc kubenswrapper[4895]: E1206 07:26:48.272106 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb0479a8-2861-4d5e-a0ae-e7629dede891" 
containerName="mariadb-account-create-update" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.272180 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb0479a8-2861-4d5e-a0ae-e7629dede891" containerName="mariadb-account-create-update" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.272525 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a14678-48ea-424e-9a50-dd28f69b82a3" containerName="mariadb-database-create" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.272613 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5067344e-572d-4b94-af89-552ce31e0f1f" containerName="mariadb-account-create-update" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.272700 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ddd748-caf4-44ee-a574-50249d2ac07d" containerName="registry-server" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.272778 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4337be07-550a-4421-917f-5969980e230d" containerName="mariadb-account-create-update" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.272880 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb0479a8-2861-4d5e-a0ae-e7629dede891" containerName="mariadb-account-create-update" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.272960 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c958c44b-3580-4d17-9b18-65c93cd7d0bf" containerName="mariadb-database-create" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.274814 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zt8v2" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.277826 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-hwxtr" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.279409 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.280861 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.281207 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zt8v2"] Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.347005 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d931e2-40e6-4bb5-8b4f-3252852effd0-scripts\") pod \"nova-cell0-conductor-db-sync-zt8v2\" (UID: \"51d931e2-40e6-4bb5-8b4f-3252852effd0\") " pod="openstack/nova-cell0-conductor-db-sync-zt8v2" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.347159 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d931e2-40e6-4bb5-8b4f-3252852effd0-config-data\") pod \"nova-cell0-conductor-db-sync-zt8v2\" (UID: \"51d931e2-40e6-4bb5-8b4f-3252852effd0\") " pod="openstack/nova-cell0-conductor-db-sync-zt8v2" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.347207 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d931e2-40e6-4bb5-8b4f-3252852effd0-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zt8v2\" (UID: 
\"51d931e2-40e6-4bb5-8b4f-3252852effd0\") " pod="openstack/nova-cell0-conductor-db-sync-zt8v2" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.347353 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkqx5\" (UniqueName: \"kubernetes.io/projected/51d931e2-40e6-4bb5-8b4f-3252852effd0-kube-api-access-tkqx5\") pod \"nova-cell0-conductor-db-sync-zt8v2\" (UID: \"51d931e2-40e6-4bb5-8b4f-3252852effd0\") " pod="openstack/nova-cell0-conductor-db-sync-zt8v2" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.448678 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d931e2-40e6-4bb5-8b4f-3252852effd0-config-data\") pod \"nova-cell0-conductor-db-sync-zt8v2\" (UID: \"51d931e2-40e6-4bb5-8b4f-3252852effd0\") " pod="openstack/nova-cell0-conductor-db-sync-zt8v2" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.448741 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d931e2-40e6-4bb5-8b4f-3252852effd0-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zt8v2\" (UID: \"51d931e2-40e6-4bb5-8b4f-3252852effd0\") " pod="openstack/nova-cell0-conductor-db-sync-zt8v2" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.448867 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkqx5\" (UniqueName: \"kubernetes.io/projected/51d931e2-40e6-4bb5-8b4f-3252852effd0-kube-api-access-tkqx5\") pod \"nova-cell0-conductor-db-sync-zt8v2\" (UID: \"51d931e2-40e6-4bb5-8b4f-3252852effd0\") " pod="openstack/nova-cell0-conductor-db-sync-zt8v2" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.448917 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d931e2-40e6-4bb5-8b4f-3252852effd0-scripts\") pod \"nova-cell0-conductor-db-sync-zt8v2\" (UID: \"51d931e2-40e6-4bb5-8b4f-3252852effd0\") " pod="openstack/nova-cell0-conductor-db-sync-zt8v2" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.455764 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d931e2-40e6-4bb5-8b4f-3252852effd0-scripts\") pod \"nova-cell0-conductor-db-sync-zt8v2\" (UID: \"51d931e2-40e6-4bb5-8b4f-3252852effd0\") " pod="openstack/nova-cell0-conductor-db-sync-zt8v2" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.455854 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d931e2-40e6-4bb5-8b4f-3252852effd0-config-data\") pod \"nova-cell0-conductor-db-sync-zt8v2\" (UID: \"51d931e2-40e6-4bb5-8b4f-3252852effd0\") " pod="openstack/nova-cell0-conductor-db-sync-zt8v2" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.460167 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d931e2-40e6-4bb5-8b4f-3252852effd0-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zt8v2\" (UID: \"51d931e2-40e6-4bb5-8b4f-3252852effd0\") " pod="openstack/nova-cell0-conductor-db-sync-zt8v2" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.478040 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkqx5\" (UniqueName: \"kubernetes.io/projected/51d931e2-40e6-4bb5-8b4f-3252852effd0-kube-api-access-tkqx5\") pod 
\"nova-cell0-conductor-db-sync-zt8v2\" (UID: \"51d931e2-40e6-4bb5-8b4f-3252852effd0\") " pod="openstack/nova-cell0-conductor-db-sync-zt8v2" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.602415 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zt8v2" Dec 06 07:26:48 crc kubenswrapper[4895]: I1206 07:26:48.908608 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 06 07:26:49 crc kubenswrapper[4895]: I1206 07:26:49.069758 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zt8v2"] Dec 06 07:26:49 crc kubenswrapper[4895]: W1206 07:26:49.079436 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51d931e2_40e6_4bb5_8b4f_3252852effd0.slice/crio-4b2d6a27399b97948bb573591d577658044780c2466723964982a36b4116ae17 WatchSource:0}: Error finding container 4b2d6a27399b97948bb573591d577658044780c2466723964982a36b4116ae17: Status 404 returned error can't find the container with id 4b2d6a27399b97948bb573591d577658044780c2466723964982a36b4116ae17 Dec 06 07:26:49 crc kubenswrapper[4895]: I1206 07:26:49.120432 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zt8v2" event={"ID":"51d931e2-40e6-4bb5-8b4f-3252852effd0","Type":"ContainerStarted","Data":"4b2d6a27399b97948bb573591d577658044780c2466723964982a36b4116ae17"} Dec 06 07:26:51 crc kubenswrapper[4895]: I1206 07:26:51.179073 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86c6eb7b-1a53-4a0a-a777-30d62c457740","Type":"ContainerStarted","Data":"acfca2a6603e1a4abe099e36a0382333ac6834dfa0cd24f36c62b4c968c2c843"} Dec 06 07:26:51 crc kubenswrapper[4895]: I1206 07:26:51.179708 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 07:26:51 crc kubenswrapper[4895]: I1206 07:26:51.179296 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86c6eb7b-1a53-4a0a-a777-30d62c457740" containerName="proxy-httpd" containerID="cri-o://acfca2a6603e1a4abe099e36a0382333ac6834dfa0cd24f36c62b4c968c2c843" gracePeriod=30 Dec 06 07:26:51 crc kubenswrapper[4895]: I1206 07:26:51.179260 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86c6eb7b-1a53-4a0a-a777-30d62c457740" containerName="ceilometer-central-agent" containerID="cri-o://29cabefc071a35c0b9ceaafd902fc62e3bcaf585e54b24edc6f4d0761ca0fa68" gracePeriod=30 Dec 06 07:26:51 crc kubenswrapper[4895]: I1206 07:26:51.179350 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86c6eb7b-1a53-4a0a-a777-30d62c457740" containerName="ceilometer-notification-agent" containerID="cri-o://6a731fb3438a9ed734147491ca30a00e90b25d13261013d5b2e44ceffa056e6f" gracePeriod=30 Dec 06 07:26:51 crc kubenswrapper[4895]: I1206 07:26:51.179347 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86c6eb7b-1a53-4a0a-a777-30d62c457740" containerName="sg-core" containerID="cri-o://a8b091a9ee2f1378e40b1ab92fd5fe11031bed00a979acd5b0c2faef04191c08" gracePeriod=30 Dec 06 07:26:52 crc kubenswrapper[4895]: I1206 07:26:52.198698 4895 generic.go:334] "Generic (PLEG): container finished" 
podID="86c6eb7b-1a53-4a0a-a777-30d62c457740" containerID="acfca2a6603e1a4abe099e36a0382333ac6834dfa0cd24f36c62b4c968c2c843" exitCode=0 Dec 06 07:26:52 crc kubenswrapper[4895]: I1206 07:26:52.199092 4895 generic.go:334] "Generic (PLEG): container finished" podID="86c6eb7b-1a53-4a0a-a777-30d62c457740" containerID="a8b091a9ee2f1378e40b1ab92fd5fe11031bed00a979acd5b0c2faef04191c08" exitCode=2 Dec 06 07:26:52 crc kubenswrapper[4895]: I1206 07:26:52.199106 4895 generic.go:334] "Generic (PLEG): container finished" podID="86c6eb7b-1a53-4a0a-a777-30d62c457740" containerID="29cabefc071a35c0b9ceaafd902fc62e3bcaf585e54b24edc6f4d0761ca0fa68" exitCode=0 Dec 06 07:26:52 crc kubenswrapper[4895]: I1206 07:26:52.198774 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86c6eb7b-1a53-4a0a-a777-30d62c457740","Type":"ContainerDied","Data":"acfca2a6603e1a4abe099e36a0382333ac6834dfa0cd24f36c62b4c968c2c843"} Dec 06 07:26:52 crc kubenswrapper[4895]: I1206 07:26:52.199147 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86c6eb7b-1a53-4a0a-a777-30d62c457740","Type":"ContainerDied","Data":"a8b091a9ee2f1378e40b1ab92fd5fe11031bed00a979acd5b0c2faef04191c08"} Dec 06 07:26:52 crc kubenswrapper[4895]: I1206 07:26:52.199168 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86c6eb7b-1a53-4a0a-a777-30d62c457740","Type":"ContainerDied","Data":"29cabefc071a35c0b9ceaafd902fc62e3bcaf585e54b24edc6f4d0761ca0fa68"} Dec 06 07:26:53 crc kubenswrapper[4895]: I1206 07:26:53.051373 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:26:53 crc kubenswrapper[4895]: E1206 07:26:53.051953 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.279027 4895 generic.go:334] "Generic (PLEG): container finished" podID="86c6eb7b-1a53-4a0a-a777-30d62c457740" containerID="6a731fb3438a9ed734147491ca30a00e90b25d13261013d5b2e44ceffa056e6f" exitCode=0 Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.279114 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86c6eb7b-1a53-4a0a-a777-30d62c457740","Type":"ContainerDied","Data":"6a731fb3438a9ed734147491ca30a00e90b25d13261013d5b2e44ceffa056e6f"} Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.714956 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.731657 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-combined-ca-bundle\") pod \"86c6eb7b-1a53-4a0a-a777-30d62c457740\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.731771 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bv5f\" (UniqueName: \"kubernetes.io/projected/86c6eb7b-1a53-4a0a-a777-30d62c457740-kube-api-access-2bv5f\") pod \"86c6eb7b-1a53-4a0a-a777-30d62c457740\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.731812 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-config-data\") pod \"86c6eb7b-1a53-4a0a-a777-30d62c457740\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.731876 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-sg-core-conf-yaml\") pod \"86c6eb7b-1a53-4a0a-a777-30d62c457740\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.731923 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86c6eb7b-1a53-4a0a-a777-30d62c457740-log-httpd\") pod \"86c6eb7b-1a53-4a0a-a777-30d62c457740\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.731977 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86c6eb7b-1a53-4a0a-a777-30d62c457740-run-httpd\") pod \"86c6eb7b-1a53-4a0a-a777-30d62c457740\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.732046 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-scripts\") pod \"86c6eb7b-1a53-4a0a-a777-30d62c457740\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.732561 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86c6eb7b-1a53-4a0a-a777-30d62c457740-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "86c6eb7b-1a53-4a0a-a777-30d62c457740" (UID: "86c6eb7b-1a53-4a0a-a777-30d62c457740"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.732719 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86c6eb7b-1a53-4a0a-a777-30d62c457740-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "86c6eb7b-1a53-4a0a-a777-30d62c457740" (UID: "86c6eb7b-1a53-4a0a-a777-30d62c457740"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.732775 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86c6eb7b-1a53-4a0a-a777-30d62c457740-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.738450 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-scripts" (OuterVolumeSpecName: "scripts") pod "86c6eb7b-1a53-4a0a-a777-30d62c457740" (UID: "86c6eb7b-1a53-4a0a-a777-30d62c457740"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.738625 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86c6eb7b-1a53-4a0a-a777-30d62c457740-kube-api-access-2bv5f" (OuterVolumeSpecName: "kube-api-access-2bv5f") pod "86c6eb7b-1a53-4a0a-a777-30d62c457740" (UID: "86c6eb7b-1a53-4a0a-a777-30d62c457740"). InnerVolumeSpecName "kube-api-access-2bv5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.773785 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "86c6eb7b-1a53-4a0a-a777-30d62c457740" (UID: "86c6eb7b-1a53-4a0a-a777-30d62c457740"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.834190 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86c6eb7b-1a53-4a0a-a777-30d62c457740" (UID: "86c6eb7b-1a53-4a0a-a777-30d62c457740"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.834382 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-combined-ca-bundle\") pod \"86c6eb7b-1a53-4a0a-a777-30d62c457740\" (UID: \"86c6eb7b-1a53-4a0a-a777-30d62c457740\") " Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.834922 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.834938 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bv5f\" (UniqueName: \"kubernetes.io/projected/86c6eb7b-1a53-4a0a-a777-30d62c457740-kube-api-access-2bv5f\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.834947 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.834956 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86c6eb7b-1a53-4a0a-a777-30d62c457740-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:56 crc kubenswrapper[4895]: W1206 07:26:56.835186 4895 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/86c6eb7b-1a53-4a0a-a777-30d62c457740/volumes/kubernetes.io~secret/combined-ca-bundle Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.835202 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86c6eb7b-1a53-4a0a-a777-30d62c457740" (UID: "86c6eb7b-1a53-4a0a-a777-30d62c457740"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.864234 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-config-data" (OuterVolumeSpecName: "config-data") pod "86c6eb7b-1a53-4a0a-a777-30d62c457740" (UID: "86c6eb7b-1a53-4a0a-a777-30d62c457740"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.936938 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:56 crc kubenswrapper[4895]: I1206 07:26:56.936992 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c6eb7b-1a53-4a0a-a777-30d62c457740-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.299022 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zt8v2" event={"ID":"51d931e2-40e6-4bb5-8b4f-3252852effd0","Type":"ContainerStarted","Data":"f570d031351040cb2fe03dc3851e1c34de085a50efeef7dec9fb4b7808929814"} Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.305633 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86c6eb7b-1a53-4a0a-a777-30d62c457740","Type":"ContainerDied","Data":"a69d3dbbde05ffcf816984009b9697aeb4e8ee32716b573bd943e5a755bcc840"} Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.305715 4895 scope.go:117] "RemoveContainer" containerID="acfca2a6603e1a4abe099e36a0382333ac6834dfa0cd24f36c62b4c968c2c843" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.305721 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.328974 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zt8v2" podStartSLOduration=1.603139129 podStartE2EDuration="9.328946953s" podCreationTimestamp="2025-12-06 07:26:48 +0000 UTC" firstStartedPulling="2025-12-06 07:26:49.084700609 +0000 UTC m=+1771.486089479" lastFinishedPulling="2025-12-06 07:26:56.810508433 +0000 UTC m=+1779.211897303" observedRunningTime="2025-12-06 07:26:57.317536227 +0000 UTC m=+1779.718925097" watchObservedRunningTime="2025-12-06 07:26:57.328946953 +0000 UTC m=+1779.730335823" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.345308 4895 scope.go:117] "RemoveContainer" containerID="a8b091a9ee2f1378e40b1ab92fd5fe11031bed00a979acd5b0c2faef04191c08" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.358554 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.369596 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.380952 4895 scope.go:117] "RemoveContainer" containerID="6a731fb3438a9ed734147491ca30a00e90b25d13261013d5b2e44ceffa056e6f" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.383096 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:26:57 crc kubenswrapper[4895]: E1206 07:26:57.383487 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86c6eb7b-1a53-4a0a-a777-30d62c457740" containerName="ceilometer-notification-agent" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.383509 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c6eb7b-1a53-4a0a-a777-30d62c457740" containerName="ceilometer-notification-agent" Dec 06 07:26:57 crc kubenswrapper[4895]: E1206 07:26:57.383530 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="86c6eb7b-1a53-4a0a-a777-30d62c457740" containerName="sg-core" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.383538 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c6eb7b-1a53-4a0a-a777-30d62c457740" containerName="sg-core" Dec 06 07:26:57 crc kubenswrapper[4895]: E1206 07:26:57.383572 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86c6eb7b-1a53-4a0a-a777-30d62c457740" containerName="ceilometer-central-agent" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.383579 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c6eb7b-1a53-4a0a-a777-30d62c457740" containerName="ceilometer-central-agent" Dec 06 07:26:57 crc kubenswrapper[4895]: E1206 07:26:57.383600 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86c6eb7b-1a53-4a0a-a777-30d62c457740" containerName="proxy-httpd" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.383607 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c6eb7b-1a53-4a0a-a777-30d62c457740" containerName="proxy-httpd" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.383906 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="86c6eb7b-1a53-4a0a-a777-30d62c457740" containerName="proxy-httpd" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.383935 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="86c6eb7b-1a53-4a0a-a777-30d62c457740" containerName="ceilometer-notification-agent" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.383956 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="86c6eb7b-1a53-4a0a-a777-30d62c457740" containerName="sg-core" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.384400 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="86c6eb7b-1a53-4a0a-a777-30d62c457740" containerName="ceilometer-central-agent" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.386317 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.403653 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.403697 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.409219 4895 scope.go:117] "RemoveContainer" containerID="29cabefc071a35c0b9ceaafd902fc62e3bcaf585e54b24edc6f4d0761ca0fa68" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.415691 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.446617 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-scripts\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.446699 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c99b1171-0604-4478-822c-5a8d48ac19f3-log-httpd\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.446744 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-config-data\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.446767 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c99b1171-0604-4478-822c-5a8d48ac19f3-run-httpd\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.446816 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.446851 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bvfg\" (UniqueName: \"kubernetes.io/projected/c99b1171-0604-4478-822c-5a8d48ac19f3-kube-api-access-6bvfg\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.446896 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.548937 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c99b1171-0604-4478-822c-5a8d48ac19f3-log-httpd\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.549062 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-config-data\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.549102 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c99b1171-0604-4478-822c-5a8d48ac19f3-run-httpd\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.549181 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.549235 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bvfg\" (UniqueName: \"kubernetes.io/projected/c99b1171-0604-4478-822c-5a8d48ac19f3-kube-api-access-6bvfg\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.549305 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.549424 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-scripts\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.550741 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c99b1171-0604-4478-822c-5a8d48ac19f3-log-httpd\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.550851 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c99b1171-0604-4478-822c-5a8d48ac19f3-run-httpd\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.557659 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.566278 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.573257 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-scripts\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.579863 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-config-data\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.585435 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bvfg\" (UniqueName: \"kubernetes.io/projected/c99b1171-0604-4478-822c-5a8d48ac19f3-kube-api-access-6bvfg\") pod \"ceilometer-0\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " pod="openstack/ceilometer-0" Dec 06 07:26:57 crc kubenswrapper[4895]: I1206 07:26:57.768323 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:26:58 crc kubenswrapper[4895]: I1206 07:26:58.065994 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86c6eb7b-1a53-4a0a-a777-30d62c457740" path="/var/lib/kubelet/pods/86c6eb7b-1a53-4a0a-a777-30d62c457740/volumes" Dec 06 07:26:58 crc kubenswrapper[4895]: I1206 07:26:58.307278 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:26:59 crc kubenswrapper[4895]: I1206 07:26:59.378507 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c99b1171-0604-4478-822c-5a8d48ac19f3","Type":"ContainerStarted","Data":"0c6d6ffb9d69585f1ab2801d397401bee11daad79a6a1f0f4af67e18c69a321f"} Dec 06 07:26:59 crc kubenswrapper[4895]: I1206 07:26:59.379574 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c99b1171-0604-4478-822c-5a8d48ac19f3","Type":"ContainerStarted","Data":"1fd8dc268e92dbec193bf1d5e9bc988659c5a3ab850d3ef452251667bfdee878"} Dec 06 07:27:00 crc kubenswrapper[4895]: I1206 07:27:00.390330 4895 generic.go:334] "Generic (PLEG): container finished" podID="34b001b3-7a17-444d-8dd9-5e296f84770b" containerID="7322288de69173a46c9c5d01fd459b6bd7190e029716431816a2e04cfcdda2fe" exitCode=0 Dec 06 07:27:00 crc kubenswrapper[4895]: I1206 07:27:00.390448 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kx92p" event={"ID":"34b001b3-7a17-444d-8dd9-5e296f84770b","Type":"ContainerDied","Data":"7322288de69173a46c9c5d01fd459b6bd7190e029716431816a2e04cfcdda2fe"} Dec 06 07:27:00 crc kubenswrapper[4895]: I1206 07:27:00.394122 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c99b1171-0604-4478-822c-5a8d48ac19f3","Type":"ContainerStarted","Data":"67972da996c538e53cbe0e9fcfd03a8b37dc808fd647f57c5aad61c4e1cd181a"} Dec 06 07:27:00 crc kubenswrapper[4895]: I1206 07:27:00.763722 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:27:02 crc kubenswrapper[4895]: I1206 07:27:02.164460 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kx92p" Dec 06 07:27:02 crc kubenswrapper[4895]: I1206 07:27:02.241848 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b001b3-7a17-444d-8dd9-5e296f84770b-config-data\") pod \"34b001b3-7a17-444d-8dd9-5e296f84770b\" (UID: \"34b001b3-7a17-444d-8dd9-5e296f84770b\") " Dec 06 07:27:02 crc kubenswrapper[4895]: I1206 07:27:02.243162 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34b001b3-7a17-444d-8dd9-5e296f84770b-db-sync-config-data\") pod \"34b001b3-7a17-444d-8dd9-5e296f84770b\" (UID: \"34b001b3-7a17-444d-8dd9-5e296f84770b\") " Dec 06 07:27:02 crc kubenswrapper[4895]: I1206 07:27:02.243296 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b001b3-7a17-444d-8dd9-5e296f84770b-combined-ca-bundle\") pod \"34b001b3-7a17-444d-8dd9-5e296f84770b\" (UID: \"34b001b3-7a17-444d-8dd9-5e296f84770b\") " Dec 06 07:27:02 crc kubenswrapper[4895]: I1206 07:27:02.243343 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk8wr\" (UniqueName: \"kubernetes.io/projected/34b001b3-7a17-444d-8dd9-5e296f84770b-kube-api-access-qk8wr\") pod \"34b001b3-7a17-444d-8dd9-5e296f84770b\" (UID: \"34b001b3-7a17-444d-8dd9-5e296f84770b\") " Dec 06 07:27:02 crc kubenswrapper[4895]: I1206 07:27:02.248952 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b001b3-7a17-444d-8dd9-5e296f84770b-kube-api-access-qk8wr" (OuterVolumeSpecName: "kube-api-access-qk8wr") pod "34b001b3-7a17-444d-8dd9-5e296f84770b" (UID: "34b001b3-7a17-444d-8dd9-5e296f84770b"). InnerVolumeSpecName "kube-api-access-qk8wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:27:02 crc kubenswrapper[4895]: I1206 07:27:02.257466 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b001b3-7a17-444d-8dd9-5e296f84770b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "34b001b3-7a17-444d-8dd9-5e296f84770b" (UID: "34b001b3-7a17-444d-8dd9-5e296f84770b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:27:02 crc kubenswrapper[4895]: I1206 07:27:02.279050 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b001b3-7a17-444d-8dd9-5e296f84770b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34b001b3-7a17-444d-8dd9-5e296f84770b" (UID: "34b001b3-7a17-444d-8dd9-5e296f84770b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:27:02 crc kubenswrapper[4895]: I1206 07:27:02.299356 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b001b3-7a17-444d-8dd9-5e296f84770b-config-data" (OuterVolumeSpecName: "config-data") pod "34b001b3-7a17-444d-8dd9-5e296f84770b" (UID: "34b001b3-7a17-444d-8dd9-5e296f84770b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:27:02 crc kubenswrapper[4895]: I1206 07:27:02.347086 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b001b3-7a17-444d-8dd9-5e296f84770b-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:02 crc kubenswrapper[4895]: I1206 07:27:02.347171 4895 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34b001b3-7a17-444d-8dd9-5e296f84770b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:02 crc kubenswrapper[4895]: I1206 07:27:02.347205 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b001b3-7a17-444d-8dd9-5e296f84770b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:02 crc kubenswrapper[4895]: I1206 07:27:02.347233 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk8wr\" (UniqueName: \"kubernetes.io/projected/34b001b3-7a17-444d-8dd9-5e296f84770b-kube-api-access-qk8wr\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:02 crc kubenswrapper[4895]: I1206 07:27:02.422146 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c99b1171-0604-4478-822c-5a8d48ac19f3","Type":"ContainerStarted","Data":"c91ea1e35e44cdff3019fc5353d6467a90b4b06b9fa2db2fe0b8a87c56043ab5"} Dec 06 07:27:02 crc kubenswrapper[4895]: I1206 07:27:02.425670 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kx92p" event={"ID":"34b001b3-7a17-444d-8dd9-5e296f84770b","Type":"ContainerDied","Data":"b826f7fbb98577a044dac7a394ec25915100d01f3b9b2e29f638ef1ce769444c"} Dec 06 07:27:02 crc kubenswrapper[4895]: I1206 07:27:02.425709 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b826f7fbb98577a044dac7a394ec25915100d01f3b9b2e29f638ef1ce769444c" Dec 06 07:27:02 crc kubenswrapper[4895]: I1206 07:27:02.425764 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kx92p" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.007966 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7458fc9bff-s5gq4"] Dec 06 07:27:03 crc kubenswrapper[4895]: E1206 07:27:03.008375 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b001b3-7a17-444d-8dd9-5e296f84770b" containerName="glance-db-sync" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.008392 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b001b3-7a17-444d-8dd9-5e296f84770b" containerName="glance-db-sync" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.008592 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b001b3-7a17-444d-8dd9-5e296f84770b" containerName="glance-db-sync" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.009536 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.033125 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7458fc9bff-s5gq4"] Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.067538 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-config\") pod \"dnsmasq-dns-7458fc9bff-s5gq4\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.067594 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-ovsdbserver-sb\") pod \"dnsmasq-dns-7458fc9bff-s5gq4\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.067681 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cql9z\" (UniqueName: \"kubernetes.io/projected/0bfc2662-32ed-4e75-98d8-5fe472cb5052-kube-api-access-cql9z\") pod \"dnsmasq-dns-7458fc9bff-s5gq4\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.067713 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-dns-svc\") pod \"dnsmasq-dns-7458fc9bff-s5gq4\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.067737 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-dns-swift-storage-0\") pod \"dnsmasq-dns-7458fc9bff-s5gq4\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.067867 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-ovsdbserver-nb\") pod \"dnsmasq-dns-7458fc9bff-s5gq4\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.169751 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-config\") pod \"dnsmasq-dns-7458fc9bff-s5gq4\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.169802 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-ovsdbserver-sb\") pod \"dnsmasq-dns-7458fc9bff-s5gq4\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.169923 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cql9z\" (UniqueName: \"kubernetes.io/projected/0bfc2662-32ed-4e75-98d8-5fe472cb5052-kube-api-access-cql9z\") pod \"dnsmasq-dns-7458fc9bff-s5gq4\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.169952 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-dns-svc\") pod \"dnsmasq-dns-7458fc9bff-s5gq4\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.169973 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-dns-swift-storage-0\") pod \"dnsmasq-dns-7458fc9bff-s5gq4\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.170084 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-ovsdbserver-nb\") pod \"dnsmasq-dns-7458fc9bff-s5gq4\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.171125 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-ovsdbserver-nb\") pod \"dnsmasq-dns-7458fc9bff-s5gq4\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.172550 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-ovsdbserver-sb\") pod \"dnsmasq-dns-7458fc9bff-s5gq4\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.173223 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-dns-svc\") pod \"dnsmasq-dns-7458fc9bff-s5gq4\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.173867 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-dns-swift-storage-0\") pod \"dnsmasq-dns-7458fc9bff-s5gq4\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.174979 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-config\") pod \"dnsmasq-dns-7458fc9bff-s5gq4\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.191597 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cql9z\" (UniqueName: 
\"kubernetes.io/projected/0bfc2662-32ed-4e75-98d8-5fe472cb5052-kube-api-access-cql9z\") pod \"dnsmasq-dns-7458fc9bff-s5gq4\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.338803 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:03 crc kubenswrapper[4895]: I1206 07:27:03.914681 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7458fc9bff-s5gq4"] Dec 06 07:27:03 crc kubenswrapper[4895]: W1206 07:27:03.915135 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bfc2662_32ed_4e75_98d8_5fe472cb5052.slice/crio-f4c4ca4fb2f8d1126b9bb2f89ebce1bd9c6df45a02dfd034fcf722f3c124cff2 WatchSource:0}: Error finding container f4c4ca4fb2f8d1126b9bb2f89ebce1bd9c6df45a02dfd034fcf722f3c124cff2: Status 404 returned error can't find the container with id f4c4ca4fb2f8d1126b9bb2f89ebce1bd9c6df45a02dfd034fcf722f3c124cff2 Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.071223 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.075196 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.079372 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.079634 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wj8l7" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.079677 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.082566 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.206971 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c65f59-10aa-4d09-9de7-c41535957358-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.207074 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.207105 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c65f59-10aa-4d09-9de7-c41535957358-scripts\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.207197 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9jxn\" (UniqueName: 
\"kubernetes.io/projected/00c65f59-10aa-4d09-9de7-c41535957358-kube-api-access-z9jxn\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.207216 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00c65f59-10aa-4d09-9de7-c41535957358-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.207231 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c65f59-10aa-4d09-9de7-c41535957358-config-data\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.207246 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c65f59-10aa-4d09-9de7-c41535957358-logs\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.258305 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.261441 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.264304 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.285907 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.309782 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9jxn\" (UniqueName: \"kubernetes.io/projected/00c65f59-10aa-4d09-9de7-c41535957358-kube-api-access-z9jxn\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.309843 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00c65f59-10aa-4d09-9de7-c41535957358-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.309903 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c65f59-10aa-4d09-9de7-c41535957358-config-data\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.309925 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c65f59-10aa-4d09-9de7-c41535957358-logs\") pod \"glance-default-external-api-0\" (UID: 
\"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.309985 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c65f59-10aa-4d09-9de7-c41535957358-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.310043 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.310063 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c65f59-10aa-4d09-9de7-c41535957358-scripts\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.310754 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c65f59-10aa-4d09-9de7-c41535957358-logs\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.311061 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.311136 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00c65f59-10aa-4d09-9de7-c41535957358-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.319615 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c65f59-10aa-4d09-9de7-c41535957358-scripts\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.322314 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c65f59-10aa-4d09-9de7-c41535957358-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.329455 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9jxn\" (UniqueName: \"kubernetes.io/projected/00c65f59-10aa-4d09-9de7-c41535957358-kube-api-access-z9jxn\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc 
kubenswrapper[4895]: I1206 07:27:04.331759 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c65f59-10aa-4d09-9de7-c41535957358-config-data\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.349415 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.404057 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.412245 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe0e23e4-15a6-4944-9d63-9f4d503318fd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.412379 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe0e23e4-15a6-4944-9d63-9f4d503318fd-logs\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.412598 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe0e23e4-15a6-4944-9d63-9f4d503318fd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.412653 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe0e23e4-15a6-4944-9d63-9f4d503318fd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.412693 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.412732 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe0e23e4-15a6-4944-9d63-9f4d503318fd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.412755 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sxqc\" (UniqueName: 
\"kubernetes.io/projected/fe0e23e4-15a6-4944-9d63-9f4d503318fd-kube-api-access-6sxqc\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.459898 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" event={"ID":"0bfc2662-32ed-4e75-98d8-5fe472cb5052","Type":"ContainerStarted","Data":"f4c4ca4fb2f8d1126b9bb2f89ebce1bd9c6df45a02dfd034fcf722f3c124cff2"} Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.474700 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c99b1171-0604-4478-822c-5a8d48ac19f3","Type":"ContainerStarted","Data":"b7cee3836c8818bab967eb0ec3d34659eb88c58ccc8fe7684f9b435c78fc9799"} Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.474838 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c99b1171-0604-4478-822c-5a8d48ac19f3" containerName="ceilometer-central-agent" containerID="cri-o://0c6d6ffb9d69585f1ab2801d397401bee11daad79a6a1f0f4af67e18c69a321f" gracePeriod=30 Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.474857 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c99b1171-0604-4478-822c-5a8d48ac19f3" containerName="proxy-httpd" containerID="cri-o://b7cee3836c8818bab967eb0ec3d34659eb88c58ccc8fe7684f9b435c78fc9799" gracePeriod=30 Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.474891 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c99b1171-0604-4478-822c-5a8d48ac19f3" containerName="sg-core" containerID="cri-o://c91ea1e35e44cdff3019fc5353d6467a90b4b06b9fa2db2fe0b8a87c56043ab5" gracePeriod=30 Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.474902 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c99b1171-0604-4478-822c-5a8d48ac19f3" containerName="ceilometer-notification-agent" containerID="cri-o://67972da996c538e53cbe0e9fcfd03a8b37dc808fd647f57c5aad61c4e1cd181a" gracePeriod=30 Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.474908 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.503703 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.658156179 podStartE2EDuration="7.503681799s" podCreationTimestamp="2025-12-06 07:26:57 +0000 UTC" firstStartedPulling="2025-12-06 07:26:58.327414282 +0000 UTC m=+1780.728803152" lastFinishedPulling="2025-12-06 07:27:03.172939902 +0000 UTC m=+1785.574328772" observedRunningTime="2025-12-06 07:27:04.497927314 +0000 UTC m=+1786.899316184" watchObservedRunningTime="2025-12-06 07:27:04.503681799 +0000 UTC m=+1786.905070679" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.514366 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe0e23e4-15a6-4944-9d63-9f4d503318fd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.514425 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/fe0e23e4-15a6-4944-9d63-9f4d503318fd-logs\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.514575 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe0e23e4-15a6-4944-9d63-9f4d503318fd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.514618 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe0e23e4-15a6-4944-9d63-9f4d503318fd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.514661 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.514682 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe0e23e4-15a6-4944-9d63-9f4d503318fd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.514705 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sxqc\" (UniqueName: \"kubernetes.io/projected/fe0e23e4-15a6-4944-9d63-9f4d503318fd-kube-api-access-6sxqc\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.517157 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe0e23e4-15a6-4944-9d63-9f4d503318fd-logs\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.517296 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.517889 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe0e23e4-15a6-4944-9d63-9f4d503318fd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.520823 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe0e23e4-15a6-4944-9d63-9f4d503318fd-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.530185 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe0e23e4-15a6-4944-9d63-9f4d503318fd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.539539 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sxqc\" (UniqueName: \"kubernetes.io/projected/fe0e23e4-15a6-4944-9d63-9f4d503318fd-kube-api-access-6sxqc\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.540309 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe0e23e4-15a6-4944-9d63-9f4d503318fd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.571014 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") " pod="openstack/glance-default-internal-api-0" Dec 06 07:27:04 crc kubenswrapper[4895]: I1206 07:27:04.583702 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 07:27:05 crc kubenswrapper[4895]: I1206 07:27:05.037841 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 07:27:05 crc kubenswrapper[4895]: I1206 07:27:05.051107 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:27:05 crc kubenswrapper[4895]: E1206 07:27:05.051375 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:27:05 crc kubenswrapper[4895]: E1206 07:27:05.435174 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc99b1171_0604_4478_822c_5a8d48ac19f3.slice/crio-conmon-c91ea1e35e44cdff3019fc5353d6467a90b4b06b9fa2db2fe0b8a87c56043ab5.scope\": RecentStats: unable to find data in memory cache]" Dec 06 07:27:05 crc kubenswrapper[4895]: I1206 07:27:05.522152 4895 generic.go:334] "Generic (PLEG): container finished" podID="0bfc2662-32ed-4e75-98d8-5fe472cb5052" containerID="12ba578dc91e0be4f30c486027c7b59eb3488b7699264c9ec79d472e1fe47671" exitCode=0 Dec 06 07:27:05 crc kubenswrapper[4895]: I1206 07:27:05.522547 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" 
event={"ID":"0bfc2662-32ed-4e75-98d8-5fe472cb5052","Type":"ContainerDied","Data":"12ba578dc91e0be4f30c486027c7b59eb3488b7699264c9ec79d472e1fe47671"} Dec 06 07:27:05 crc kubenswrapper[4895]: I1206 07:27:05.558289 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 07:27:05 crc kubenswrapper[4895]: I1206 07:27:05.560762 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"00c65f59-10aa-4d09-9de7-c41535957358","Type":"ContainerStarted","Data":"b5a7bd5596535261889a0d33dc78283d5901ce2abef9716de647b50a6ebe5828"} Dec 06 07:27:05 crc kubenswrapper[4895]: I1206 07:27:05.569057 4895 generic.go:334] "Generic (PLEG): container finished" podID="c99b1171-0604-4478-822c-5a8d48ac19f3" containerID="b7cee3836c8818bab967eb0ec3d34659eb88c58ccc8fe7684f9b435c78fc9799" exitCode=0 Dec 06 07:27:05 crc kubenswrapper[4895]: I1206 07:27:05.569093 4895 generic.go:334] "Generic (PLEG): container finished" podID="c99b1171-0604-4478-822c-5a8d48ac19f3" containerID="c91ea1e35e44cdff3019fc5353d6467a90b4b06b9fa2db2fe0b8a87c56043ab5" exitCode=2 Dec 06 07:27:05 crc kubenswrapper[4895]: I1206 07:27:05.569102 4895 generic.go:334] "Generic (PLEG): container finished" podID="c99b1171-0604-4478-822c-5a8d48ac19f3" containerID="67972da996c538e53cbe0e9fcfd03a8b37dc808fd647f57c5aad61c4e1cd181a" exitCode=0 Dec 06 07:27:05 crc kubenswrapper[4895]: I1206 07:27:05.569124 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c99b1171-0604-4478-822c-5a8d48ac19f3","Type":"ContainerDied","Data":"b7cee3836c8818bab967eb0ec3d34659eb88c58ccc8fe7684f9b435c78fc9799"} Dec 06 07:27:05 crc kubenswrapper[4895]: I1206 07:27:05.569151 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c99b1171-0604-4478-822c-5a8d48ac19f3","Type":"ContainerDied","Data":"c91ea1e35e44cdff3019fc5353d6467a90b4b06b9fa2db2fe0b8a87c56043ab5"} Dec 06 07:27:05 crc kubenswrapper[4895]: I1206 07:27:05.569160 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c99b1171-0604-4478-822c-5a8d48ac19f3","Type":"ContainerDied","Data":"67972da996c538e53cbe0e9fcfd03a8b37dc808fd647f57c5aad61c4e1cd181a"} Dec 06 07:27:05 crc kubenswrapper[4895]: I1206 07:27:05.742992 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 07:27:06 crc kubenswrapper[4895]: I1206 07:27:06.598290 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"00c65f59-10aa-4d09-9de7-c41535957358","Type":"ContainerStarted","Data":"822ed7dee08f6aceb15502daba16047d551a459b6696000c1e58b914d389edb4"} Dec 06 07:27:06 crc kubenswrapper[4895]: I1206 07:27:06.609035 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" event={"ID":"0bfc2662-32ed-4e75-98d8-5fe472cb5052","Type":"ContainerStarted","Data":"292daefa98d3df02d99150321d8e332021999a759f4f93641a98fd9843975bb0"} Dec 06 07:27:06 crc kubenswrapper[4895]: I1206 07:27:06.609283 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:27:06 crc kubenswrapper[4895]: I1206 07:27:06.649188 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"fe0e23e4-15a6-4944-9d63-9f4d503318fd","Type":"ContainerStarted","Data":"dd897a563437692636b07b150ee20808de134a67062e815a448d0adfae0ce52e"} Dec 06 07:27:06 crc kubenswrapper[4895]: I1206 07:27:06.649596 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fe0e23e4-15a6-4944-9d63-9f4d503318fd","Type":"ContainerStarted","Data":"2c9a02e2022d92b915036e8f07855fca58eaeb7819861ca4ba28c0b6c1b71892"} Dec 06 07:27:06 crc kubenswrapper[4895]: I1206 07:27:06.659113 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" podStartSLOduration=4.659084185 podStartE2EDuration="4.659084185s" podCreationTimestamp="2025-12-06 07:27:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:27:06.632297865 +0000 UTC m=+1789.033686755" watchObservedRunningTime="2025-12-06 07:27:06.659084185 +0000 UTC m=+1789.060473055" Dec 06 07:27:07 crc kubenswrapper[4895]: I1206 07:27:07.067433 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 07:27:07 crc kubenswrapper[4895]: I1206 07:27:07.659931 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"00c65f59-10aa-4d09-9de7-c41535957358","Type":"ContainerStarted","Data":"7b22e1e598d137a8bb281a3bfc48a4ce8aa1b1d8786e8233fec75e235b89476d"} Dec 06 07:27:07 crc kubenswrapper[4895]: I1206 07:27:07.660038 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="00c65f59-10aa-4d09-9de7-c41535957358" containerName="glance-log" containerID="cri-o://822ed7dee08f6aceb15502daba16047d551a459b6696000c1e58b914d389edb4" gracePeriod=30 Dec 06 07:27:07 crc kubenswrapper[4895]: I1206 07:27:07.660321 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="00c65f59-10aa-4d09-9de7-c41535957358" containerName="glance-httpd" containerID="cri-o://7b22e1e598d137a8bb281a3bfc48a4ce8aa1b1d8786e8233fec75e235b89476d" gracePeriod=30 Dec 06 07:27:07 crc kubenswrapper[4895]: I1206 07:27:07.684443 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.684420336 podStartE2EDuration="4.684420336s" podCreationTimestamp="2025-12-06 07:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:27:07.683299826 +0000 UTC m=+1790.084688696" watchObservedRunningTime="2025-12-06 07:27:07.684420336 +0000 UTC m=+1790.085809206" Dec 06 07:27:10 crc kubenswrapper[4895]: I1206 07:27:10.691142 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fe0e23e4-15a6-4944-9d63-9f4d503318fd" containerName="glance-log" containerID="cri-o://dd897a563437692636b07b150ee20808de134a67062e815a448d0adfae0ce52e" gracePeriod=30 Dec 06 07:27:10 crc kubenswrapper[4895]: I1206 07:27:10.691795 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fe0e23e4-15a6-4944-9d63-9f4d503318fd","Type":"ContainerStarted","Data":"bf46e90e64e06225096f27546a62500980cc1d5f0089f72321bed2105e65dc2a"} Dec 06 07:27:10 crc kubenswrapper[4895]: I1206 07:27:10.691882 4895 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fe0e23e4-15a6-4944-9d63-9f4d503318fd" containerName="glance-httpd" containerID="cri-o://bf46e90e64e06225096f27546a62500980cc1d5f0089f72321bed2105e65dc2a" gracePeriod=30 Dec 06 07:27:10 crc kubenswrapper[4895]: I1206 07:27:10.702951 4895 generic.go:334] "Generic (PLEG): container finished" podID="00c65f59-10aa-4d09-9de7-c41535957358" containerID="7b22e1e598d137a8bb281a3bfc48a4ce8aa1b1d8786e8233fec75e235b89476d" exitCode=0 Dec 06 07:27:10 crc kubenswrapper[4895]: I1206 07:27:10.708079 4895 generic.go:334] "Generic (PLEG): container finished" podID="00c65f59-10aa-4d09-9de7-c41535957358" containerID="822ed7dee08f6aceb15502daba16047d551a459b6696000c1e58b914d389edb4" exitCode=143 Dec 06 07:27:10 crc kubenswrapper[4895]: I1206 07:27:10.703002 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"00c65f59-10aa-4d09-9de7-c41535957358","Type":"ContainerDied","Data":"7b22e1e598d137a8bb281a3bfc48a4ce8aa1b1d8786e8233fec75e235b89476d"} Dec 06 07:27:10 crc kubenswrapper[4895]: I1206 07:27:10.708154 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"00c65f59-10aa-4d09-9de7-c41535957358","Type":"ContainerDied","Data":"822ed7dee08f6aceb15502daba16047d551a459b6696000c1e58b914d389edb4"} Dec 06 07:27:10 crc kubenswrapper[4895]: I1206 07:27:10.754492 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.754447438 podStartE2EDuration="7.754447438s" podCreationTimestamp="2025-12-06 07:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:27:10.745328383 +0000 UTC m=+1793.146717253" watchObservedRunningTime="2025-12-06 07:27:10.754447438 +0000 UTC m=+1793.155836308" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.523967 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.611113 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00c65f59-10aa-4d09-9de7-c41535957358-httpd-run\") pod \"00c65f59-10aa-4d09-9de7-c41535957358\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.611217 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9jxn\" (UniqueName: \"kubernetes.io/projected/00c65f59-10aa-4d09-9de7-c41535957358-kube-api-access-z9jxn\") pod \"00c65f59-10aa-4d09-9de7-c41535957358\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.611293 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c65f59-10aa-4d09-9de7-c41535957358-combined-ca-bundle\") pod \"00c65f59-10aa-4d09-9de7-c41535957358\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.611318 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"00c65f59-10aa-4d09-9de7-c41535957358\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.611349 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c65f59-10aa-4d09-9de7-c41535957358-scripts\") pod \"00c65f59-10aa-4d09-9de7-c41535957358\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.611379 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c65f59-10aa-4d09-9de7-c41535957358-logs\") pod \"00c65f59-10aa-4d09-9de7-c41535957358\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.611583 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c65f59-10aa-4d09-9de7-c41535957358-config-data\") pod \"00c65f59-10aa-4d09-9de7-c41535957358\" (UID: \"00c65f59-10aa-4d09-9de7-c41535957358\") " Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.613272 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00c65f59-10aa-4d09-9de7-c41535957358-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "00c65f59-10aa-4d09-9de7-c41535957358" (UID: "00c65f59-10aa-4d09-9de7-c41535957358"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.615974 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00c65f59-10aa-4d09-9de7-c41535957358-logs" (OuterVolumeSpecName: "logs") pod "00c65f59-10aa-4d09-9de7-c41535957358" (UID: "00c65f59-10aa-4d09-9de7-c41535957358"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.625826 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c65f59-10aa-4d09-9de7-c41535957358-scripts" (OuterVolumeSpecName: "scripts") pod "00c65f59-10aa-4d09-9de7-c41535957358" (UID: "00c65f59-10aa-4d09-9de7-c41535957358"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.640781 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "00c65f59-10aa-4d09-9de7-c41535957358" (UID: "00c65f59-10aa-4d09-9de7-c41535957358"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.643278 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00c65f59-10aa-4d09-9de7-c41535957358-kube-api-access-z9jxn" (OuterVolumeSpecName: "kube-api-access-z9jxn") pod "00c65f59-10aa-4d09-9de7-c41535957358" (UID: "00c65f59-10aa-4d09-9de7-c41535957358"). InnerVolumeSpecName "kube-api-access-z9jxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.658177 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c65f59-10aa-4d09-9de7-c41535957358-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00c65f59-10aa-4d09-9de7-c41535957358" (UID: "00c65f59-10aa-4d09-9de7-c41535957358"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.682783 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c65f59-10aa-4d09-9de7-c41535957358-config-data" (OuterVolumeSpecName: "config-data") pod "00c65f59-10aa-4d09-9de7-c41535957358" (UID: "00c65f59-10aa-4d09-9de7-c41535957358"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.714202 4895 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00c65f59-10aa-4d09-9de7-c41535957358-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.714236 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9jxn\" (UniqueName: \"kubernetes.io/projected/00c65f59-10aa-4d09-9de7-c41535957358-kube-api-access-z9jxn\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.714263 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c65f59-10aa-4d09-9de7-c41535957358-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.714299 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.714310 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c65f59-10aa-4d09-9de7-c41535957358-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.714336 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c65f59-10aa-4d09-9de7-c41535957358-logs\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.714345 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c65f59-10aa-4d09-9de7-c41535957358-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.725382 4895 generic.go:334] "Generic (PLEG): container finished" podID="fe0e23e4-15a6-4944-9d63-9f4d503318fd" containerID="bf46e90e64e06225096f27546a62500980cc1d5f0089f72321bed2105e65dc2a" exitCode=143 Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.725440 4895 generic.go:334] "Generic (PLEG): container finished" podID="fe0e23e4-15a6-4944-9d63-9f4d503318fd" containerID="dd897a563437692636b07b150ee20808de134a67062e815a448d0adfae0ce52e" exitCode=143 Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.725429 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fe0e23e4-15a6-4944-9d63-9f4d503318fd","Type":"ContainerDied","Data":"bf46e90e64e06225096f27546a62500980cc1d5f0089f72321bed2105e65dc2a"} Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.725526 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fe0e23e4-15a6-4944-9d63-9f4d503318fd","Type":"ContainerDied","Data":"dd897a563437692636b07b150ee20808de134a67062e815a448d0adfae0ce52e"} Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.728645 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"00c65f59-10aa-4d09-9de7-c41535957358","Type":"ContainerDied","Data":"b5a7bd5596535261889a0d33dc78283d5901ce2abef9716de647b50a6ebe5828"} Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.728686 4895 scope.go:117] "RemoveContainer" containerID="7b22e1e598d137a8bb281a3bfc48a4ce8aa1b1d8786e8233fec75e235b89476d" Dec 06 07:27:11 crc kubenswrapper[4895]: 
I1206 07:27:11.728799 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.740605 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.767404 4895 scope.go:117] "RemoveContainer" containerID="822ed7dee08f6aceb15502daba16047d551a459b6696000c1e58b914d389edb4" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.800317 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.819210 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.828272 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.844833 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 07:27:11 crc kubenswrapper[4895]: E1206 07:27:11.846545 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00c65f59-10aa-4d09-9de7-c41535957358" containerName="glance-httpd" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.846607 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c65f59-10aa-4d09-9de7-c41535957358" containerName="glance-httpd" Dec 06 07:27:11 crc kubenswrapper[4895]: E1206 07:27:11.846673 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00c65f59-10aa-4d09-9de7-c41535957358" containerName="glance-log" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.846711 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c65f59-10aa-4d09-9de7-c41535957358" containerName="glance-log" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.847288 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="00c65f59-10aa-4d09-9de7-c41535957358" containerName="glance-httpd" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.847355 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="00c65f59-10aa-4d09-9de7-c41535957358" containerName="glance-log" Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.850092 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.854871 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.854993 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.871073 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.921552 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa5d7561-2042-4dcc-8ddc-336475230720-logs\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.922305 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa5d7561-2042-4dcc-8ddc-336475230720-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.922698 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgnpj\" (UniqueName: \"kubernetes.io/projected/fa5d7561-2042-4dcc-8ddc-336475230720-kube-api-access-kgnpj\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.923127 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.923418 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.923639 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.923812 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:11 crc kubenswrapper[4895]: I1206 07:27:11.924007 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.028085 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.029040 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.029180 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.029369 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.030192 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa5d7561-2042-4dcc-8ddc-336475230720-logs\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.030553 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa5d7561-2042-4dcc-8ddc-336475230720-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.030845 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgnpj\" (UniqueName: \"kubernetes.io/projected/fa5d7561-2042-4dcc-8ddc-336475230720-kube-api-access-kgnpj\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.030977 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.032060 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa5d7561-2042-4dcc-8ddc-336475230720-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.032445 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa5d7561-2042-4dcc-8ddc-336475230720-logs\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.033137 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.039847 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.039934 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.040512 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.056635 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.064367 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgnpj\" (UniqueName: \"kubernetes.io/projected/fa5d7561-2042-4dcc-8ddc-336475230720-kube-api-access-kgnpj\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.085940 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00c65f59-10aa-4d09-9de7-c41535957358" path="/var/lib/kubelet/pods/00c65f59-10aa-4d09-9de7-c41535957358/volumes"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.095121 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " pod="openstack/glance-default-external-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.228575 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.592054 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.647756 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") "
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.647853 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sxqc\" (UniqueName: \"kubernetes.io/projected/fe0e23e4-15a6-4944-9d63-9f4d503318fd-kube-api-access-6sxqc\") pod \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") "
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.647909 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe0e23e4-15a6-4944-9d63-9f4d503318fd-httpd-run\") pod \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") "
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.648115 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe0e23e4-15a6-4944-9d63-9f4d503318fd-config-data\") pod \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") "
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.648239 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe0e23e4-15a6-4944-9d63-9f4d503318fd-scripts\") pod \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") "
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.648311 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe0e23e4-15a6-4944-9d63-9f4d503318fd-logs\") pod \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") "
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.648371 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe0e23e4-15a6-4944-9d63-9f4d503318fd-combined-ca-bundle\") pod \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\" (UID: \"fe0e23e4-15a6-4944-9d63-9f4d503318fd\") "
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.649838 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe0e23e4-15a6-4944-9d63-9f4d503318fd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fe0e23e4-15a6-4944-9d63-9f4d503318fd" (UID: "fe0e23e4-15a6-4944-9d63-9f4d503318fd"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.650311 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe0e23e4-15a6-4944-9d63-9f4d503318fd-logs" (OuterVolumeSpecName: "logs") pod "fe0e23e4-15a6-4944-9d63-9f4d503318fd" (UID: "fe0e23e4-15a6-4944-9d63-9f4d503318fd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.657378 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe0e23e4-15a6-4944-9d63-9f4d503318fd-kube-api-access-6sxqc" (OuterVolumeSpecName: "kube-api-access-6sxqc") pod "fe0e23e4-15a6-4944-9d63-9f4d503318fd" (UID: "fe0e23e4-15a6-4944-9d63-9f4d503318fd"). InnerVolumeSpecName "kube-api-access-6sxqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.657770 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe0e23e4-15a6-4944-9d63-9f4d503318fd-scripts" (OuterVolumeSpecName: "scripts") pod "fe0e23e4-15a6-4944-9d63-9f4d503318fd" (UID: "fe0e23e4-15a6-4944-9d63-9f4d503318fd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.663185 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "fe0e23e4-15a6-4944-9d63-9f4d503318fd" (UID: "fe0e23e4-15a6-4944-9d63-9f4d503318fd"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.707806 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe0e23e4-15a6-4944-9d63-9f4d503318fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe0e23e4-15a6-4944-9d63-9f4d503318fd" (UID: "fe0e23e4-15a6-4944-9d63-9f4d503318fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.742419 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe0e23e4-15a6-4944-9d63-9f4d503318fd-config-data" (OuterVolumeSpecName: "config-data") pod "fe0e23e4-15a6-4944-9d63-9f4d503318fd" (UID: "fe0e23e4-15a6-4944-9d63-9f4d503318fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.748881 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fe0e23e4-15a6-4944-9d63-9f4d503318fd","Type":"ContainerDied","Data":"2c9a02e2022d92b915036e8f07855fca58eaeb7819861ca4ba28c0b6c1b71892"}
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.748938 4895 scope.go:117] "RemoveContainer" containerID="bf46e90e64e06225096f27546a62500980cc1d5f0089f72321bed2105e65dc2a"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.749069 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.756362 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe0e23e4-15a6-4944-9d63-9f4d503318fd-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.756391 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe0e23e4-15a6-4944-9d63-9f4d503318fd-logs\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.756402 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe0e23e4-15a6-4944-9d63-9f4d503318fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.756428 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.756440 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sxqc\" (UniqueName: \"kubernetes.io/projected/fe0e23e4-15a6-4944-9d63-9f4d503318fd-kube-api-access-6sxqc\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.756449 4895 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe0e23e4-15a6-4944-9d63-9f4d503318fd-httpd-run\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.756460 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe0e23e4-15a6-4944-9d63-9f4d503318fd-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.798781 4895 scope.go:117] "RemoveContainer" containerID="dd897a563437692636b07b150ee20808de134a67062e815a448d0adfae0ce52e"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.802864 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.821656 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.855748 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.858441 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.917635 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 06 07:27:12 crc kubenswrapper[4895]: E1206 07:27:12.918186 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe0e23e4-15a6-4944-9d63-9f4d503318fd" containerName="glance-httpd"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.918204 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe0e23e4-15a6-4944-9d63-9f4d503318fd" containerName="glance-httpd"
Dec 06 07:27:12 crc kubenswrapper[4895]: E1206 07:27:12.918224 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe0e23e4-15a6-4944-9d63-9f4d503318fd" containerName="glance-log"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.918232 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe0e23e4-15a6-4944-9d63-9f4d503318fd" containerName="glance-log"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.918482 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe0e23e4-15a6-4944-9d63-9f4d503318fd" containerName="glance-log"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.918516 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe0e23e4-15a6-4944-9d63-9f4d503318fd" containerName="glance-httpd"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.919882 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.928687 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.929171 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.959175 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 06 07:27:12 crc kubenswrapper[4895]: I1206 07:27:12.995134 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 06 07:27:12 crc kubenswrapper[4895]: E1206 07:27:12.996329 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data glance httpd-run internal-tls-certs kube-api-access-d7l5p logs scripts], unattached volumes=[], failed to process volumes=[combined-ca-bundle config-data glance httpd-run internal-tls-certs kube-api-access-d7l5p logs scripts]: context canceled" pod="openstack/glance-default-internal-api-0" podUID="3c608c20-9de6-401a-a339-815a03b30231"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.056877 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.062490 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.063962 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.064296 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.064543 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.064757 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c608c20-9de6-401a-a339-815a03b30231-logs\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.065034 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3c608c20-9de6-401a-a339-815a03b30231-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.065218 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7l5p\" (UniqueName: \"kubernetes.io/projected/3c608c20-9de6-401a-a339-815a03b30231-kube-api-access-d7l5p\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.065254 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.167773 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.167873 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.167924 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.167973 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.168027 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c608c20-9de6-401a-a339-815a03b30231-logs\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.168059 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3c608c20-9de6-401a-a339-815a03b30231-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.168111 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7l5p\" (UniqueName: \"kubernetes.io/projected/3c608c20-9de6-401a-a339-815a03b30231-kube-api-access-d7l5p\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.168145 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.168753 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3c608c20-9de6-401a-a339-815a03b30231-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.169146 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c608c20-9de6-401a-a339-815a03b30231-logs\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.169227 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.180329 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.180374 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.180734 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.181730 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.193923 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7l5p\" (UniqueName: \"kubernetes.io/projected/3c608c20-9de6-401a-a339-815a03b30231-kube-api-access-d7l5p\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.242775 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.341810 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.437293 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8495c879d5-xlttw"]
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.437903 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8495c879d5-xlttw" podUID="494c9693-fb7c-468c-8a34-a6fcfbd35fd7" containerName="dnsmasq-dns" containerID="cri-o://82bdfb3f961fdffe26381a5f1e2adb73bd2436e9138bcd3e2b8d88b1ed8a8c66" gracePeriod=10
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.834100 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa5d7561-2042-4dcc-8ddc-336475230720","Type":"ContainerStarted","Data":"e4b017b40cb9870c53cbc8882248226d312f74744a71419cefab7226316f489d"}
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.868767 4895 generic.go:334] "Generic (PLEG): container finished" podID="494c9693-fb7c-468c-8a34-a6fcfbd35fd7" containerID="82bdfb3f961fdffe26381a5f1e2adb73bd2436e9138bcd3e2b8d88b1ed8a8c66" exitCode=0
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.868869 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.869903 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8495c879d5-xlttw" event={"ID":"494c9693-fb7c-468c-8a34-a6fcfbd35fd7","Type":"ContainerDied","Data":"82bdfb3f961fdffe26381a5f1e2adb73bd2436e9138bcd3e2b8d88b1ed8a8c66"}
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.906361 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.992642 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-scripts\") pod \"3c608c20-9de6-401a-a339-815a03b30231\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") "
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.992802 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-combined-ca-bundle\") pod \"3c608c20-9de6-401a-a339-815a03b30231\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") "
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.992831 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3c608c20-9de6-401a-a339-815a03b30231-httpd-run\") pod \"3c608c20-9de6-401a-a339-815a03b30231\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") "
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.992846 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-internal-tls-certs\") pod \"3c608c20-9de6-401a-a339-815a03b30231\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") "
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.992868 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-config-data\") pod \"3c608c20-9de6-401a-a339-815a03b30231\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") "
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.992909 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c608c20-9de6-401a-a339-815a03b30231-logs\") pod \"3c608c20-9de6-401a-a339-815a03b30231\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") "
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.993034 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"3c608c20-9de6-401a-a339-815a03b30231\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") "
Dec 06 07:27:13 crc kubenswrapper[4895]: I1206 07:27:13.993068 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7l5p\" (UniqueName: \"kubernetes.io/projected/3c608c20-9de6-401a-a339-815a03b30231-kube-api-access-d7l5p\") pod \"3c608c20-9de6-401a-a339-815a03b30231\" (UID: \"3c608c20-9de6-401a-a339-815a03b30231\") "
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.000048 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c608c20-9de6-401a-a339-815a03b30231-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3c608c20-9de6-401a-a339-815a03b30231" (UID: "3c608c20-9de6-401a-a339-815a03b30231"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.001893 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c608c20-9de6-401a-a339-815a03b30231-logs" (OuterVolumeSpecName: "logs") pod "3c608c20-9de6-401a-a339-815a03b30231" (UID: "3c608c20-9de6-401a-a339-815a03b30231"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.002088 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-scripts" (OuterVolumeSpecName: "scripts") pod "3c608c20-9de6-401a-a339-815a03b30231" (UID: "3c608c20-9de6-401a-a339-815a03b30231"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.010833 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c608c20-9de6-401a-a339-815a03b30231-kube-api-access-d7l5p" (OuterVolumeSpecName: "kube-api-access-d7l5p") pod "3c608c20-9de6-401a-a339-815a03b30231" (UID: "3c608c20-9de6-401a-a339-815a03b30231"). InnerVolumeSpecName "kube-api-access-d7l5p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.016227 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3c608c20-9de6-401a-a339-815a03b30231" (UID: "3c608c20-9de6-401a-a339-815a03b30231"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.025937 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-config-data" (OuterVolumeSpecName: "config-data") pod "3c608c20-9de6-401a-a339-815a03b30231" (UID: "3c608c20-9de6-401a-a339-815a03b30231"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.027326 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c608c20-9de6-401a-a339-815a03b30231" (UID: "3c608c20-9de6-401a-a339-815a03b30231"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.036011 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "3c608c20-9de6-401a-a339-815a03b30231" (UID: "3c608c20-9de6-401a-a339-815a03b30231"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.067922 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe0e23e4-15a6-4944-9d63-9f4d503318fd" path="/var/lib/kubelet/pods/fe0e23e4-15a6-4944-9d63-9f4d503318fd/volumes"
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.095728 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7l5p\" (UniqueName: \"kubernetes.io/projected/3c608c20-9de6-401a-a339-815a03b30231-kube-api-access-d7l5p\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.095778 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.095787 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.095796 4895 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3c608c20-9de6-401a-a339-815a03b30231-httpd-run\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.095805 4895 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.095813 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c608c20-9de6-401a-a339-815a03b30231-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.095822 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c608c20-9de6-401a-a339-815a03b30231-logs\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.095843 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.137826 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.197680 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.879747 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.947610 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.957999 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.974977 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 06 07:27:14 crc kubenswrapper[4895]: I1206 07:27:14.977007 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:14 crc kubenswrapper[4895]: W1206 07:27:14.994870 4895 reflector.go:561] object-"openstack"/"cert-glance-default-internal-svc": failed to list *v1.Secret: secrets "cert-glance-default-internal-svc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object
Dec 06 07:27:15 crc kubenswrapper[4895]: W1206 07:27:14.994919 4895 reflector.go:561] object-"openstack"/"glance-default-internal-config-data": failed to list *v1.Secret: secrets "glance-default-internal-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object
Dec 06 07:27:15 crc kubenswrapper[4895]: E1206 07:27:14.997206 4895 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-default-internal-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"glance-default-internal-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Dec 06 07:27:15 crc kubenswrapper[4895]: E1206 07:27:14.999844 4895 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/glance-glance-default-internal-api-0: failed to fetch PVC from API server: persistentvolumeclaims \"glance-glance-default-internal-api-0\" is forbidden: User \"system:node:crc\" cannot get resource \"persistentvolumeclaims\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" pod="openstack/glance-default-internal-api-0" volumeName="glance"
Dec 06 07:27:15 crc kubenswrapper[4895]: E1206 07:27:15.001970 4895 kubelet.go:1946] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[combined-ca-bundle config-data glance httpd-run internal-tls-certs kube-api-access-5wjvg logs scripts], unattached volumes=[], failed to process volumes=[glance]: error processing PVC openstack/glance-glance-default-internal-api-0: failed to fetch PVC from API server: persistentvolumeclaims \"glance-glance-default-internal-api-0\" is forbidden: User \"system:node:crc\" cannot get resource \"persistentvolumeclaims\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: E1206 07:27:15.002007 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data glance httpd-run internal-tls-certs kube-api-access-5wjvg logs scripts], unattached volumes=[], failed to process volumes=[glance]: error processing PVC openstack/glance-glance-default-internal-api-0: failed to fetch PVC from API server: persistentvolumeclaims \"glance-glance-default-internal-api-0\" is forbidden: User \"system:node:crc\" cannot get resource \"persistentvolumeclaims\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" pod="openstack/glance-default-internal-api-0" podUID="d5739e86-0fb8-4368-91ae-f2a09bb9848c"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.015004 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.015091 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.015156 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5739e86-0fb8-4368-91ae-f2a09bb9848c-logs\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.015174 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.015194 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.015225 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5739e86-0fb8-4368-91ae-f2a09bb9848c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.015325 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wjvg\" (UniqueName: \"kubernetes.io/projected/d5739e86-0fb8-4368-91ae-f2a09bb9848c-kube-api-access-5wjvg\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: E1206 07:27:14.996849 4895 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-glance-default-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-glance-default-internal-svc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.082889 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.117353 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wjvg\" (UniqueName: \"kubernetes.io/projected/d5739e86-0fb8-4368-91ae-f2a09bb9848c-kube-api-access-5wjvg\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.117451 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.117540 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.117586 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5739e86-0fb8-4368-91ae-f2a09bb9848c-logs\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.117607 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.117627 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.117660 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5739e86-0fb8-4368-91ae-f2a09bb9848c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.118352 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5739e86-0fb8-4368-91ae-f2a09bb9848c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.127569 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5739e86-0fb8-4368-91ae-f2a09bb9848c-logs\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.144288 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.144799 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.156060 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wjvg\" (UniqueName: \"kubernetes.io/projected/d5739e86-0fb8-4368-91ae-f2a09bb9848c-kube-api-access-5wjvg\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.222516 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.325108 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.326024 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.367054 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:15 crc kubenswrapper[4895]: I1206 07:27:15.890055 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:16 crc kubenswrapper[4895]: E1206 07:27:16.129609 4895 secret.go:188] Couldn't get secret openstack/cert-glance-default-internal-svc: failed to sync secret cache: timed out waiting for the condition
Dec 06 07:27:16 crc kubenswrapper[4895]: E1206 07:27:16.129717 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-internal-tls-certs podName:d5739e86-0fb8-4368-91ae-f2a09bb9848c nodeName:}" failed. No retries permitted until 2025-12-06 07:27:16.629693152 +0000 UTC m=+1799.031082022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "internal-tls-certs" (UniqueName: "kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-internal-tls-certs") pod "glance-default-internal-api-0" (UID: "d5739e86-0fb8-4368-91ae-f2a09bb9848c") : failed to sync secret cache: timed out waiting for the condition
Dec 06 07:27:16 crc kubenswrapper[4895]: E1206 07:27:16.129608 4895 secret.go:188] Couldn't get secret openstack/glance-default-internal-config-data: failed to sync secret cache: timed out waiting for the condition
Dec 06 07:27:16 crc kubenswrapper[4895]: E1206 07:27:16.129880 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-config-data podName:d5739e86-0fb8-4368-91ae-f2a09bb9848c nodeName:}" failed. No retries permitted until 2025-12-06 07:27:16.629843746 +0000 UTC m=+1799.031232616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-config-data") pod "glance-default-internal-api-0" (UID: "d5739e86-0fb8-4368-91ae-f2a09bb9848c") : failed to sync secret cache: timed out waiting for the condition
Dec 06 07:27:16 crc kubenswrapper[4895]: I1206 07:27:16.178586 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Dec 06 07:27:16 crc kubenswrapper[4895]: I1206 07:27:16.506486 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Dec 06 07:27:16 crc kubenswrapper[4895]: I1206 07:27:16.659702 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:16 crc kubenswrapper[4895]: I1206 07:27:16.659790 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:16 crc kubenswrapper[4895]: I1206 07:27:16.665762 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:16 crc kubenswrapper[4895]: I1206 07:27:16.669694 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:16 crc kubenswrapper[4895]: I1206 07:27:16.791320 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 06 07:27:16 crc kubenswrapper[4895]: I1206 07:27:16.998872 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c608c20-9de6-401a-a339-815a03b30231" path="/var/lib/kubelet/pods/3c608c20-9de6-401a-a339-815a03b30231/volumes"
Dec 06 07:27:17 crc kubenswrapper[4895]: I1206 07:27:17.104828 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8495c879d5-xlttw" podUID="494c9693-fb7c-468c-8a34-a6fcfbd35fd7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.158:5353: connect: connection refused"
Dec 06 07:27:17 crc kubenswrapper[4895]: W1206 07:27:17.642583 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5739e86_0fb8_4368_91ae_f2a09bb9848c.slice/crio-45496466bd5c7bda08224530009d560688965989a55b34509e9183973ed62f6a WatchSource:0}: Error finding container 45496466bd5c7bda08224530009d560688965989a55b34509e9183973ed62f6a: Status 404 returned error can't find the container with id 45496466bd5c7bda08224530009d560688965989a55b34509e9183973ed62f6a
Dec 06 07:27:17 crc kubenswrapper[4895]: I1206 07:27:17.643023 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 06 07:27:17 crc kubenswrapper[4895]: I1206 07:27:17.913119 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d5739e86-0fb8-4368-91ae-f2a09bb9848c","Type":"ContainerStarted","Data":"45496466bd5c7bda08224530009d560688965989a55b34509e9183973ed62f6a"}
Dec 06 07:27:20 crc kubenswrapper[4895]: I1206 07:27:20.051926 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115"
Dec 06 07:27:20 crc kubenswrapper[4895]: E1206 07:27:20.053916 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 07:27:20 crc kubenswrapper[4895]: I1206 07:27:20.909936 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8495c879d5-xlttw"
Dec 06 07:27:20 crc kubenswrapper[4895]: I1206 07:27:20.960868 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d5739e86-0fb8-4368-91ae-f2a09bb9848c","Type":"ContainerStarted","Data":"c0d8057c614bf57165265f6274705c1b72d6138ba82d542917a4bf68a493d896"}
Dec 06 07:27:20 crc kubenswrapper[4895]: I1206 07:27:20.963902 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa5d7561-2042-4dcc-8ddc-336475230720","Type":"ContainerStarted","Data":"396c5517a5377de34e58194ec2e688e2eb5546de17a7216da1f043b7e210861c"}
Dec 06 07:27:20 crc kubenswrapper[4895]: I1206 07:27:20.966247 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8495c879d5-xlttw" event={"ID":"494c9693-fb7c-468c-8a34-a6fcfbd35fd7","Type":"ContainerDied","Data":"4d8fc2b37c87225ce7a9941c3d5f63a613e13800fb1f6b6ee019ce06da3a610e"}
Dec 06 07:27:20 crc kubenswrapper[4895]: I1206 07:27:20.966288 4895 scope.go:117] "RemoveContainer" containerID="82bdfb3f961fdffe26381a5f1e2adb73bd2436e9138bcd3e2b8d88b1ed8a8c66"
Dec 06 07:27:20 crc kubenswrapper[4895]: I1206 07:27:20.966453 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8495c879d5-xlttw"
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.016686 4895 scope.go:117] "RemoveContainer" containerID="c465e1d35ba88deeabcbb44d314e3d968ac137c77b3b91f2c1758a076ac60ad2"
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.057999 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-ovsdbserver-nb\") pod \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") "
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.058205 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-ovsdbserver-sb\") pod \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") "
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.058353 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-config\") pod \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") "
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.058416 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kvqc\" (UniqueName: \"kubernetes.io/projected/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-kube-api-access-9kvqc\") pod \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") "
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.058501 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-dns-swift-storage-0\") pod \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") "
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.058549 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-dns-svc\") pod \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\" (UID: \"494c9693-fb7c-468c-8a34-a6fcfbd35fd7\") "
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.064627 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-kube-api-access-9kvqc" (OuterVolumeSpecName: "kube-api-access-9kvqc") pod "494c9693-fb7c-468c-8a34-a6fcfbd35fd7" (UID: "494c9693-fb7c-468c-8a34-a6fcfbd35fd7"). InnerVolumeSpecName "kube-api-access-9kvqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.120557 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-config" (OuterVolumeSpecName: "config") pod "494c9693-fb7c-468c-8a34-a6fcfbd35fd7" (UID: "494c9693-fb7c-468c-8a34-a6fcfbd35fd7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.126854 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "494c9693-fb7c-468c-8a34-a6fcfbd35fd7" (UID: "494c9693-fb7c-468c-8a34-a6fcfbd35fd7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.131181 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "494c9693-fb7c-468c-8a34-a6fcfbd35fd7" (UID: "494c9693-fb7c-468c-8a34-a6fcfbd35fd7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.139380 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "494c9693-fb7c-468c-8a34-a6fcfbd35fd7" (UID: "494c9693-fb7c-468c-8a34-a6fcfbd35fd7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.147096 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "494c9693-fb7c-468c-8a34-a6fcfbd35fd7" (UID: "494c9693-fb7c-468c-8a34-a6fcfbd35fd7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.160854 4895 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.160890 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.160899 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.160909 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.160917 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-config\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.160925 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kvqc\" (UniqueName: \"kubernetes.io/projected/494c9693-fb7c-468c-8a34-a6fcfbd35fd7-kube-api-access-9kvqc\") on node \"crc\" DevicePath \"\""
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.345205 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8495c879d5-xlttw"]
Dec 06 07:27:21 crc kubenswrapper[4895]: I1206 07:27:21.360938 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8495c879d5-xlttw"]
Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.009917 4895 generic.go:334] "Generic (PLEG): container finished" podID="c99b1171-0604-4478-822c-5a8d48ac19f3" containerID="0c6d6ffb9d69585f1ab2801d397401bee11daad79a6a1f0f4af67e18c69a321f" exitCode=0
Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.010027 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c99b1171-0604-4478-822c-5a8d48ac19f3","Type":"ContainerDied","Data":"0c6d6ffb9d69585f1ab2801d397401bee11daad79a6a1f0f4af67e18c69a321f"}
Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.010463 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c99b1171-0604-4478-822c-5a8d48ac19f3","Type":"ContainerDied","Data":"1fd8dc268e92dbec193bf1d5e9bc988659c5a3ab850d3ef452251667bfdee878"}
Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.010502 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fd8dc268e92dbec193bf1d5e9bc988659c5a3ab850d3ef452251667bfdee878"
Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.021109 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa5d7561-2042-4dcc-8ddc-336475230720","Type":"ContainerStarted","Data":"5f86f88a0048b09a72677b652eb94fd21ab7d1447989850d3c2a784d667b1b12"}
Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.066432 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0"
podStartSLOduration=11.066401782 podStartE2EDuration="11.066401782s" podCreationTimestamp="2025-12-06 07:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:27:22.053517406 +0000 UTC m=+1804.454906286" watchObservedRunningTime="2025-12-06 07:27:22.066401782 +0000 UTC m=+1804.467790652" Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.067797 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="494c9693-fb7c-468c-8a34-a6fcfbd35fd7" path="/var/lib/kubelet/pods/494c9693-fb7c-468c-8a34-a6fcfbd35fd7/volumes" Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.087857 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.192511 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bvfg\" (UniqueName: \"kubernetes.io/projected/c99b1171-0604-4478-822c-5a8d48ac19f3-kube-api-access-6bvfg\") pod \"c99b1171-0604-4478-822c-5a8d48ac19f3\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.193518 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-sg-core-conf-yaml\") pod \"c99b1171-0604-4478-822c-5a8d48ac19f3\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.197626 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-scripts\") pod \"c99b1171-0604-4478-822c-5a8d48ac19f3\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.197702 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c99b1171-0604-4478-822c-5a8d48ac19f3-log-httpd\") pod \"c99b1171-0604-4478-822c-5a8d48ac19f3\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.197919 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-config-data\") pod \"c99b1171-0604-4478-822c-5a8d48ac19f3\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.197970 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-combined-ca-bundle\") pod \"c99b1171-0604-4478-822c-5a8d48ac19f3\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.198005 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c99b1171-0604-4478-822c-5a8d48ac19f3-run-httpd\") pod \"c99b1171-0604-4478-822c-5a8d48ac19f3\" (UID: \"c99b1171-0604-4478-822c-5a8d48ac19f3\") " Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.200948 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c99b1171-0604-4478-822c-5a8d48ac19f3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"c99b1171-0604-4478-822c-5a8d48ac19f3" (UID: "c99b1171-0604-4478-822c-5a8d48ac19f3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.201533 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99b1171-0604-4478-822c-5a8d48ac19f3-kube-api-access-6bvfg" (OuterVolumeSpecName: "kube-api-access-6bvfg") pod "c99b1171-0604-4478-822c-5a8d48ac19f3" (UID: "c99b1171-0604-4478-822c-5a8d48ac19f3"). InnerVolumeSpecName "kube-api-access-6bvfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.201720 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c99b1171-0604-4478-822c-5a8d48ac19f3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c99b1171-0604-4478-822c-5a8d48ac19f3" (UID: "c99b1171-0604-4478-822c-5a8d48ac19f3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.205757 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-scripts" (OuterVolumeSpecName: "scripts") pod "c99b1171-0604-4478-822c-5a8d48ac19f3" (UID: "c99b1171-0604-4478-822c-5a8d48ac19f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.230645 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.230719 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.243083 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c99b1171-0604-4478-822c-5a8d48ac19f3" (UID: "c99b1171-0604-4478-822c-5a8d48ac19f3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.289728 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.306539 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bvfg\" (UniqueName: \"kubernetes.io/projected/c99b1171-0604-4478-822c-5a8d48ac19f3-kube-api-access-6bvfg\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.306610 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.306626 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.306665 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c99b1171-0604-4478-822c-5a8d48ac19f3-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.306680 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c99b1171-0604-4478-822c-5a8d48ac19f3-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.311081 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.351513 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-config-data" (OuterVolumeSpecName: "config-data") pod "c99b1171-0604-4478-822c-5a8d48ac19f3" (UID: "c99b1171-0604-4478-822c-5a8d48ac19f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.368458 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c99b1171-0604-4478-822c-5a8d48ac19f3" (UID: "c99b1171-0604-4478-822c-5a8d48ac19f3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.418707 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:22 crc kubenswrapper[4895]: I1206 07:27:22.418762 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99b1171-0604-4478-822c-5a8d48ac19f3-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.048456 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d5739e86-0fb8-4368-91ae-f2a09bb9848c","Type":"ContainerStarted","Data":"5c1298d7ec7ec06c2816c3c5d51d11ddd9f42ecf5beced52e8e430776e865dc9"} Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.048534 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.049670 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.049705 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.115720 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.115694636 podStartE2EDuration="9.115694636s" podCreationTimestamp="2025-12-06 07:27:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:27:23.088168196 +0000 UTC m=+1805.489557066" watchObservedRunningTime="2025-12-06 07:27:23.115694636 +0000 UTC m=+1805.517083506" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.117423 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.129882 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.173050 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:27:23 crc kubenswrapper[4895]: E1206 07:27:23.176507 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99b1171-0604-4478-822c-5a8d48ac19f3" containerName="proxy-httpd" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.176564 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99b1171-0604-4478-822c-5a8d48ac19f3" containerName="proxy-httpd" Dec 06 07:27:23 crc kubenswrapper[4895]: E1206 07:27:23.176603 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99b1171-0604-4478-822c-5a8d48ac19f3" containerName="ceilometer-notification-agent" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.176614 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99b1171-0604-4478-822c-5a8d48ac19f3" containerName="ceilometer-notification-agent" Dec 06 07:27:23 crc kubenswrapper[4895]: E1206 07:27:23.176644 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="494c9693-fb7c-468c-8a34-a6fcfbd35fd7" containerName="dnsmasq-dns" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.176774 4895 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="494c9693-fb7c-468c-8a34-a6fcfbd35fd7" containerName="dnsmasq-dns" Dec 06 07:27:23 crc kubenswrapper[4895]: E1206 07:27:23.176795 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99b1171-0604-4478-822c-5a8d48ac19f3" containerName="ceilometer-central-agent" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.176804 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99b1171-0604-4478-822c-5a8d48ac19f3" containerName="ceilometer-central-agent" Dec 06 07:27:23 crc kubenswrapper[4895]: E1206 07:27:23.176844 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99b1171-0604-4478-822c-5a8d48ac19f3" containerName="sg-core" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.176852 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99b1171-0604-4478-822c-5a8d48ac19f3" containerName="sg-core" Dec 06 07:27:23 crc kubenswrapper[4895]: E1206 07:27:23.176879 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="494c9693-fb7c-468c-8a34-a6fcfbd35fd7" containerName="init" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.176888 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="494c9693-fb7c-468c-8a34-a6fcfbd35fd7" containerName="init" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.181436 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99b1171-0604-4478-822c-5a8d48ac19f3" containerName="ceilometer-central-agent" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.181521 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99b1171-0604-4478-822c-5a8d48ac19f3" containerName="sg-core" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.181572 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99b1171-0604-4478-822c-5a8d48ac19f3" containerName="proxy-httpd" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.181598 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99b1171-0604-4478-822c-5a8d48ac19f3" containerName="ceilometer-notification-agent" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.181619 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="494c9693-fb7c-468c-8a34-a6fcfbd35fd7" containerName="dnsmasq-dns" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.193415 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.214671 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.225362 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.239778 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.240603 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.240861 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-log-httpd\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.241019 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-config-data\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.241211 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-run-httpd\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.241443 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r8ww\" (UniqueName: \"kubernetes.io/projected/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-kube-api-access-5r8ww\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.241617 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-scripts\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.243174 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.344262 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-log-httpd\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.344754 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-config-data\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.344838 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-run-httpd\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.344894 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r8ww\" (UniqueName: \"kubernetes.io/projected/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-kube-api-access-5r8ww\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.344940 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-scripts\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.345085 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.345132 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.345747 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-log-httpd\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.346697 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-run-httpd\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.352127 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-scripts\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.354399 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.354938 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-config-data\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.356218 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.367007 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r8ww\" (UniqueName: \"kubernetes.io/projected/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-kube-api-access-5r8ww\") pod \"ceilometer-0\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " pod="openstack/ceilometer-0" Dec 06 07:27:23 crc kubenswrapper[4895]: I1206 07:27:23.540433 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:27:24 crc kubenswrapper[4895]: I1206 07:27:24.065158 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c99b1171-0604-4478-822c-5a8d48ac19f3" path="/var/lib/kubelet/pods/c99b1171-0604-4478-822c-5a8d48ac19f3/volumes" Dec 06 07:27:24 crc kubenswrapper[4895]: I1206 07:27:24.066261 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:27:25 crc kubenswrapper[4895]: I1206 07:27:25.071924 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140","Type":"ContainerStarted","Data":"4f550832062a783b49499587df93cfcfdbf20dafa3a650fdb0ed981614335cee"} Dec 06 07:27:26 crc kubenswrapper[4895]: I1206 07:27:26.791799 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 06 07:27:26 crc kubenswrapper[4895]: I1206 07:27:26.792159 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 06 07:27:26 crc kubenswrapper[4895]: I1206 07:27:26.831533 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 06 07:27:26 crc kubenswrapper[4895]: I1206 07:27:26.843212 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 06 07:27:27 crc kubenswrapper[4895]: I1206 07:27:27.097588 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140","Type":"ContainerStarted","Data":"afea4a6234b744992805b16535e2aa0049a2d00080994f9444646244b8f612db"} Dec 06 07:27:27 crc kubenswrapper[4895]: I1206 07:27:27.097665 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 06 07:27:27 crc kubenswrapper[4895]: I1206 07:27:27.097962 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 06 07:27:27 crc kubenswrapper[4895]: I1206 07:27:27.285943 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 06 07:27:27 crc kubenswrapper[4895]: I1206 07:27:27.524220 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:27:29 crc kubenswrapper[4895]: 
I1206 07:27:29.119431 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 07:27:29 crc kubenswrapper[4895]: I1206 07:27:29.120007 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 07:27:29 crc kubenswrapper[4895]: I1206 07:27:29.121009 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140","Type":"ContainerStarted","Data":"e479e06d9c6cad8d5a62143baa87c4098959ddcb18dba999a1d0faca7231bf79"} Dec 06 07:27:29 crc kubenswrapper[4895]: I1206 07:27:29.855578 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 06 07:27:29 crc kubenswrapper[4895]: I1206 07:27:29.869432 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 06 07:27:30 crc kubenswrapper[4895]: I1206 07:27:30.039643 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 06 07:27:32 crc kubenswrapper[4895]: I1206 07:27:32.051263 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:27:32 crc kubenswrapper[4895]: E1206 07:27:32.052047 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:27:38 crc kubenswrapper[4895]: I1206 07:27:38.225744 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140","Type":"ContainerStarted","Data":"55129c01d5d6b9e4c7016dbeaf3ce1b344c9bf582c49ac72f22e0750c3583e36"} Dec 06 07:27:43 crc kubenswrapper[4895]: I1206 07:27:43.051659 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:27:43 crc kubenswrapper[4895]: E1206 07:27:43.053281 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:27:43 crc kubenswrapper[4895]: I1206 07:27:43.283215 4895 generic.go:334] "Generic (PLEG): container finished" podID="85a8313b-5768-450d-bf40-3a3197e9b03f" containerID="43dc6067180e3f65623c69b4994dd075ce8e1c72869263fa6d55a6b7dce89050" exitCode=0 Dec 06 07:27:43 crc kubenswrapper[4895]: I1206 07:27:43.283326 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4cfbl" event={"ID":"85a8313b-5768-450d-bf40-3a3197e9b03f","Type":"ContainerDied","Data":"43dc6067180e3f65623c69b4994dd075ce8e1c72869263fa6d55a6b7dce89050"} Dec 06 07:27:44 crc kubenswrapper[4895]: I1206 07:27:44.300542 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140","Type":"ContainerStarted","Data":"b4e09a18305422e32a092769b694c40d4dde03370136a1f9a8c591aec3cea2d6"} Dec 06 07:27:44 crc kubenswrapper[4895]: I1206 07:27:44.680018 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4cfbl" Dec 06 07:27:44 crc kubenswrapper[4895]: I1206 07:27:44.802748 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/85a8313b-5768-450d-bf40-3a3197e9b03f-config\") pod \"85a8313b-5768-450d-bf40-3a3197e9b03f\" (UID: \"85a8313b-5768-450d-bf40-3a3197e9b03f\") " Dec 06 07:27:44 crc kubenswrapper[4895]: I1206 07:27:44.803296 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a8313b-5768-450d-bf40-3a3197e9b03f-combined-ca-bundle\") pod \"85a8313b-5768-450d-bf40-3a3197e9b03f\" (UID: \"85a8313b-5768-450d-bf40-3a3197e9b03f\") " Dec 06 07:27:44 crc kubenswrapper[4895]: I1206 07:27:44.803462 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rmqb\" (UniqueName: \"kubernetes.io/projected/85a8313b-5768-450d-bf40-3a3197e9b03f-kube-api-access-8rmqb\") pod \"85a8313b-5768-450d-bf40-3a3197e9b03f\" (UID: \"85a8313b-5768-450d-bf40-3a3197e9b03f\") " Dec 06 07:27:44 crc kubenswrapper[4895]: I1206 07:27:44.809795 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85a8313b-5768-450d-bf40-3a3197e9b03f-kube-api-access-8rmqb" (OuterVolumeSpecName: "kube-api-access-8rmqb") pod "85a8313b-5768-450d-bf40-3a3197e9b03f" (UID: "85a8313b-5768-450d-bf40-3a3197e9b03f"). InnerVolumeSpecName "kube-api-access-8rmqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:27:44 crc kubenswrapper[4895]: I1206 07:27:44.832423 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85a8313b-5768-450d-bf40-3a3197e9b03f-config" (OuterVolumeSpecName: "config") pod "85a8313b-5768-450d-bf40-3a3197e9b03f" (UID: "85a8313b-5768-450d-bf40-3a3197e9b03f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:27:44 crc kubenswrapper[4895]: I1206 07:27:44.837496 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85a8313b-5768-450d-bf40-3a3197e9b03f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85a8313b-5768-450d-bf40-3a3197e9b03f" (UID: "85a8313b-5768-450d-bf40-3a3197e9b03f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:27:44 crc kubenswrapper[4895]: I1206 07:27:44.905165 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rmqb\" (UniqueName: \"kubernetes.io/projected/85a8313b-5768-450d-bf40-3a3197e9b03f-kube-api-access-8rmqb\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:44 crc kubenswrapper[4895]: I1206 07:27:44.905203 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/85a8313b-5768-450d-bf40-3a3197e9b03f-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:44 crc kubenswrapper[4895]: I1206 07:27:44.905216 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a8313b-5768-450d-bf40-3a3197e9b03f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:45 crc kubenswrapper[4895]: I1206 07:27:45.315748 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4cfbl" event={"ID":"85a8313b-5768-450d-bf40-3a3197e9b03f","Type":"ContainerDied","Data":"654fc6c9c58e364e1bdc58968671032749fe4b925e9226fddbd2c491d84b8d5c"} Dec 06 07:27:45 crc kubenswrapper[4895]: I1206 07:27:45.315787 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="654fc6c9c58e364e1bdc58968671032749fe4b925e9226fddbd2c491d84b8d5c" Dec 06 07:27:45 crc kubenswrapper[4895]: I1206 07:27:45.315808 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4cfbl" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.326068 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" containerName="ceilometer-central-agent" containerID="cri-o://afea4a6234b744992805b16535e2aa0049a2d00080994f9444646244b8f612db" gracePeriod=30 Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.326193 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" containerName="proxy-httpd" containerID="cri-o://b4e09a18305422e32a092769b694c40d4dde03370136a1f9a8c591aec3cea2d6" gracePeriod=30 Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.326274 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.326278 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" containerName="sg-core" containerID="cri-o://55129c01d5d6b9e4c7016dbeaf3ce1b344c9bf582c49ac72f22e0750c3583e36" gracePeriod=30 Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.326306 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" containerName="ceilometer-notification-agent" containerID="cri-o://e479e06d9c6cad8d5a62143baa87c4098959ddcb18dba999a1d0faca7231bf79" gracePeriod=30 Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.352248 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.939463598 podStartE2EDuration="23.352211097s" podCreationTimestamp="2025-12-06 07:27:23 +0000 UTC" firstStartedPulling="2025-12-06 07:27:24.071641363 +0000 UTC m=+1806.473030233" lastFinishedPulling="2025-12-06 07:27:43.484388862 +0000 UTC 
m=+1825.885777732" observedRunningTime="2025-12-06 07:27:46.346158186 +0000 UTC m=+1828.747547136" watchObservedRunningTime="2025-12-06 07:27:46.352211097 +0000 UTC m=+1828.753600007" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.742830 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56d54d44c7-cvzmv"] Dec 06 07:27:46 crc kubenswrapper[4895]: E1206 07:27:46.743196 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85a8313b-5768-450d-bf40-3a3197e9b03f" containerName="neutron-db-sync" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.743213 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="85a8313b-5768-450d-bf40-3a3197e9b03f" containerName="neutron-db-sync" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.743398 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="85a8313b-5768-450d-bf40-3a3197e9b03f" containerName="neutron-db-sync" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.744310 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.764517 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56d54d44c7-cvzmv"] Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.820744 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-66cfcbd96d-5bdvp"] Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.823776 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66cfcbd96d-5bdvp" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.825883 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.826121 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-sb28w" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.827289 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.827612 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.829435 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66cfcbd96d-5bdvp"] Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.851463 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrzfr\" (UniqueName: \"kubernetes.io/projected/61880001-a6c9-4c2f-80ea-27a053575307-kube-api-access-xrzfr\") pod \"dnsmasq-dns-56d54d44c7-cvzmv\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.851519 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-config\") pod \"dnsmasq-dns-56d54d44c7-cvzmv\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.851566 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-dns-svc\") pod 
\"dnsmasq-dns-56d54d44c7-cvzmv\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.851600 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-ovsdbserver-nb\") pod \"dnsmasq-dns-56d54d44c7-cvzmv\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.851633 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-ovsdbserver-sb\") pod \"dnsmasq-dns-56d54d44c7-cvzmv\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.851677 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-dns-swift-storage-0\") pod \"dnsmasq-dns-56d54d44c7-cvzmv\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.954678 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-dns-swift-storage-0\") pod \"dnsmasq-dns-56d54d44c7-cvzmv\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.958481 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-combined-ca-bundle\") pod \"neutron-66cfcbd96d-5bdvp\" (UID: \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\") " pod="openstack/neutron-66cfcbd96d-5bdvp" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.958623 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-ovndb-tls-certs\") pod \"neutron-66cfcbd96d-5bdvp\" (UID: \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\") " pod="openstack/neutron-66cfcbd96d-5bdvp" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.958701 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-config\") pod \"neutron-66cfcbd96d-5bdvp\" (UID: \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\") " pod="openstack/neutron-66cfcbd96d-5bdvp" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.958753 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqs4f\" (UniqueName: \"kubernetes.io/projected/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-kube-api-access-nqs4f\") pod \"neutron-66cfcbd96d-5bdvp\" (UID: \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\") " pod="openstack/neutron-66cfcbd96d-5bdvp" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.958900 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrzfr\" (UniqueName: 
\"kubernetes.io/projected/61880001-a6c9-4c2f-80ea-27a053575307-kube-api-access-xrzfr\") pod \"dnsmasq-dns-56d54d44c7-cvzmv\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.958954 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-config\") pod \"dnsmasq-dns-56d54d44c7-cvzmv\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.956592 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-dns-swift-storage-0\") pod \"dnsmasq-dns-56d54d44c7-cvzmv\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.959025 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-httpd-config\") pod \"neutron-66cfcbd96d-5bdvp\" (UID: \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\") " pod="openstack/neutron-66cfcbd96d-5bdvp" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.959922 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-dns-svc\") pod \"dnsmasq-dns-56d54d44c7-cvzmv\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.960032 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-ovsdbserver-nb\") pod \"dnsmasq-dns-56d54d44c7-cvzmv\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.960168 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-ovsdbserver-sb\") pod \"dnsmasq-dns-56d54d44c7-cvzmv\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.960383 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-config\") pod \"dnsmasq-dns-56d54d44c7-cvzmv\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.961273 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-dns-svc\") pod \"dnsmasq-dns-56d54d44c7-cvzmv\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.963082 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-ovsdbserver-nb\") pod \"dnsmasq-dns-56d54d44c7-cvzmv\" (UID: 
\"61880001-a6c9-4c2f-80ea-27a053575307\") " pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.963447 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-ovsdbserver-sb\") pod \"dnsmasq-dns-56d54d44c7-cvzmv\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:27:46 crc kubenswrapper[4895]: I1206 07:27:46.982695 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrzfr\" (UniqueName: \"kubernetes.io/projected/61880001-a6c9-4c2f-80ea-27a053575307-kube-api-access-xrzfr\") pod \"dnsmasq-dns-56d54d44c7-cvzmv\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:27:47 crc kubenswrapper[4895]: I1206 07:27:47.062935 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-combined-ca-bundle\") pod \"neutron-66cfcbd96d-5bdvp\" (UID: \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\") " pod="openstack/neutron-66cfcbd96d-5bdvp" Dec 06 07:27:47 crc kubenswrapper[4895]: I1206 07:27:47.062996 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-ovndb-tls-certs\") pod \"neutron-66cfcbd96d-5bdvp\" (UID: \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\") " pod="openstack/neutron-66cfcbd96d-5bdvp" Dec 06 07:27:47 crc kubenswrapper[4895]: I1206 07:27:47.063017 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-config\") pod \"neutron-66cfcbd96d-5bdvp\" (UID: \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\") " pod="openstack/neutron-66cfcbd96d-5bdvp" Dec 06 07:27:47 crc kubenswrapper[4895]: I1206 07:27:47.063041 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqs4f\" (UniqueName: \"kubernetes.io/projected/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-kube-api-access-nqs4f\") pod \"neutron-66cfcbd96d-5bdvp\" (UID: \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\") " pod="openstack/neutron-66cfcbd96d-5bdvp" Dec 06 07:27:47 crc kubenswrapper[4895]: I1206 07:27:47.063090 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-httpd-config\") pod \"neutron-66cfcbd96d-5bdvp\" (UID: \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\") " pod="openstack/neutron-66cfcbd96d-5bdvp" Dec 06 07:27:47 crc kubenswrapper[4895]: I1206 07:27:47.067291 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-httpd-config\") pod \"neutron-66cfcbd96d-5bdvp\" (UID: \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\") " pod="openstack/neutron-66cfcbd96d-5bdvp" Dec 06 07:27:47 crc kubenswrapper[4895]: I1206 07:27:47.068911 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-ovndb-tls-certs\") pod \"neutron-66cfcbd96d-5bdvp\" (UID: \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\") " pod="openstack/neutron-66cfcbd96d-5bdvp" Dec 06 07:27:47 crc kubenswrapper[4895]: I1206 
07:27:47.070162 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-combined-ca-bundle\") pod \"neutron-66cfcbd96d-5bdvp\" (UID: \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\") " pod="openstack/neutron-66cfcbd96d-5bdvp" Dec 06 07:27:47 crc kubenswrapper[4895]: I1206 07:27:47.072294 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-config\") pod \"neutron-66cfcbd96d-5bdvp\" (UID: \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\") " pod="openstack/neutron-66cfcbd96d-5bdvp" Dec 06 07:27:47 crc kubenswrapper[4895]: I1206 07:27:47.080988 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:27:47 crc kubenswrapper[4895]: I1206 07:27:47.082270 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqs4f\" (UniqueName: \"kubernetes.io/projected/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-kube-api-access-nqs4f\") pod \"neutron-66cfcbd96d-5bdvp\" (UID: \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\") " pod="openstack/neutron-66cfcbd96d-5bdvp" Dec 06 07:27:47 crc kubenswrapper[4895]: I1206 07:27:47.190357 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66cfcbd96d-5bdvp" Dec 06 07:27:47 crc kubenswrapper[4895]: I1206 07:27:47.691392 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56d54d44c7-cvzmv"] Dec 06 07:27:47 crc kubenswrapper[4895]: W1206 07:27:47.941507 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fe8c4d3_bd90_49d9_8828_32563e4a1a90.slice/crio-a1f5433698f02ffac347251f9e1b0e2e9dcd0806cb4e6b4abc43fd1a34d797ac WatchSource:0}: Error finding container a1f5433698f02ffac347251f9e1b0e2e9dcd0806cb4e6b4abc43fd1a34d797ac: Status 404 returned error can't find the container with id a1f5433698f02ffac347251f9e1b0e2e9dcd0806cb4e6b4abc43fd1a34d797ac Dec 06 07:27:47 crc kubenswrapper[4895]: I1206 07:27:47.942405 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66cfcbd96d-5bdvp"] Dec 06 07:27:48 crc kubenswrapper[4895]: I1206 07:27:48.354347 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66cfcbd96d-5bdvp" event={"ID":"8fe8c4d3-bd90-49d9-8828-32563e4a1a90","Type":"ContainerStarted","Data":"a1f5433698f02ffac347251f9e1b0e2e9dcd0806cb4e6b4abc43fd1a34d797ac"} Dec 06 07:27:48 crc kubenswrapper[4895]: I1206 07:27:48.358647 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" event={"ID":"61880001-a6c9-4c2f-80ea-27a053575307","Type":"ContainerStarted","Data":"742f687db5ce3566931ad0dff7b3cc18427486f11b4677c107525c91c5276328"} Dec 06 07:27:49 crc kubenswrapper[4895]: I1206 07:27:49.375836 4895 generic.go:334] "Generic (PLEG): container finished" podID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" containerID="b4e09a18305422e32a092769b694c40d4dde03370136a1f9a8c591aec3cea2d6" exitCode=0 Dec 06 07:27:49 crc kubenswrapper[4895]: I1206 07:27:49.377276 4895 generic.go:334] "Generic (PLEG): container finished" podID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" containerID="55129c01d5d6b9e4c7016dbeaf3ce1b344c9bf582c49ac72f22e0750c3583e36" exitCode=2 Dec 06 07:27:49 crc kubenswrapper[4895]: I1206 07:27:49.375909 4895 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140","Type":"ContainerDied","Data":"b4e09a18305422e32a092769b694c40d4dde03370136a1f9a8c591aec3cea2d6"} Dec 06 07:27:49 crc kubenswrapper[4895]: I1206 07:27:49.377582 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140","Type":"ContainerDied","Data":"55129c01d5d6b9e4c7016dbeaf3ce1b344c9bf582c49ac72f22e0750c3583e36"} Dec 06 07:27:49 crc kubenswrapper[4895]: I1206 07:27:49.380407 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66cfcbd96d-5bdvp" event={"ID":"8fe8c4d3-bd90-49d9-8828-32563e4a1a90","Type":"ContainerStarted","Data":"4cfddfc57a421d2e00a212cba64c2a5339b4d9ac1ddbc38da8f9a1db4ecbb2fa"} Dec 06 07:27:49 crc kubenswrapper[4895]: I1206 07:27:49.382213 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" event={"ID":"61880001-a6c9-4c2f-80ea-27a053575307","Type":"ContainerStarted","Data":"1ac08b5b35f146391dc5d3b9cb08957511a73032675c8f9626fcd03d7eb36611"} Dec 06 07:27:49 crc kubenswrapper[4895]: I1206 07:27:49.832023 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-676f67bc8f-srz2n"] Dec 06 07:27:49 crc kubenswrapper[4895]: I1206 07:27:49.833679 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:49 crc kubenswrapper[4895]: I1206 07:27:49.836624 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 06 07:27:49 crc kubenswrapper[4895]: I1206 07:27:49.836822 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 06 07:27:49 crc kubenswrapper[4895]: I1206 07:27:49.855579 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-676f67bc8f-srz2n"] Dec 06 07:27:49 crc kubenswrapper[4895]: I1206 07:27:49.919288 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-config\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:49 crc kubenswrapper[4895]: I1206 07:27:49.919356 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-ovndb-tls-certs\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:49 crc kubenswrapper[4895]: I1206 07:27:49.919452 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-combined-ca-bundle\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:49 crc kubenswrapper[4895]: I1206 07:27:49.919604 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-internal-tls-certs\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:49 crc 
kubenswrapper[4895]: I1206 07:27:49.919789 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-httpd-config\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:49 crc kubenswrapper[4895]: I1206 07:27:49.919953 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtscj\" (UniqueName: \"kubernetes.io/projected/275e5518-922b-455d-a5d5-7b072a12ab07-kube-api-access-jtscj\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:49 crc kubenswrapper[4895]: I1206 07:27:49.920000 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-public-tls-certs\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:50 crc kubenswrapper[4895]: I1206 07:27:50.022221 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-config\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:50 crc kubenswrapper[4895]: I1206 07:27:50.022266 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-ovndb-tls-certs\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:50 crc kubenswrapper[4895]: I1206 07:27:50.022306 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-combined-ca-bundle\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:50 crc kubenswrapper[4895]: I1206 07:27:50.022324 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-internal-tls-certs\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:50 crc kubenswrapper[4895]: I1206 07:27:50.022365 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-httpd-config\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:50 crc kubenswrapper[4895]: I1206 07:27:50.022407 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtscj\" (UniqueName: \"kubernetes.io/projected/275e5518-922b-455d-a5d5-7b072a12ab07-kube-api-access-jtscj\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:50 crc kubenswrapper[4895]: I1206 07:27:50.022429 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-public-tls-certs\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:50 crc kubenswrapper[4895]: I1206 07:27:50.030211 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-config\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:50 crc kubenswrapper[4895]: I1206 07:27:50.030293 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-combined-ca-bundle\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:50 crc kubenswrapper[4895]: I1206 07:27:50.031309 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-ovndb-tls-certs\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:50 crc kubenswrapper[4895]: I1206 07:27:50.032534 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-httpd-config\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:50 crc kubenswrapper[4895]: I1206 07:27:50.035264 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-internal-tls-certs\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:50 crc kubenswrapper[4895]: I1206 07:27:50.040240 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-public-tls-certs\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:50 crc kubenswrapper[4895]: I1206 07:27:50.044050 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtscj\" (UniqueName: \"kubernetes.io/projected/275e5518-922b-455d-a5d5-7b072a12ab07-kube-api-access-jtscj\") pod \"neutron-676f67bc8f-srz2n\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:50 crc kubenswrapper[4895]: I1206 07:27:50.152405 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:27:50 crc kubenswrapper[4895]: I1206 07:27:50.401919 4895 generic.go:334] "Generic (PLEG): container finished" podID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" containerID="afea4a6234b744992805b16535e2aa0049a2d00080994f9444646244b8f612db" exitCode=0 Dec 06 07:27:50 crc kubenswrapper[4895]: I1206 07:27:50.401979 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140","Type":"ContainerDied","Data":"afea4a6234b744992805b16535e2aa0049a2d00080994f9444646244b8f612db"} Dec 06 07:27:50 crc kubenswrapper[4895]: I1206 07:27:50.709517 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-676f67bc8f-srz2n"] Dec 06 07:27:51 crc kubenswrapper[4895]: I1206 07:27:51.412267 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-676f67bc8f-srz2n" event={"ID":"275e5518-922b-455d-a5d5-7b072a12ab07","Type":"ContainerStarted","Data":"9cff65a3cb292fe14e6571f1dafdab1f6f8cac1b0731b381f0565469386b2c12"} Dec 06 07:27:54 crc kubenswrapper[4895]: I1206 07:27:54.443458 4895 generic.go:334] "Generic (PLEG): container finished" podID="61880001-a6c9-4c2f-80ea-27a053575307" containerID="1ac08b5b35f146391dc5d3b9cb08957511a73032675c8f9626fcd03d7eb36611" exitCode=0 Dec 06 07:27:54 crc kubenswrapper[4895]: I1206 07:27:54.443520 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" event={"ID":"61880001-a6c9-4c2f-80ea-27a053575307","Type":"ContainerDied","Data":"1ac08b5b35f146391dc5d3b9cb08957511a73032675c8f9626fcd03d7eb36611"} Dec 06 07:27:55 crc kubenswrapper[4895]: I1206 07:27:55.051241 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:27:55 crc kubenswrapper[4895]: E1206 07:27:55.052112 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:27:58 crc kubenswrapper[4895]: I1206 07:27:58.150661 4895 generic.go:334] "Generic (PLEG): container finished" podID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" containerID="e479e06d9c6cad8d5a62143baa87c4098959ddcb18dba999a1d0faca7231bf79" exitCode=0 Dec 06 07:27:58 crc kubenswrapper[4895]: I1206 07:27:58.150748 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140","Type":"ContainerDied","Data":"e479e06d9c6cad8d5a62143baa87c4098959ddcb18dba999a1d0faca7231bf79"} Dec 06 07:27:59 crc kubenswrapper[4895]: I1206 07:27:59.163302 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-676f67bc8f-srz2n" event={"ID":"275e5518-922b-455d-a5d5-7b072a12ab07","Type":"ContainerStarted","Data":"c164dcb786933c905e3f3e8351f17e2bb2512e11081c2453a5584c61dbfedabc"} Dec 06 07:28:01 crc kubenswrapper[4895]: I1206 07:28:01.432792 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="c8969e2c-9cc0-40a6-8fee-65d93a9856b0" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.169:8080/\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.237504 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66cfcbd96d-5bdvp" event={"ID":"8fe8c4d3-bd90-49d9-8828-32563e4a1a90","Type":"ContainerStarted","Data":"35741cf889186b74b03a3204fc7306a1ceec82264e4188202adb61c67ce79c9a"} Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.239549 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" event={"ID":"61880001-a6c9-4c2f-80ea-27a053575307","Type":"ContainerStarted","Data":"a8d1e51b3477a3661f3afeff16a4853e93283a38fdadab6a482c5fe596a29048"} Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.474753 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="c8969e2c-9cc0-40a6-8fee-65d93a9856b0" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.169:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.691264 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.827199 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-sg-core-conf-yaml\") pod \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.827415 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-scripts\") pod \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.827459 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-run-httpd\") pod \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.827583 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r8ww\" (UniqueName: \"kubernetes.io/projected/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-kube-api-access-5r8ww\") pod \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.827664 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-log-httpd\") pod \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.827716 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-config-data\") pod \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.827747 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-combined-ca-bundle\") 
pod \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\" (UID: \"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140\") " Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.829346 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" (UID: "9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.829986 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" (UID: "9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.837243 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-kube-api-access-5r8ww" (OuterVolumeSpecName: "kube-api-access-5r8ww") pod "9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" (UID: "9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140"). InnerVolumeSpecName "kube-api-access-5r8ww". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.845608 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-scripts" (OuterVolumeSpecName: "scripts") pod "9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" (UID: "9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.870657 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" (UID: "9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.914375 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" (UID: "9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.930853 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.930905 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.930919 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r8ww\" (UniqueName: \"kubernetes.io/projected/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-kube-api-access-5r8ww\") on node \"crc\" DevicePath \"\"" Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.930929 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.930937 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.930945 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 07:28:06 crc kubenswrapper[4895]: I1206 07:28:06.945436 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-config-data" (OuterVolumeSpecName: "config-data") pod "9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" (UID: "9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.033014 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.252403 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-676f67bc8f-srz2n" event={"ID":"275e5518-922b-455d-a5d5-7b072a12ab07","Type":"ContainerStarted","Data":"4f8e9ae1388bc7994b5365380a4bd5e84d80b90cafe1780718a2555d9c3d7e69"} Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.253778 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.261055 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.261052 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140","Type":"ContainerDied","Data":"4f550832062a783b49499587df93cfcfdbf20dafa3a650fdb0ed981614335cee"} Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.261223 4895 scope.go:117] "RemoveContainer" containerID="b4e09a18305422e32a092769b694c40d4dde03370136a1f9a8c591aec3cea2d6" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.261381 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.261417 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-66cfcbd96d-5bdvp" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.281057 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-676f67bc8f-srz2n" podStartSLOduration=18.281034039 podStartE2EDuration="18.281034039s" podCreationTimestamp="2025-12-06 07:27:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:28:07.276277022 +0000 UTC m=+1849.677665882" watchObservedRunningTime="2025-12-06 07:28:07.281034039 +0000 UTC m=+1849.682422909" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.302227 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-66cfcbd96d-5bdvp" podStartSLOduration=21.302203571 podStartE2EDuration="21.302203571s" podCreationTimestamp="2025-12-06 07:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:28:07.298779419 +0000 UTC m=+1849.700168289" watchObservedRunningTime="2025-12-06 07:28:07.302203571 +0000 UTC m=+1849.703592441" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.309261 4895 scope.go:117] "RemoveContainer" containerID="55129c01d5d6b9e4c7016dbeaf3ce1b344c9bf582c49ac72f22e0750c3583e36" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.333816 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" podStartSLOduration=21.333798839 podStartE2EDuration="21.333798839s" podCreationTimestamp="2025-12-06 07:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:28:07.329337021 +0000 UTC m=+1849.730725901" watchObservedRunningTime="2025-12-06 07:28:07.333798839 +0000 UTC m=+1849.735187709" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.335505 4895 scope.go:117] "RemoveContainer" containerID="e479e06d9c6cad8d5a62143baa87c4098959ddcb18dba999a1d0faca7231bf79" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.356552 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.367668 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.374115 4895 scope.go:117] "RemoveContainer" containerID="afea4a6234b744992805b16535e2aa0049a2d00080994f9444646244b8f612db" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.382718 4895 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Dec 06 07:28:07 crc kubenswrapper[4895]: E1206 07:28:07.383288 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" containerName="proxy-httpd" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.383318 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" containerName="proxy-httpd" Dec 06 07:28:07 crc kubenswrapper[4895]: E1206 07:28:07.385246 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" containerName="sg-core" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.385274 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" containerName="sg-core" Dec 06 07:28:07 crc kubenswrapper[4895]: E1206 07:28:07.385294 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" containerName="ceilometer-notification-agent" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.385304 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" containerName="ceilometer-notification-agent" Dec 06 07:28:07 crc kubenswrapper[4895]: E1206 07:28:07.385318 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" containerName="ceilometer-central-agent" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.385327 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" containerName="ceilometer-central-agent" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.385718 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" containerName="sg-core" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.385746 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" containerName="proxy-httpd" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.385759 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" containerName="ceilometer-central-agent" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.385771 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" containerName="ceilometer-notification-agent" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.388073 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.390917 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.392648 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.398777 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.544296 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-scripts\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.544732 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75154756-b3e3-40c6-9fa3-fa8991c87b83-log-httpd\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.545058 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.545218 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-config-data\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.545356 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.545440 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75154756-b3e3-40c6-9fa3-fa8991c87b83-run-httpd\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.545709 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghhcb\" (UniqueName: \"kubernetes.io/projected/75154756-b3e3-40c6-9fa3-fa8991c87b83-kube-api-access-ghhcb\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.648035 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-scripts\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.648155 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75154756-b3e3-40c6-9fa3-fa8991c87b83-log-httpd\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.648206 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.648239 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-config-data\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.648273 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.648295 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75154756-b3e3-40c6-9fa3-fa8991c87b83-run-httpd\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.648354 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghhcb\" (UniqueName: \"kubernetes.io/projected/75154756-b3e3-40c6-9fa3-fa8991c87b83-kube-api-access-ghhcb\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.649279 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75154756-b3e3-40c6-9fa3-fa8991c87b83-log-httpd\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.649641 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75154756-b3e3-40c6-9fa3-fa8991c87b83-run-httpd\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.653625 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.654334 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.654925 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-scripts\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.656221 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-config-data\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.672352 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghhcb\" (UniqueName: \"kubernetes.io/projected/75154756-b3e3-40c6-9fa3-fa8991c87b83-kube-api-access-ghhcb\") pod \"ceilometer-0\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") " pod="openstack/ceilometer-0" Dec 06 07:28:07 crc kubenswrapper[4895]: I1206 07:28:07.718536 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:28:08 crc kubenswrapper[4895]: I1206 07:28:08.085964 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140" path="/var/lib/kubelet/pods/9b8f6b1c-21c3-426f-9c7e-5c4b2af1a140/volumes" Dec 06 07:28:08 crc kubenswrapper[4895]: W1206 07:28:08.197338 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75154756_b3e3_40c6_9fa3_fa8991c87b83.slice/crio-be8dadd452f5f4f02776b163824f1d7db471619ac17b934288f4dda846d89f54 WatchSource:0}: Error finding container be8dadd452f5f4f02776b163824f1d7db471619ac17b934288f4dda846d89f54: Status 404 returned error can't find the container with id be8dadd452f5f4f02776b163824f1d7db471619ac17b934288f4dda846d89f54 Dec 06 07:28:08 crc kubenswrapper[4895]: I1206 07:28:08.198560 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:28:08 crc kubenswrapper[4895]: I1206 07:28:08.270933 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75154756-b3e3-40c6-9fa3-fa8991c87b83","Type":"ContainerStarted","Data":"be8dadd452f5f4f02776b163824f1d7db471619ac17b934288f4dda846d89f54"} Dec 06 07:28:09 crc kubenswrapper[4895]: I1206 07:28:09.284826 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75154756-b3e3-40c6-9fa3-fa8991c87b83","Type":"ContainerStarted","Data":"fec9e5561b45801a9e55ec483952de760d6ab0c6ce9f74be5fea5947110fbafb"} Dec 06 07:28:09 crc kubenswrapper[4895]: I1206 07:28:09.639854 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:28:10 crc kubenswrapper[4895]: I1206 07:28:10.052638 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:28:10 crc kubenswrapper[4895]: E1206 07:28:10.052874 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:28:10 crc kubenswrapper[4895]: I1206 07:28:10.298670 4895 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75154756-b3e3-40c6-9fa3-fa8991c87b83","Type":"ContainerStarted","Data":"e2dcf06ac510ec9920fde1765e856b2213aaab695eb2835a91815c4c3c1f45a0"} Dec 06 07:28:11 crc kubenswrapper[4895]: I1206 07:28:11.309330 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75154756-b3e3-40c6-9fa3-fa8991c87b83","Type":"ContainerStarted","Data":"9e3b3d400b5965412b246ff607dbfb7451d1a9bc8c8ada0e5e883b36eaa13388"} Dec 06 07:28:12 crc kubenswrapper[4895]: I1206 07:28:12.085071 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:28:12 crc kubenswrapper[4895]: I1206 07:28:12.166131 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7458fc9bff-s5gq4"] Dec 06 07:28:12 crc kubenswrapper[4895]: I1206 07:28:12.166464 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" podUID="0bfc2662-32ed-4e75-98d8-5fe472cb5052" containerName="dnsmasq-dns" containerID="cri-o://292daefa98d3df02d99150321d8e332021999a759f4f93641a98fd9843975bb0" gracePeriod=10 Dec 06 07:28:12 crc kubenswrapper[4895]: I1206 07:28:12.323439 4895 generic.go:334] "Generic (PLEG): container finished" podID="0bfc2662-32ed-4e75-98d8-5fe472cb5052" containerID="292daefa98d3df02d99150321d8e332021999a759f4f93641a98fd9843975bb0" exitCode=0 Dec 06 07:28:12 crc kubenswrapper[4895]: I1206 07:28:12.323504 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" event={"ID":"0bfc2662-32ed-4e75-98d8-5fe472cb5052","Type":"ContainerDied","Data":"292daefa98d3df02d99150321d8e332021999a759f4f93641a98fd9843975bb0"} Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.341777 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" event={"ID":"0bfc2662-32ed-4e75-98d8-5fe472cb5052","Type":"ContainerDied","Data":"f4c4ca4fb2f8d1126b9bb2f89ebce1bd9c6df45a02dfd034fcf722f3c124cff2"} Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.342801 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4c4ca4fb2f8d1126b9bb2f89ebce1bd9c6df45a02dfd034fcf722f3c124cff2" Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.345081 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75154756-b3e3-40c6-9fa3-fa8991c87b83","Type":"ContainerStarted","Data":"4808cb792c62294d39c977aefb5cb8268ad6c60bbc3edf144979d5cc5695156b"} Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.345299 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75154756-b3e3-40c6-9fa3-fa8991c87b83" containerName="ceilometer-central-agent" containerID="cri-o://fec9e5561b45801a9e55ec483952de760d6ab0c6ce9f74be5fea5947110fbafb" gracePeriod=30 Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.345332 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75154756-b3e3-40c6-9fa3-fa8991c87b83" containerName="proxy-httpd" containerID="cri-o://4808cb792c62294d39c977aefb5cb8268ad6c60bbc3edf144979d5cc5695156b" gracePeriod=30 Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.345351 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75154756-b3e3-40c6-9fa3-fa8991c87b83" containerName="sg-core" 
containerID="cri-o://9e3b3d400b5965412b246ff607dbfb7451d1a9bc8c8ada0e5e883b36eaa13388" gracePeriod=30 Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.345427 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75154756-b3e3-40c6-9fa3-fa8991c87b83" containerName="ceilometer-notification-agent" containerID="cri-o://e2dcf06ac510ec9920fde1765e856b2213aaab695eb2835a91815c4c3c1f45a0" gracePeriod=30 Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.345553 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.385447 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.938249965 podStartE2EDuration="6.38542471s" podCreationTimestamp="2025-12-06 07:28:07 +0000 UTC" firstStartedPulling="2025-12-06 07:28:08.200224787 +0000 UTC m=+1850.601613657" lastFinishedPulling="2025-12-06 07:28:12.647399532 +0000 UTC m=+1855.048788402" observedRunningTime="2025-12-06 07:28:13.378105536 +0000 UTC m=+1855.779494406" watchObservedRunningTime="2025-12-06 07:28:13.38542471 +0000 UTC m=+1855.786813580" Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.452400 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.569072 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-ovsdbserver-sb\") pod \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.569131 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-config\") pod \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.569238 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-ovsdbserver-nb\") pod \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.569321 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-dns-swift-storage-0\") pod \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.569436 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-dns-svc\") pod \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.569494 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cql9z\" (UniqueName: \"kubernetes.io/projected/0bfc2662-32ed-4e75-98d8-5fe472cb5052-kube-api-access-cql9z\") pod \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\" (UID: \"0bfc2662-32ed-4e75-98d8-5fe472cb5052\") " Dec 06 
07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.575434 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bfc2662-32ed-4e75-98d8-5fe472cb5052-kube-api-access-cql9z" (OuterVolumeSpecName: "kube-api-access-cql9z") pod "0bfc2662-32ed-4e75-98d8-5fe472cb5052" (UID: "0bfc2662-32ed-4e75-98d8-5fe472cb5052"). InnerVolumeSpecName "kube-api-access-cql9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.640120 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0bfc2662-32ed-4e75-98d8-5fe472cb5052" (UID: "0bfc2662-32ed-4e75-98d8-5fe472cb5052"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.640207 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0bfc2662-32ed-4e75-98d8-5fe472cb5052" (UID: "0bfc2662-32ed-4e75-98d8-5fe472cb5052"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.640325 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-config" (OuterVolumeSpecName: "config") pod "0bfc2662-32ed-4e75-98d8-5fe472cb5052" (UID: "0bfc2662-32ed-4e75-98d8-5fe472cb5052"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.643771 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0bfc2662-32ed-4e75-98d8-5fe472cb5052" (UID: "0bfc2662-32ed-4e75-98d8-5fe472cb5052"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.647260 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0bfc2662-32ed-4e75-98d8-5fe472cb5052" (UID: "0bfc2662-32ed-4e75-98d8-5fe472cb5052"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.672389 4895 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.672428 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.672438 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cql9z\" (UniqueName: \"kubernetes.io/projected/0bfc2662-32ed-4e75-98d8-5fe472cb5052-kube-api-access-cql9z\") on node \"crc\" DevicePath \"\"" Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.672452 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.672464 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:28:13 crc kubenswrapper[4895]: I1206 07:28:13.672496 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bfc2662-32ed-4e75-98d8-5fe472cb5052-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 07:28:14 crc kubenswrapper[4895]: I1206 07:28:14.356804 4895 generic.go:334] "Generic (PLEG): container finished" podID="75154756-b3e3-40c6-9fa3-fa8991c87b83" containerID="4808cb792c62294d39c977aefb5cb8268ad6c60bbc3edf144979d5cc5695156b" exitCode=0 Dec 06 07:28:14 crc kubenswrapper[4895]: I1206 07:28:14.357133 4895 generic.go:334] "Generic (PLEG): container finished" podID="75154756-b3e3-40c6-9fa3-fa8991c87b83" containerID="9e3b3d400b5965412b246ff607dbfb7451d1a9bc8c8ada0e5e883b36eaa13388" exitCode=2 Dec 06 07:28:14 crc kubenswrapper[4895]: I1206 07:28:14.357156 4895 generic.go:334] "Generic (PLEG): container finished" podID="75154756-b3e3-40c6-9fa3-fa8991c87b83" containerID="e2dcf06ac510ec9920fde1765e856b2213aaab695eb2835a91815c4c3c1f45a0" exitCode=0 Dec 06 07:28:14 crc kubenswrapper[4895]: I1206 07:28:14.356877 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75154756-b3e3-40c6-9fa3-fa8991c87b83","Type":"ContainerDied","Data":"4808cb792c62294d39c977aefb5cb8268ad6c60bbc3edf144979d5cc5695156b"} Dec 06 07:28:14 crc kubenswrapper[4895]: I1206 07:28:14.357247 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4"
Dec 06 07:28:14 crc kubenswrapper[4895]: I1206 07:28:14.357267 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75154756-b3e3-40c6-9fa3-fa8991c87b83","Type":"ContainerDied","Data":"9e3b3d400b5965412b246ff607dbfb7451d1a9bc8c8ada0e5e883b36eaa13388"}
Dec 06 07:28:14 crc kubenswrapper[4895]: I1206 07:28:14.357285 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75154756-b3e3-40c6-9fa3-fa8991c87b83","Type":"ContainerDied","Data":"e2dcf06ac510ec9920fde1765e856b2213aaab695eb2835a91815c4c3c1f45a0"}
Dec 06 07:28:14 crc kubenswrapper[4895]: I1206 07:28:14.401523 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7458fc9bff-s5gq4"]
Dec 06 07:28:14 crc kubenswrapper[4895]: I1206 07:28:14.410023 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7458fc9bff-s5gq4"]
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.064886 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bfc2662-32ed-4e75-98d8-5fe472cb5052" path="/var/lib/kubelet/pods/0bfc2662-32ed-4e75-98d8-5fe472cb5052/volumes"
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.393955 4895 generic.go:334] "Generic (PLEG): container finished" podID="75154756-b3e3-40c6-9fa3-fa8991c87b83" containerID="fec9e5561b45801a9e55ec483952de760d6ab0c6ce9f74be5fea5947110fbafb" exitCode=0
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.394005 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75154756-b3e3-40c6-9fa3-fa8991c87b83","Type":"ContainerDied","Data":"fec9e5561b45801a9e55ec483952de760d6ab0c6ce9f74be5fea5947110fbafb"}
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.535044 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.629967 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-sg-core-conf-yaml\") pod \"75154756-b3e3-40c6-9fa3-fa8991c87b83\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") "
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.630072 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-scripts\") pod \"75154756-b3e3-40c6-9fa3-fa8991c87b83\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") "
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.630106 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-combined-ca-bundle\") pod \"75154756-b3e3-40c6-9fa3-fa8991c87b83\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") "
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.630137 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghhcb\" (UniqueName: \"kubernetes.io/projected/75154756-b3e3-40c6-9fa3-fa8991c87b83-kube-api-access-ghhcb\") pod \"75154756-b3e3-40c6-9fa3-fa8991c87b83\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") "
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.630281 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75154756-b3e3-40c6-9fa3-fa8991c87b83-run-httpd\") pod \"75154756-b3e3-40c6-9fa3-fa8991c87b83\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") "
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.630872 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75154756-b3e3-40c6-9fa3-fa8991c87b83-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "75154756-b3e3-40c6-9fa3-fa8991c87b83" (UID: "75154756-b3e3-40c6-9fa3-fa8991c87b83"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.630997 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75154756-b3e3-40c6-9fa3-fa8991c87b83-log-httpd\") pod \"75154756-b3e3-40c6-9fa3-fa8991c87b83\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") "
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.631591 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75154756-b3e3-40c6-9fa3-fa8991c87b83-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "75154756-b3e3-40c6-9fa3-fa8991c87b83" (UID: "75154756-b3e3-40c6-9fa3-fa8991c87b83"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.631703 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-config-data\") pod \"75154756-b3e3-40c6-9fa3-fa8991c87b83\" (UID: \"75154756-b3e3-40c6-9fa3-fa8991c87b83\") "
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.632743 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75154756-b3e3-40c6-9fa3-fa8991c87b83-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.632758 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75154756-b3e3-40c6-9fa3-fa8991c87b83-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.653295 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75154756-b3e3-40c6-9fa3-fa8991c87b83-kube-api-access-ghhcb" (OuterVolumeSpecName: "kube-api-access-ghhcb") pod "75154756-b3e3-40c6-9fa3-fa8991c87b83" (UID: "75154756-b3e3-40c6-9fa3-fa8991c87b83"). InnerVolumeSpecName "kube-api-access-ghhcb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.654308 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-scripts" (OuterVolumeSpecName: "scripts") pod "75154756-b3e3-40c6-9fa3-fa8991c87b83" (UID: "75154756-b3e3-40c6-9fa3-fa8991c87b83"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.667078 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "75154756-b3e3-40c6-9fa3-fa8991c87b83" (UID: "75154756-b3e3-40c6-9fa3-fa8991c87b83"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.731284 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75154756-b3e3-40c6-9fa3-fa8991c87b83" (UID: "75154756-b3e3-40c6-9fa3-fa8991c87b83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.735130 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.735157 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.735168 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.735179 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghhcb\" (UniqueName: \"kubernetes.io/projected/75154756-b3e3-40c6-9fa3-fa8991c87b83-kube-api-access-ghhcb\") on node \"crc\" DevicePath \"\""
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.750660 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-config-data" (OuterVolumeSpecName: "config-data") pod "75154756-b3e3-40c6-9fa3-fa8991c87b83" (UID: "75154756-b3e3-40c6-9fa3-fa8991c87b83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:28:16 crc kubenswrapper[4895]: I1206 07:28:16.836752 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75154756-b3e3-40c6-9fa3-fa8991c87b83-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.198937 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-66cfcbd96d-5bdvp"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.411122 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75154756-b3e3-40c6-9fa3-fa8991c87b83","Type":"ContainerDied","Data":"be8dadd452f5f4f02776b163824f1d7db471619ac17b934288f4dda846d89f54"}
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.411180 4895 scope.go:117] "RemoveContainer" containerID="4808cb792c62294d39c977aefb5cb8268ad6c60bbc3edf144979d5cc5695156b"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.411296 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.447127 4895 scope.go:117] "RemoveContainer" containerID="9e3b3d400b5965412b246ff607dbfb7451d1a9bc8c8ada0e5e883b36eaa13388"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.450144 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.466647 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.473126 4895 scope.go:117] "RemoveContainer" containerID="e2dcf06ac510ec9920fde1765e856b2213aaab695eb2835a91815c4c3c1f45a0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.480376 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 06 07:28:17 crc kubenswrapper[4895]: E1206 07:28:17.480910 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bfc2662-32ed-4e75-98d8-5fe472cb5052" containerName="dnsmasq-dns"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.480933 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bfc2662-32ed-4e75-98d8-5fe472cb5052" containerName="dnsmasq-dns"
Dec 06 07:28:17 crc kubenswrapper[4895]: E1206 07:28:17.480958 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75154756-b3e3-40c6-9fa3-fa8991c87b83" containerName="ceilometer-central-agent"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.480966 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="75154756-b3e3-40c6-9fa3-fa8991c87b83" containerName="ceilometer-central-agent"
Dec 06 07:28:17 crc kubenswrapper[4895]: E1206 07:28:17.480982 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75154756-b3e3-40c6-9fa3-fa8991c87b83" containerName="ceilometer-notification-agent"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.480990 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="75154756-b3e3-40c6-9fa3-fa8991c87b83" containerName="ceilometer-notification-agent"
Dec 06 07:28:17 crc kubenswrapper[4895]: E1206 07:28:17.481006 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75154756-b3e3-40c6-9fa3-fa8991c87b83" containerName="sg-core"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.481013 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="75154756-b3e3-40c6-9fa3-fa8991c87b83" containerName="sg-core"
Dec 06 07:28:17 crc kubenswrapper[4895]: E1206 07:28:17.481032 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75154756-b3e3-40c6-9fa3-fa8991c87b83" containerName="proxy-httpd"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.481040 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="75154756-b3e3-40c6-9fa3-fa8991c87b83" containerName="proxy-httpd"
Dec 06 07:28:17 crc kubenswrapper[4895]: E1206 07:28:17.481059 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bfc2662-32ed-4e75-98d8-5fe472cb5052" containerName="init"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.481067 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bfc2662-32ed-4e75-98d8-5fe472cb5052" containerName="init"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.481299 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bfc2662-32ed-4e75-98d8-5fe472cb5052" containerName="dnsmasq-dns"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.481321 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="75154756-b3e3-40c6-9fa3-fa8991c87b83" containerName="ceilometer-notification-agent"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.481332 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="75154756-b3e3-40c6-9fa3-fa8991c87b83" containerName="ceilometer-central-agent"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.481355 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="75154756-b3e3-40c6-9fa3-fa8991c87b83" containerName="proxy-httpd"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.481369 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="75154756-b3e3-40c6-9fa3-fa8991c87b83" containerName="sg-core"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.483766 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.487585 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.488158 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.493665 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.547681 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nk2j\" (UniqueName: \"kubernetes.io/projected/b687fd25-e062-440b-bb78-374c89f8cc40-kube-api-access-4nk2j\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.547736 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b687fd25-e062-440b-bb78-374c89f8cc40-run-httpd\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.547815 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-config-data\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.547837 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b687fd25-e062-440b-bb78-374c89f8cc40-log-httpd\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.547863 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.547921 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-scripts\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.547961 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.556680 4895 scope.go:117] "RemoveContainer" containerID="fec9e5561b45801a9e55ec483952de760d6ab0c6ce9f74be5fea5947110fbafb"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.650672 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-config-data\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.650727 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b687fd25-e062-440b-bb78-374c89f8cc40-log-httpd\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.650770 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.650859 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-scripts\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.650911 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.650973 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nk2j\" (UniqueName: \"kubernetes.io/projected/b687fd25-e062-440b-bb78-374c89f8cc40-kube-api-access-4nk2j\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.651012 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b687fd25-e062-440b-bb78-374c89f8cc40-run-httpd\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.651563 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b687fd25-e062-440b-bb78-374c89f8cc40-run-httpd\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.653251 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b687fd25-e062-440b-bb78-374c89f8cc40-log-httpd\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.657507 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-scripts\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.657726 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.657847 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.660143 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-config-data\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.680225 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nk2j\" (UniqueName: \"kubernetes.io/projected/b687fd25-e062-440b-bb78-374c89f8cc40-kube-api-access-4nk2j\") pod \"ceilometer-0\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " pod="openstack/ceilometer-0"
Dec 06 07:28:17 crc kubenswrapper[4895]: I1206 07:28:17.864677 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 07:28:18 crc kubenswrapper[4895]: I1206 07:28:18.063296 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75154756-b3e3-40c6-9fa3-fa8991c87b83" path="/var/lib/kubelet/pods/75154756-b3e3-40c6-9fa3-fa8991c87b83/volumes"
Dec 06 07:28:18 crc kubenswrapper[4895]: I1206 07:28:18.339136 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7458fc9bff-s5gq4" podUID="0bfc2662-32ed-4e75-98d8-5fe472cb5052" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.172:5353: i/o timeout"
Dec 06 07:28:20 crc kubenswrapper[4895]: I1206 07:28:20.451029 4895 trace.go:236] Trace[1412272995]: "Calculate volume metrics of ovsdbserver-nb for pod openstack/dnsmasq-dns-56d54d44c7-cvzmv" (06-Dec-2025 07:28:18.011) (total time: 2439ms):
Dec 06 07:28:20 crc kubenswrapper[4895]: Trace[1412272995]: [2.439732817s] [2.439732817s] END
Dec 06 07:28:20 crc kubenswrapper[4895]: E1206 07:28:20.453636 4895 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.404s"
Dec 06 07:28:20 crc kubenswrapper[4895]: I1206 07:28:20.480310 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-676f67bc8f-srz2n"
Dec 06 07:28:20 crc kubenswrapper[4895]: I1206 07:28:20.480892 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 07:28:20 crc kubenswrapper[4895]: I1206 07:28:20.567148 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66cfcbd96d-5bdvp"]
Dec 06 07:28:20 crc kubenswrapper[4895]: I1206 07:28:20.567429 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66cfcbd96d-5bdvp" podUID="8fe8c4d3-bd90-49d9-8828-32563e4a1a90" containerName="neutron-api" containerID="cri-o://4cfddfc57a421d2e00a212cba64c2a5339b4d9ac1ddbc38da8f9a1db4ecbb2fa" gracePeriod=30
Dec 06 07:28:20 crc kubenswrapper[4895]: I1206 07:28:20.567590 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66cfcbd96d-5bdvp" podUID="8fe8c4d3-bd90-49d9-8828-32563e4a1a90" containerName="neutron-httpd" containerID="cri-o://35741cf889186b74b03a3204fc7306a1ceec82264e4188202adb61c67ce79c9a" gracePeriod=30
Dec 06 07:28:21 crc kubenswrapper[4895]: I1206 07:28:21.493247 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b687fd25-e062-440b-bb78-374c89f8cc40","Type":"ContainerStarted","Data":"0d12728e4b7498c0cb5bc4ee339e55b83de68669d3ecdefec56a2a136c17aa41"}
Dec 06 07:28:21 crc kubenswrapper[4895]: I1206 07:28:21.495517 4895 generic.go:334] "Generic (PLEG): container finished" podID="8fe8c4d3-bd90-49d9-8828-32563e4a1a90" containerID="35741cf889186b74b03a3204fc7306a1ceec82264e4188202adb61c67ce79c9a" exitCode=0
Dec 06 07:28:21 crc kubenswrapper[4895]: I1206 07:28:21.495589 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66cfcbd96d-5bdvp" event={"ID":"8fe8c4d3-bd90-49d9-8828-32563e4a1a90","Type":"ContainerDied","Data":"35741cf889186b74b03a3204fc7306a1ceec82264e4188202adb61c67ce79c9a"}
Dec 06 07:28:22 crc kubenswrapper[4895]: I1206 07:28:22.513630 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b687fd25-e062-440b-bb78-374c89f8cc40","Type":"ContainerStarted","Data":"b89b6b84d7702ba78064094a84a2d2c5079cd4da6b881e305a0fec2fc4f42a7f"}
Dec 06 07:28:25 crc kubenswrapper[4895]: I1206 07:28:25.050709 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115"
Dec 06 07:28:25 crc kubenswrapper[4895]: E1206 07:28:25.051397 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.561761 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66cfcbd96d-5bdvp"
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.616638 4895 generic.go:334] "Generic (PLEG): container finished" podID="8fe8c4d3-bd90-49d9-8828-32563e4a1a90" containerID="4cfddfc57a421d2e00a212cba64c2a5339b4d9ac1ddbc38da8f9a1db4ecbb2fa" exitCode=0
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.616696 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66cfcbd96d-5bdvp" event={"ID":"8fe8c4d3-bd90-49d9-8828-32563e4a1a90","Type":"ContainerDied","Data":"4cfddfc57a421d2e00a212cba64c2a5339b4d9ac1ddbc38da8f9a1db4ecbb2fa"}
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.616731 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66cfcbd96d-5bdvp" event={"ID":"8fe8c4d3-bd90-49d9-8828-32563e4a1a90","Type":"ContainerDied","Data":"a1f5433698f02ffac347251f9e1b0e2e9dcd0806cb4e6b4abc43fd1a34d797ac"}
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.616754 4895 scope.go:117] "RemoveContainer" containerID="35741cf889186b74b03a3204fc7306a1ceec82264e4188202adb61c67ce79c9a"
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.616914 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66cfcbd96d-5bdvp"
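The "back-off 5m0s restarting failed container" error above is CrashLoopBackOff at its cap: by the upstream kubelet defaults the restart delay starts at 10s and doubles per failed restart up to 5m (the constants below are those documented defaults, not values read from this node). A sketch of the schedule:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Upstream kubelet defaults (assumed): 10s initial delay,
	// doubling per restart, capped at 5 minutes.
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: wait %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

The delay reaches 5m0s by the sixth restart, matching the message; machine-config-daemon then sits in back-off until the timer expires, which is why its next "ContainerStarted" only appears at 07:28:43.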
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.644300 4895 scope.go:117] "RemoveContainer" containerID="4cfddfc57a421d2e00a212cba64c2a5339b4d9ac1ddbc38da8f9a1db4ecbb2fa"
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.665015 4895 scope.go:117] "RemoveContainer" containerID="35741cf889186b74b03a3204fc7306a1ceec82264e4188202adb61c67ce79c9a"
Dec 06 07:28:30 crc kubenswrapper[4895]: E1206 07:28:30.665437 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35741cf889186b74b03a3204fc7306a1ceec82264e4188202adb61c67ce79c9a\": container with ID starting with 35741cf889186b74b03a3204fc7306a1ceec82264e4188202adb61c67ce79c9a not found: ID does not exist" containerID="35741cf889186b74b03a3204fc7306a1ceec82264e4188202adb61c67ce79c9a"
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.665534 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35741cf889186b74b03a3204fc7306a1ceec82264e4188202adb61c67ce79c9a"} err="failed to get container status \"35741cf889186b74b03a3204fc7306a1ceec82264e4188202adb61c67ce79c9a\": rpc error: code = NotFound desc = could not find container \"35741cf889186b74b03a3204fc7306a1ceec82264e4188202adb61c67ce79c9a\": container with ID starting with 35741cf889186b74b03a3204fc7306a1ceec82264e4188202adb61c67ce79c9a not found: ID does not exist"
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.665570 4895 scope.go:117] "RemoveContainer" containerID="4cfddfc57a421d2e00a212cba64c2a5339b4d9ac1ddbc38da8f9a1db4ecbb2fa"
Dec 06 07:28:30 crc kubenswrapper[4895]: E1206 07:28:30.666004 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cfddfc57a421d2e00a212cba64c2a5339b4d9ac1ddbc38da8f9a1db4ecbb2fa\": container with ID starting with 4cfddfc57a421d2e00a212cba64c2a5339b4d9ac1ddbc38da8f9a1db4ecbb2fa not found: ID does not exist" containerID="4cfddfc57a421d2e00a212cba64c2a5339b4d9ac1ddbc38da8f9a1db4ecbb2fa"
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.666033 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cfddfc57a421d2e00a212cba64c2a5339b4d9ac1ddbc38da8f9a1db4ecbb2fa"} err="failed to get container status \"4cfddfc57a421d2e00a212cba64c2a5339b4d9ac1ddbc38da8f9a1db4ecbb2fa\": rpc error: code = NotFound desc = could not find container \"4cfddfc57a421d2e00a212cba64c2a5339b4d9ac1ddbc38da8f9a1db4ecbb2fa\": container with ID starting with 4cfddfc57a421d2e00a212cba64c2a5339b4d9ac1ddbc38da8f9a1db4ecbb2fa not found: ID does not exist"
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.682359 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-ovndb-tls-certs\") pod \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\" (UID: \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\") "
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.682429 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-config\") pod \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\" (UID: \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\") "
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.682463 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqs4f\" (UniqueName: \"kubernetes.io/projected/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-kube-api-access-nqs4f\") pod \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\" (UID: \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\") "
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.682600 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-combined-ca-bundle\") pod \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\" (UID: \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\") "
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.682713 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-httpd-config\") pod \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\" (UID: \"8fe8c4d3-bd90-49d9-8828-32563e4a1a90\") "
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.688314 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "8fe8c4d3-bd90-49d9-8828-32563e4a1a90" (UID: "8fe8c4d3-bd90-49d9-8828-32563e4a1a90"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.698402 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-kube-api-access-nqs4f" (OuterVolumeSpecName: "kube-api-access-nqs4f") pod "8fe8c4d3-bd90-49d9-8828-32563e4a1a90" (UID: "8fe8c4d3-bd90-49d9-8828-32563e4a1a90"). InnerVolumeSpecName "kube-api-access-nqs4f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.736627 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fe8c4d3-bd90-49d9-8828-32563e4a1a90" (UID: "8fe8c4d3-bd90-49d9-8828-32563e4a1a90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.741735 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-config" (OuterVolumeSpecName: "config") pod "8fe8c4d3-bd90-49d9-8828-32563e4a1a90" (UID: "8fe8c4d3-bd90-49d9-8828-32563e4a1a90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.753579 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "8fe8c4d3-bd90-49d9-8828-32563e4a1a90" (UID: "8fe8c4d3-bd90-49d9-8828-32563e4a1a90"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.785657 4895 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-httpd-config\") on node \"crc\" DevicePath \"\""
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.785703 4895 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.785716 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-config\") on node \"crc\" DevicePath \"\""
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.785726 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqs4f\" (UniqueName: \"kubernetes.io/projected/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-kube-api-access-nqs4f\") on node \"crc\" DevicePath \"\""
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.785736 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe8c4d3-bd90-49d9-8828-32563e4a1a90-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.989828 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66cfcbd96d-5bdvp"]
Dec 06 07:28:30 crc kubenswrapper[4895]: I1206 07:28:30.997843 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-66cfcbd96d-5bdvp"]
Dec 06 07:28:31 crc kubenswrapper[4895]: I1206 07:28:31.632747 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b687fd25-e062-440b-bb78-374c89f8cc40","Type":"ContainerStarted","Data":"801f284575aee6393c1778525ed1bfaee4d8e14d4dab2aeca52a628f9519fc61"}
Dec 06 07:28:32 crc kubenswrapper[4895]: I1206 07:28:32.060963 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fe8c4d3-bd90-49d9-8828-32563e4a1a90" path="/var/lib/kubelet/pods/8fe8c4d3-bd90-49d9-8828-32563e4a1a90/volumes"
Dec 06 07:28:33 crc kubenswrapper[4895]: I1206 07:28:33.778775 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 07:28:40 crc kubenswrapper[4895]: I1206 07:28:40.050769 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115"
Dec 06 07:28:40 crc kubenswrapper[4895]: I1206 07:28:40.432766 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="c8969e2c-9cc0-40a6-8fee-65d93a9856b0" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.169:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 06 07:28:43 crc kubenswrapper[4895]: I1206 07:28:43.785632 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"1afcdb14b177bc99b4d67f898f37ab6806e81e208403fb192e71d61458db3cfa"}
Dec 06 07:28:43 crc kubenswrapper[4895]: I1206 07:28:43.789221 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b687fd25-e062-440b-bb78-374c89f8cc40","Type":"ContainerStarted","Data":"8d475a376fd8419ee3e1d7f8f86cd4b983c8a3cc1c3b8e4b31eb1787bf53ad90"}
event={"ID":"b687fd25-e062-440b-bb78-374c89f8cc40","Type":"ContainerStarted","Data":"8d475a376fd8419ee3e1d7f8f86cd4b983c8a3cc1c3b8e4b31eb1787bf53ad90"} Dec 06 07:28:51 crc kubenswrapper[4895]: I1206 07:28:51.861113 4895 scope.go:117] "RemoveContainer" containerID="901a82362dfab6727871904d5dfc6172f99ee2967f942712fc164c74972b6553" Dec 06 07:29:04 crc kubenswrapper[4895]: I1206 07:29:04.061875 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b687fd25-e062-440b-bb78-374c89f8cc40" containerName="ceilometer-central-agent" containerID="cri-o://b89b6b84d7702ba78064094a84a2d2c5079cd4da6b881e305a0fec2fc4f42a7f" gracePeriod=30 Dec 06 07:29:04 crc kubenswrapper[4895]: I1206 07:29:04.061988 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b687fd25-e062-440b-bb78-374c89f8cc40" containerName="proxy-httpd" containerID="cri-o://c484ca6fbdd363814a97e83742c9c5406cbcc88d6dc087c4c2d3259684f85fa6" gracePeriod=30 Dec 06 07:29:04 crc kubenswrapper[4895]: I1206 07:29:04.062037 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b687fd25-e062-440b-bb78-374c89f8cc40" containerName="sg-core" containerID="cri-o://8d475a376fd8419ee3e1d7f8f86cd4b983c8a3cc1c3b8e4b31eb1787bf53ad90" gracePeriod=30 Dec 06 07:29:04 crc kubenswrapper[4895]: I1206 07:29:04.062047 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b687fd25-e062-440b-bb78-374c89f8cc40" containerName="ceilometer-notification-agent" containerID="cri-o://801f284575aee6393c1778525ed1bfaee4d8e14d4dab2aeca52a628f9519fc61" gracePeriod=30 Dec 06 07:29:04 crc kubenswrapper[4895]: I1206 07:29:04.066972 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 07:29:04 crc kubenswrapper[4895]: I1206 07:29:04.067018 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b687fd25-e062-440b-bb78-374c89f8cc40","Type":"ContainerStarted","Data":"c484ca6fbdd363814a97e83742c9c5406cbcc88d6dc087c4c2d3259684f85fa6"} Dec 06 07:29:04 crc kubenswrapper[4895]: I1206 07:29:04.096808 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.622443739 podStartE2EDuration="47.096784676s" podCreationTimestamp="2025-12-06 07:28:17 +0000 UTC" firstStartedPulling="2025-12-06 07:28:20.467467523 +0000 UTC m=+1862.868856393" lastFinishedPulling="2025-12-06 07:29:01.94180846 +0000 UTC m=+1904.343197330" observedRunningTime="2025-12-06 07:29:04.092013329 +0000 UTC m=+1906.493402219" watchObservedRunningTime="2025-12-06 07:29:04.096784676 +0000 UTC m=+1906.498173566" Dec 06 07:29:06 crc kubenswrapper[4895]: I1206 07:29:06.088684 4895 generic.go:334] "Generic (PLEG): container finished" podID="b687fd25-e062-440b-bb78-374c89f8cc40" containerID="c484ca6fbdd363814a97e83742c9c5406cbcc88d6dc087c4c2d3259684f85fa6" exitCode=0 Dec 06 07:29:06 crc kubenswrapper[4895]: I1206 07:29:06.089031 4895 generic.go:334] "Generic (PLEG): container finished" podID="b687fd25-e062-440b-bb78-374c89f8cc40" containerID="8d475a376fd8419ee3e1d7f8f86cd4b983c8a3cc1c3b8e4b31eb1787bf53ad90" exitCode=2 Dec 06 07:29:06 crc kubenswrapper[4895]: I1206 07:29:06.088800 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b687fd25-e062-440b-bb78-374c89f8cc40","Type":"ContainerDied","Data":"c484ca6fbdd363814a97e83742c9c5406cbcc88d6dc087c4c2d3259684f85fa6"} Dec 06 07:29:06 crc kubenswrapper[4895]: I1206 07:29:06.089096 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b687fd25-e062-440b-bb78-374c89f8cc40","Type":"ContainerDied","Data":"8d475a376fd8419ee3e1d7f8f86cd4b983c8a3cc1c3b8e4b31eb1787bf53ad90"} Dec 06 07:29:07 crc kubenswrapper[4895]: I1206 07:29:07.099958 4895 generic.go:334] "Generic (PLEG): container finished" podID="b687fd25-e062-440b-bb78-374c89f8cc40" containerID="b89b6b84d7702ba78064094a84a2d2c5079cd4da6b881e305a0fec2fc4f42a7f" exitCode=0 Dec 06 07:29:07 crc kubenswrapper[4895]: I1206 07:29:07.100082 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b687fd25-e062-440b-bb78-374c89f8cc40","Type":"ContainerDied","Data":"b89b6b84d7702ba78064094a84a2d2c5079cd4da6b881e305a0fec2fc4f42a7f"} Dec 06 07:29:07 crc kubenswrapper[4895]: I1206 07:29:07.875873 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:29:07 crc kubenswrapper[4895]: I1206 07:29:07.932594 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b687fd25-e062-440b-bb78-374c89f8cc40-log-httpd\") pod \"b687fd25-e062-440b-bb78-374c89f8cc40\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " Dec 06 07:29:07 crc kubenswrapper[4895]: I1206 07:29:07.932960 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-config-data\") pod \"b687fd25-e062-440b-bb78-374c89f8cc40\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " Dec 06 07:29:07 crc kubenswrapper[4895]: I1206 07:29:07.933041 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nk2j\" (UniqueName: \"kubernetes.io/projected/b687fd25-e062-440b-bb78-374c89f8cc40-kube-api-access-4nk2j\") pod \"b687fd25-e062-440b-bb78-374c89f8cc40\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " Dec 06 07:29:07 crc kubenswrapper[4895]: I1206 07:29:07.933127 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-combined-ca-bundle\") pod \"b687fd25-e062-440b-bb78-374c89f8cc40\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " Dec 06 07:29:07 crc kubenswrapper[4895]: I1206 07:29:07.933155 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b687fd25-e062-440b-bb78-374c89f8cc40-run-httpd\") pod \"b687fd25-e062-440b-bb78-374c89f8cc40\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " Dec 06 07:29:07 crc kubenswrapper[4895]: I1206 07:29:07.933210 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-scripts\") pod \"b687fd25-e062-440b-bb78-374c89f8cc40\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " Dec 06 07:29:07 crc kubenswrapper[4895]: I1206 07:29:07.933237 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b687fd25-e062-440b-bb78-374c89f8cc40-log-httpd" (OuterVolumeSpecName: "log-httpd") pod 
"b687fd25-e062-440b-bb78-374c89f8cc40" (UID: "b687fd25-e062-440b-bb78-374c89f8cc40"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:29:07 crc kubenswrapper[4895]: I1206 07:29:07.933308 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-sg-core-conf-yaml\") pod \"b687fd25-e062-440b-bb78-374c89f8cc40\" (UID: \"b687fd25-e062-440b-bb78-374c89f8cc40\") " Dec 06 07:29:07 crc kubenswrapper[4895]: I1206 07:29:07.933757 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b687fd25-e062-440b-bb78-374c89f8cc40-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:07 crc kubenswrapper[4895]: I1206 07:29:07.934279 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b687fd25-e062-440b-bb78-374c89f8cc40-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b687fd25-e062-440b-bb78-374c89f8cc40" (UID: "b687fd25-e062-440b-bb78-374c89f8cc40"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:29:07 crc kubenswrapper[4895]: I1206 07:29:07.939607 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b687fd25-e062-440b-bb78-374c89f8cc40-kube-api-access-4nk2j" (OuterVolumeSpecName: "kube-api-access-4nk2j") pod "b687fd25-e062-440b-bb78-374c89f8cc40" (UID: "b687fd25-e062-440b-bb78-374c89f8cc40"). InnerVolumeSpecName "kube-api-access-4nk2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:29:07 crc kubenswrapper[4895]: I1206 07:29:07.941466 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-scripts" (OuterVolumeSpecName: "scripts") pod "b687fd25-e062-440b-bb78-374c89f8cc40" (UID: "b687fd25-e062-440b-bb78-374c89f8cc40"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:29:07 crc kubenswrapper[4895]: I1206 07:29:07.963614 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b687fd25-e062-440b-bb78-374c89f8cc40" (UID: "b687fd25-e062-440b-bb78-374c89f8cc40"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.014773 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b687fd25-e062-440b-bb78-374c89f8cc40" (UID: "b687fd25-e062-440b-bb78-374c89f8cc40"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.035551 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nk2j\" (UniqueName: \"kubernetes.io/projected/b687fd25-e062-440b-bb78-374c89f8cc40-kube-api-access-4nk2j\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.035590 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.035600 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b687fd25-e062-440b-bb78-374c89f8cc40-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.035609 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.035621 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.036186 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-config-data" (OuterVolumeSpecName: "config-data") pod "b687fd25-e062-440b-bb78-374c89f8cc40" (UID: "b687fd25-e062-440b-bb78-374c89f8cc40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.118046 4895 generic.go:334] "Generic (PLEG): container finished" podID="b687fd25-e062-440b-bb78-374c89f8cc40" containerID="801f284575aee6393c1778525ed1bfaee4d8e14d4dab2aeca52a628f9519fc61" exitCode=0 Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.118093 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b687fd25-e062-440b-bb78-374c89f8cc40","Type":"ContainerDied","Data":"801f284575aee6393c1778525ed1bfaee4d8e14d4dab2aeca52a628f9519fc61"} Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.118123 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b687fd25-e062-440b-bb78-374c89f8cc40","Type":"ContainerDied","Data":"0d12728e4b7498c0cb5bc4ee339e55b83de68669d3ecdefec56a2a136c17aa41"} Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.118139 4895 scope.go:117] "RemoveContainer" containerID="c484ca6fbdd363814a97e83742c9c5406cbcc88d6dc087c4c2d3259684f85fa6" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.118256 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.139839 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b687fd25-e062-440b-bb78-374c89f8cc40-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.148405 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.156011 4895 scope.go:117] "RemoveContainer" containerID="8d475a376fd8419ee3e1d7f8f86cd4b983c8a3cc1c3b8e4b31eb1787bf53ad90" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.157845 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.178525 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:29:08 crc kubenswrapper[4895]: E1206 07:29:08.178937 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fe8c4d3-bd90-49d9-8828-32563e4a1a90" containerName="neutron-httpd" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.178953 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe8c4d3-bd90-49d9-8828-32563e4a1a90" containerName="neutron-httpd" Dec 06 07:29:08 crc kubenswrapper[4895]: E1206 07:29:08.178969 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b687fd25-e062-440b-bb78-374c89f8cc40" containerName="ceilometer-notification-agent" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.178976 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b687fd25-e062-440b-bb78-374c89f8cc40" containerName="ceilometer-notification-agent" Dec 06 07:29:08 crc kubenswrapper[4895]: E1206 07:29:08.178989 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b687fd25-e062-440b-bb78-374c89f8cc40" containerName="ceilometer-central-agent" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.178995 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b687fd25-e062-440b-bb78-374c89f8cc40" containerName="ceilometer-central-agent" Dec 06 07:29:08 crc kubenswrapper[4895]: E1206 07:29:08.179007 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fe8c4d3-bd90-49d9-8828-32563e4a1a90" containerName="neutron-api" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.179012 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe8c4d3-bd90-49d9-8828-32563e4a1a90" containerName="neutron-api" Dec 06 07:29:08 crc kubenswrapper[4895]: E1206 07:29:08.179025 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b687fd25-e062-440b-bb78-374c89f8cc40" containerName="sg-core" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.179032 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b687fd25-e062-440b-bb78-374c89f8cc40" containerName="sg-core" Dec 06 07:29:08 crc kubenswrapper[4895]: E1206 07:29:08.179056 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b687fd25-e062-440b-bb78-374c89f8cc40" containerName="proxy-httpd" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.179062 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b687fd25-e062-440b-bb78-374c89f8cc40" containerName="proxy-httpd" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.179225 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b687fd25-e062-440b-bb78-374c89f8cc40" containerName="proxy-httpd" Dec 06 07:29:08 crc 
kubenswrapper[4895]: I1206 07:29:08.179237 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fe8c4d3-bd90-49d9-8828-32563e4a1a90" containerName="neutron-httpd" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.179246 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fe8c4d3-bd90-49d9-8828-32563e4a1a90" containerName="neutron-api" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.179259 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b687fd25-e062-440b-bb78-374c89f8cc40" containerName="ceilometer-notification-agent" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.179275 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b687fd25-e062-440b-bb78-374c89f8cc40" containerName="sg-core" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.179282 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b687fd25-e062-440b-bb78-374c89f8cc40" containerName="ceilometer-central-agent" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.181044 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.184088 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.184294 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.199881 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.210413 4895 scope.go:117] "RemoveContainer" containerID="801f284575aee6393c1778525ed1bfaee4d8e14d4dab2aeca52a628f9519fc61" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.228618 4895 scope.go:117] "RemoveContainer" containerID="b89b6b84d7702ba78064094a84a2d2c5079cd4da6b881e305a0fec2fc4f42a7f" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.241927 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvmpp\" (UniqueName: \"kubernetes.io/projected/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-kube-api-access-fvmpp\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.242093 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-run-httpd\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.242122 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-scripts\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.242181 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 
07:29:08.242210 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-config-data\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.242538 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.242582 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-log-httpd\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.251057 4895 scope.go:117] "RemoveContainer" containerID="c484ca6fbdd363814a97e83742c9c5406cbcc88d6dc087c4c2d3259684f85fa6" Dec 06 07:29:08 crc kubenswrapper[4895]: E1206 07:29:08.251484 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c484ca6fbdd363814a97e83742c9c5406cbcc88d6dc087c4c2d3259684f85fa6\": container with ID starting with c484ca6fbdd363814a97e83742c9c5406cbcc88d6dc087c4c2d3259684f85fa6 not found: ID does not exist" containerID="c484ca6fbdd363814a97e83742c9c5406cbcc88d6dc087c4c2d3259684f85fa6" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.251526 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c484ca6fbdd363814a97e83742c9c5406cbcc88d6dc087c4c2d3259684f85fa6"} err="failed to get container status \"c484ca6fbdd363814a97e83742c9c5406cbcc88d6dc087c4c2d3259684f85fa6\": rpc error: code = NotFound desc = could not find container \"c484ca6fbdd363814a97e83742c9c5406cbcc88d6dc087c4c2d3259684f85fa6\": container with ID starting with c484ca6fbdd363814a97e83742c9c5406cbcc88d6dc087c4c2d3259684f85fa6 not found: ID does not exist" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.251547 4895 scope.go:117] "RemoveContainer" containerID="8d475a376fd8419ee3e1d7f8f86cd4b983c8a3cc1c3b8e4b31eb1787bf53ad90" Dec 06 07:29:08 crc kubenswrapper[4895]: E1206 07:29:08.251872 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d475a376fd8419ee3e1d7f8f86cd4b983c8a3cc1c3b8e4b31eb1787bf53ad90\": container with ID starting with 8d475a376fd8419ee3e1d7f8f86cd4b983c8a3cc1c3b8e4b31eb1787bf53ad90 not found: ID does not exist" containerID="8d475a376fd8419ee3e1d7f8f86cd4b983c8a3cc1c3b8e4b31eb1787bf53ad90" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.251921 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d475a376fd8419ee3e1d7f8f86cd4b983c8a3cc1c3b8e4b31eb1787bf53ad90"} err="failed to get container status \"8d475a376fd8419ee3e1d7f8f86cd4b983c8a3cc1c3b8e4b31eb1787bf53ad90\": rpc error: code = NotFound desc = could not find container \"8d475a376fd8419ee3e1d7f8f86cd4b983c8a3cc1c3b8e4b31eb1787bf53ad90\": container with ID starting with 8d475a376fd8419ee3e1d7f8f86cd4b983c8a3cc1c3b8e4b31eb1787bf53ad90 not found: ID does not exist" Dec 06 
Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.251939 4895 scope.go:117] "RemoveContainer" containerID="801f284575aee6393c1778525ed1bfaee4d8e14d4dab2aeca52a628f9519fc61"
Dec 06 07:29:08 crc kubenswrapper[4895]: E1206 07:29:08.252289 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"801f284575aee6393c1778525ed1bfaee4d8e14d4dab2aeca52a628f9519fc61\": container with ID starting with 801f284575aee6393c1778525ed1bfaee4d8e14d4dab2aeca52a628f9519fc61 not found: ID does not exist" containerID="801f284575aee6393c1778525ed1bfaee4d8e14d4dab2aeca52a628f9519fc61"
Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.252337 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"801f284575aee6393c1778525ed1bfaee4d8e14d4dab2aeca52a628f9519fc61"} err="failed to get container status \"801f284575aee6393c1778525ed1bfaee4d8e14d4dab2aeca52a628f9519fc61\": rpc error: code = NotFound desc = could not find container \"801f284575aee6393c1778525ed1bfaee4d8e14d4dab2aeca52a628f9519fc61\": container with ID starting with 801f284575aee6393c1778525ed1bfaee4d8e14d4dab2aeca52a628f9519fc61 not found: ID does not exist"
Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.252367 4895 scope.go:117] "RemoveContainer" containerID="b89b6b84d7702ba78064094a84a2d2c5079cd4da6b881e305a0fec2fc4f42a7f"
Dec 06 07:29:08 crc kubenswrapper[4895]: E1206 07:29:08.252662 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b89b6b84d7702ba78064094a84a2d2c5079cd4da6b881e305a0fec2fc4f42a7f\": container with ID starting with b89b6b84d7702ba78064094a84a2d2c5079cd4da6b881e305a0fec2fc4f42a7f not found: ID does not exist" containerID="b89b6b84d7702ba78064094a84a2d2c5079cd4da6b881e305a0fec2fc4f42a7f"
Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.252685 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b89b6b84d7702ba78064094a84a2d2c5079cd4da6b881e305a0fec2fc4f42a7f"} err="failed to get container status \"b89b6b84d7702ba78064094a84a2d2c5079cd4da6b881e305a0fec2fc4f42a7f\": rpc error: code = NotFound desc = could not find container \"b89b6b84d7702ba78064094a84a2d2c5079cd4da6b881e305a0fec2fc4f42a7f\": container with ID starting with b89b6b84d7702ba78064094a84a2d2c5079cd4da6b881e305a0fec2fc4f42a7f not found: ID does not exist"
Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.344387 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0"
Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.344464 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-log-httpd\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0"
Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.344523 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvmpp\" (UniqueName: \"kubernetes.io/projected/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-kube-api-access-fvmpp\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0"
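
The RemoveContainer entries above each end in a NotFound round trip: the old ceilometer-0 containers were already purged by CRI-O when the pod was deleted, so the kubelet's follow-up ContainerStatus call returns code = NotFound and pod_container_deletor logs "DeleteContainer returned error". During a pod replacement this is expected cleanup noise rather than a failure. A minimal sketch (not part of the log) for pulling the affected IDs out of a saved journal, assuming the text above is in a file named kubelet.log (hypothetical name) with one entry per line:

    import re

    # List container IDs whose removal hit CRI NotFound; the \" escapes are
    # literal in the journal text, so the regex matches backslash-quote.
    pat = re.compile(r'could not find container \\"([0-9a-f]{64})\\"')
    ids = set()
    with open("kubelet.log") as f:
        for line in f:
            ids.update(pat.findall(line))
    for cid in sorted(ids):
        print("already-removed container:", cid[:12])
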
Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.344659 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-run-httpd\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0"
Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.344692 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-scripts\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0"
Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.344759 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0"
Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.344787 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-config-data\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0"
Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.345796 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-run-httpd\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0"
Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.346162 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-log-httpd\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0"
Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.348529 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0"
Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.351068 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-config-data\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0"
Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.352454 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0"
Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.353553 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-scripts\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0"
"MountVolume.SetUp succeeded for volume \"kube-api-access-fvmpp\" (UniqueName: \"kubernetes.io/projected/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-kube-api-access-fvmpp\") pod \"ceilometer-0\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " pod="openstack/ceilometer-0" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.510386 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:29:08 crc kubenswrapper[4895]: I1206 07:29:08.924921 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:29:09 crc kubenswrapper[4895]: I1206 07:29:09.128118 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54","Type":"ContainerStarted","Data":"c614d64c3d7bcea57647e1db047db8af9ff41810c5b8e1874470bb8df32545a3"} Dec 06 07:29:10 crc kubenswrapper[4895]: I1206 07:29:10.088901 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b687fd25-e062-440b-bb78-374c89f8cc40" path="/var/lib/kubelet/pods/b687fd25-e062-440b-bb78-374c89f8cc40/volumes" Dec 06 07:29:11 crc kubenswrapper[4895]: I1206 07:29:11.147860 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54","Type":"ContainerStarted","Data":"cff324c1edc754d222d39305fc82e0b72b3d6b38ae83ce629a9097130ce185f7"} Dec 06 07:29:18 crc kubenswrapper[4895]: I1206 07:29:18.240177 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54","Type":"ContainerStarted","Data":"3c3886d87bb3c09ebadd92c3323c573f2e312c60b0b2677b3c3be0e56a8667c3"} Dec 06 07:29:20 crc kubenswrapper[4895]: I1206 07:29:20.274808 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54","Type":"ContainerStarted","Data":"fd8ef63afbf61f4fd5dc8446e2f11ba771f90b3ca1a6f03523436a8c8bc6d218"} Dec 06 07:29:31 crc kubenswrapper[4895]: I1206 07:29:31.417037 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54","Type":"ContainerStarted","Data":"d3c55af038dea0efe97ad29d893a8f23eae09485f87076348bb68b1c698a7826"} Dec 06 07:29:31 crc kubenswrapper[4895]: I1206 07:29:31.418670 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 07:29:31 crc kubenswrapper[4895]: I1206 07:29:31.419733 4895 generic.go:334] "Generic (PLEG): container finished" podID="51d931e2-40e6-4bb5-8b4f-3252852effd0" containerID="f570d031351040cb2fe03dc3851e1c34de085a50efeef7dec9fb4b7808929814" exitCode=0 Dec 06 07:29:31 crc kubenswrapper[4895]: I1206 07:29:31.419807 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zt8v2" event={"ID":"51d931e2-40e6-4bb5-8b4f-3252852effd0","Type":"ContainerDied","Data":"f570d031351040cb2fe03dc3851e1c34de085a50efeef7dec9fb4b7808929814"} Dec 06 07:29:31 crc kubenswrapper[4895]: I1206 07:29:31.460240 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.210572357 podStartE2EDuration="23.460219519s" podCreationTimestamp="2025-12-06 07:29:08 +0000 UTC" firstStartedPulling="2025-12-06 07:29:08.930604348 +0000 UTC m=+1911.331993218" lastFinishedPulling="2025-12-06 07:29:30.18025151 +0000 UTC m=+1932.581640380" observedRunningTime="2025-12-06 
Dec 06 07:29:31 crc kubenswrapper[4895]: I1206 07:29:31.460240 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.210572357 podStartE2EDuration="23.460219519s" podCreationTimestamp="2025-12-06 07:29:08 +0000 UTC" firstStartedPulling="2025-12-06 07:29:08.930604348 +0000 UTC m=+1911.331993218" lastFinishedPulling="2025-12-06 07:29:30.18025151 +0000 UTC m=+1932.581640380" observedRunningTime="2025-12-06 07:29:31.442269154 +0000 UTC m=+1933.843658014" watchObservedRunningTime="2025-12-06 07:29:31.460219519 +0000 UTC m=+1933.861608389"
Dec 06 07:29:32 crc kubenswrapper[4895]: I1206 07:29:32.798303 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zt8v2"
Dec 06 07:29:32 crc kubenswrapper[4895]: I1206 07:29:32.951815 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d931e2-40e6-4bb5-8b4f-3252852effd0-config-data\") pod \"51d931e2-40e6-4bb5-8b4f-3252852effd0\" (UID: \"51d931e2-40e6-4bb5-8b4f-3252852effd0\") "
Dec 06 07:29:32 crc kubenswrapper[4895]: I1206 07:29:32.951865 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d931e2-40e6-4bb5-8b4f-3252852effd0-scripts\") pod \"51d931e2-40e6-4bb5-8b4f-3252852effd0\" (UID: \"51d931e2-40e6-4bb5-8b4f-3252852effd0\") "
Dec 06 07:29:32 crc kubenswrapper[4895]: I1206 07:29:32.951883 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d931e2-40e6-4bb5-8b4f-3252852effd0-combined-ca-bundle\") pod \"51d931e2-40e6-4bb5-8b4f-3252852effd0\" (UID: \"51d931e2-40e6-4bb5-8b4f-3252852effd0\") "
Dec 06 07:29:32 crc kubenswrapper[4895]: I1206 07:29:32.951909 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkqx5\" (UniqueName: \"kubernetes.io/projected/51d931e2-40e6-4bb5-8b4f-3252852effd0-kube-api-access-tkqx5\") pod \"51d931e2-40e6-4bb5-8b4f-3252852effd0\" (UID: \"51d931e2-40e6-4bb5-8b4f-3252852effd0\") "
Dec 06 07:29:32 crc kubenswrapper[4895]: I1206 07:29:32.957167 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d931e2-40e6-4bb5-8b4f-3252852effd0-scripts" (OuterVolumeSpecName: "scripts") pod "51d931e2-40e6-4bb5-8b4f-3252852effd0" (UID: "51d931e2-40e6-4bb5-8b4f-3252852effd0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:29:32 crc kubenswrapper[4895]: I1206 07:29:32.957702 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d931e2-40e6-4bb5-8b4f-3252852effd0-kube-api-access-tkqx5" (OuterVolumeSpecName: "kube-api-access-tkqx5") pod "51d931e2-40e6-4bb5-8b4f-3252852effd0" (UID: "51d931e2-40e6-4bb5-8b4f-3252852effd0"). InnerVolumeSpecName "kube-api-access-tkqx5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:29:32 crc kubenswrapper[4895]: I1206 07:29:32.978198 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d931e2-40e6-4bb5-8b4f-3252852effd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51d931e2-40e6-4bb5-8b4f-3252852effd0" (UID: "51d931e2-40e6-4bb5-8b4f-3252852effd0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
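
The "Observed pod startup duration" entry above encodes its own arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (23.46 s), and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling, about 21.25 s), leaving roughly 2.21 s of SLO-relevant startup. The monotonic m=+ offsets make this checkable without wall-clock parsing; a worked check in Python:

    # Reproduce ceilometer-0's podStartSLOduration from the entry above,
    # using the monotonic m=+ offsets quoted in that entry.
    first_pull = 1911.331993218   # m=+ offset of firstStartedPulling
    last_pull  = 1932.581640380   # m=+ offset of lastFinishedPulling
    e2e        = 23.460219519     # podStartE2EDuration, seconds
    slo = e2e - (last_pull - first_pull)
    print(f"{slo:.9f}")           # -> 2.210572357, the logged podStartSLOduration

When no pull happens at all, both pulling timestamps are the zero time "0001-01-01 00:00:00 +0000 UTC" and SLO equals E2E, as with nova-cell0-conductor-0 further down.
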
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.055036 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d931e2-40e6-4bb5-8b4f-3252852effd0-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.055080 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d931e2-40e6-4bb5-8b4f-3252852effd0-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.055090 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d931e2-40e6-4bb5-8b4f-3252852effd0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.055099 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkqx5\" (UniqueName: \"kubernetes.io/projected/51d931e2-40e6-4bb5-8b4f-3252852effd0-kube-api-access-tkqx5\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.438820 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zt8v2" event={"ID":"51d931e2-40e6-4bb5-8b4f-3252852effd0","Type":"ContainerDied","Data":"4b2d6a27399b97948bb573591d577658044780c2466723964982a36b4116ae17"} Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.438881 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b2d6a27399b97948bb573591d577658044780c2466723964982a36b4116ae17" Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.438843 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zt8v2" Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.574211 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 07:29:33 crc kubenswrapper[4895]: E1206 07:29:33.574732 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d931e2-40e6-4bb5-8b4f-3252852effd0" containerName="nova-cell0-conductor-db-sync" Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.574754 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d931e2-40e6-4bb5-8b4f-3252852effd0" containerName="nova-cell0-conductor-db-sync" Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.574988 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="51d931e2-40e6-4bb5-8b4f-3252852effd0" containerName="nova-cell0-conductor-db-sync" Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.575741 4895 util.go:30] "No sandbox for pod can be found. 
Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.575741 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.580048 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-hwxtr"
Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.580108 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.583601 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.666725 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n6sg\" (UniqueName: \"kubernetes.io/projected/eb9a60a7-bd12-495d-b0c3-feebe0f65bf8-kube-api-access-4n6sg\") pod \"nova-cell0-conductor-0\" (UID: \"eb9a60a7-bd12-495d-b0c3-feebe0f65bf8\") " pod="openstack/nova-cell0-conductor-0"
Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.666893 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb9a60a7-bd12-495d-b0c3-feebe0f65bf8-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"eb9a60a7-bd12-495d-b0c3-feebe0f65bf8\") " pod="openstack/nova-cell0-conductor-0"
Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.667006 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9a60a7-bd12-495d-b0c3-feebe0f65bf8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"eb9a60a7-bd12-495d-b0c3-feebe0f65bf8\") " pod="openstack/nova-cell0-conductor-0"
Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.769158 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb9a60a7-bd12-495d-b0c3-feebe0f65bf8-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"eb9a60a7-bd12-495d-b0c3-feebe0f65bf8\") " pod="openstack/nova-cell0-conductor-0"
Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.769269 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9a60a7-bd12-495d-b0c3-feebe0f65bf8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"eb9a60a7-bd12-495d-b0c3-feebe0f65bf8\") " pod="openstack/nova-cell0-conductor-0"
Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.769352 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n6sg\" (UniqueName: \"kubernetes.io/projected/eb9a60a7-bd12-495d-b0c3-feebe0f65bf8-kube-api-access-4n6sg\") pod \"nova-cell0-conductor-0\" (UID: \"eb9a60a7-bd12-495d-b0c3-feebe0f65bf8\") " pod="openstack/nova-cell0-conductor-0"
Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.773977 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9a60a7-bd12-495d-b0c3-feebe0f65bf8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"eb9a60a7-bd12-495d-b0c3-feebe0f65bf8\") " pod="openstack/nova-cell0-conductor-0"
(UID: \"eb9a60a7-bd12-495d-b0c3-feebe0f65bf8\") " pod="openstack/nova-cell0-conductor-0" Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.789787 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n6sg\" (UniqueName: \"kubernetes.io/projected/eb9a60a7-bd12-495d-b0c3-feebe0f65bf8-kube-api-access-4n6sg\") pod \"nova-cell0-conductor-0\" (UID: \"eb9a60a7-bd12-495d-b0c3-feebe0f65bf8\") " pod="openstack/nova-cell0-conductor-0" Dec 06 07:29:33 crc kubenswrapper[4895]: I1206 07:29:33.895720 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 06 07:29:34 crc kubenswrapper[4895]: I1206 07:29:34.332557 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 07:29:34 crc kubenswrapper[4895]: W1206 07:29:34.336780 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb9a60a7_bd12_495d_b0c3_feebe0f65bf8.slice/crio-edfc2eecd9d867b0e6306a947b9bc9010fa85d4d32675ac6bd6c3a9d131c6de3 WatchSource:0}: Error finding container edfc2eecd9d867b0e6306a947b9bc9010fa85d4d32675ac6bd6c3a9d131c6de3: Status 404 returned error can't find the container with id edfc2eecd9d867b0e6306a947b9bc9010fa85d4d32675ac6bd6c3a9d131c6de3 Dec 06 07:29:34 crc kubenswrapper[4895]: I1206 07:29:34.455108 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"eb9a60a7-bd12-495d-b0c3-feebe0f65bf8","Type":"ContainerStarted","Data":"edfc2eecd9d867b0e6306a947b9bc9010fa85d4d32675ac6bd6c3a9d131c6de3"} Dec 06 07:29:35 crc kubenswrapper[4895]: I1206 07:29:35.470521 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"eb9a60a7-bd12-495d-b0c3-feebe0f65bf8","Type":"ContainerStarted","Data":"095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc"} Dec 06 07:29:35 crc kubenswrapper[4895]: I1206 07:29:35.472729 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 06 07:29:35 crc kubenswrapper[4895]: I1206 07:29:35.506906 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.5068723889999998 podStartE2EDuration="2.506872389s" podCreationTimestamp="2025-12-06 07:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:29:35.496304188 +0000 UTC m=+1937.897693078" watchObservedRunningTime="2025-12-06 07:29:35.506872389 +0000 UTC m=+1937.908261269" Dec 06 07:29:43 crc kubenswrapper[4895]: I1206 07:29:43.930149 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.445337 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-twdnd"] Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.446860 4895 util.go:30] "No sandbox for pod can be found. 
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.446860 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-twdnd"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.456224 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-twdnd"]
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.457887 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.458305 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.582533 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.584170 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.588602 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.590974 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9351ace1-bec9-4251-866f-72d283f59ec3-scripts\") pod \"nova-cell0-cell-mapping-twdnd\" (UID: \"9351ace1-bec9-4251-866f-72d283f59ec3\") " pod="openstack/nova-cell0-cell-mapping-twdnd"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.591038 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dsvv\" (UniqueName: \"kubernetes.io/projected/9351ace1-bec9-4251-866f-72d283f59ec3-kube-api-access-7dsvv\") pod \"nova-cell0-cell-mapping-twdnd\" (UID: \"9351ace1-bec9-4251-866f-72d283f59ec3\") " pod="openstack/nova-cell0-cell-mapping-twdnd"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.591136 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9351ace1-bec9-4251-866f-72d283f59ec3-config-data\") pod \"nova-cell0-cell-mapping-twdnd\" (UID: \"9351ace1-bec9-4251-866f-72d283f59ec3\") " pod="openstack/nova-cell0-cell-mapping-twdnd"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.591169 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9351ace1-bec9-4251-866f-72d283f59ec3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-twdnd\" (UID: \"9351ace1-bec9-4251-866f-72d283f59ec3\") " pod="openstack/nova-cell0-cell-mapping-twdnd"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.595687 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.661675 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.663654 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.671355 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.679071 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.696735 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dsvv\" (UniqueName: \"kubernetes.io/projected/9351ace1-bec9-4251-866f-72d283f59ec3-kube-api-access-7dsvv\") pod \"nova-cell0-cell-mapping-twdnd\" (UID: \"9351ace1-bec9-4251-866f-72d283f59ec3\") " pod="openstack/nova-cell0-cell-mapping-twdnd"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.696812 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9351ace1-bec9-4251-866f-72d283f59ec3-config-data\") pod \"nova-cell0-cell-mapping-twdnd\" (UID: \"9351ace1-bec9-4251-866f-72d283f59ec3\") " pod="openstack/nova-cell0-cell-mapping-twdnd"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.696859 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9351ace1-bec9-4251-866f-72d283f59ec3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-twdnd\" (UID: \"9351ace1-bec9-4251-866f-72d283f59ec3\") " pod="openstack/nova-cell0-cell-mapping-twdnd"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.696919 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ad1ed9-7218-46e9-b070-063abf764a57-config-data\") pod \"nova-scheduler-0\" (UID: \"79ad1ed9-7218-46e9-b070-063abf764a57\") " pod="openstack/nova-scheduler-0"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.696960 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xklfv\" (UniqueName: \"kubernetes.io/projected/79ad1ed9-7218-46e9-b070-063abf764a57-kube-api-access-xklfv\") pod \"nova-scheduler-0\" (UID: \"79ad1ed9-7218-46e9-b070-063abf764a57\") " pod="openstack/nova-scheduler-0"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.697035 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ad1ed9-7218-46e9-b070-063abf764a57-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"79ad1ed9-7218-46e9-b070-063abf764a57\") " pod="openstack/nova-scheduler-0"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.697073 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9351ace1-bec9-4251-866f-72d283f59ec3-scripts\") pod \"nova-cell0-cell-mapping-twdnd\" (UID: \"9351ace1-bec9-4251-866f-72d283f59ec3\") " pod="openstack/nova-cell0-cell-mapping-twdnd"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.706251 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9351ace1-bec9-4251-866f-72d283f59ec3-config-data\") pod \"nova-cell0-cell-mapping-twdnd\" (UID: \"9351ace1-bec9-4251-866f-72d283f59ec3\") " pod="openstack/nova-cell0-cell-mapping-twdnd"
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9351ace1-bec9-4251-866f-72d283f59ec3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-twdnd\" (UID: \"9351ace1-bec9-4251-866f-72d283f59ec3\") " pod="openstack/nova-cell0-cell-mapping-twdnd" Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.708562 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9351ace1-bec9-4251-866f-72d283f59ec3-scripts\") pod \"nova-cell0-cell-mapping-twdnd\" (UID: \"9351ace1-bec9-4251-866f-72d283f59ec3\") " pod="openstack/nova-cell0-cell-mapping-twdnd" Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.724505 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dsvv\" (UniqueName: \"kubernetes.io/projected/9351ace1-bec9-4251-866f-72d283f59ec3-kube-api-access-7dsvv\") pod \"nova-cell0-cell-mapping-twdnd\" (UID: \"9351ace1-bec9-4251-866f-72d283f59ec3\") " pod="openstack/nova-cell0-cell-mapping-twdnd" Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.770662 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.772558 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.774628 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.778458 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-twdnd" Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.785427 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.798844 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ad1ed9-7218-46e9-b070-063abf764a57-config-data\") pod \"nova-scheduler-0\" (UID: \"79ad1ed9-7218-46e9-b070-063abf764a57\") " pod="openstack/nova-scheduler-0" Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.798910 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xklfv\" (UniqueName: \"kubernetes.io/projected/79ad1ed9-7218-46e9-b070-063abf764a57-kube-api-access-xklfv\") pod \"nova-scheduler-0\" (UID: \"79ad1ed9-7218-46e9-b070-063abf764a57\") " pod="openstack/nova-scheduler-0" Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.798964 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ad1ed9-7218-46e9-b070-063abf764a57-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"79ad1ed9-7218-46e9-b070-063abf764a57\") " pod="openstack/nova-scheduler-0" Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.798994 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/783ab44f-8f99-41a8-8976-1f209dfa78e3-logs\") pod \"nova-api-0\" (UID: \"783ab44f-8f99-41a8-8976-1f209dfa78e3\") " pod="openstack/nova-api-0" Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.799035 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/783ab44f-8f99-41a8-8976-1f209dfa78e3-config-data\") pod \"nova-api-0\" (UID: \"783ab44f-8f99-41a8-8976-1f209dfa78e3\") " pod="openstack/nova-api-0" Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.799054 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783ab44f-8f99-41a8-8976-1f209dfa78e3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"783ab44f-8f99-41a8-8976-1f209dfa78e3\") " pod="openstack/nova-api-0" Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.799094 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8v98\" (UniqueName: \"kubernetes.io/projected/783ab44f-8f99-41a8-8976-1f209dfa78e3-kube-api-access-n8v98\") pod \"nova-api-0\" (UID: \"783ab44f-8f99-41a8-8976-1f209dfa78e3\") " pod="openstack/nova-api-0" Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.806459 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ad1ed9-7218-46e9-b070-063abf764a57-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"79ad1ed9-7218-46e9-b070-063abf764a57\") " pod="openstack/nova-scheduler-0" Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.808660 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ad1ed9-7218-46e9-b070-063abf764a57-config-data\") pod \"nova-scheduler-0\" (UID: \"79ad1ed9-7218-46e9-b070-063abf764a57\") " pod="openstack/nova-scheduler-0" Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.853146 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xklfv\" (UniqueName: \"kubernetes.io/projected/79ad1ed9-7218-46e9-b070-063abf764a57-kube-api-access-xklfv\") pod \"nova-scheduler-0\" (UID: \"79ad1ed9-7218-46e9-b070-063abf764a57\") " pod="openstack/nova-scheduler-0" Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.898545 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.900348 4895 util.go:30] "No sandbox for pod can be found. 
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.900348 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.901240 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/783ab44f-8f99-41a8-8976-1f209dfa78e3-logs\") pod \"nova-api-0\" (UID: \"783ab44f-8f99-41a8-8976-1f209dfa78e3\") " pod="openstack/nova-api-0"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.901320 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmv5l\" (UniqueName: \"kubernetes.io/projected/1f14901e-3580-4f9b-9569-b69f6c04c98f-kube-api-access-dmv5l\") pod \"nova-metadata-0\" (UID: \"1f14901e-3580-4f9b-9569-b69f6c04c98f\") " pod="openstack/nova-metadata-0"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.901361 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/783ab44f-8f99-41a8-8976-1f209dfa78e3-config-data\") pod \"nova-api-0\" (UID: \"783ab44f-8f99-41a8-8976-1f209dfa78e3\") " pod="openstack/nova-api-0"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.901390 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783ab44f-8f99-41a8-8976-1f209dfa78e3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"783ab44f-8f99-41a8-8976-1f209dfa78e3\") " pod="openstack/nova-api-0"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.901456 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8v98\" (UniqueName: \"kubernetes.io/projected/783ab44f-8f99-41a8-8976-1f209dfa78e3-kube-api-access-n8v98\") pod \"nova-api-0\" (UID: \"783ab44f-8f99-41a8-8976-1f209dfa78e3\") " pod="openstack/nova-api-0"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.901585 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f14901e-3580-4f9b-9569-b69f6c04c98f-config-data\") pod \"nova-metadata-0\" (UID: \"1f14901e-3580-4f9b-9569-b69f6c04c98f\") " pod="openstack/nova-metadata-0"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.901649 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f14901e-3580-4f9b-9569-b69f6c04c98f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1f14901e-3580-4f9b-9569-b69f6c04c98f\") " pod="openstack/nova-metadata-0"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.901684 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f14901e-3580-4f9b-9569-b69f6c04c98f-logs\") pod \"nova-metadata-0\" (UID: \"1f14901e-3580-4f9b-9569-b69f6c04c98f\") " pod="openstack/nova-metadata-0"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.901941 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/783ab44f-8f99-41a8-8976-1f209dfa78e3-logs\") pod \"nova-api-0\" (UID: \"783ab44f-8f99-41a8-8976-1f209dfa78e3\") " pod="openstack/nova-api-0"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.910032 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.919126 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/783ab44f-8f99-41a8-8976-1f209dfa78e3-config-data\") pod \"nova-api-0\" (UID: \"783ab44f-8f99-41a8-8976-1f209dfa78e3\") " pod="openstack/nova-api-0"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.922682 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.932101 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783ab44f-8f99-41a8-8976-1f209dfa78e3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"783ab44f-8f99-41a8-8976-1f209dfa78e3\") " pod="openstack/nova-api-0"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.939914 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.943152 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8v98\" (UniqueName: \"kubernetes.io/projected/783ab44f-8f99-41a8-8976-1f209dfa78e3-kube-api-access-n8v98\") pod \"nova-api-0\" (UID: \"783ab44f-8f99-41a8-8976-1f209dfa78e3\") " pod="openstack/nova-api-0"
Dec 06 07:29:44 crc kubenswrapper[4895]: I1206 07:29:44.997590 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.003741 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/718c90e3-fda8-453f-95d8-e66acce49d16-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"718c90e3-fda8-453f-95d8-e66acce49d16\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.003790 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkkvn\" (UniqueName: \"kubernetes.io/projected/718c90e3-fda8-453f-95d8-e66acce49d16-kube-api-access-jkkvn\") pod \"nova-cell1-novncproxy-0\" (UID: \"718c90e3-fda8-453f-95d8-e66acce49d16\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.003839 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f14901e-3580-4f9b-9569-b69f6c04c98f-config-data\") pod \"nova-metadata-0\" (UID: \"1f14901e-3580-4f9b-9569-b69f6c04c98f\") " pod="openstack/nova-metadata-0"
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.003885 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f14901e-3580-4f9b-9569-b69f6c04c98f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1f14901e-3580-4f9b-9569-b69f6c04c98f\") " pod="openstack/nova-metadata-0"
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.003915 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f14901e-3580-4f9b-9569-b69f6c04c98f-logs\") pod \"nova-metadata-0\" (UID: \"1f14901e-3580-4f9b-9569-b69f6c04c98f\") " pod="openstack/nova-metadata-0"
\"kubernetes.io/secret/718c90e3-fda8-453f-95d8-e66acce49d16-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"718c90e3-fda8-453f-95d8-e66acce49d16\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.004100 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmv5l\" (UniqueName: \"kubernetes.io/projected/1f14901e-3580-4f9b-9569-b69f6c04c98f-kube-api-access-dmv5l\") pod \"nova-metadata-0\" (UID: \"1f14901e-3580-4f9b-9569-b69f6c04c98f\") " pod="openstack/nova-metadata-0" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.005726 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f14901e-3580-4f9b-9569-b69f6c04c98f-logs\") pod \"nova-metadata-0\" (UID: \"1f14901e-3580-4f9b-9569-b69f6c04c98f\") " pod="openstack/nova-metadata-0" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.013148 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f14901e-3580-4f9b-9569-b69f6c04c98f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1f14901e-3580-4f9b-9569-b69f6c04c98f\") " pod="openstack/nova-metadata-0" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.016645 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bd87576bf-vb7hb"] Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.018393 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f14901e-3580-4f9b-9569-b69f6c04c98f-config-data\") pod \"nova-metadata-0\" (UID: \"1f14901e-3580-4f9b-9569-b69f6c04c98f\") " pod="openstack/nova-metadata-0" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.027232 4895 util.go:30] "No sandbox for pod can be found. 
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.027232 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb"
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.044073 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmv5l\" (UniqueName: \"kubernetes.io/projected/1f14901e-3580-4f9b-9569-b69f6c04c98f-kube-api-access-dmv5l\") pod \"nova-metadata-0\" (UID: \"1f14901e-3580-4f9b-9569-b69f6c04c98f\") " pod="openstack/nova-metadata-0"
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.076945 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bd87576bf-vb7hb"]
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.107594 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/718c90e3-fda8-453f-95d8-e66acce49d16-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"718c90e3-fda8-453f-95d8-e66acce49d16\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.108182 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-dns-swift-storage-0\") pod \"dnsmasq-dns-7bd87576bf-vb7hb\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb"
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.108813 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkkvn\" (UniqueName: \"kubernetes.io/projected/718c90e3-fda8-453f-95d8-e66acce49d16-kube-api-access-jkkvn\") pod \"nova-cell1-novncproxy-0\" (UID: \"718c90e3-fda8-453f-95d8-e66acce49d16\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.108839 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-dns-svc\") pod \"dnsmasq-dns-7bd87576bf-vb7hb\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb"
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.108917 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd87576bf-vb7hb\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb"
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.109001 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-ovsdbserver-nb\") pod \"dnsmasq-dns-7bd87576bf-vb7hb\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb"
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.109097 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/718c90e3-fda8-453f-95d8-e66acce49d16-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"718c90e3-fda8-453f-95d8-e66acce49d16\") " pod="openstack/nova-cell1-novncproxy-0"
\"kube-api-access-f7sbf\" (UniqueName: \"kubernetes.io/projected/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-kube-api-access-f7sbf\") pod \"dnsmasq-dns-7bd87576bf-vb7hb\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.109175 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-config\") pod \"dnsmasq-dns-7bd87576bf-vb7hb\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.122538 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/718c90e3-fda8-453f-95d8-e66acce49d16-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"718c90e3-fda8-453f-95d8-e66acce49d16\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.125119 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/718c90e3-fda8-453f-95d8-e66acce49d16-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"718c90e3-fda8-453f-95d8-e66acce49d16\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.142347 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkkvn\" (UniqueName: \"kubernetes.io/projected/718c90e3-fda8-453f-95d8-e66acce49d16-kube-api-access-jkkvn\") pod \"nova-cell1-novncproxy-0\" (UID: \"718c90e3-fda8-453f-95d8-e66acce49d16\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.211703 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-ovsdbserver-nb\") pod \"dnsmasq-dns-7bd87576bf-vb7hb\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.211834 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7sbf\" (UniqueName: \"kubernetes.io/projected/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-kube-api-access-f7sbf\") pod \"dnsmasq-dns-7bd87576bf-vb7hb\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.211896 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-config\") pod \"dnsmasq-dns-7bd87576bf-vb7hb\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.211943 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-dns-swift-storage-0\") pod \"dnsmasq-dns-7bd87576bf-vb7hb\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.211974 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-dns-svc\") pod \"dnsmasq-dns-7bd87576bf-vb7hb\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.212039 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd87576bf-vb7hb\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.213550 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd87576bf-vb7hb\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.214290 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-config\") pod \"dnsmasq-dns-7bd87576bf-vb7hb\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.214398 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-ovsdbserver-nb\") pod \"dnsmasq-dns-7bd87576bf-vb7hb\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.214880 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-dns-svc\") pod \"dnsmasq-dns-7bd87576bf-vb7hb\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.214953 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-dns-swift-storage-0\") pod \"dnsmasq-dns-7bd87576bf-vb7hb\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.233049 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7sbf\" (UniqueName: \"kubernetes.io/projected/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-kube-api-access-f7sbf\") pod \"dnsmasq-dns-7bd87576bf-vb7hb\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.276248 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.291730 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.364169 4895 util.go:30] "No sandbox for pod can be found. 
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.364169 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb"
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.539497 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-twdnd"]
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.555255 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.590305 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.596094 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"783ab44f-8f99-41a8-8976-1f209dfa78e3","Type":"ContainerStarted","Data":"c0c7f5e364808807de7a0112bd5c3f6f76bffa5d1a2b6ebe3954b012aa1ebbb1"}
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.604492 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-twdnd" event={"ID":"9351ace1-bec9-4251-866f-72d283f59ec3","Type":"ContainerStarted","Data":"aa547df8a4ea434de24ff27a76a6faba7539ce92c42bd593c867f77d5c633811"}
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.688350 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hmw66"]
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.701109 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hmw66"
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.704679 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.705939 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.718516 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hmw66"]
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.841465 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faea84b4-6c7f-4d6c-b42e-14ba117920d1-config-data\") pod \"nova-cell1-conductor-db-sync-hmw66\" (UID: \"faea84b4-6c7f-4d6c-b42e-14ba117920d1\") " pod="openstack/nova-cell1-conductor-db-sync-hmw66"
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.841602 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6hns\" (UniqueName: \"kubernetes.io/projected/faea84b4-6c7f-4d6c-b42e-14ba117920d1-kube-api-access-h6hns\") pod \"nova-cell1-conductor-db-sync-hmw66\" (UID: \"faea84b4-6c7f-4d6c-b42e-14ba117920d1\") " pod="openstack/nova-cell1-conductor-db-sync-hmw66"
Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.841638 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faea84b4-6c7f-4d6c-b42e-14ba117920d1-scripts\") pod \"nova-cell1-conductor-db-sync-hmw66\" (UID: \"faea84b4-6c7f-4d6c-b42e-14ba117920d1\") " pod="openstack/nova-cell1-conductor-db-sync-hmw66"
pod \"nova-cell1-conductor-db-sync-hmw66\" (UID: \"faea84b4-6c7f-4d6c-b42e-14ba117920d1\") " pod="openstack/nova-cell1-conductor-db-sync-hmw66" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.946858 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faea84b4-6c7f-4d6c-b42e-14ba117920d1-scripts\") pod \"nova-cell1-conductor-db-sync-hmw66\" (UID: \"faea84b4-6c7f-4d6c-b42e-14ba117920d1\") " pod="openstack/nova-cell1-conductor-db-sync-hmw66" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.946961 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faea84b4-6c7f-4d6c-b42e-14ba117920d1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hmw66\" (UID: \"faea84b4-6c7f-4d6c-b42e-14ba117920d1\") " pod="openstack/nova-cell1-conductor-db-sync-hmw66" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.947039 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faea84b4-6c7f-4d6c-b42e-14ba117920d1-config-data\") pod \"nova-cell1-conductor-db-sync-hmw66\" (UID: \"faea84b4-6c7f-4d6c-b42e-14ba117920d1\") " pod="openstack/nova-cell1-conductor-db-sync-hmw66" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.947165 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6hns\" (UniqueName: \"kubernetes.io/projected/faea84b4-6c7f-4d6c-b42e-14ba117920d1-kube-api-access-h6hns\") pod \"nova-cell1-conductor-db-sync-hmw66\" (UID: \"faea84b4-6c7f-4d6c-b42e-14ba117920d1\") " pod="openstack/nova-cell1-conductor-db-sync-hmw66" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.953602 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faea84b4-6c7f-4d6c-b42e-14ba117920d1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hmw66\" (UID: \"faea84b4-6c7f-4d6c-b42e-14ba117920d1\") " pod="openstack/nova-cell1-conductor-db-sync-hmw66" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.964718 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.981022 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faea84b4-6c7f-4d6c-b42e-14ba117920d1-scripts\") pod \"nova-cell1-conductor-db-sync-hmw66\" (UID: \"faea84b4-6c7f-4d6c-b42e-14ba117920d1\") " pod="openstack/nova-cell1-conductor-db-sync-hmw66" Dec 06 07:29:45 crc kubenswrapper[4895]: I1206 07:29:45.999925 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faea84b4-6c7f-4d6c-b42e-14ba117920d1-config-data\") pod \"nova-cell1-conductor-db-sync-hmw66\" (UID: \"faea84b4-6c7f-4d6c-b42e-14ba117920d1\") " pod="openstack/nova-cell1-conductor-db-sync-hmw66" Dec 06 07:29:46 crc kubenswrapper[4895]: I1206 07:29:46.000270 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6hns\" (UniqueName: \"kubernetes.io/projected/faea84b4-6c7f-4d6c-b42e-14ba117920d1-kube-api-access-h6hns\") pod \"nova-cell1-conductor-db-sync-hmw66\" (UID: \"faea84b4-6c7f-4d6c-b42e-14ba117920d1\") " pod="openstack/nova-cell1-conductor-db-sync-hmw66" Dec 06 07:29:46 crc kubenswrapper[4895]: I1206 07:29:46.036339 4895 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 07:29:46 crc kubenswrapper[4895]: I1206 07:29:46.135000 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hmw66" Dec 06 07:29:46 crc kubenswrapper[4895]: I1206 07:29:46.147340 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bd87576bf-vb7hb"] Dec 06 07:29:46 crc kubenswrapper[4895]: I1206 07:29:46.627292 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f14901e-3580-4f9b-9569-b69f6c04c98f","Type":"ContainerStarted","Data":"c5c80d7e9381e3c918b50e2c4d15e5ef401fba586703ffec75ce51711d8dd610"} Dec 06 07:29:46 crc kubenswrapper[4895]: I1206 07:29:46.633379 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"79ad1ed9-7218-46e9-b070-063abf764a57","Type":"ContainerStarted","Data":"164b7f69e7984c42bb9ee744a0ba49fa2272a6fef7b25eb256b4370e8313453d"} Dec 06 07:29:46 crc kubenswrapper[4895]: I1206 07:29:46.636645 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"718c90e3-fda8-453f-95d8-e66acce49d16","Type":"ContainerStarted","Data":"095f19aff202e7ed5293a0854d04a6ae6fc50034bee8bea3cac4e89531637e90"} Dec 06 07:29:46 crc kubenswrapper[4895]: I1206 07:29:46.639082 4895 generic.go:334] "Generic (PLEG): container finished" podID="3a782179-04fe-4b9c-a05a-27f14cb5ddf6" containerID="a0a6f82cbf80eac9d14afed35ceed4d1ff024109c579614d75c4c7bef85ed700" exitCode=0 Dec 06 07:29:46 crc kubenswrapper[4895]: I1206 07:29:46.639134 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" event={"ID":"3a782179-04fe-4b9c-a05a-27f14cb5ddf6","Type":"ContainerDied","Data":"a0a6f82cbf80eac9d14afed35ceed4d1ff024109c579614d75c4c7bef85ed700"} Dec 06 07:29:46 crc kubenswrapper[4895]: I1206 07:29:46.639152 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" event={"ID":"3a782179-04fe-4b9c-a05a-27f14cb5ddf6","Type":"ContainerStarted","Data":"fca4044c338114527389d898a677d162105b43c6612aa37374ee2e3110d4bd7f"} Dec 06 07:29:46 crc kubenswrapper[4895]: I1206 07:29:46.641254 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-twdnd" event={"ID":"9351ace1-bec9-4251-866f-72d283f59ec3","Type":"ContainerStarted","Data":"6d8f3f62634430e83f765a55c482bfd6b56e3df61d8617990a8f46860d0e2b70"} Dec 06 07:29:46 crc kubenswrapper[4895]: I1206 07:29:46.665754 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hmw66"] Dec 06 07:29:46 crc kubenswrapper[4895]: W1206 07:29:46.678538 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaea84b4_6c7f_4d6c_b42e_14ba117920d1.slice/crio-14752abb2abcc42d0d08426ba29f44c4e3b0a6823809649a7a22045e673f86be WatchSource:0}: Error finding container 14752abb2abcc42d0d08426ba29f44c4e3b0a6823809649a7a22045e673f86be: Status 404 returned error can't find the container with id 14752abb2abcc42d0d08426ba29f44c4e3b0a6823809649a7a22045e673f86be Dec 06 07:29:46 crc kubenswrapper[4895]: I1206 07:29:46.726354 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-twdnd" podStartSLOduration=2.726327161 podStartE2EDuration="2.726327161s" podCreationTimestamp="2025-12-06 07:29:44 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:29:46.681064478 +0000 UTC m=+1949.082453378" watchObservedRunningTime="2025-12-06 07:29:46.726327161 +0000 UTC m=+1949.127716041" Dec 06 07:29:47 crc kubenswrapper[4895]: I1206 07:29:47.664923 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" event={"ID":"3a782179-04fe-4b9c-a05a-27f14cb5ddf6","Type":"ContainerStarted","Data":"6efe3cd9ffddc1dd6a5a17f97c93720554708e478eff343374cae1421f501af3"} Dec 06 07:29:47 crc kubenswrapper[4895]: I1206 07:29:47.665576 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" Dec 06 07:29:47 crc kubenswrapper[4895]: I1206 07:29:47.668694 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hmw66" event={"ID":"faea84b4-6c7f-4d6c-b42e-14ba117920d1","Type":"ContainerStarted","Data":"140b2e29bd8d69af6c9e4cdff4cf64a89681dd2e9b425d6a583968de0bca4fd3"} Dec 06 07:29:47 crc kubenswrapper[4895]: I1206 07:29:47.668725 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hmw66" event={"ID":"faea84b4-6c7f-4d6c-b42e-14ba117920d1","Type":"ContainerStarted","Data":"14752abb2abcc42d0d08426ba29f44c4e3b0a6823809649a7a22045e673f86be"} Dec 06 07:29:47 crc kubenswrapper[4895]: I1206 07:29:47.689963 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" podStartSLOduration=3.6899404689999997 podStartE2EDuration="3.689940469s" podCreationTimestamp="2025-12-06 07:29:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:29:47.688630705 +0000 UTC m=+1950.090019575" watchObservedRunningTime="2025-12-06 07:29:47.689940469 +0000 UTC m=+1950.091329339" Dec 06 07:29:47 crc kubenswrapper[4895]: I1206 07:29:47.712825 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-hmw66" podStartSLOduration=2.712807756 podStartE2EDuration="2.712807756s" podCreationTimestamp="2025-12-06 07:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:29:47.706103118 +0000 UTC m=+1950.107491988" watchObservedRunningTime="2025-12-06 07:29:47.712807756 +0000 UTC m=+1950.114196626" Dec 06 07:29:48 crc kubenswrapper[4895]: I1206 07:29:48.356355 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:29:48 crc kubenswrapper[4895]: I1206 07:29:48.370868 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 07:29:51 crc kubenswrapper[4895]: I1206 07:29:51.717248 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f14901e-3580-4f9b-9569-b69f6c04c98f","Type":"ContainerStarted","Data":"d9b0508dd2aa3eb7b4c79aaf878b6916a21a9cc549f3f4eefb622a98ac43ad39"} Dec 06 07:29:51 crc kubenswrapper[4895]: I1206 07:29:51.717820 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f14901e-3580-4f9b-9569-b69f6c04c98f","Type":"ContainerStarted","Data":"cd2f64ea2b2d6193667f684352d21c61e1a04bc9ca60adef70863d2c3f0edde2"} Dec 06 07:29:51 crc kubenswrapper[4895]: I1206 07:29:51.717643 4895 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1f14901e-3580-4f9b-9569-b69f6c04c98f" containerName="nova-metadata-log" containerID="cri-o://cd2f64ea2b2d6193667f684352d21c61e1a04bc9ca60adef70863d2c3f0edde2" gracePeriod=30 Dec 06 07:29:51 crc kubenswrapper[4895]: I1206 07:29:51.718193 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1f14901e-3580-4f9b-9569-b69f6c04c98f" containerName="nova-metadata-metadata" containerID="cri-o://d9b0508dd2aa3eb7b4c79aaf878b6916a21a9cc549f3f4eefb622a98ac43ad39" gracePeriod=30 Dec 06 07:29:51 crc kubenswrapper[4895]: I1206 07:29:51.721530 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"79ad1ed9-7218-46e9-b070-063abf764a57","Type":"ContainerStarted","Data":"4f541993755a72606ca0f59b0dea7ba63171b12d8882d47758aa0e8359d77d56"} Dec 06 07:29:51 crc kubenswrapper[4895]: I1206 07:29:51.731446 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"718c90e3-fda8-453f-95d8-e66acce49d16","Type":"ContainerStarted","Data":"1e67b077e7567d51c2f6de82f65dfb793823d102ab154c9c444900c95ee3a36f"} Dec 06 07:29:51 crc kubenswrapper[4895]: I1206 07:29:51.731597 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="718c90e3-fda8-453f-95d8-e66acce49d16" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://1e67b077e7567d51c2f6de82f65dfb793823d102ab154c9c444900c95ee3a36f" gracePeriod=30 Dec 06 07:29:51 crc kubenswrapper[4895]: I1206 07:29:51.740908 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"783ab44f-8f99-41a8-8976-1f209dfa78e3","Type":"ContainerStarted","Data":"613a73c4e6e23364cc9af04ec4bd246e0f2658fbe2ac03e2783159ec09c7e85e"} Dec 06 07:29:51 crc kubenswrapper[4895]: I1206 07:29:51.740962 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"783ab44f-8f99-41a8-8976-1f209dfa78e3","Type":"ContainerStarted","Data":"42ebd0184d953a80833cf3b4844a18b03e9c8b9035f897a4855f8171e3468533"} Dec 06 07:29:51 crc kubenswrapper[4895]: I1206 07:29:51.756057 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.360675195 podStartE2EDuration="7.756032444s" podCreationTimestamp="2025-12-06 07:29:44 +0000 UTC" firstStartedPulling="2025-12-06 07:29:45.984292516 +0000 UTC m=+1948.385681376" lastFinishedPulling="2025-12-06 07:29:50.379649755 +0000 UTC m=+1952.781038625" observedRunningTime="2025-12-06 07:29:51.747314233 +0000 UTC m=+1954.148703113" watchObservedRunningTime="2025-12-06 07:29:51.756032444 +0000 UTC m=+1954.157421314" Dec 06 07:29:51 crc kubenswrapper[4895]: I1206 07:29:51.769222 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.440804403 podStartE2EDuration="7.769201574s" podCreationTimestamp="2025-12-06 07:29:44 +0000 UTC" firstStartedPulling="2025-12-06 07:29:46.045425069 +0000 UTC m=+1948.446813939" lastFinishedPulling="2025-12-06 07:29:50.37382224 +0000 UTC m=+1952.775211110" observedRunningTime="2025-12-06 07:29:51.765921237 +0000 UTC m=+1954.167310117" watchObservedRunningTime="2025-12-06 07:29:51.769201574 +0000 UTC m=+1954.170590444" Dec 06 07:29:51 crc kubenswrapper[4895]: I1206 07:29:51.794184 4895 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.038425948 podStartE2EDuration="7.794157977s" podCreationTimestamp="2025-12-06 07:29:44 +0000 UTC" firstStartedPulling="2025-12-06 07:29:45.615381719 +0000 UTC m=+1948.016770589" lastFinishedPulling="2025-12-06 07:29:50.371113748 +0000 UTC m=+1952.772502618" observedRunningTime="2025-12-06 07:29:51.782741574 +0000 UTC m=+1954.184130444" watchObservedRunningTime="2025-12-06 07:29:51.794157977 +0000 UTC m=+1954.195546847" Dec 06 07:29:51 crc kubenswrapper[4895]: I1206 07:29:51.813826 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.990451083 podStartE2EDuration="7.813807208s" podCreationTimestamp="2025-12-06 07:29:44 +0000 UTC" firstStartedPulling="2025-12-06 07:29:45.546661004 +0000 UTC m=+1947.948049874" lastFinishedPulling="2025-12-06 07:29:50.370017119 +0000 UTC m=+1952.771405999" observedRunningTime="2025-12-06 07:29:51.803432253 +0000 UTC m=+1954.204821123" watchObservedRunningTime="2025-12-06 07:29:51.813807208 +0000 UTC m=+1954.215196078" Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.755004 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.759824 4895 generic.go:334] "Generic (PLEG): container finished" podID="1f14901e-3580-4f9b-9569-b69f6c04c98f" containerID="d9b0508dd2aa3eb7b4c79aaf878b6916a21a9cc549f3f4eefb622a98ac43ad39" exitCode=0 Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.759854 4895 generic.go:334] "Generic (PLEG): container finished" podID="1f14901e-3580-4f9b-9569-b69f6c04c98f" containerID="cd2f64ea2b2d6193667f684352d21c61e1a04bc9ca60adef70863d2c3f0edde2" exitCode=143 Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.760507 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.760578 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f14901e-3580-4f9b-9569-b69f6c04c98f","Type":"ContainerDied","Data":"d9b0508dd2aa3eb7b4c79aaf878b6916a21a9cc549f3f4eefb622a98ac43ad39"} Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.760617 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f14901e-3580-4f9b-9569-b69f6c04c98f","Type":"ContainerDied","Data":"cd2f64ea2b2d6193667f684352d21c61e1a04bc9ca60adef70863d2c3f0edde2"} Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.760630 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f14901e-3580-4f9b-9569-b69f6c04c98f","Type":"ContainerDied","Data":"c5c80d7e9381e3c918b50e2c4d15e5ef401fba586703ffec75ce51711d8dd610"} Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.760648 4895 scope.go:117] "RemoveContainer" containerID="d9b0508dd2aa3eb7b4c79aaf878b6916a21a9cc549f3f4eefb622a98ac43ad39" Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.791144 4895 scope.go:117] "RemoveContainer" containerID="cd2f64ea2b2d6193667f684352d21c61e1a04bc9ca60adef70863d2c3f0edde2" Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.810138 4895 scope.go:117] "RemoveContainer" containerID="d9b0508dd2aa3eb7b4c79aaf878b6916a21a9cc549f3f4eefb622a98ac43ad39" Dec 06 07:29:52 crc kubenswrapper[4895]: E1206 07:29:52.810584 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9b0508dd2aa3eb7b4c79aaf878b6916a21a9cc549f3f4eefb622a98ac43ad39\": container with ID starting with d9b0508dd2aa3eb7b4c79aaf878b6916a21a9cc549f3f4eefb622a98ac43ad39 not found: ID does not exist" containerID="d9b0508dd2aa3eb7b4c79aaf878b6916a21a9cc549f3f4eefb622a98ac43ad39" Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.810634 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b0508dd2aa3eb7b4c79aaf878b6916a21a9cc549f3f4eefb622a98ac43ad39"} err="failed to get container status \"d9b0508dd2aa3eb7b4c79aaf878b6916a21a9cc549f3f4eefb622a98ac43ad39\": rpc error: code = NotFound desc = could not find container \"d9b0508dd2aa3eb7b4c79aaf878b6916a21a9cc549f3f4eefb622a98ac43ad39\": container with ID starting with d9b0508dd2aa3eb7b4c79aaf878b6916a21a9cc549f3f4eefb622a98ac43ad39 not found: ID does not exist" Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.810665 4895 scope.go:117] "RemoveContainer" containerID="cd2f64ea2b2d6193667f684352d21c61e1a04bc9ca60adef70863d2c3f0edde2" Dec 06 07:29:52 crc kubenswrapper[4895]: E1206 07:29:52.810935 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd2f64ea2b2d6193667f684352d21c61e1a04bc9ca60adef70863d2c3f0edde2\": container with ID starting with cd2f64ea2b2d6193667f684352d21c61e1a04bc9ca60adef70863d2c3f0edde2 not found: ID does not exist" containerID="cd2f64ea2b2d6193667f684352d21c61e1a04bc9ca60adef70863d2c3f0edde2" Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.810957 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd2f64ea2b2d6193667f684352d21c61e1a04bc9ca60adef70863d2c3f0edde2"} err="failed to get container status \"cd2f64ea2b2d6193667f684352d21c61e1a04bc9ca60adef70863d2c3f0edde2\": rpc error: code = 
NotFound desc = could not find container \"cd2f64ea2b2d6193667f684352d21c61e1a04bc9ca60adef70863d2c3f0edde2\": container with ID starting with cd2f64ea2b2d6193667f684352d21c61e1a04bc9ca60adef70863d2c3f0edde2 not found: ID does not exist" Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.810971 4895 scope.go:117] "RemoveContainer" containerID="d9b0508dd2aa3eb7b4c79aaf878b6916a21a9cc549f3f4eefb622a98ac43ad39" Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.811148 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b0508dd2aa3eb7b4c79aaf878b6916a21a9cc549f3f4eefb622a98ac43ad39"} err="failed to get container status \"d9b0508dd2aa3eb7b4c79aaf878b6916a21a9cc549f3f4eefb622a98ac43ad39\": rpc error: code = NotFound desc = could not find container \"d9b0508dd2aa3eb7b4c79aaf878b6916a21a9cc549f3f4eefb622a98ac43ad39\": container with ID starting with d9b0508dd2aa3eb7b4c79aaf878b6916a21a9cc549f3f4eefb622a98ac43ad39 not found: ID does not exist" Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.811167 4895 scope.go:117] "RemoveContainer" containerID="cd2f64ea2b2d6193667f684352d21c61e1a04bc9ca60adef70863d2c3f0edde2" Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.811318 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd2f64ea2b2d6193667f684352d21c61e1a04bc9ca60adef70863d2c3f0edde2"} err="failed to get container status \"cd2f64ea2b2d6193667f684352d21c61e1a04bc9ca60adef70863d2c3f0edde2\": rpc error: code = NotFound desc = could not find container \"cd2f64ea2b2d6193667f684352d21c61e1a04bc9ca60adef70863d2c3f0edde2\": container with ID starting with cd2f64ea2b2d6193667f684352d21c61e1a04bc9ca60adef70863d2c3f0edde2 not found: ID does not exist" Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.916841 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f14901e-3580-4f9b-9569-b69f6c04c98f-combined-ca-bundle\") pod \"1f14901e-3580-4f9b-9569-b69f6c04c98f\" (UID: \"1f14901e-3580-4f9b-9569-b69f6c04c98f\") " Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.916899 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f14901e-3580-4f9b-9569-b69f6c04c98f-logs\") pod \"1f14901e-3580-4f9b-9569-b69f6c04c98f\" (UID: \"1f14901e-3580-4f9b-9569-b69f6c04c98f\") " Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.916978 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmv5l\" (UniqueName: \"kubernetes.io/projected/1f14901e-3580-4f9b-9569-b69f6c04c98f-kube-api-access-dmv5l\") pod \"1f14901e-3580-4f9b-9569-b69f6c04c98f\" (UID: \"1f14901e-3580-4f9b-9569-b69f6c04c98f\") " Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.917126 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f14901e-3580-4f9b-9569-b69f6c04c98f-config-data\") pod \"1f14901e-3580-4f9b-9569-b69f6c04c98f\" (UID: \"1f14901e-3580-4f9b-9569-b69f6c04c98f\") " Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.917262 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f14901e-3580-4f9b-9569-b69f6c04c98f-logs" (OuterVolumeSpecName: "logs") pod "1f14901e-3580-4f9b-9569-b69f6c04c98f" (UID: "1f14901e-3580-4f9b-9569-b69f6c04c98f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.917978 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f14901e-3580-4f9b-9569-b69f6c04c98f-logs\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.923529 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f14901e-3580-4f9b-9569-b69f6c04c98f-kube-api-access-dmv5l" (OuterVolumeSpecName: "kube-api-access-dmv5l") pod "1f14901e-3580-4f9b-9569-b69f6c04c98f" (UID: "1f14901e-3580-4f9b-9569-b69f6c04c98f"). InnerVolumeSpecName "kube-api-access-dmv5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.946147 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f14901e-3580-4f9b-9569-b69f6c04c98f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f14901e-3580-4f9b-9569-b69f6c04c98f" (UID: "1f14901e-3580-4f9b-9569-b69f6c04c98f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:29:52 crc kubenswrapper[4895]: I1206 07:29:52.956007 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f14901e-3580-4f9b-9569-b69f6c04c98f-config-data" (OuterVolumeSpecName: "config-data") pod "1f14901e-3580-4f9b-9569-b69f6c04c98f" (UID: "1f14901e-3580-4f9b-9569-b69f6c04c98f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.020502 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f14901e-3580-4f9b-9569-b69f6c04c98f-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.020543 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f14901e-3580-4f9b-9569-b69f6c04c98f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.020558 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmv5l\" (UniqueName: \"kubernetes.io/projected/1f14901e-3580-4f9b-9569-b69f6c04c98f-kube-api-access-dmv5l\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.112989 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.121334 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.135411 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:29:53 crc kubenswrapper[4895]: E1206 07:29:53.135824 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f14901e-3580-4f9b-9569-b69f6c04c98f" containerName="nova-metadata-metadata" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.135843 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f14901e-3580-4f9b-9569-b69f6c04c98f" containerName="nova-metadata-metadata" Dec 06 07:29:53 crc kubenswrapper[4895]: E1206 07:29:53.135858 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f14901e-3580-4f9b-9569-b69f6c04c98f" containerName="nova-metadata-log" Dec 06 07:29:53 crc 
kubenswrapper[4895]: I1206 07:29:53.135864 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f14901e-3580-4f9b-9569-b69f6c04c98f" containerName="nova-metadata-log" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.136035 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f14901e-3580-4f9b-9569-b69f6c04c98f" containerName="nova-metadata-metadata" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.136056 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f14901e-3580-4f9b-9569-b69f6c04c98f" containerName="nova-metadata-log" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.137084 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.140171 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.141638 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.162140 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.326679 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e752db2a-f112-4a90-8d62-aadc76d29d68-logs\") pod \"nova-metadata-0\" (UID: \"e752db2a-f112-4a90-8d62-aadc76d29d68\") " pod="openstack/nova-metadata-0" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.326824 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfn5m\" (UniqueName: \"kubernetes.io/projected/e752db2a-f112-4a90-8d62-aadc76d29d68-kube-api-access-pfn5m\") pod \"nova-metadata-0\" (UID: \"e752db2a-f112-4a90-8d62-aadc76d29d68\") " pod="openstack/nova-metadata-0" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.326938 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e752db2a-f112-4a90-8d62-aadc76d29d68-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e752db2a-f112-4a90-8d62-aadc76d29d68\") " pod="openstack/nova-metadata-0" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.326995 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e752db2a-f112-4a90-8d62-aadc76d29d68-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e752db2a-f112-4a90-8d62-aadc76d29d68\") " pod="openstack/nova-metadata-0" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.327092 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e752db2a-f112-4a90-8d62-aadc76d29d68-config-data\") pod \"nova-metadata-0\" (UID: \"e752db2a-f112-4a90-8d62-aadc76d29d68\") " pod="openstack/nova-metadata-0" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.429396 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e752db2a-f112-4a90-8d62-aadc76d29d68-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e752db2a-f112-4a90-8d62-aadc76d29d68\") " pod="openstack/nova-metadata-0" Dec 06 
07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.429494 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e752db2a-f112-4a90-8d62-aadc76d29d68-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e752db2a-f112-4a90-8d62-aadc76d29d68\") " pod="openstack/nova-metadata-0" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.429555 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e752db2a-f112-4a90-8d62-aadc76d29d68-config-data\") pod \"nova-metadata-0\" (UID: \"e752db2a-f112-4a90-8d62-aadc76d29d68\") " pod="openstack/nova-metadata-0" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.429683 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e752db2a-f112-4a90-8d62-aadc76d29d68-logs\") pod \"nova-metadata-0\" (UID: \"e752db2a-f112-4a90-8d62-aadc76d29d68\") " pod="openstack/nova-metadata-0" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.429740 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfn5m\" (UniqueName: \"kubernetes.io/projected/e752db2a-f112-4a90-8d62-aadc76d29d68-kube-api-access-pfn5m\") pod \"nova-metadata-0\" (UID: \"e752db2a-f112-4a90-8d62-aadc76d29d68\") " pod="openstack/nova-metadata-0" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.430641 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e752db2a-f112-4a90-8d62-aadc76d29d68-logs\") pod \"nova-metadata-0\" (UID: \"e752db2a-f112-4a90-8d62-aadc76d29d68\") " pod="openstack/nova-metadata-0" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.433549 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e752db2a-f112-4a90-8d62-aadc76d29d68-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e752db2a-f112-4a90-8d62-aadc76d29d68\") " pod="openstack/nova-metadata-0" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.434576 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e752db2a-f112-4a90-8d62-aadc76d29d68-config-data\") pod \"nova-metadata-0\" (UID: \"e752db2a-f112-4a90-8d62-aadc76d29d68\") " pod="openstack/nova-metadata-0" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.441098 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e752db2a-f112-4a90-8d62-aadc76d29d68-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e752db2a-f112-4a90-8d62-aadc76d29d68\") " pod="openstack/nova-metadata-0" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.466077 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfn5m\" (UniqueName: \"kubernetes.io/projected/e752db2a-f112-4a90-8d62-aadc76d29d68-kube-api-access-pfn5m\") pod \"nova-metadata-0\" (UID: \"e752db2a-f112-4a90-8d62-aadc76d29d68\") " pod="openstack/nova-metadata-0" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.472423 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 07:29:53 crc kubenswrapper[4895]: I1206 07:29:53.987787 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:29:54 crc kubenswrapper[4895]: I1206 07:29:54.069984 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f14901e-3580-4f9b-9569-b69f6c04c98f" path="/var/lib/kubelet/pods/1f14901e-3580-4f9b-9569-b69f6c04c98f/volumes" Dec 06 07:29:54 crc kubenswrapper[4895]: I1206 07:29:54.780219 4895 generic.go:334] "Generic (PLEG): container finished" podID="9351ace1-bec9-4251-866f-72d283f59ec3" containerID="6d8f3f62634430e83f765a55c482bfd6b56e3df61d8617990a8f46860d0e2b70" exitCode=0 Dec 06 07:29:54 crc kubenswrapper[4895]: I1206 07:29:54.780563 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-twdnd" event={"ID":"9351ace1-bec9-4251-866f-72d283f59ec3","Type":"ContainerDied","Data":"6d8f3f62634430e83f765a55c482bfd6b56e3df61d8617990a8f46860d0e2b70"} Dec 06 07:29:54 crc kubenswrapper[4895]: I1206 07:29:54.783040 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e752db2a-f112-4a90-8d62-aadc76d29d68","Type":"ContainerStarted","Data":"9db93c1e996e3ccb6b5059dbab5a7012a493a6270eba5dec25039d70251334b2"} Dec 06 07:29:54 crc kubenswrapper[4895]: I1206 07:29:54.783083 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e752db2a-f112-4a90-8d62-aadc76d29d68","Type":"ContainerStarted","Data":"5eadfe2764d050fe0186bb34d4cc39ceb232a5ce908d1aa67d320f256d70a84d"} Dec 06 07:29:54 crc kubenswrapper[4895]: I1206 07:29:54.783093 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e752db2a-f112-4a90-8d62-aadc76d29d68","Type":"ContainerStarted","Data":"b20d06cbd487f391e18f373461434b6a7c5c8276e18e77bc59370ff488af85be"} Dec 06 07:29:54 crc kubenswrapper[4895]: I1206 07:29:54.923634 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 06 07:29:54 crc kubenswrapper[4895]: I1206 07:29:54.923710 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 06 07:29:54 crc kubenswrapper[4895]: I1206 07:29:54.966777 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 06 07:29:54 crc kubenswrapper[4895]: I1206 07:29:54.999048 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 07:29:54 crc kubenswrapper[4895]: I1206 07:29:54.999098 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 07:29:55 crc kubenswrapper[4895]: I1206 07:29:55.292115 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 06 07:29:55 crc kubenswrapper[4895]: I1206 07:29:55.366756 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" Dec 06 07:29:55 crc kubenswrapper[4895]: I1206 07:29:55.459166 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56d54d44c7-cvzmv"] Dec 06 07:29:55 crc kubenswrapper[4895]: I1206 07:29:55.459397 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" podUID="61880001-a6c9-4c2f-80ea-27a053575307" containerName="dnsmasq-dns" 
containerID="cri-o://a8d1e51b3477a3661f3afeff16a4853e93283a38fdadab6a482c5fe596a29048" gracePeriod=10 Dec 06 07:29:55 crc kubenswrapper[4895]: I1206 07:29:55.825873 4895 generic.go:334] "Generic (PLEG): container finished" podID="61880001-a6c9-4c2f-80ea-27a053575307" containerID="a8d1e51b3477a3661f3afeff16a4853e93283a38fdadab6a482c5fe596a29048" exitCode=0 Dec 06 07:29:55 crc kubenswrapper[4895]: I1206 07:29:55.830119 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" event={"ID":"61880001-a6c9-4c2f-80ea-27a053575307","Type":"ContainerDied","Data":"a8d1e51b3477a3661f3afeff16a4853e93283a38fdadab6a482c5fe596a29048"} Dec 06 07:29:55 crc kubenswrapper[4895]: I1206 07:29:55.888666 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.888643425 podStartE2EDuration="2.888643425s" podCreationTimestamp="2025-12-06 07:29:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:29:55.85567188 +0000 UTC m=+1958.257060760" watchObservedRunningTime="2025-12-06 07:29:55.888643425 +0000 UTC m=+1958.290032295" Dec 06 07:29:55 crc kubenswrapper[4895]: I1206 07:29:55.905172 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.072560 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.082897 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="783ab44f-8f99-41a8-8976-1f209dfa78e3" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.083188 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="783ab44f-8f99-41a8-8976-1f209dfa78e3" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.208194 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-dns-svc\") pod \"61880001-a6c9-4c2f-80ea-27a053575307\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.208303 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-ovsdbserver-nb\") pod \"61880001-a6c9-4c2f-80ea-27a053575307\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.208346 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-ovsdbserver-sb\") pod \"61880001-a6c9-4c2f-80ea-27a053575307\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.208437 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-config\") pod \"61880001-a6c9-4c2f-80ea-27a053575307\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.208541 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrzfr\" (UniqueName: \"kubernetes.io/projected/61880001-a6c9-4c2f-80ea-27a053575307-kube-api-access-xrzfr\") pod \"61880001-a6c9-4c2f-80ea-27a053575307\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.208589 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-dns-swift-storage-0\") pod \"61880001-a6c9-4c2f-80ea-27a053575307\" (UID: \"61880001-a6c9-4c2f-80ea-27a053575307\") " Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.223619 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61880001-a6c9-4c2f-80ea-27a053575307-kube-api-access-xrzfr" (OuterVolumeSpecName: "kube-api-access-xrzfr") pod "61880001-a6c9-4c2f-80ea-27a053575307" (UID: "61880001-a6c9-4c2f-80ea-27a053575307"). InnerVolumeSpecName "kube-api-access-xrzfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.270231 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "61880001-a6c9-4c2f-80ea-27a053575307" (UID: "61880001-a6c9-4c2f-80ea-27a053575307"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.274777 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "61880001-a6c9-4c2f-80ea-27a053575307" (UID: "61880001-a6c9-4c2f-80ea-27a053575307"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.278217 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "61880001-a6c9-4c2f-80ea-27a053575307" (UID: "61880001-a6c9-4c2f-80ea-27a053575307"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.288450 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-config" (OuterVolumeSpecName: "config") pod "61880001-a6c9-4c2f-80ea-27a053575307" (UID: "61880001-a6c9-4c2f-80ea-27a053575307"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.291046 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "61880001-a6c9-4c2f-80ea-27a053575307" (UID: "61880001-a6c9-4c2f-80ea-27a053575307"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.311234 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.311276 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrzfr\" (UniqueName: \"kubernetes.io/projected/61880001-a6c9-4c2f-80ea-27a053575307-kube-api-access-xrzfr\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.311288 4895 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.311296 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.311305 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.311314 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61880001-a6c9-4c2f-80ea-27a053575307-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.392592 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-twdnd" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.514387 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dsvv\" (UniqueName: \"kubernetes.io/projected/9351ace1-bec9-4251-866f-72d283f59ec3-kube-api-access-7dsvv\") pod \"9351ace1-bec9-4251-866f-72d283f59ec3\" (UID: \"9351ace1-bec9-4251-866f-72d283f59ec3\") " Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.514603 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9351ace1-bec9-4251-866f-72d283f59ec3-scripts\") pod \"9351ace1-bec9-4251-866f-72d283f59ec3\" (UID: \"9351ace1-bec9-4251-866f-72d283f59ec3\") " Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.514703 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9351ace1-bec9-4251-866f-72d283f59ec3-combined-ca-bundle\") pod \"9351ace1-bec9-4251-866f-72d283f59ec3\" (UID: \"9351ace1-bec9-4251-866f-72d283f59ec3\") " Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.514828 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9351ace1-bec9-4251-866f-72d283f59ec3-config-data\") pod \"9351ace1-bec9-4251-866f-72d283f59ec3\" (UID: \"9351ace1-bec9-4251-866f-72d283f59ec3\") " Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.518587 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9351ace1-bec9-4251-866f-72d283f59ec3-scripts" (OuterVolumeSpecName: "scripts") pod "9351ace1-bec9-4251-866f-72d283f59ec3" (UID: 
"9351ace1-bec9-4251-866f-72d283f59ec3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.519285 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9351ace1-bec9-4251-866f-72d283f59ec3-kube-api-access-7dsvv" (OuterVolumeSpecName: "kube-api-access-7dsvv") pod "9351ace1-bec9-4251-866f-72d283f59ec3" (UID: "9351ace1-bec9-4251-866f-72d283f59ec3"). InnerVolumeSpecName "kube-api-access-7dsvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.552660 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9351ace1-bec9-4251-866f-72d283f59ec3-config-data" (OuterVolumeSpecName: "config-data") pod "9351ace1-bec9-4251-866f-72d283f59ec3" (UID: "9351ace1-bec9-4251-866f-72d283f59ec3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.553766 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9351ace1-bec9-4251-866f-72d283f59ec3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9351ace1-bec9-4251-866f-72d283f59ec3" (UID: "9351ace1-bec9-4251-866f-72d283f59ec3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.616757 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9351ace1-bec9-4251-866f-72d283f59ec3-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.616792 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dsvv\" (UniqueName: \"kubernetes.io/projected/9351ace1-bec9-4251-866f-72d283f59ec3-kube-api-access-7dsvv\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.616803 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9351ace1-bec9-4251-866f-72d283f59ec3-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.616811 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9351ace1-bec9-4251-866f-72d283f59ec3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.857613 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" event={"ID":"61880001-a6c9-4c2f-80ea-27a053575307","Type":"ContainerDied","Data":"742f687db5ce3566931ad0dff7b3cc18427486f11b4677c107525c91c5276328"} Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.857675 4895 scope.go:117] "RemoveContainer" containerID="a8d1e51b3477a3661f3afeff16a4853e93283a38fdadab6a482c5fe596a29048" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.857828 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56d54d44c7-cvzmv" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.865287 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-twdnd" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.872762 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-twdnd" event={"ID":"9351ace1-bec9-4251-866f-72d283f59ec3","Type":"ContainerDied","Data":"aa547df8a4ea434de24ff27a76a6faba7539ce92c42bd593c867f77d5c633811"} Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.872822 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa547df8a4ea434de24ff27a76a6faba7539ce92c42bd593c867f77d5c633811" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.903678 4895 scope.go:117] "RemoveContainer" containerID="1ac08b5b35f146391dc5d3b9cb08957511a73032675c8f9626fcd03d7eb36611" Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.929619 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56d54d44c7-cvzmv"] Dec 06 07:29:56 crc kubenswrapper[4895]: I1206 07:29:56.938264 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56d54d44c7-cvzmv"] Dec 06 07:29:57 crc kubenswrapper[4895]: I1206 07:29:57.006792 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 07:29:57 crc kubenswrapper[4895]: I1206 07:29:57.007061 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="783ab44f-8f99-41a8-8976-1f209dfa78e3" containerName="nova-api-log" containerID="cri-o://42ebd0184d953a80833cf3b4844a18b03e9c8b9035f897a4855f8171e3468533" gracePeriod=30 Dec 06 07:29:57 crc kubenswrapper[4895]: I1206 07:29:57.007133 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="783ab44f-8f99-41a8-8976-1f209dfa78e3" containerName="nova-api-api" containerID="cri-o://613a73c4e6e23364cc9af04ec4bd246e0f2658fbe2ac03e2783159ec09c7e85e" gracePeriod=30 Dec 06 07:29:57 crc kubenswrapper[4895]: I1206 07:29:57.019215 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 07:29:57 crc kubenswrapper[4895]: I1206 07:29:57.055760 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:29:57 crc kubenswrapper[4895]: I1206 07:29:57.058151 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e752db2a-f112-4a90-8d62-aadc76d29d68" containerName="nova-metadata-log" containerID="cri-o://5eadfe2764d050fe0186bb34d4cc39ceb232a5ce908d1aa67d320f256d70a84d" gracePeriod=30 Dec 06 07:29:57 crc kubenswrapper[4895]: I1206 07:29:57.058223 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e752db2a-f112-4a90-8d62-aadc76d29d68" containerName="nova-metadata-metadata" containerID="cri-o://9db93c1e996e3ccb6b5059dbab5a7012a493a6270eba5dec25039d70251334b2" gracePeriod=30 Dec 06 07:29:57 crc kubenswrapper[4895]: I1206 07:29:57.885796 4895 generic.go:334] "Generic (PLEG): container finished" podID="e752db2a-f112-4a90-8d62-aadc76d29d68" containerID="9db93c1e996e3ccb6b5059dbab5a7012a493a6270eba5dec25039d70251334b2" exitCode=0 Dec 06 07:29:57 crc kubenswrapper[4895]: I1206 07:29:57.886144 4895 generic.go:334] "Generic (PLEG): container finished" podID="e752db2a-f112-4a90-8d62-aadc76d29d68" containerID="5eadfe2764d050fe0186bb34d4cc39ceb232a5ce908d1aa67d320f256d70a84d" exitCode=143 Dec 06 07:29:57 crc kubenswrapper[4895]: I1206 07:29:57.886219 4895 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e752db2a-f112-4a90-8d62-aadc76d29d68","Type":"ContainerDied","Data":"9db93c1e996e3ccb6b5059dbab5a7012a493a6270eba5dec25039d70251334b2"} Dec 06 07:29:57 crc kubenswrapper[4895]: I1206 07:29:57.886251 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e752db2a-f112-4a90-8d62-aadc76d29d68","Type":"ContainerDied","Data":"5eadfe2764d050fe0186bb34d4cc39ceb232a5ce908d1aa67d320f256d70a84d"} Dec 06 07:29:57 crc kubenswrapper[4895]: I1206 07:29:57.886283 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e752db2a-f112-4a90-8d62-aadc76d29d68","Type":"ContainerDied","Data":"b20d06cbd487f391e18f373461434b6a7c5c8276e18e77bc59370ff488af85be"} Dec 06 07:29:57 crc kubenswrapper[4895]: I1206 07:29:57.886295 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b20d06cbd487f391e18f373461434b6a7c5c8276e18e77bc59370ff488af85be" Dec 06 07:29:57 crc kubenswrapper[4895]: I1206 07:29:57.888779 4895 generic.go:334] "Generic (PLEG): container finished" podID="783ab44f-8f99-41a8-8976-1f209dfa78e3" containerID="42ebd0184d953a80833cf3b4844a18b03e9c8b9035f897a4855f8171e3468533" exitCode=143 Dec 06 07:29:57 crc kubenswrapper[4895]: I1206 07:29:57.888859 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"783ab44f-8f99-41a8-8976-1f209dfa78e3","Type":"ContainerDied","Data":"42ebd0184d953a80833cf3b4844a18b03e9c8b9035f897a4855f8171e3468533"} Dec 06 07:29:57 crc kubenswrapper[4895]: I1206 07:29:57.890422 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="79ad1ed9-7218-46e9-b070-063abf764a57" containerName="nova-scheduler-scheduler" containerID="cri-o://4f541993755a72606ca0f59b0dea7ba63171b12d8882d47758aa0e8359d77d56" gracePeriod=30 Dec 06 07:29:57 crc kubenswrapper[4895]: I1206 07:29:57.937378 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.051533 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e752db2a-f112-4a90-8d62-aadc76d29d68-nova-metadata-tls-certs\") pod \"e752db2a-f112-4a90-8d62-aadc76d29d68\" (UID: \"e752db2a-f112-4a90-8d62-aadc76d29d68\") " Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.051721 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfn5m\" (UniqueName: \"kubernetes.io/projected/e752db2a-f112-4a90-8d62-aadc76d29d68-kube-api-access-pfn5m\") pod \"e752db2a-f112-4a90-8d62-aadc76d29d68\" (UID: \"e752db2a-f112-4a90-8d62-aadc76d29d68\") " Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.051767 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e752db2a-f112-4a90-8d62-aadc76d29d68-logs\") pod \"e752db2a-f112-4a90-8d62-aadc76d29d68\" (UID: \"e752db2a-f112-4a90-8d62-aadc76d29d68\") " Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.051879 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e752db2a-f112-4a90-8d62-aadc76d29d68-combined-ca-bundle\") pod \"e752db2a-f112-4a90-8d62-aadc76d29d68\" (UID: \"e752db2a-f112-4a90-8d62-aadc76d29d68\") " Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.051970 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e752db2a-f112-4a90-8d62-aadc76d29d68-config-data\") pod \"e752db2a-f112-4a90-8d62-aadc76d29d68\" (UID: \"e752db2a-f112-4a90-8d62-aadc76d29d68\") " Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.052348 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e752db2a-f112-4a90-8d62-aadc76d29d68-logs" (OuterVolumeSpecName: "logs") pod "e752db2a-f112-4a90-8d62-aadc76d29d68" (UID: "e752db2a-f112-4a90-8d62-aadc76d29d68"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.052678 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e752db2a-f112-4a90-8d62-aadc76d29d68-logs\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.063740 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e752db2a-f112-4a90-8d62-aadc76d29d68-kube-api-access-pfn5m" (OuterVolumeSpecName: "kube-api-access-pfn5m") pod "e752db2a-f112-4a90-8d62-aadc76d29d68" (UID: "e752db2a-f112-4a90-8d62-aadc76d29d68"). InnerVolumeSpecName "kube-api-access-pfn5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.097692 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61880001-a6c9-4c2f-80ea-27a053575307" path="/var/lib/kubelet/pods/61880001-a6c9-4c2f-80ea-27a053575307/volumes" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.154181 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfn5m\" (UniqueName: \"kubernetes.io/projected/e752db2a-f112-4a90-8d62-aadc76d29d68-kube-api-access-pfn5m\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.168238 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e752db2a-f112-4a90-8d62-aadc76d29d68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e752db2a-f112-4a90-8d62-aadc76d29d68" (UID: "e752db2a-f112-4a90-8d62-aadc76d29d68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.173615 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e752db2a-f112-4a90-8d62-aadc76d29d68-config-data" (OuterVolumeSpecName: "config-data") pod "e752db2a-f112-4a90-8d62-aadc76d29d68" (UID: "e752db2a-f112-4a90-8d62-aadc76d29d68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.202302 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e752db2a-f112-4a90-8d62-aadc76d29d68-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e752db2a-f112-4a90-8d62-aadc76d29d68" (UID: "e752db2a-f112-4a90-8d62-aadc76d29d68"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.256505 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e752db2a-f112-4a90-8d62-aadc76d29d68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.256534 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e752db2a-f112-4a90-8d62-aadc76d29d68-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.256542 4895 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e752db2a-f112-4a90-8d62-aadc76d29d68-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.898675 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.936812 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.948332 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.957721 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:29:58 crc kubenswrapper[4895]: E1206 07:29:58.958423 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61880001-a6c9-4c2f-80ea-27a053575307" containerName="init" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.958583 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="61880001-a6c9-4c2f-80ea-27a053575307" containerName="init" Dec 06 07:29:58 crc kubenswrapper[4895]: E1206 07:29:58.958667 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9351ace1-bec9-4251-866f-72d283f59ec3" containerName="nova-manage" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.958725 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9351ace1-bec9-4251-866f-72d283f59ec3" containerName="nova-manage" Dec 06 07:29:58 crc kubenswrapper[4895]: E1206 07:29:58.958792 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e752db2a-f112-4a90-8d62-aadc76d29d68" containerName="nova-metadata-metadata" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.958869 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e752db2a-f112-4a90-8d62-aadc76d29d68" containerName="nova-metadata-metadata" Dec 06 07:29:58 crc kubenswrapper[4895]: E1206 07:29:58.958963 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61880001-a6c9-4c2f-80ea-27a053575307" containerName="dnsmasq-dns" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.959054 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="61880001-a6c9-4c2f-80ea-27a053575307" containerName="dnsmasq-dns" Dec 06 07:29:58 crc kubenswrapper[4895]: E1206 07:29:58.959114 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e752db2a-f112-4a90-8d62-aadc76d29d68" containerName="nova-metadata-log" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.959171 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e752db2a-f112-4a90-8d62-aadc76d29d68" containerName="nova-metadata-log" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.959422 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="9351ace1-bec9-4251-866f-72d283f59ec3" containerName="nova-manage" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.959518 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e752db2a-f112-4a90-8d62-aadc76d29d68" containerName="nova-metadata-metadata" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.959602 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="61880001-a6c9-4c2f-80ea-27a053575307" containerName="dnsmasq-dns" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.959690 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e752db2a-f112-4a90-8d62-aadc76d29d68" containerName="nova-metadata-log" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.960972 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.963563 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.963896 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 06 07:29:58 crc kubenswrapper[4895]: I1206 07:29:58.976254 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:29:59 crc kubenswrapper[4895]: I1206 07:29:59.069963 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91395243-4043-49b6-869c-05d21691a2f3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"91395243-4043-49b6-869c-05d21691a2f3\") " pod="openstack/nova-metadata-0" Dec 06 07:29:59 crc kubenswrapper[4895]: I1206 07:29:59.070051 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91395243-4043-49b6-869c-05d21691a2f3-logs\") pod \"nova-metadata-0\" (UID: \"91395243-4043-49b6-869c-05d21691a2f3\") " pod="openstack/nova-metadata-0" Dec 06 07:29:59 crc kubenswrapper[4895]: I1206 07:29:59.070087 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/91395243-4043-49b6-869c-05d21691a2f3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"91395243-4043-49b6-869c-05d21691a2f3\") " pod="openstack/nova-metadata-0" Dec 06 07:29:59 crc kubenswrapper[4895]: I1206 07:29:59.070114 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4vb5\" (UniqueName: \"kubernetes.io/projected/91395243-4043-49b6-869c-05d21691a2f3-kube-api-access-d4vb5\") pod \"nova-metadata-0\" (UID: \"91395243-4043-49b6-869c-05d21691a2f3\") " pod="openstack/nova-metadata-0" Dec 06 07:29:59 crc kubenswrapper[4895]: I1206 07:29:59.070153 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91395243-4043-49b6-869c-05d21691a2f3-config-data\") pod \"nova-metadata-0\" (UID: \"91395243-4043-49b6-869c-05d21691a2f3\") " pod="openstack/nova-metadata-0" Dec 06 07:29:59 crc kubenswrapper[4895]: I1206 07:29:59.171750 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91395243-4043-49b6-869c-05d21691a2f3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"91395243-4043-49b6-869c-05d21691a2f3\") " pod="openstack/nova-metadata-0" Dec 06 07:29:59 crc kubenswrapper[4895]: I1206 07:29:59.171970 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91395243-4043-49b6-869c-05d21691a2f3-logs\") pod \"nova-metadata-0\" (UID: \"91395243-4043-49b6-869c-05d21691a2f3\") " pod="openstack/nova-metadata-0" Dec 06 07:29:59 crc kubenswrapper[4895]: I1206 07:29:59.172021 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/91395243-4043-49b6-869c-05d21691a2f3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"91395243-4043-49b6-869c-05d21691a2f3\") " 
pod="openstack/nova-metadata-0" Dec 06 07:29:59 crc kubenswrapper[4895]: I1206 07:29:59.172044 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4vb5\" (UniqueName: \"kubernetes.io/projected/91395243-4043-49b6-869c-05d21691a2f3-kube-api-access-d4vb5\") pod \"nova-metadata-0\" (UID: \"91395243-4043-49b6-869c-05d21691a2f3\") " pod="openstack/nova-metadata-0" Dec 06 07:29:59 crc kubenswrapper[4895]: I1206 07:29:59.172101 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91395243-4043-49b6-869c-05d21691a2f3-config-data\") pod \"nova-metadata-0\" (UID: \"91395243-4043-49b6-869c-05d21691a2f3\") " pod="openstack/nova-metadata-0" Dec 06 07:29:59 crc kubenswrapper[4895]: I1206 07:29:59.174085 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91395243-4043-49b6-869c-05d21691a2f3-logs\") pod \"nova-metadata-0\" (UID: \"91395243-4043-49b6-869c-05d21691a2f3\") " pod="openstack/nova-metadata-0" Dec 06 07:29:59 crc kubenswrapper[4895]: I1206 07:29:59.178669 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91395243-4043-49b6-869c-05d21691a2f3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"91395243-4043-49b6-869c-05d21691a2f3\") " pod="openstack/nova-metadata-0" Dec 06 07:29:59 crc kubenswrapper[4895]: I1206 07:29:59.179291 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/91395243-4043-49b6-869c-05d21691a2f3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"91395243-4043-49b6-869c-05d21691a2f3\") " pod="openstack/nova-metadata-0" Dec 06 07:29:59 crc kubenswrapper[4895]: I1206 07:29:59.184443 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91395243-4043-49b6-869c-05d21691a2f3-config-data\") pod \"nova-metadata-0\" (UID: \"91395243-4043-49b6-869c-05d21691a2f3\") " pod="openstack/nova-metadata-0" Dec 06 07:29:59 crc kubenswrapper[4895]: I1206 07:29:59.190548 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4vb5\" (UniqueName: \"kubernetes.io/projected/91395243-4043-49b6-869c-05d21691a2f3-kube-api-access-d4vb5\") pod \"nova-metadata-0\" (UID: \"91395243-4043-49b6-869c-05d21691a2f3\") " pod="openstack/nova-metadata-0" Dec 06 07:29:59 crc kubenswrapper[4895]: I1206 07:29:59.298280 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 07:29:59 crc kubenswrapper[4895]: I1206 07:29:59.797258 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:29:59 crc kubenswrapper[4895]: I1206 07:29:59.920014 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"91395243-4043-49b6-869c-05d21691a2f3","Type":"ContainerStarted","Data":"5ad5014ecc573a53d8df125c530d4a7edf5640de9db614469a08454fbd38349e"} Dec 06 07:29:59 crc kubenswrapper[4895]: E1206 07:29:59.924756 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4f541993755a72606ca0f59b0dea7ba63171b12d8882d47758aa0e8359d77d56" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 07:29:59 crc kubenswrapper[4895]: E1206 07:29:59.926892 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4f541993755a72606ca0f59b0dea7ba63171b12d8882d47758aa0e8359d77d56" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 07:29:59 crc kubenswrapper[4895]: E1206 07:29:59.928124 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4f541993755a72606ca0f59b0dea7ba63171b12d8882d47758aa0e8359d77d56" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 07:29:59 crc kubenswrapper[4895]: E1206 07:29:59.928189 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="79ad1ed9-7218-46e9-b070-063abf764a57" containerName="nova-scheduler-scheduler" Dec 06 07:30:00 crc kubenswrapper[4895]: I1206 07:30:00.061936 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e752db2a-f112-4a90-8d62-aadc76d29d68" path="/var/lib/kubelet/pods/e752db2a-f112-4a90-8d62-aadc76d29d68/volumes" Dec 06 07:30:00 crc kubenswrapper[4895]: I1206 07:30:00.137602 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt"] Dec 06 07:30:00 crc kubenswrapper[4895]: I1206 07:30:00.140635 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt" Dec 06 07:30:00 crc kubenswrapper[4895]: I1206 07:30:00.144078 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 07:30:00 crc kubenswrapper[4895]: I1206 07:30:00.147180 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 07:30:00 crc kubenswrapper[4895]: I1206 07:30:00.168047 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt"] Dec 06 07:30:00 crc kubenswrapper[4895]: I1206 07:30:00.292632 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7434d222-3dd8-455f-aff3-69f452f63fee-secret-volume\") pod \"collect-profiles-29416770-nfczt\" (UID: \"7434d222-3dd8-455f-aff3-69f452f63fee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt" Dec 06 07:30:00 crc kubenswrapper[4895]: I1206 07:30:00.292827 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7434d222-3dd8-455f-aff3-69f452f63fee-config-volume\") pod \"collect-profiles-29416770-nfczt\" (UID: \"7434d222-3dd8-455f-aff3-69f452f63fee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt" Dec 06 07:30:00 crc kubenswrapper[4895]: I1206 07:30:00.292915 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87lv2\" (UniqueName: \"kubernetes.io/projected/7434d222-3dd8-455f-aff3-69f452f63fee-kube-api-access-87lv2\") pod \"collect-profiles-29416770-nfczt\" (UID: \"7434d222-3dd8-455f-aff3-69f452f63fee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt" Dec 06 07:30:00 crc kubenswrapper[4895]: I1206 07:30:00.395372 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7434d222-3dd8-455f-aff3-69f452f63fee-secret-volume\") pod \"collect-profiles-29416770-nfczt\" (UID: \"7434d222-3dd8-455f-aff3-69f452f63fee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt" Dec 06 07:30:00 crc kubenswrapper[4895]: I1206 07:30:00.395544 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7434d222-3dd8-455f-aff3-69f452f63fee-config-volume\") pod \"collect-profiles-29416770-nfczt\" (UID: \"7434d222-3dd8-455f-aff3-69f452f63fee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt" Dec 06 07:30:00 crc kubenswrapper[4895]: I1206 07:30:00.395626 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87lv2\" (UniqueName: \"kubernetes.io/projected/7434d222-3dd8-455f-aff3-69f452f63fee-kube-api-access-87lv2\") pod \"collect-profiles-29416770-nfczt\" (UID: \"7434d222-3dd8-455f-aff3-69f452f63fee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt" Dec 06 07:30:00 crc kubenswrapper[4895]: I1206 07:30:00.396667 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7434d222-3dd8-455f-aff3-69f452f63fee-config-volume\") pod 
\"collect-profiles-29416770-nfczt\" (UID: \"7434d222-3dd8-455f-aff3-69f452f63fee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt" Dec 06 07:30:00 crc kubenswrapper[4895]: I1206 07:30:00.400611 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7434d222-3dd8-455f-aff3-69f452f63fee-secret-volume\") pod \"collect-profiles-29416770-nfczt\" (UID: \"7434d222-3dd8-455f-aff3-69f452f63fee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt" Dec 06 07:30:00 crc kubenswrapper[4895]: I1206 07:30:00.417685 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87lv2\" (UniqueName: \"kubernetes.io/projected/7434d222-3dd8-455f-aff3-69f452f63fee-kube-api-access-87lv2\") pod \"collect-profiles-29416770-nfczt\" (UID: \"7434d222-3dd8-455f-aff3-69f452f63fee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt" Dec 06 07:30:00 crc kubenswrapper[4895]: I1206 07:30:00.462932 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt" Dec 06 07:30:00 crc kubenswrapper[4895]: I1206 07:30:00.933047 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"91395243-4043-49b6-869c-05d21691a2f3","Type":"ContainerStarted","Data":"9a95fcce3c2cab8baddaf03a621ac95f48d2a4bd2806244375ec84cc508300e8"} Dec 06 07:30:00 crc kubenswrapper[4895]: I1206 07:30:00.933409 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"91395243-4043-49b6-869c-05d21691a2f3","Type":"ContainerStarted","Data":"6a0f924af8e426fda2871fe8bfbefd9d0f577cdb9fbc5f6d3644eb208dfb0a2c"} Dec 06 07:30:00 crc kubenswrapper[4895]: I1206 07:30:00.963340 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.963315333 podStartE2EDuration="2.963315333s" podCreationTimestamp="2025-12-06 07:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:30:00.961029952 +0000 UTC m=+1963.362418832" watchObservedRunningTime="2025-12-06 07:30:00.963315333 +0000 UTC m=+1963.364704203" Dec 06 07:30:00 crc kubenswrapper[4895]: I1206 07:30:00.978835 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt"] Dec 06 07:30:01 crc kubenswrapper[4895]: E1206 07:30:01.397980 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7434d222_3dd8_455f_aff3_69f452f63fee.slice/crio-3bec6b28dcf046d898dfded872eb984e8ca3ccfa7f5bd9211cc395c6d3497aaf.scope\": RecentStats: unable to find data in memory cache]" Dec 06 07:30:01 crc kubenswrapper[4895]: I1206 07:30:01.945450 4895 generic.go:334] "Generic (PLEG): container finished" podID="7434d222-3dd8-455f-aff3-69f452f63fee" containerID="3bec6b28dcf046d898dfded872eb984e8ca3ccfa7f5bd9211cc395c6d3497aaf" exitCode=0 Dec 06 07:30:01 crc kubenswrapper[4895]: I1206 07:30:01.945567 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt" 
event={"ID":"7434d222-3dd8-455f-aff3-69f452f63fee","Type":"ContainerDied","Data":"3bec6b28dcf046d898dfded872eb984e8ca3ccfa7f5bd9211cc395c6d3497aaf"} Dec 06 07:30:01 crc kubenswrapper[4895]: I1206 07:30:01.946057 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt" event={"ID":"7434d222-3dd8-455f-aff3-69f452f63fee","Type":"ContainerStarted","Data":"f95a983dea41f14be3a1e92d85ef1bd1c4b6628e4d9194dbb1a949d9939b84d9"} Dec 06 07:30:02 crc kubenswrapper[4895]: I1206 07:30:02.011581 4895 scope.go:117] "RemoveContainer" containerID="4b9c441e33f352f90714fb491f5e1a626a0ddfe9b8f6d5fc8e38bf0b4cad0cbb" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.306885 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.462719 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7434d222-3dd8-455f-aff3-69f452f63fee-secret-volume\") pod \"7434d222-3dd8-455f-aff3-69f452f63fee\" (UID: \"7434d222-3dd8-455f-aff3-69f452f63fee\") " Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.462846 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7434d222-3dd8-455f-aff3-69f452f63fee-config-volume\") pod \"7434d222-3dd8-455f-aff3-69f452f63fee\" (UID: \"7434d222-3dd8-455f-aff3-69f452f63fee\") " Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.462973 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87lv2\" (UniqueName: \"kubernetes.io/projected/7434d222-3dd8-455f-aff3-69f452f63fee-kube-api-access-87lv2\") pod \"7434d222-3dd8-455f-aff3-69f452f63fee\" (UID: \"7434d222-3dd8-455f-aff3-69f452f63fee\") " Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.464488 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7434d222-3dd8-455f-aff3-69f452f63fee-config-volume" (OuterVolumeSpecName: "config-volume") pod "7434d222-3dd8-455f-aff3-69f452f63fee" (UID: "7434d222-3dd8-455f-aff3-69f452f63fee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.471697 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7434d222-3dd8-455f-aff3-69f452f63fee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7434d222-3dd8-455f-aff3-69f452f63fee" (UID: "7434d222-3dd8-455f-aff3-69f452f63fee"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.487113 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7434d222-3dd8-455f-aff3-69f452f63fee-kube-api-access-87lv2" (OuterVolumeSpecName: "kube-api-access-87lv2") pod "7434d222-3dd8-455f-aff3-69f452f63fee" (UID: "7434d222-3dd8-455f-aff3-69f452f63fee"). InnerVolumeSpecName "kube-api-access-87lv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.487186 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7kbcl"] Dec 06 07:30:03 crc kubenswrapper[4895]: E1206 07:30:03.487612 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7434d222-3dd8-455f-aff3-69f452f63fee" containerName="collect-profiles" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.487632 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7434d222-3dd8-455f-aff3-69f452f63fee" containerName="collect-profiles" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.487873 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7434d222-3dd8-455f-aff3-69f452f63fee" containerName="collect-profiles" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.489315 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7kbcl" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.505882 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7kbcl"] Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.565080 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8355c19-10c2-4971-96ce-52651efbc5ea-utilities\") pod \"certified-operators-7kbcl\" (UID: \"b8355c19-10c2-4971-96ce-52651efbc5ea\") " pod="openshift-marketplace/certified-operators-7kbcl" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.565175 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvb9r\" (UniqueName: \"kubernetes.io/projected/b8355c19-10c2-4971-96ce-52651efbc5ea-kube-api-access-jvb9r\") pod \"certified-operators-7kbcl\" (UID: \"b8355c19-10c2-4971-96ce-52651efbc5ea\") " pod="openshift-marketplace/certified-operators-7kbcl" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.565236 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8355c19-10c2-4971-96ce-52651efbc5ea-catalog-content\") pod \"certified-operators-7kbcl\" (UID: \"b8355c19-10c2-4971-96ce-52651efbc5ea\") " pod="openshift-marketplace/certified-operators-7kbcl" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.565409 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7434d222-3dd8-455f-aff3-69f452f63fee-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.565423 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7434d222-3dd8-455f-aff3-69f452f63fee-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.565433 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87lv2\" (UniqueName: \"kubernetes.io/projected/7434d222-3dd8-455f-aff3-69f452f63fee-kube-api-access-87lv2\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.666827 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8355c19-10c2-4971-96ce-52651efbc5ea-utilities\") pod \"certified-operators-7kbcl\" (UID: 
\"b8355c19-10c2-4971-96ce-52651efbc5ea\") " pod="openshift-marketplace/certified-operators-7kbcl" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.666930 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvb9r\" (UniqueName: \"kubernetes.io/projected/b8355c19-10c2-4971-96ce-52651efbc5ea-kube-api-access-jvb9r\") pod \"certified-operators-7kbcl\" (UID: \"b8355c19-10c2-4971-96ce-52651efbc5ea\") " pod="openshift-marketplace/certified-operators-7kbcl" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.666979 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8355c19-10c2-4971-96ce-52651efbc5ea-catalog-content\") pod \"certified-operators-7kbcl\" (UID: \"b8355c19-10c2-4971-96ce-52651efbc5ea\") " pod="openshift-marketplace/certified-operators-7kbcl" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.667460 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8355c19-10c2-4971-96ce-52651efbc5ea-utilities\") pod \"certified-operators-7kbcl\" (UID: \"b8355c19-10c2-4971-96ce-52651efbc5ea\") " pod="openshift-marketplace/certified-operators-7kbcl" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.667608 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8355c19-10c2-4971-96ce-52651efbc5ea-catalog-content\") pod \"certified-operators-7kbcl\" (UID: \"b8355c19-10c2-4971-96ce-52651efbc5ea\") " pod="openshift-marketplace/certified-operators-7kbcl" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.685854 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvb9r\" (UniqueName: \"kubernetes.io/projected/b8355c19-10c2-4971-96ce-52651efbc5ea-kube-api-access-jvb9r\") pod \"certified-operators-7kbcl\" (UID: \"b8355c19-10c2-4971-96ce-52651efbc5ea\") " pod="openshift-marketplace/certified-operators-7kbcl" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.881181 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7kbcl" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.974074 4895 generic.go:334] "Generic (PLEG): container finished" podID="783ab44f-8f99-41a8-8976-1f209dfa78e3" containerID="613a73c4e6e23364cc9af04ec4bd246e0f2658fbe2ac03e2783159ec09c7e85e" exitCode=0 Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.974166 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"783ab44f-8f99-41a8-8976-1f209dfa78e3","Type":"ContainerDied","Data":"613a73c4e6e23364cc9af04ec4bd246e0f2658fbe2ac03e2783159ec09c7e85e"} Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.990681 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt" event={"ID":"7434d222-3dd8-455f-aff3-69f452f63fee","Type":"ContainerDied","Data":"f95a983dea41f14be3a1e92d85ef1bd1c4b6628e4d9194dbb1a949d9939b84d9"} Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.990735 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f95a983dea41f14be3a1e92d85ef1bd1c4b6628e4d9194dbb1a949d9939b84d9" Dec 06 07:30:03 crc kubenswrapper[4895]: I1206 07:30:03.990802 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt" Dec 06 07:30:04 crc kubenswrapper[4895]: I1206 07:30:04.298648 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 07:30:04 crc kubenswrapper[4895]: I1206 07:30:04.299956 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 07:30:04 crc kubenswrapper[4895]: I1206 07:30:04.508452 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7kbcl"] Dec 06 07:30:04 crc kubenswrapper[4895]: W1206 07:30:04.511700 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8355c19_10c2_4971_96ce_52651efbc5ea.slice/crio-0ab46d8074a137669b360f3d74f98b81163b0a628ff25670126fe3966eacd136 WatchSource:0}: Error finding container 0ab46d8074a137669b360f3d74f98b81163b0a628ff25670126fe3966eacd136: Status 404 returned error can't find the container with id 0ab46d8074a137669b360f3d74f98b81163b0a628ff25670126fe3966eacd136 Dec 06 07:30:04 crc kubenswrapper[4895]: E1206 07:30:04.924337 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f541993755a72606ca0f59b0dea7ba63171b12d8882d47758aa0e8359d77d56 is running failed: container process not found" containerID="4f541993755a72606ca0f59b0dea7ba63171b12d8882d47758aa0e8359d77d56" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 07:30:04 crc kubenswrapper[4895]: E1206 07:30:04.925140 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f541993755a72606ca0f59b0dea7ba63171b12d8882d47758aa0e8359d77d56 is running failed: container process not found" containerID="4f541993755a72606ca0f59b0dea7ba63171b12d8882d47758aa0e8359d77d56" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 07:30:04 crc kubenswrapper[4895]: E1206 07:30:04.925659 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f541993755a72606ca0f59b0dea7ba63171b12d8882d47758aa0e8359d77d56 is running failed: container process not found" containerID="4f541993755a72606ca0f59b0dea7ba63171b12d8882d47758aa0e8359d77d56" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 07:30:04 crc kubenswrapper[4895]: E1206 07:30:04.925715 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f541993755a72606ca0f59b0dea7ba63171b12d8882d47758aa0e8359d77d56 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="79ad1ed9-7218-46e9-b070-063abf764a57" containerName="nova-scheduler-scheduler" Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.003271 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"783ab44f-8f99-41a8-8976-1f209dfa78e3","Type":"ContainerDied","Data":"c0c7f5e364808807de7a0112bd5c3f6f76bffa5d1a2b6ebe3954b012aa1ebbb1"} Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.003316 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0c7f5e364808807de7a0112bd5c3f6f76bffa5d1a2b6ebe3954b012aa1ebbb1" Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.005382 4895 
generic.go:334] "Generic (PLEG): container finished" podID="b8355c19-10c2-4971-96ce-52651efbc5ea" containerID="1365d9aa4ba664ae73652f8137e6b481bd53fe92afc9db531379a481c1a23b59" exitCode=0 Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.005457 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kbcl" event={"ID":"b8355c19-10c2-4971-96ce-52651efbc5ea","Type":"ContainerDied","Data":"1365d9aa4ba664ae73652f8137e6b481bd53fe92afc9db531379a481c1a23b59"} Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.005509 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kbcl" event={"ID":"b8355c19-10c2-4971-96ce-52651efbc5ea","Type":"ContainerStarted","Data":"0ab46d8074a137669b360f3d74f98b81163b0a628ff25670126fe3966eacd136"} Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.007965 4895 generic.go:334] "Generic (PLEG): container finished" podID="79ad1ed9-7218-46e9-b070-063abf764a57" containerID="4f541993755a72606ca0f59b0dea7ba63171b12d8882d47758aa0e8359d77d56" exitCode=0 Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.008005 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"79ad1ed9-7218-46e9-b070-063abf764a57","Type":"ContainerDied","Data":"4f541993755a72606ca0f59b0dea7ba63171b12d8882d47758aa0e8359d77d56"} Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.013098 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"79ad1ed9-7218-46e9-b070-063abf764a57","Type":"ContainerDied","Data":"164b7f69e7984c42bb9ee744a0ba49fa2272a6fef7b25eb256b4370e8313453d"} Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.013138 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="164b7f69e7984c42bb9ee744a0ba49fa2272a6fef7b25eb256b4370e8313453d" Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.031680 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.038077 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.096429 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ad1ed9-7218-46e9-b070-063abf764a57-combined-ca-bundle\") pod \"79ad1ed9-7218-46e9-b070-063abf764a57\" (UID: \"79ad1ed9-7218-46e9-b070-063abf764a57\") " Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.096635 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ad1ed9-7218-46e9-b070-063abf764a57-config-data\") pod \"79ad1ed9-7218-46e9-b070-063abf764a57\" (UID: \"79ad1ed9-7218-46e9-b070-063abf764a57\") " Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.096728 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xklfv\" (UniqueName: \"kubernetes.io/projected/79ad1ed9-7218-46e9-b070-063abf764a57-kube-api-access-xklfv\") pod \"79ad1ed9-7218-46e9-b070-063abf764a57\" (UID: \"79ad1ed9-7218-46e9-b070-063abf764a57\") " Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.107720 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79ad1ed9-7218-46e9-b070-063abf764a57-kube-api-access-xklfv" (OuterVolumeSpecName: "kube-api-access-xklfv") pod "79ad1ed9-7218-46e9-b070-063abf764a57" (UID: "79ad1ed9-7218-46e9-b070-063abf764a57"). InnerVolumeSpecName "kube-api-access-xklfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.140919 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79ad1ed9-7218-46e9-b070-063abf764a57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79ad1ed9-7218-46e9-b070-063abf764a57" (UID: "79ad1ed9-7218-46e9-b070-063abf764a57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.153602 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79ad1ed9-7218-46e9-b070-063abf764a57-config-data" (OuterVolumeSpecName: "config-data") pod "79ad1ed9-7218-46e9-b070-063abf764a57" (UID: "79ad1ed9-7218-46e9-b070-063abf764a57"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.202443 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/783ab44f-8f99-41a8-8976-1f209dfa78e3-logs\") pod \"783ab44f-8f99-41a8-8976-1f209dfa78e3\" (UID: \"783ab44f-8f99-41a8-8976-1f209dfa78e3\") " Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.202662 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8v98\" (UniqueName: \"kubernetes.io/projected/783ab44f-8f99-41a8-8976-1f209dfa78e3-kube-api-access-n8v98\") pod \"783ab44f-8f99-41a8-8976-1f209dfa78e3\" (UID: \"783ab44f-8f99-41a8-8976-1f209dfa78e3\") " Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.202816 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/783ab44f-8f99-41a8-8976-1f209dfa78e3-config-data\") pod \"783ab44f-8f99-41a8-8976-1f209dfa78e3\" (UID: \"783ab44f-8f99-41a8-8976-1f209dfa78e3\") " Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.202898 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783ab44f-8f99-41a8-8976-1f209dfa78e3-combined-ca-bundle\") pod \"783ab44f-8f99-41a8-8976-1f209dfa78e3\" (UID: \"783ab44f-8f99-41a8-8976-1f209dfa78e3\") " Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.203021 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/783ab44f-8f99-41a8-8976-1f209dfa78e3-logs" (OuterVolumeSpecName: "logs") pod "783ab44f-8f99-41a8-8976-1f209dfa78e3" (UID: "783ab44f-8f99-41a8-8976-1f209dfa78e3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.203371 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ad1ed9-7218-46e9-b070-063abf764a57-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.203393 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xklfv\" (UniqueName: \"kubernetes.io/projected/79ad1ed9-7218-46e9-b070-063abf764a57-kube-api-access-xklfv\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.203408 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ad1ed9-7218-46e9-b070-063abf764a57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.203417 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/783ab44f-8f99-41a8-8976-1f209dfa78e3-logs\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.206254 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/783ab44f-8f99-41a8-8976-1f209dfa78e3-kube-api-access-n8v98" (OuterVolumeSpecName: "kube-api-access-n8v98") pod "783ab44f-8f99-41a8-8976-1f209dfa78e3" (UID: "783ab44f-8f99-41a8-8976-1f209dfa78e3"). InnerVolumeSpecName "kube-api-access-n8v98". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:30:05 crc kubenswrapper[4895]: E1206 07:30:05.228981 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/783ab44f-8f99-41a8-8976-1f209dfa78e3-config-data podName:783ab44f-8f99-41a8-8976-1f209dfa78e3 nodeName:}" failed. No retries permitted until 2025-12-06 07:30:05.728946195 +0000 UTC m=+1968.130335065 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/783ab44f-8f99-41a8-8976-1f209dfa78e3-config-data") pod "783ab44f-8f99-41a8-8976-1f209dfa78e3" (UID: "783ab44f-8f99-41a8-8976-1f209dfa78e3") : error deleting /var/lib/kubelet/pods/783ab44f-8f99-41a8-8976-1f209dfa78e3/volume-subpaths: remove /var/lib/kubelet/pods/783ab44f-8f99-41a8-8976-1f209dfa78e3/volume-subpaths: no such file or directory Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.232120 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/783ab44f-8f99-41a8-8976-1f209dfa78e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "783ab44f-8f99-41a8-8976-1f209dfa78e3" (UID: "783ab44f-8f99-41a8-8976-1f209dfa78e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.305290 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8v98\" (UniqueName: \"kubernetes.io/projected/783ab44f-8f99-41a8-8976-1f209dfa78e3-kube-api-access-n8v98\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.305649 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783ab44f-8f99-41a8-8976-1f209dfa78e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.817758 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/783ab44f-8f99-41a8-8976-1f209dfa78e3-config-data\") pod \"783ab44f-8f99-41a8-8976-1f209dfa78e3\" (UID: \"783ab44f-8f99-41a8-8976-1f209dfa78e3\") " Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.823568 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/783ab44f-8f99-41a8-8976-1f209dfa78e3-config-data" (OuterVolumeSpecName: "config-data") pod "783ab44f-8f99-41a8-8976-1f209dfa78e3" (UID: "783ab44f-8f99-41a8-8976-1f209dfa78e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:05 crc kubenswrapper[4895]: I1206 07:30:05.920763 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/783ab44f-8f99-41a8-8976-1f209dfa78e3-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.019455 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kbcl" event={"ID":"b8355c19-10c2-4971-96ce-52651efbc5ea","Type":"ContainerStarted","Data":"9422c294221b1fa96a21f7b3b3417bc72154aecbea2baf3a3256fbde34f4d6e1"} Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.019514 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.019579 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.093900 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.114630 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.129673 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.158463 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.170374 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 07:30:06 crc kubenswrapper[4895]: E1206 07:30:06.170958 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79ad1ed9-7218-46e9-b070-063abf764a57" containerName="nova-scheduler-scheduler" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.170976 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="79ad1ed9-7218-46e9-b070-063abf764a57" containerName="nova-scheduler-scheduler" Dec 06 07:30:06 crc kubenswrapper[4895]: E1206 07:30:06.171002 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="783ab44f-8f99-41a8-8976-1f209dfa78e3" containerName="nova-api-api" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.171009 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="783ab44f-8f99-41a8-8976-1f209dfa78e3" containerName="nova-api-api" Dec 06 07:30:06 crc kubenswrapper[4895]: E1206 07:30:06.171018 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="783ab44f-8f99-41a8-8976-1f209dfa78e3" containerName="nova-api-log" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.171024 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="783ab44f-8f99-41a8-8976-1f209dfa78e3" containerName="nova-api-log" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.171197 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="783ab44f-8f99-41a8-8976-1f209dfa78e3" containerName="nova-api-log" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.171209 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="79ad1ed9-7218-46e9-b070-063abf764a57" containerName="nova-scheduler-scheduler" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.171225 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="783ab44f-8f99-41a8-8976-1f209dfa78e3" containerName="nova-api-api" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.172505 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.176116 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.200869 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.209966 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.211888 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.214734 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.220520 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.333894 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jghm6\" (UniqueName: \"kubernetes.io/projected/e13d1a77-c207-4334-b32f-f2befaf2768c-kube-api-access-jghm6\") pod \"nova-scheduler-0\" (UID: \"e13d1a77-c207-4334-b32f-f2befaf2768c\") " pod="openstack/nova-scheduler-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.333938 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de49f788-1d43-44bd-9d35-eb22835ba7d8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"de49f788-1d43-44bd-9d35-eb22835ba7d8\") " pod="openstack/nova-api-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.333971 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de49f788-1d43-44bd-9d35-eb22835ba7d8-config-data\") pod \"nova-api-0\" (UID: \"de49f788-1d43-44bd-9d35-eb22835ba7d8\") " pod="openstack/nova-api-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.334146 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de49f788-1d43-44bd-9d35-eb22835ba7d8-logs\") pod \"nova-api-0\" (UID: \"de49f788-1d43-44bd-9d35-eb22835ba7d8\") " pod="openstack/nova-api-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.334204 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13d1a77-c207-4334-b32f-f2befaf2768c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e13d1a77-c207-4334-b32f-f2befaf2768c\") " pod="openstack/nova-scheduler-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.334306 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hjtq\" (UniqueName: \"kubernetes.io/projected/de49f788-1d43-44bd-9d35-eb22835ba7d8-kube-api-access-9hjtq\") pod \"nova-api-0\" (UID: \"de49f788-1d43-44bd-9d35-eb22835ba7d8\") " pod="openstack/nova-api-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.334344 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13d1a77-c207-4334-b32f-f2befaf2768c-config-data\") pod \"nova-scheduler-0\" (UID: \"e13d1a77-c207-4334-b32f-f2befaf2768c\") " pod="openstack/nova-scheduler-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.436454 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jghm6\" (UniqueName: \"kubernetes.io/projected/e13d1a77-c207-4334-b32f-f2befaf2768c-kube-api-access-jghm6\") pod \"nova-scheduler-0\" (UID: \"e13d1a77-c207-4334-b32f-f2befaf2768c\") " pod="openstack/nova-scheduler-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.436522 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/de49f788-1d43-44bd-9d35-eb22835ba7d8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"de49f788-1d43-44bd-9d35-eb22835ba7d8\") " pod="openstack/nova-api-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.436550 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de49f788-1d43-44bd-9d35-eb22835ba7d8-config-data\") pod \"nova-api-0\" (UID: \"de49f788-1d43-44bd-9d35-eb22835ba7d8\") " pod="openstack/nova-api-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.436634 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de49f788-1d43-44bd-9d35-eb22835ba7d8-logs\") pod \"nova-api-0\" (UID: \"de49f788-1d43-44bd-9d35-eb22835ba7d8\") " pod="openstack/nova-api-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.436666 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13d1a77-c207-4334-b32f-f2befaf2768c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e13d1a77-c207-4334-b32f-f2befaf2768c\") " pod="openstack/nova-scheduler-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.436730 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hjtq\" (UniqueName: \"kubernetes.io/projected/de49f788-1d43-44bd-9d35-eb22835ba7d8-kube-api-access-9hjtq\") pod \"nova-api-0\" (UID: \"de49f788-1d43-44bd-9d35-eb22835ba7d8\") " pod="openstack/nova-api-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.436762 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13d1a77-c207-4334-b32f-f2befaf2768c-config-data\") pod \"nova-scheduler-0\" (UID: \"e13d1a77-c207-4334-b32f-f2befaf2768c\") " pod="openstack/nova-scheduler-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.437370 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de49f788-1d43-44bd-9d35-eb22835ba7d8-logs\") pod \"nova-api-0\" (UID: \"de49f788-1d43-44bd-9d35-eb22835ba7d8\") " pod="openstack/nova-api-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.445404 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de49f788-1d43-44bd-9d35-eb22835ba7d8-config-data\") pod \"nova-api-0\" (UID: \"de49f788-1d43-44bd-9d35-eb22835ba7d8\") " pod="openstack/nova-api-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.445765 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de49f788-1d43-44bd-9d35-eb22835ba7d8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"de49f788-1d43-44bd-9d35-eb22835ba7d8\") " pod="openstack/nova-api-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.450008 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13d1a77-c207-4334-b32f-f2befaf2768c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e13d1a77-c207-4334-b32f-f2befaf2768c\") " pod="openstack/nova-scheduler-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.454612 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e13d1a77-c207-4334-b32f-f2befaf2768c-config-data\") pod \"nova-scheduler-0\" (UID: \"e13d1a77-c207-4334-b32f-f2befaf2768c\") " pod="openstack/nova-scheduler-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.457672 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jghm6\" (UniqueName: \"kubernetes.io/projected/e13d1a77-c207-4334-b32f-f2befaf2768c-kube-api-access-jghm6\") pod \"nova-scheduler-0\" (UID: \"e13d1a77-c207-4334-b32f-f2befaf2768c\") " pod="openstack/nova-scheduler-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.459139 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hjtq\" (UniqueName: \"kubernetes.io/projected/de49f788-1d43-44bd-9d35-eb22835ba7d8-kube-api-access-9hjtq\") pod \"nova-api-0\" (UID: \"de49f788-1d43-44bd-9d35-eb22835ba7d8\") " pod="openstack/nova-api-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.507740 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.541688 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 07:30:06 crc kubenswrapper[4895]: I1206 07:30:06.978171 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 07:30:06 crc kubenswrapper[4895]: W1206 07:30:06.986367 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode13d1a77_c207_4334_b32f_f2befaf2768c.slice/crio-c1803ae66da608d78f727c0297429d1ab93d8efa9441a5cc12aa3003a2267fa4 WatchSource:0}: Error finding container c1803ae66da608d78f727c0297429d1ab93d8efa9441a5cc12aa3003a2267fa4: Status 404 returned error can't find the container with id c1803ae66da608d78f727c0297429d1ab93d8efa9441a5cc12aa3003a2267fa4 Dec 06 07:30:07 crc kubenswrapper[4895]: I1206 07:30:07.034414 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e13d1a77-c207-4334-b32f-f2befaf2768c","Type":"ContainerStarted","Data":"c1803ae66da608d78f727c0297429d1ab93d8efa9441a5cc12aa3003a2267fa4"} Dec 06 07:30:07 crc kubenswrapper[4895]: I1206 07:30:07.037403 4895 generic.go:334] "Generic (PLEG): container finished" podID="b8355c19-10c2-4971-96ce-52651efbc5ea" containerID="9422c294221b1fa96a21f7b3b3417bc72154aecbea2baf3a3256fbde34f4d6e1" exitCode=0 Dec 06 07:30:07 crc kubenswrapper[4895]: I1206 07:30:07.037508 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kbcl" event={"ID":"b8355c19-10c2-4971-96ce-52651efbc5ea","Type":"ContainerDied","Data":"9422c294221b1fa96a21f7b3b3417bc72154aecbea2baf3a3256fbde34f4d6e1"} Dec 06 07:30:07 crc kubenswrapper[4895]: I1206 07:30:07.042742 4895 generic.go:334] "Generic (PLEG): container finished" podID="faea84b4-6c7f-4d6c-b42e-14ba117920d1" containerID="140b2e29bd8d69af6c9e4cdff4cf64a89681dd2e9b425d6a583968de0bca4fd3" exitCode=0 Dec 06 07:30:07 crc kubenswrapper[4895]: I1206 07:30:07.042785 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hmw66" event={"ID":"faea84b4-6c7f-4d6c-b42e-14ba117920d1","Type":"ContainerDied","Data":"140b2e29bd8d69af6c9e4cdff4cf64a89681dd2e9b425d6a583968de0bca4fd3"} Dec 06 07:30:07 crc kubenswrapper[4895]: I1206 07:30:07.093742 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 
07:30:07 crc kubenswrapper[4895]: W1206 07:30:07.094622 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde49f788_1d43_44bd_9d35_eb22835ba7d8.slice/crio-559604f53db4a601900527f788f6bdc5ed7ee3b4406886f0574844fef4b1e8ca WatchSource:0}: Error finding container 559604f53db4a601900527f788f6bdc5ed7ee3b4406886f0574844fef4b1e8ca: Status 404 returned error can't find the container with id 559604f53db4a601900527f788f6bdc5ed7ee3b4406886f0574844fef4b1e8ca Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.065887 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="783ab44f-8f99-41a8-8976-1f209dfa78e3" path="/var/lib/kubelet/pods/783ab44f-8f99-41a8-8976-1f209dfa78e3/volumes" Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.068240 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79ad1ed9-7218-46e9-b070-063abf764a57" path="/var/lib/kubelet/pods/79ad1ed9-7218-46e9-b070-063abf764a57/volumes" Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.069013 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de49f788-1d43-44bd-9d35-eb22835ba7d8","Type":"ContainerStarted","Data":"da5bb5deff031ab415f53674debfe8d63c42e0184d2b305a5e7857690c3474d1"} Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.069148 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de49f788-1d43-44bd-9d35-eb22835ba7d8","Type":"ContainerStarted","Data":"cf7361d370ce300f85b0b5f90931411dc56f23e1bd6f8b013adabe1f264d5f8e"} Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.069225 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de49f788-1d43-44bd-9d35-eb22835ba7d8","Type":"ContainerStarted","Data":"559604f53db4a601900527f788f6bdc5ed7ee3b4406886f0574844fef4b1e8ca"} Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.069757 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e13d1a77-c207-4334-b32f-f2befaf2768c","Type":"ContainerStarted","Data":"9e24febf64e3ffd54ae69d3eca6fabc0199bffda6a3ea66b2cdfff15ac22fde9"} Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.076084 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kbcl" event={"ID":"b8355c19-10c2-4971-96ce-52651efbc5ea","Type":"ContainerStarted","Data":"b3ec67adb67b9ef7b6bfcd2b3c331e2f663152b630f14929c4da76471c45a91d"} Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.193621 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.193604922 podStartE2EDuration="2.193604922s" podCreationTimestamp="2025-12-06 07:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:30:08.193072047 +0000 UTC m=+1970.594460927" watchObservedRunningTime="2025-12-06 07:30:08.193604922 +0000 UTC m=+1970.594993792" Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.236356 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7kbcl" podStartSLOduration=2.447446698 podStartE2EDuration="5.236325996s" podCreationTimestamp="2025-12-06 07:30:03 +0000 UTC" firstStartedPulling="2025-12-06 07:30:05.009001645 +0000 UTC m=+1967.410390515" lastFinishedPulling="2025-12-06 07:30:07.797880943 +0000 UTC 
m=+1970.199269813" observedRunningTime="2025-12-06 07:30:08.224945604 +0000 UTC m=+1970.626334474" watchObservedRunningTime="2025-12-06 07:30:08.236325996 +0000 UTC m=+1970.637714866" Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.264047 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.264023912 podStartE2EDuration="2.264023912s" podCreationTimestamp="2025-12-06 07:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:30:08.251512539 +0000 UTC m=+1970.652901409" watchObservedRunningTime="2025-12-06 07:30:08.264023912 +0000 UTC m=+1970.665412782" Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.517134 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.530266 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hmw66" Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.581549 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faea84b4-6c7f-4d6c-b42e-14ba117920d1-scripts\") pod \"faea84b4-6c7f-4d6c-b42e-14ba117920d1\" (UID: \"faea84b4-6c7f-4d6c-b42e-14ba117920d1\") " Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.581673 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faea84b4-6c7f-4d6c-b42e-14ba117920d1-config-data\") pod \"faea84b4-6c7f-4d6c-b42e-14ba117920d1\" (UID: \"faea84b4-6c7f-4d6c-b42e-14ba117920d1\") " Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.581890 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6hns\" (UniqueName: \"kubernetes.io/projected/faea84b4-6c7f-4d6c-b42e-14ba117920d1-kube-api-access-h6hns\") pod \"faea84b4-6c7f-4d6c-b42e-14ba117920d1\" (UID: \"faea84b4-6c7f-4d6c-b42e-14ba117920d1\") " Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.581941 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faea84b4-6c7f-4d6c-b42e-14ba117920d1-combined-ca-bundle\") pod \"faea84b4-6c7f-4d6c-b42e-14ba117920d1\" (UID: \"faea84b4-6c7f-4d6c-b42e-14ba117920d1\") " Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.604662 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faea84b4-6c7f-4d6c-b42e-14ba117920d1-scripts" (OuterVolumeSpecName: "scripts") pod "faea84b4-6c7f-4d6c-b42e-14ba117920d1" (UID: "faea84b4-6c7f-4d6c-b42e-14ba117920d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.604724 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faea84b4-6c7f-4d6c-b42e-14ba117920d1-kube-api-access-h6hns" (OuterVolumeSpecName: "kube-api-access-h6hns") pod "faea84b4-6c7f-4d6c-b42e-14ba117920d1" (UID: "faea84b4-6c7f-4d6c-b42e-14ba117920d1"). InnerVolumeSpecName "kube-api-access-h6hns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.628291 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faea84b4-6c7f-4d6c-b42e-14ba117920d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "faea84b4-6c7f-4d6c-b42e-14ba117920d1" (UID: "faea84b4-6c7f-4d6c-b42e-14ba117920d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.657112 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faea84b4-6c7f-4d6c-b42e-14ba117920d1-config-data" (OuterVolumeSpecName: "config-data") pod "faea84b4-6c7f-4d6c-b42e-14ba117920d1" (UID: "faea84b4-6c7f-4d6c-b42e-14ba117920d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.683733 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faea84b4-6c7f-4d6c-b42e-14ba117920d1-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.683769 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faea84b4-6c7f-4d6c-b42e-14ba117920d1-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.683779 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6hns\" (UniqueName: \"kubernetes.io/projected/faea84b4-6c7f-4d6c-b42e-14ba117920d1-kube-api-access-h6hns\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:08 crc kubenswrapper[4895]: I1206 07:30:08.683790 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faea84b4-6c7f-4d6c-b42e-14ba117920d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.087126 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hmw66" event={"ID":"faea84b4-6c7f-4d6c-b42e-14ba117920d1","Type":"ContainerDied","Data":"14752abb2abcc42d0d08426ba29f44c4e3b0a6823809649a7a22045e673f86be"} Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.087211 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14752abb2abcc42d0d08426ba29f44c4e3b0a6823809649a7a22045e673f86be" Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.087655 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hmw66" Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.198334 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 07:30:09 crc kubenswrapper[4895]: E1206 07:30:09.199015 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faea84b4-6c7f-4d6c-b42e-14ba117920d1" containerName="nova-cell1-conductor-db-sync" Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.199043 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="faea84b4-6c7f-4d6c-b42e-14ba117920d1" containerName="nova-cell1-conductor-db-sync" Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.199407 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="faea84b4-6c7f-4d6c-b42e-14ba117920d1" containerName="nova-cell1-conductor-db-sync" Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.200116 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.203676 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.234146 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.296839 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f761e3-7f6a-4c1b-b41d-32a14558a756-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"33f761e3-7f6a-4c1b-b41d-32a14558a756\") " pod="openstack/nova-cell1-conductor-0" Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.296984 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqwmc\" (UniqueName: \"kubernetes.io/projected/33f761e3-7f6a-4c1b-b41d-32a14558a756-kube-api-access-bqwmc\") pod \"nova-cell1-conductor-0\" (UID: \"33f761e3-7f6a-4c1b-b41d-32a14558a756\") " pod="openstack/nova-cell1-conductor-0" Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.297138 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f761e3-7f6a-4c1b-b41d-32a14558a756-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"33f761e3-7f6a-4c1b-b41d-32a14558a756\") " pod="openstack/nova-cell1-conductor-0" Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.298782 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.298822 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.398687 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqwmc\" (UniqueName: \"kubernetes.io/projected/33f761e3-7f6a-4c1b-b41d-32a14558a756-kube-api-access-bqwmc\") pod \"nova-cell1-conductor-0\" (UID: \"33f761e3-7f6a-4c1b-b41d-32a14558a756\") " pod="openstack/nova-cell1-conductor-0" Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.399134 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/33f761e3-7f6a-4c1b-b41d-32a14558a756-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"33f761e3-7f6a-4c1b-b41d-32a14558a756\") " pod="openstack/nova-cell1-conductor-0" Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.399274 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f761e3-7f6a-4c1b-b41d-32a14558a756-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"33f761e3-7f6a-4c1b-b41d-32a14558a756\") " pod="openstack/nova-cell1-conductor-0" Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.407821 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f761e3-7f6a-4c1b-b41d-32a14558a756-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"33f761e3-7f6a-4c1b-b41d-32a14558a756\") " pod="openstack/nova-cell1-conductor-0" Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.423194 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqwmc\" (UniqueName: \"kubernetes.io/projected/33f761e3-7f6a-4c1b-b41d-32a14558a756-kube-api-access-bqwmc\") pod \"nova-cell1-conductor-0\" (UID: \"33f761e3-7f6a-4c1b-b41d-32a14558a756\") " pod="openstack/nova-cell1-conductor-0" Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.425955 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f761e3-7f6a-4c1b-b41d-32a14558a756-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"33f761e3-7f6a-4c1b-b41d-32a14558a756\") " pod="openstack/nova-cell1-conductor-0" Dec 06 07:30:09 crc kubenswrapper[4895]: I1206 07:30:09.542385 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 06 07:30:10 crc kubenswrapper[4895]: I1206 07:30:10.308694 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="91395243-4043-49b6-869c-05d21691a2f3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 07:30:10 crc kubenswrapper[4895]: I1206 07:30:10.308692 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="91395243-4043-49b6-869c-05d21691a2f3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 07:30:10 crc kubenswrapper[4895]: I1206 07:30:10.352297 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 07:30:11 crc kubenswrapper[4895]: I1206 07:30:11.119610 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"33f761e3-7f6a-4c1b-b41d-32a14558a756","Type":"ContainerStarted","Data":"a64d778bdc8c44ff303328a5f385c0fbe2b30f14b75877fc19a0a95d9d843239"} Dec 06 07:30:11 crc kubenswrapper[4895]: I1206 07:30:11.508655 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 06 07:30:13 crc kubenswrapper[4895]: I1206 07:30:13.141328 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"33f761e3-7f6a-4c1b-b41d-32a14558a756","Type":"ContainerStarted","Data":"ab987bf3d17d2a4f80aabf5e63289da7ed99a8572c492b80f9f3397cd06b9b7f"} Dec 06 07:30:13 crc kubenswrapper[4895]: I1206 07:30:13.141706 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 06 07:30:13 crc kubenswrapper[4895]: I1206 07:30:13.180232 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=4.180205821 podStartE2EDuration="4.180205821s" podCreationTimestamp="2025-12-06 07:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:30:13.161451763 +0000 UTC m=+1975.562840653" watchObservedRunningTime="2025-12-06 07:30:13.180205821 +0000 UTC m=+1975.581594691" Dec 06 07:30:13 crc kubenswrapper[4895]: I1206 07:30:13.881593 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7kbcl" Dec 06 07:30:13 crc kubenswrapper[4895]: I1206 07:30:13.881648 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7kbcl" Dec 06 07:30:13 crc kubenswrapper[4895]: I1206 07:30:13.949443 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7kbcl" Dec 06 07:30:14 crc kubenswrapper[4895]: I1206 07:30:14.202008 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7kbcl" Dec 06 07:30:14 crc kubenswrapper[4895]: I1206 07:30:14.266556 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7kbcl"] Dec 06 07:30:14 crc kubenswrapper[4895]: I1206 07:30:14.946028 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/kube-state-metrics-0"] Dec 06 07:30:14 crc kubenswrapper[4895]: I1206 07:30:14.946452 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="0be2f95c-5bc7-4080-8396-382e4d3bd7da" containerName="kube-state-metrics" containerID="cri-o://f7fc0eeeac61074a45e8a57b78362927f6a3286a4c8d6c19aa3d157afad047b4" gracePeriod=30 Dec 06 07:30:15 crc kubenswrapper[4895]: I1206 07:30:15.226268 4895 generic.go:334] "Generic (PLEG): container finished" podID="0be2f95c-5bc7-4080-8396-382e4d3bd7da" containerID="f7fc0eeeac61074a45e8a57b78362927f6a3286a4c8d6c19aa3d157afad047b4" exitCode=2 Dec 06 07:30:15 crc kubenswrapper[4895]: I1206 07:30:15.226866 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0be2f95c-5bc7-4080-8396-382e4d3bd7da","Type":"ContainerDied","Data":"f7fc0eeeac61074a45e8a57b78362927f6a3286a4c8d6c19aa3d157afad047b4"} Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:15.523302 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:15.576517 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q82w\" (UniqueName: \"kubernetes.io/projected/0be2f95c-5bc7-4080-8396-382e4d3bd7da-kube-api-access-2q82w\") pod \"0be2f95c-5bc7-4080-8396-382e4d3bd7da\" (UID: \"0be2f95c-5bc7-4080-8396-382e4d3bd7da\") " Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:15.586691 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0be2f95c-5bc7-4080-8396-382e4d3bd7da-kube-api-access-2q82w" (OuterVolumeSpecName: "kube-api-access-2q82w") pod "0be2f95c-5bc7-4080-8396-382e4d3bd7da" (UID: "0be2f95c-5bc7-4080-8396-382e4d3bd7da"). InnerVolumeSpecName "kube-api-access-2q82w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:15.683677 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q82w\" (UniqueName: \"kubernetes.io/projected/0be2f95c-5bc7-4080-8396-382e4d3bd7da-kube-api-access-2q82w\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.273144 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.273193 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0be2f95c-5bc7-4080-8396-382e4d3bd7da","Type":"ContainerDied","Data":"ff5e8cfb94fa46356d4e381b21cc25c0971edd4d8d9847505fd7ddf866c7d7a7"} Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.273235 4895 scope.go:117] "RemoveContainer" containerID="f7fc0eeeac61074a45e8a57b78362927f6a3286a4c8d6c19aa3d157afad047b4" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.273268 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7kbcl" podUID="b8355c19-10c2-4971-96ce-52651efbc5ea" containerName="registry-server" containerID="cri-o://b3ec67adb67b9ef7b6bfcd2b3c331e2f663152b630f14929c4da76471c45a91d" gracePeriod=2 Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.322050 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.338773 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.352592 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 07:30:16 crc kubenswrapper[4895]: E1206 07:30:16.353188 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be2f95c-5bc7-4080-8396-382e4d3bd7da" containerName="kube-state-metrics" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.353210 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be2f95c-5bc7-4080-8396-382e4d3bd7da" containerName="kube-state-metrics" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.353431 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="0be2f95c-5bc7-4080-8396-382e4d3bd7da" containerName="kube-state-metrics" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.354282 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.357449 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.357515 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.364781 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.398431 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0b21f280-7879-43c2-b1b0-92906707b4cd-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0b21f280-7879-43c2-b1b0-92906707b4cd\") " pod="openstack/kube-state-metrics-0" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.398939 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b21f280-7879-43c2-b1b0-92906707b4cd-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0b21f280-7879-43c2-b1b0-92906707b4cd\") " pod="openstack/kube-state-metrics-0" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.399019 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b21f280-7879-43c2-b1b0-92906707b4cd-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0b21f280-7879-43c2-b1b0-92906707b4cd\") " pod="openstack/kube-state-metrics-0" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.399048 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn59c\" (UniqueName: \"kubernetes.io/projected/0b21f280-7879-43c2-b1b0-92906707b4cd-kube-api-access-rn59c\") pod \"kube-state-metrics-0\" (UID: \"0b21f280-7879-43c2-b1b0-92906707b4cd\") " pod="openstack/kube-state-metrics-0" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.501036 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b21f280-7879-43c2-b1b0-92906707b4cd-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0b21f280-7879-43c2-b1b0-92906707b4cd\") " pod="openstack/kube-state-metrics-0" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.501108 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b21f280-7879-43c2-b1b0-92906707b4cd-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0b21f280-7879-43c2-b1b0-92906707b4cd\") " pod="openstack/kube-state-metrics-0" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.501137 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn59c\" (UniqueName: \"kubernetes.io/projected/0b21f280-7879-43c2-b1b0-92906707b4cd-kube-api-access-rn59c\") pod \"kube-state-metrics-0\" (UID: \"0b21f280-7879-43c2-b1b0-92906707b4cd\") " pod="openstack/kube-state-metrics-0" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.501207 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" 
(UniqueName: \"kubernetes.io/secret/0b21f280-7879-43c2-b1b0-92906707b4cd-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0b21f280-7879-43c2-b1b0-92906707b4cd\") " pod="openstack/kube-state-metrics-0" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.509743 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.511244 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0b21f280-7879-43c2-b1b0-92906707b4cd-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0b21f280-7879-43c2-b1b0-92906707b4cd\") " pod="openstack/kube-state-metrics-0" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.511245 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b21f280-7879-43c2-b1b0-92906707b4cd-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0b21f280-7879-43c2-b1b0-92906707b4cd\") " pod="openstack/kube-state-metrics-0" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.511392 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b21f280-7879-43c2-b1b0-92906707b4cd-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0b21f280-7879-43c2-b1b0-92906707b4cd\") " pod="openstack/kube-state-metrics-0" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.527843 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn59c\" (UniqueName: \"kubernetes.io/projected/0b21f280-7879-43c2-b1b0-92906707b4cd-kube-api-access-rn59c\") pod \"kube-state-metrics-0\" (UID: \"0b21f280-7879-43c2-b1b0-92906707b4cd\") " pod="openstack/kube-state-metrics-0" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.547318 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.547371 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.565974 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.800671 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.807887 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7kbcl" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.910329 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8355c19-10c2-4971-96ce-52651efbc5ea-utilities\") pod \"b8355c19-10c2-4971-96ce-52651efbc5ea\" (UID: \"b8355c19-10c2-4971-96ce-52651efbc5ea\") " Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.910897 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvb9r\" (UniqueName: \"kubernetes.io/projected/b8355c19-10c2-4971-96ce-52651efbc5ea-kube-api-access-jvb9r\") pod \"b8355c19-10c2-4971-96ce-52651efbc5ea\" (UID: \"b8355c19-10c2-4971-96ce-52651efbc5ea\") " Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.910924 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8355c19-10c2-4971-96ce-52651efbc5ea-catalog-content\") pod \"b8355c19-10c2-4971-96ce-52651efbc5ea\" (UID: \"b8355c19-10c2-4971-96ce-52651efbc5ea\") " Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.911133 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8355c19-10c2-4971-96ce-52651efbc5ea-utilities" (OuterVolumeSpecName: "utilities") pod "b8355c19-10c2-4971-96ce-52651efbc5ea" (UID: "b8355c19-10c2-4971-96ce-52651efbc5ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.911763 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8355c19-10c2-4971-96ce-52651efbc5ea-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.916721 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8355c19-10c2-4971-96ce-52651efbc5ea-kube-api-access-jvb9r" (OuterVolumeSpecName: "kube-api-access-jvb9r") pod "b8355c19-10c2-4971-96ce-52651efbc5ea" (UID: "b8355c19-10c2-4971-96ce-52651efbc5ea"). InnerVolumeSpecName "kube-api-access-jvb9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:30:16 crc kubenswrapper[4895]: I1206 07:30:16.965325 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8355c19-10c2-4971-96ce-52651efbc5ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8355c19-10c2-4971-96ce-52651efbc5ea" (UID: "b8355c19-10c2-4971-96ce-52651efbc5ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.014170 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvb9r\" (UniqueName: \"kubernetes.io/projected/b8355c19-10c2-4971-96ce-52651efbc5ea-kube-api-access-jvb9r\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.014201 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8355c19-10c2-4971-96ce-52651efbc5ea-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.283774 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.285863 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" containerName="ceilometer-central-agent" containerID="cri-o://cff324c1edc754d222d39305fc82e0b72b3d6b38ae83ce629a9097130ce185f7" gracePeriod=30 Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.286614 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" containerName="proxy-httpd" containerID="cri-o://d3c55af038dea0efe97ad29d893a8f23eae09485f87076348bb68b1c698a7826" gracePeriod=30 Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.286678 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" containerName="sg-core" containerID="cri-o://fd8ef63afbf61f4fd5dc8446e2f11ba771f90b3ca1a6f03523436a8c8bc6d218" gracePeriod=30 Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.286768 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" containerName="ceilometer-notification-agent" containerID="cri-o://3c3886d87bb3c09ebadd92c3323c573f2e312c60b0b2677b3c3be0e56a8667c3" gracePeriod=30 Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.289643 4895 generic.go:334] "Generic (PLEG): container finished" podID="b8355c19-10c2-4971-96ce-52651efbc5ea" containerID="b3ec67adb67b9ef7b6bfcd2b3c331e2f663152b630f14929c4da76471c45a91d" exitCode=0 Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.289705 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kbcl" event={"ID":"b8355c19-10c2-4971-96ce-52651efbc5ea","Type":"ContainerDied","Data":"b3ec67adb67b9ef7b6bfcd2b3c331e2f663152b630f14929c4da76471c45a91d"} Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.289729 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kbcl" event={"ID":"b8355c19-10c2-4971-96ce-52651efbc5ea","Type":"ContainerDied","Data":"0ab46d8074a137669b360f3d74f98b81163b0a628ff25670126fe3966eacd136"} Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.289746 4895 scope.go:117] "RemoveContainer" containerID="b3ec67adb67b9ef7b6bfcd2b3c331e2f663152b630f14929c4da76471c45a91d" Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.289854 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7kbcl" Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.339060 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.346555 4895 scope.go:117] "RemoveContainer" containerID="9422c294221b1fa96a21f7b3b3417bc72154aecbea2baf3a3256fbde34f4d6e1" Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.350607 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7kbcl"] Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.358168 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7kbcl"] Dec 06 07:30:17 crc kubenswrapper[4895]: W1206 07:30:17.365178 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b21f280_7879_43c2_b1b0_92906707b4cd.slice/crio-57a206edc6557032eb035863456d10e7c0ec8a546f7c46c17d0fc8babb96f3e9 WatchSource:0}: Error finding container 57a206edc6557032eb035863456d10e7c0ec8a546f7c46c17d0fc8babb96f3e9: Status 404 returned error can't find the container with id 57a206edc6557032eb035863456d10e7c0ec8a546f7c46c17d0fc8babb96f3e9 Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.365365 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.403454 4895 scope.go:117] "RemoveContainer" containerID="1365d9aa4ba664ae73652f8137e6b481bd53fe92afc9db531379a481c1a23b59" Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.444154 4895 scope.go:117] "RemoveContainer" containerID="b3ec67adb67b9ef7b6bfcd2b3c331e2f663152b630f14929c4da76471c45a91d" Dec 06 07:30:17 crc kubenswrapper[4895]: E1206 07:30:17.444764 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3ec67adb67b9ef7b6bfcd2b3c331e2f663152b630f14929c4da76471c45a91d\": container with ID starting with b3ec67adb67b9ef7b6bfcd2b3c331e2f663152b630f14929c4da76471c45a91d not found: ID does not exist" containerID="b3ec67adb67b9ef7b6bfcd2b3c331e2f663152b630f14929c4da76471c45a91d" Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.444800 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ec67adb67b9ef7b6bfcd2b3c331e2f663152b630f14929c4da76471c45a91d"} err="failed to get container status \"b3ec67adb67b9ef7b6bfcd2b3c331e2f663152b630f14929c4da76471c45a91d\": rpc error: code = NotFound desc = could not find container \"b3ec67adb67b9ef7b6bfcd2b3c331e2f663152b630f14929c4da76471c45a91d\": container with ID starting with b3ec67adb67b9ef7b6bfcd2b3c331e2f663152b630f14929c4da76471c45a91d not found: ID does not exist" Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.444830 4895 scope.go:117] "RemoveContainer" containerID="9422c294221b1fa96a21f7b3b3417bc72154aecbea2baf3a3256fbde34f4d6e1" Dec 06 07:30:17 crc kubenswrapper[4895]: E1206 07:30:17.445148 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9422c294221b1fa96a21f7b3b3417bc72154aecbea2baf3a3256fbde34f4d6e1\": container with ID starting with 9422c294221b1fa96a21f7b3b3417bc72154aecbea2baf3a3256fbde34f4d6e1 not found: ID does not exist" containerID="9422c294221b1fa96a21f7b3b3417bc72154aecbea2baf3a3256fbde34f4d6e1" Dec 06 07:30:17 crc 
kubenswrapper[4895]: I1206 07:30:17.445175 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9422c294221b1fa96a21f7b3b3417bc72154aecbea2baf3a3256fbde34f4d6e1"} err="failed to get container status \"9422c294221b1fa96a21f7b3b3417bc72154aecbea2baf3a3256fbde34f4d6e1\": rpc error: code = NotFound desc = could not find container \"9422c294221b1fa96a21f7b3b3417bc72154aecbea2baf3a3256fbde34f4d6e1\": container with ID starting with 9422c294221b1fa96a21f7b3b3417bc72154aecbea2baf3a3256fbde34f4d6e1 not found: ID does not exist" Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.445192 4895 scope.go:117] "RemoveContainer" containerID="1365d9aa4ba664ae73652f8137e6b481bd53fe92afc9db531379a481c1a23b59" Dec 06 07:30:17 crc kubenswrapper[4895]: E1206 07:30:17.445465 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1365d9aa4ba664ae73652f8137e6b481bd53fe92afc9db531379a481c1a23b59\": container with ID starting with 1365d9aa4ba664ae73652f8137e6b481bd53fe92afc9db531379a481c1a23b59 not found: ID does not exist" containerID="1365d9aa4ba664ae73652f8137e6b481bd53fe92afc9db531379a481c1a23b59" Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.445503 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1365d9aa4ba664ae73652f8137e6b481bd53fe92afc9db531379a481c1a23b59"} err="failed to get container status \"1365d9aa4ba664ae73652f8137e6b481bd53fe92afc9db531379a481c1a23b59\": rpc error: code = NotFound desc = could not find container \"1365d9aa4ba664ae73652f8137e6b481bd53fe92afc9db531379a481c1a23b59\": container with ID starting with 1365d9aa4ba664ae73652f8137e6b481bd53fe92afc9db531379a481c1a23b59 not found: ID does not exist" Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.631827 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="de49f788-1d43-44bd-9d35-eb22835ba7d8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 07:30:17 crc kubenswrapper[4895]: I1206 07:30:17.632163 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="de49f788-1d43-44bd-9d35-eb22835ba7d8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 07:30:18 crc kubenswrapper[4895]: I1206 07:30:18.084755 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0be2f95c-5bc7-4080-8396-382e4d3bd7da" path="/var/lib/kubelet/pods/0be2f95c-5bc7-4080-8396-382e4d3bd7da/volumes" Dec 06 07:30:18 crc kubenswrapper[4895]: I1206 07:30:18.086423 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8355c19-10c2-4971-96ce-52651efbc5ea" path="/var/lib/kubelet/pods/b8355c19-10c2-4971-96ce-52651efbc5ea/volumes" Dec 06 07:30:18 crc kubenswrapper[4895]: I1206 07:30:18.314452 4895 generic.go:334] "Generic (PLEG): container finished" podID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" containerID="d3c55af038dea0efe97ad29d893a8f23eae09485f87076348bb68b1c698a7826" exitCode=0 Dec 06 07:30:18 crc kubenswrapper[4895]: I1206 07:30:18.314526 4895 generic.go:334] "Generic (PLEG): container finished" podID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" containerID="fd8ef63afbf61f4fd5dc8446e2f11ba771f90b3ca1a6f03523436a8c8bc6d218" exitCode=2 Dec 06 
07:30:18 crc kubenswrapper[4895]: I1206 07:30:18.314541 4895 generic.go:334] "Generic (PLEG): container finished" podID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" containerID="cff324c1edc754d222d39305fc82e0b72b3d6b38ae83ce629a9097130ce185f7" exitCode=0 Dec 06 07:30:18 crc kubenswrapper[4895]: I1206 07:30:18.314523 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54","Type":"ContainerDied","Data":"d3c55af038dea0efe97ad29d893a8f23eae09485f87076348bb68b1c698a7826"} Dec 06 07:30:18 crc kubenswrapper[4895]: I1206 07:30:18.314682 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54","Type":"ContainerDied","Data":"fd8ef63afbf61f4fd5dc8446e2f11ba771f90b3ca1a6f03523436a8c8bc6d218"} Dec 06 07:30:18 crc kubenswrapper[4895]: I1206 07:30:18.314707 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54","Type":"ContainerDied","Data":"cff324c1edc754d222d39305fc82e0b72b3d6b38ae83ce629a9097130ce185f7"} Dec 06 07:30:18 crc kubenswrapper[4895]: I1206 07:30:18.318548 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0b21f280-7879-43c2-b1b0-92906707b4cd","Type":"ContainerStarted","Data":"57a206edc6557032eb035863456d10e7c0ec8a546f7c46c17d0fc8babb96f3e9"} Dec 06 07:30:19 crc kubenswrapper[4895]: I1206 07:30:19.304746 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 07:30:19 crc kubenswrapper[4895]: I1206 07:30:19.306888 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 07:30:19 crc kubenswrapper[4895]: I1206 07:30:19.309913 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 07:30:19 crc kubenswrapper[4895]: I1206 07:30:19.342467 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 07:30:19 crc kubenswrapper[4895]: I1206 07:30:19.580398 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 06 07:30:20 crc kubenswrapper[4895]: I1206 07:30:20.337686 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0b21f280-7879-43c2-b1b0-92906707b4cd","Type":"ContainerStarted","Data":"e60969eee4c68df103db40b7065c6b53b4e367127a610a66a55e83f51a3a1f69"} Dec 06 07:30:20 crc kubenswrapper[4895]: I1206 07:30:20.365488 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.379204991 podStartE2EDuration="4.365456265s" podCreationTimestamp="2025-12-06 07:30:16 +0000 UTC" firstStartedPulling="2025-12-06 07:30:17.406188672 +0000 UTC m=+1979.807577542" lastFinishedPulling="2025-12-06 07:30:19.392439946 +0000 UTC m=+1981.793828816" observedRunningTime="2025-12-06 07:30:20.354977327 +0000 UTC m=+1982.756366197" watchObservedRunningTime="2025-12-06 07:30:20.365456265 +0000 UTC m=+1982.766845135" Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.363082 4895 generic.go:334] "Generic (PLEG): container finished" podID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" containerID="3c3886d87bb3c09ebadd92c3323c573f2e312c60b0b2677b3c3be0e56a8667c3" exitCode=0 Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.363350 4895 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54","Type":"ContainerDied","Data":"3c3886d87bb3c09ebadd92c3323c573f2e312c60b0b2677b3c3be0e56a8667c3"} Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.364211 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.479331 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.506819 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvmpp\" (UniqueName: \"kubernetes.io/projected/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-kube-api-access-fvmpp\") pod \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.506860 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-scripts\") pod \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.506989 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-log-httpd\") pod \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.507105 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-config-data\") pod \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.507201 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-combined-ca-bundle\") pod \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.507241 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-sg-core-conf-yaml\") pod \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.507283 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-run-httpd\") pod \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\" (UID: \"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54\") " Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.509539 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" (UID: "2601bba8-7c7d-4a7f-8d47-5e21b54e6f54"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.510225 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" (UID: "2601bba8-7c7d-4a7f-8d47-5e21b54e6f54"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.518771 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-kube-api-access-fvmpp" (OuterVolumeSpecName: "kube-api-access-fvmpp") pod "2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" (UID: "2601bba8-7c7d-4a7f-8d47-5e21b54e6f54"). InnerVolumeSpecName "kube-api-access-fvmpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.518896 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-scripts" (OuterVolumeSpecName: "scripts") pod "2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" (UID: "2601bba8-7c7d-4a7f-8d47-5e21b54e6f54"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.587087 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" (UID: "2601bba8-7c7d-4a7f-8d47-5e21b54e6f54"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.609344 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.609375 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.609386 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.609396 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvmpp\" (UniqueName: \"kubernetes.io/projected/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-kube-api-access-fvmpp\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.609404 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.618808 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" (UID: "2601bba8-7c7d-4a7f-8d47-5e21b54e6f54"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.638347 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-config-data" (OuterVolumeSpecName: "config-data") pod "2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" (UID: "2601bba8-7c7d-4a7f-8d47-5e21b54e6f54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.711016 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:21 crc kubenswrapper[4895]: I1206 07:30:21.711322 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:22 crc kubenswrapper[4895]: E1206 07:30:22.126133 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod718c90e3_fda8_453f_95d8_e66acce49d16.slice/crio-1e67b077e7567d51c2f6de82f65dfb793823d102ab154c9c444900c95ee3a36f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2601bba8_7c7d_4a7f_8d47_5e21b54e6f54.slice/crio-c614d64c3d7bcea57647e1db047db8af9ff41810c5b8e1874470bb8df32545a3\": RecentStats: unable to find data in memory cache]" Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.378135 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2601bba8-7c7d-4a7f-8d47-5e21b54e6f54","Type":"ContainerDied","Data":"c614d64c3d7bcea57647e1db047db8af9ff41810c5b8e1874470bb8df32545a3"} Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.378191 4895 scope.go:117] "RemoveContainer" containerID="d3c55af038dea0efe97ad29d893a8f23eae09485f87076348bb68b1c698a7826" Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.378188 4895 util.go:48] "No ready sandbox for pod can be found. 
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.380822 4895 generic.go:334] "Generic (PLEG): container finished" podID="718c90e3-fda8-453f-95d8-e66acce49d16" containerID="1e67b077e7567d51c2f6de82f65dfb793823d102ab154c9c444900c95ee3a36f" exitCode=137
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.381778 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"718c90e3-fda8-453f-95d8-e66acce49d16","Type":"ContainerDied","Data":"1e67b077e7567d51c2f6de82f65dfb793823d102ab154c9c444900c95ee3a36f"}
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.415736 4895 scope.go:117] "RemoveContainer" containerID="fd8ef63afbf61f4fd5dc8446e2f11ba771f90b3ca1a6f03523436a8c8bc6d218"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.416316 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.439677 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.449100 4895 scope.go:117] "RemoveContainer" containerID="3c3886d87bb3c09ebadd92c3323c573f2e312c60b0b2677b3c3be0e56a8667c3"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.454949 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 06 07:30:22 crc kubenswrapper[4895]: E1206 07:30:22.455514 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8355c19-10c2-4971-96ce-52651efbc5ea" containerName="extract-content"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.455537 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8355c19-10c2-4971-96ce-52651efbc5ea" containerName="extract-content"
Dec 06 07:30:22 crc kubenswrapper[4895]: E1206 07:30:22.455552 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" containerName="sg-core"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.455560 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" containerName="sg-core"
Dec 06 07:30:22 crc kubenswrapper[4895]: E1206 07:30:22.455587 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" containerName="ceilometer-notification-agent"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.455595 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" containerName="ceilometer-notification-agent"
Dec 06 07:30:22 crc kubenswrapper[4895]: E1206 07:30:22.455604 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8355c19-10c2-4971-96ce-52651efbc5ea" containerName="extract-utilities"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.455612 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8355c19-10c2-4971-96ce-52651efbc5ea" containerName="extract-utilities"
Dec 06 07:30:22 crc kubenswrapper[4895]: E1206 07:30:22.455625 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" containerName="ceilometer-central-agent"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.455632 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" containerName="ceilometer-central-agent"
Dec 06 07:30:22 crc kubenswrapper[4895]: E1206 07:30:22.455650 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8355c19-10c2-4971-96ce-52651efbc5ea" containerName="registry-server"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.455657 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8355c19-10c2-4971-96ce-52651efbc5ea" containerName="registry-server"
Dec 06 07:30:22 crc kubenswrapper[4895]: E1206 07:30:22.455683 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" containerName="proxy-httpd"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.455691 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" containerName="proxy-httpd"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.455906 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" containerName="sg-core"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.455939 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8355c19-10c2-4971-96ce-52651efbc5ea" containerName="registry-server"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.455950 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" containerName="ceilometer-notification-agent"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.455958 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" containerName="proxy-httpd"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.455965 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" containerName="ceilometer-central-agent"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.458057 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.460451 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.460853 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.462566 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.467653 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.483985 4895 scope.go:117] "RemoveContainer" containerID="cff324c1edc754d222d39305fc82e0b72b3d6b38ae83ce629a9097130ce185f7"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.547903 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.548081 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svg96\" (UniqueName: \"kubernetes.io/projected/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-kube-api-access-svg96\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.548184 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.548230 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-log-httpd\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.548263 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-config-data\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.548318 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-scripts\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.548517 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-run-httpd\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.548656 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.650452 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.650609 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svg96\" (UniqueName: \"kubernetes.io/projected/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-kube-api-access-svg96\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.650641 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.650661 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-log-httpd\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.650678 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-config-data\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.650703 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-scripts\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.650750 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-run-httpd\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.650798 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.652451 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-log-httpd\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.656661 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-scripts\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.656951 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.657059 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-run-httpd\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.668230 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.674742 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-config-data\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.675066 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.684691 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svg96\" (UniqueName: \"kubernetes.io/projected/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-kube-api-access-svg96\") pod \"ceilometer-0\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") " pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.794570 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.912012 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
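For the replacement ceilometer-0 (UID e2e4ec35-...), every volume passes through the same three logged stages: attach verification, mount start, SetUp. Secrets, projected volumes, and empty-dirs need no external attach, so verification succeeds immediately and the reconciler moves straight to mounting. A sketch of that pipeline and the per-pod directory layout it populates; function and variable names here are illustrative, not the kubelet's real operationexecutor API.

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func mountVolume(podUID, plugin, name string) {
        fmt.Printf("VerifyControllerAttachedVolume started for %q\n", name)
        fmt.Printf("MountVolume started for %q\n", name)
        // SetUp materialises the volume under the per-pod directory,
        // e.g. /var/lib/kubelet/pods/<uid>/volumes/kubernetes.io~secret/<name>.
        target := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes", plugin, name)
        fmt.Printf("MountVolume.SetUp succeeded: %s\n", target)
    }

    func main() {
        podUID := "e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b"
        for _, v := range []struct{ plugin, name string }{
            {"kubernetes.io~secret", "scripts"},
            {"kubernetes.io~secret", "config-data"},
            {"kubernetes.io~empty-dir", "run-httpd"},
        } {
            mountVolume(podUID, v.plugin, v.name)
        }
    }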
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.956170 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/718c90e3-fda8-453f-95d8-e66acce49d16-combined-ca-bundle\") pod \"718c90e3-fda8-453f-95d8-e66acce49d16\" (UID: \"718c90e3-fda8-453f-95d8-e66acce49d16\") "
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.956247 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/718c90e3-fda8-453f-95d8-e66acce49d16-config-data\") pod \"718c90e3-fda8-453f-95d8-e66acce49d16\" (UID: \"718c90e3-fda8-453f-95d8-e66acce49d16\") "
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.956325 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkkvn\" (UniqueName: \"kubernetes.io/projected/718c90e3-fda8-453f-95d8-e66acce49d16-kube-api-access-jkkvn\") pod \"718c90e3-fda8-453f-95d8-e66acce49d16\" (UID: \"718c90e3-fda8-453f-95d8-e66acce49d16\") "
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.962046 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/718c90e3-fda8-453f-95d8-e66acce49d16-kube-api-access-jkkvn" (OuterVolumeSpecName: "kube-api-access-jkkvn") pod "718c90e3-fda8-453f-95d8-e66acce49d16" (UID: "718c90e3-fda8-453f-95d8-e66acce49d16"). InnerVolumeSpecName "kube-api-access-jkkvn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.987330 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/718c90e3-fda8-453f-95d8-e66acce49d16-config-data" (OuterVolumeSpecName: "config-data") pod "718c90e3-fda8-453f-95d8-e66acce49d16" (UID: "718c90e3-fda8-453f-95d8-e66acce49d16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:30:22 crc kubenswrapper[4895]: I1206 07:30:22.994641 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/718c90e3-fda8-453f-95d8-e66acce49d16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "718c90e3-fda8-453f-95d8-e66acce49d16" (UID: "718c90e3-fda8-453f-95d8-e66acce49d16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.058911 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/718c90e3-fda8-453f-95d8-e66acce49d16-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.058945 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkkvn\" (UniqueName: \"kubernetes.io/projected/718c90e3-fda8-453f-95d8-e66acce49d16-kube-api-access-jkkvn\") on node \"crc\" DevicePath \"\""
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.058956 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/718c90e3-fda8-453f-95d8-e66acce49d16-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.274136 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 07:30:23 crc kubenswrapper[4895]: W1206 07:30:23.277466 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2e4ec35_09c3_4a0e_ad81_4bb6d79f818b.slice/crio-314abcf444a2c5dabab5bc514386c9d94400901eb17d0f82e6aabefdf7aa9fbc WatchSource:0}: Error finding container 314abcf444a2c5dabab5bc514386c9d94400901eb17d0f82e6aabefdf7aa9fbc: Status 404 returned error can't find the container with id 314abcf444a2c5dabab5bc514386c9d94400901eb17d0f82e6aabefdf7aa9fbc
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.400240 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"718c90e3-fda8-453f-95d8-e66acce49d16","Type":"ContainerDied","Data":"095f19aff202e7ed5293a0854d04a6ae6fc50034bee8bea3cac4e89531637e90"}
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.400282 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.400307 4895 scope.go:117] "RemoveContainer" containerID="1e67b077e7567d51c2f6de82f65dfb793823d102ab154c9c444900c95ee3a36f"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.405575 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b","Type":"ContainerStarted","Data":"314abcf444a2c5dabab5bc514386c9d94400901eb17d0f82e6aabefdf7aa9fbc"}
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.441221 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.451747 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.467597 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 07:30:23 crc kubenswrapper[4895]: E1206 07:30:23.468039 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="718c90e3-fda8-453f-95d8-e66acce49d16" containerName="nova-cell1-novncproxy-novncproxy"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.468060 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="718c90e3-fda8-453f-95d8-e66acce49d16" containerName="nova-cell1-novncproxy-novncproxy"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.468313 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="718c90e3-fda8-453f-95d8-e66acce49d16" containerName="nova-cell1-novncproxy-novncproxy"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.469100 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.474553 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.474759 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.477227 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.478096 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.568231 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"549e9969-79a0-45d9-a093-0b58ad1bc359\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.568371 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"549e9969-79a0-45d9-a093-0b58ad1bc359\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.568447 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb44w\" (UniqueName: \"kubernetes.io/projected/549e9969-79a0-45d9-a093-0b58ad1bc359-kube-api-access-fb44w\") pod \"nova-cell1-novncproxy-0\" (UID: \"549e9969-79a0-45d9-a093-0b58ad1bc359\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.568519 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"549e9969-79a0-45d9-a093-0b58ad1bc359\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.568615 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"549e9969-79a0-45d9-a093-0b58ad1bc359\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.670587 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"549e9969-79a0-45d9-a093-0b58ad1bc359\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.671989 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"549e9969-79a0-45d9-a093-0b58ad1bc359\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.672197 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"549e9969-79a0-45d9-a093-0b58ad1bc359\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.672310 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb44w\" (UniqueName: \"kubernetes.io/projected/549e9969-79a0-45d9-a093-0b58ad1bc359-kube-api-access-fb44w\") pod \"nova-cell1-novncproxy-0\" (UID: \"549e9969-79a0-45d9-a093-0b58ad1bc359\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.672364 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"549e9969-79a0-45d9-a093-0b58ad1bc359\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.677981 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"549e9969-79a0-45d9-a093-0b58ad1bc359\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.678086 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"549e9969-79a0-45d9-a093-0b58ad1bc359\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.678564 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"549e9969-79a0-45d9-a093-0b58ad1bc359\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.682079 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"549e9969-79a0-45d9-a093-0b58ad1bc359\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.696174 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb44w\" (UniqueName: \"kubernetes.io/projected/549e9969-79a0-45d9-a093-0b58ad1bc359-kube-api-access-fb44w\") pod \"nova-cell1-novncproxy-0\" (UID: \"549e9969-79a0-45d9-a093-0b58ad1bc359\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:30:23 crc kubenswrapper[4895]: I1206 07:30:23.788341 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:30:24 crc kubenswrapper[4895]: I1206 07:30:24.061564 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2601bba8-7c7d-4a7f-8d47-5e21b54e6f54" path="/var/lib/kubelet/pods/2601bba8-7c7d-4a7f-8d47-5e21b54e6f54/volumes"
Dec 06 07:30:24 crc kubenswrapper[4895]: I1206 07:30:24.062838 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="718c90e3-fda8-453f-95d8-e66acce49d16" path="/var/lib/kubelet/pods/718c90e3-fda8-453f-95d8-e66acce49d16/volumes"
Dec 06 07:30:24 crc kubenswrapper[4895]: I1206 07:30:24.256970 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 07:30:24 crc kubenswrapper[4895]: I1206 07:30:24.444655 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"549e9969-79a0-45d9-a093-0b58ad1bc359","Type":"ContainerStarted","Data":"0c065bf556905aa1adb384bc9e80da8881e65efb45a01f3131fd07885dbc677c"}
Dec 06 07:30:25 crc kubenswrapper[4895]: I1206 07:30:25.456355 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b","Type":"ContainerStarted","Data":"a76736e0071323db965dba0d524eeb0bbd91f3f8a70ce3386efe9d50f346280a"}
Dec 06 07:30:25 crc kubenswrapper[4895]: I1206 07:30:25.456761 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b","Type":"ContainerStarted","Data":"089fa8ef2c9332afb6edff8f7a80d6d041ed815f0b68e929929f2b113483d6d4"}
Dec 06 07:30:25 crc kubenswrapper[4895]: I1206 07:30:25.458269 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"549e9969-79a0-45d9-a093-0b58ad1bc359","Type":"ContainerStarted","Data":"085b9f1f4088bb302b9ed0d8a72f3e2d171c7b741007adb8f5288a5c1833bb96"}
Dec 06 07:30:26 crc kubenswrapper[4895]: I1206 07:30:26.475838 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b","Type":"ContainerStarted","Data":"1e35c328f2a6e0e6c1d349332befa21ec7cb996e704da96ac6a0cd97656ee98a"}
Dec 06 07:30:26 crc kubenswrapper[4895]: I1206 07:30:26.547627 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 06 07:30:26 crc kubenswrapper[4895]: I1206 07:30:26.549078 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 06 07:30:26 crc kubenswrapper[4895]: I1206 07:30:26.551871 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 06 07:30:26 crc kubenswrapper[4895]: I1206 07:30:26.555998 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 06 07:30:26 crc kubenswrapper[4895]: I1206 07:30:26.580662 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.580642669 podStartE2EDuration="3.580642669s" podCreationTimestamp="2025-12-06 07:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:30:25.483763942 +0000 UTC m=+1987.885152812" watchObservedRunningTime="2025-12-06 07:30:26.580642669 +0000 UTC m=+1988.982031539"
Dec 06 07:30:26 crc kubenswrapper[4895]: I1206 07:30:26.814212 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
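The pod_startup_latency_tracker line above decomposes as: podStartE2EDuration = watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration = the E2E duration minus image-pull time (lastFinishedPulling minus firstStartedPulling). Here both pull timestamps are the Go zero time (the images were already on the node), so the two durations are equal at 3.580642669s; the ceilometer-0 line further down checks out the same way, 6.543878423s minus (07:30:28.010266213 minus 07:30:23.2796165) = 1.81322871s. A worked version of that arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339, "2025-12-06T07:30:23Z")
        running, _ := time.Parse(time.RFC3339Nano, "2025-12-06T07:30:26.580642669Z")
        var firstPull, lastPull time.Time // zero values: no pull happened

        e2e := running.Sub(created)          // watchObservedRunningTime - creation
        slo := e2e - lastPull.Sub(firstPull) // subtract image-pull time (0 here)
        fmt.Println(e2e, slo)                // 3.580642669s 3.580642669s
    }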
(probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.486992 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.490383 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.680685 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm"] Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.683237 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.692579 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm"] Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.758188 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-dns-svc\") pod \"dnsmasq-dns-7f9fbbf6f7-j5qpm\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.758258 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-ovsdbserver-sb\") pod \"dnsmasq-dns-7f9fbbf6f7-j5qpm\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.758360 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-ovsdbserver-nb\") pod \"dnsmasq-dns-7f9fbbf6f7-j5qpm\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.758447 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnxvd\" (UniqueName: \"kubernetes.io/projected/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-kube-api-access-pnxvd\") pod \"dnsmasq-dns-7f9fbbf6f7-j5qpm\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.758483 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-dns-swift-storage-0\") pod \"dnsmasq-dns-7f9fbbf6f7-j5qpm\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.759955 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-config\") pod \"dnsmasq-dns-7f9fbbf6f7-j5qpm\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.862649 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnxvd\" (UniqueName: 
\"kubernetes.io/projected/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-kube-api-access-pnxvd\") pod \"dnsmasq-dns-7f9fbbf6f7-j5qpm\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.862713 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-dns-swift-storage-0\") pod \"dnsmasq-dns-7f9fbbf6f7-j5qpm\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.862755 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-config\") pod \"dnsmasq-dns-7f9fbbf6f7-j5qpm\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.862795 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-dns-svc\") pod \"dnsmasq-dns-7f9fbbf6f7-j5qpm\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.862837 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-ovsdbserver-sb\") pod \"dnsmasq-dns-7f9fbbf6f7-j5qpm\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.862880 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-ovsdbserver-nb\") pod \"dnsmasq-dns-7f9fbbf6f7-j5qpm\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.863846 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-dns-svc\") pod \"dnsmasq-dns-7f9fbbf6f7-j5qpm\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.863922 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-config\") pod \"dnsmasq-dns-7f9fbbf6f7-j5qpm\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.864686 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-ovsdbserver-sb\") pod \"dnsmasq-dns-7f9fbbf6f7-j5qpm\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.865512 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-ovsdbserver-nb\") pod \"dnsmasq-dns-7f9fbbf6f7-j5qpm\" (UID: 
\"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.868381 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-dns-swift-storage-0\") pod \"dnsmasq-dns-7f9fbbf6f7-j5qpm\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:27 crc kubenswrapper[4895]: I1206 07:30:27.887129 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnxvd\" (UniqueName: \"kubernetes.io/projected/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-kube-api-access-pnxvd\") pod \"dnsmasq-dns-7f9fbbf6f7-j5qpm\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:28 crc kubenswrapper[4895]: I1206 07:30:28.007829 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:28 crc kubenswrapper[4895]: I1206 07:30:28.498720 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b","Type":"ContainerStarted","Data":"edbebd9a7932983ffc7314ff19eaaddef0363c59a723bc6edf4c42d176f36eeb"} Dec 06 07:30:28 crc kubenswrapper[4895]: I1206 07:30:28.543900 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.81322871 podStartE2EDuration="6.543878423s" podCreationTimestamp="2025-12-06 07:30:22 +0000 UTC" firstStartedPulling="2025-12-06 07:30:23.2796165 +0000 UTC m=+1985.681005370" lastFinishedPulling="2025-12-06 07:30:28.010266213 +0000 UTC m=+1990.411655083" observedRunningTime="2025-12-06 07:30:28.534999627 +0000 UTC m=+1990.936388487" watchObservedRunningTime="2025-12-06 07:30:28.543878423 +0000 UTC m=+1990.945267293" Dec 06 07:30:28 crc kubenswrapper[4895]: I1206 07:30:28.596363 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm"] Dec 06 07:30:28 crc kubenswrapper[4895]: I1206 07:30:28.789700 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 06 07:30:29 crc kubenswrapper[4895]: I1206 07:30:29.511769 4895 generic.go:334] "Generic (PLEG): container finished" podID="f3e26c07-9fe5-4b2b-b7bc-1b008e904792" containerID="06b67abac8970f1634818c6d524e4e49326436da8eb856e8228343ea54865a5d" exitCode=0 Dec 06 07:30:29 crc kubenswrapper[4895]: I1206 07:30:29.511835 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" event={"ID":"f3e26c07-9fe5-4b2b-b7bc-1b008e904792","Type":"ContainerDied","Data":"06b67abac8970f1634818c6d524e4e49326436da8eb856e8228343ea54865a5d"} Dec 06 07:30:29 crc kubenswrapper[4895]: I1206 07:30:29.513133 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" event={"ID":"f3e26c07-9fe5-4b2b-b7bc-1b008e904792","Type":"ContainerStarted","Data":"02976bc0f560ef01053b74fa3719b53859d6b6208b9ed06403281a79423b72ce"} Dec 06 07:30:29 crc kubenswrapper[4895]: I1206 07:30:29.513550 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 07:30:30 crc kubenswrapper[4895]: I1206 07:30:30.327850 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 07:30:30 crc kubenswrapper[4895]: I1206 07:30:30.524604 
4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" event={"ID":"f3e26c07-9fe5-4b2b-b7bc-1b008e904792","Type":"ContainerStarted","Data":"30e02251ada110b0211ea1fdcacbceb80e38625540fb188e0c03ef63b318da58"} Dec 06 07:30:30 crc kubenswrapper[4895]: I1206 07:30:30.527297 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:30 crc kubenswrapper[4895]: I1206 07:30:30.524965 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="de49f788-1d43-44bd-9d35-eb22835ba7d8" containerName="nova-api-api" containerID="cri-o://da5bb5deff031ab415f53674debfe8d63c42e0184d2b305a5e7857690c3474d1" gracePeriod=30 Dec 06 07:30:30 crc kubenswrapper[4895]: I1206 07:30:30.524693 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="de49f788-1d43-44bd-9d35-eb22835ba7d8" containerName="nova-api-log" containerID="cri-o://cf7361d370ce300f85b0b5f90931411dc56f23e1bd6f8b013adabe1f264d5f8e" gracePeriod=30 Dec 06 07:30:31 crc kubenswrapper[4895]: I1206 07:30:31.328778 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" podStartSLOduration=4.328752605 podStartE2EDuration="4.328752605s" podCreationTimestamp="2025-12-06 07:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:30:30.551057834 +0000 UTC m=+1992.952446704" watchObservedRunningTime="2025-12-06 07:30:31.328752605 +0000 UTC m=+1993.730141475" Dec 06 07:30:31 crc kubenswrapper[4895]: I1206 07:30:31.332808 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:30:31 crc kubenswrapper[4895]: I1206 07:30:31.538735 4895 generic.go:334] "Generic (PLEG): container finished" podID="de49f788-1d43-44bd-9d35-eb22835ba7d8" containerID="cf7361d370ce300f85b0b5f90931411dc56f23e1bd6f8b013adabe1f264d5f8e" exitCode=143 Dec 06 07:30:31 crc kubenswrapper[4895]: I1206 07:30:31.539552 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de49f788-1d43-44bd-9d35-eb22835ba7d8","Type":"ContainerDied","Data":"cf7361d370ce300f85b0b5f90931411dc56f23e1bd6f8b013adabe1f264d5f8e"} Dec 06 07:30:31 crc kubenswrapper[4895]: I1206 07:30:31.539824 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" containerName="ceilometer-central-agent" containerID="cri-o://089fa8ef2c9332afb6edff8f7a80d6d041ed815f0b68e929929f2b113483d6d4" gracePeriod=30 Dec 06 07:30:31 crc kubenswrapper[4895]: I1206 07:30:31.540168 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" containerName="proxy-httpd" containerID="cri-o://edbebd9a7932983ffc7314ff19eaaddef0363c59a723bc6edf4c42d176f36eeb" gracePeriod=30 Dec 06 07:30:31 crc kubenswrapper[4895]: I1206 07:30:31.540245 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" containerName="ceilometer-notification-agent" containerID="cri-o://a76736e0071323db965dba0d524eeb0bbd91f3f8a70ce3386efe9d50f346280a" gracePeriod=30 Dec 06 07:30:31 crc kubenswrapper[4895]: I1206 07:30:31.540318 4895 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" containerName="sg-core" containerID="cri-o://1e35c328f2a6e0e6c1d349332befa21ec7cb996e704da96ac6a0cd97656ee98a" gracePeriod=30 Dec 06 07:30:32 crc kubenswrapper[4895]: I1206 07:30:32.564856 4895 generic.go:334] "Generic (PLEG): container finished" podID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" containerID="edbebd9a7932983ffc7314ff19eaaddef0363c59a723bc6edf4c42d176f36eeb" exitCode=0 Dec 06 07:30:32 crc kubenswrapper[4895]: I1206 07:30:32.565531 4895 generic.go:334] "Generic (PLEG): container finished" podID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" containerID="1e35c328f2a6e0e6c1d349332befa21ec7cb996e704da96ac6a0cd97656ee98a" exitCode=2 Dec 06 07:30:32 crc kubenswrapper[4895]: I1206 07:30:32.565546 4895 generic.go:334] "Generic (PLEG): container finished" podID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" containerID="a76736e0071323db965dba0d524eeb0bbd91f3f8a70ce3386efe9d50f346280a" exitCode=0 Dec 06 07:30:32 crc kubenswrapper[4895]: I1206 07:30:32.565480 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b","Type":"ContainerDied","Data":"edbebd9a7932983ffc7314ff19eaaddef0363c59a723bc6edf4c42d176f36eeb"} Dec 06 07:30:32 crc kubenswrapper[4895]: I1206 07:30:32.565597 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b","Type":"ContainerDied","Data":"1e35c328f2a6e0e6c1d349332befa21ec7cb996e704da96ac6a0cd97656ee98a"} Dec 06 07:30:32 crc kubenswrapper[4895]: I1206 07:30:32.565621 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b","Type":"ContainerDied","Data":"a76736e0071323db965dba0d524eeb0bbd91f3f8a70ce3386efe9d50f346280a"} Dec 06 07:30:33 crc kubenswrapper[4895]: I1206 07:30:33.578711 4895 generic.go:334] "Generic (PLEG): container finished" podID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" containerID="089fa8ef2c9332afb6edff8f7a80d6d041ed815f0b68e929929f2b113483d6d4" exitCode=0 Dec 06 07:30:33 crc kubenswrapper[4895]: I1206 07:30:33.578753 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b","Type":"ContainerDied","Data":"089fa8ef2c9332afb6edff8f7a80d6d041ed815f0b68e929929f2b113483d6d4"} Dec 06 07:30:33 crc kubenswrapper[4895]: I1206 07:30:33.789630 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 06 07:30:33 crc kubenswrapper[4895]: I1206 07:30:33.813620 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.491432 4895 util.go:48] "No ready sandbox for pod can be found. 
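"Killing container with a grace period" sends SIGTERM and escalates to SIGKILL only if the container outlives the grace period (30s here). The exit codes interleaved above follow the usual 128+signal convention; a small decoder:

    package main

    import "fmt"

    // decode interprets a container exit code under the 128+signal convention.
    func decode(code int) string {
        switch {
        case code == 0:
            return "clean exit"
        case code > 128:
            return fmt.Sprintf("killed by signal %d", code-128)
        default:
            return fmt.Sprintf("application error status %d", code)
        }
    }

    func main() {
        for _, c := range []int{143, 137, 2, 0} {
            fmt.Printf("exitCode=%d: %s\n", c, decode(c))
        }
        // 143 = SIGTERM honoured within the grace period (nova-api-log),
        // 137 = SIGKILL, i.e. the process outlived the grace period
        //       (the earlier nova-cell1-novncproxy-0 container),
        // 2   = sg-core returned its own error status,
        // 0   = containers that shut down cleanly on SIGTERM.
    }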
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.602677 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-log-httpd\") pod \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") "
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.602776 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svg96\" (UniqueName: \"kubernetes.io/projected/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-kube-api-access-svg96\") pod \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") "
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.602862 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-config-data\") pod \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") "
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.602952 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-combined-ca-bundle\") pod \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") "
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.603108 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-sg-core-conf-yaml\") pod \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") "
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.603215 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-scripts\") pod \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") "
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.603284 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-run-httpd\") pod \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") "
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.603316 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-ceilometer-tls-certs\") pod \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\" (UID: \"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b\") "
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.605105 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" (UID: "e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.607267 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" (UID: "e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.630724 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b","Type":"ContainerDied","Data":"314abcf444a2c5dabab5bc514386c9d94400901eb17d0f82e6aabefdf7aa9fbc"}
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.630767 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.630797 4895 scope.go:117] "RemoveContainer" containerID="edbebd9a7932983ffc7314ff19eaaddef0363c59a723bc6edf4c42d176f36eeb"
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.637078 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-kube-api-access-svg96" (OuterVolumeSpecName: "kube-api-access-svg96") pod "e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" (UID: "e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b"). InnerVolumeSpecName "kube-api-access-svg96". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.641546 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-scripts" (OuterVolumeSpecName: "scripts") pod "e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" (UID: "e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.653079 4895 generic.go:334] "Generic (PLEG): container finished" podID="de49f788-1d43-44bd-9d35-eb22835ba7d8" containerID="da5bb5deff031ab415f53674debfe8d63c42e0184d2b305a5e7857690c3474d1" exitCode=0
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.653159 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de49f788-1d43-44bd-9d35-eb22835ba7d8","Type":"ContainerDied","Data":"da5bb5deff031ab415f53674debfe8d63c42e0184d2b305a5e7857690c3474d1"}
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.675734 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" (UID: "e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.682742 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.687296 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" (UID: "e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.712327 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.712572 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svg96\" (UniqueName: \"kubernetes.io/projected/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-kube-api-access-svg96\") on node \"crc\" DevicePath \"\""
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.716058 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.716356 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.716460 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.716626 4895 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.743544 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" (UID: "e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.807393 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-config-data" (OuterVolumeSpecName: "config-data") pod "e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" (UID: "e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.820443 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.820742 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.844911 4895 scope.go:117] "RemoveContainer" containerID="1e35c328f2a6e0e6c1d349332befa21ec7cb996e704da96ac6a0cd97656ee98a"
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.848676 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-pgrzs"]
Dec 06 07:30:34 crc kubenswrapper[4895]: E1206 07:30:34.849331 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" containerName="sg-core"
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.849353 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" containerName="sg-core"
Dec 06 07:30:34 crc kubenswrapper[4895]: E1206 07:30:34.849395 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" containerName="ceilometer-central-agent"
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.849404 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" containerName="ceilometer-central-agent"
Dec 06 07:30:34 crc kubenswrapper[4895]: E1206 07:30:34.849417 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" containerName="ceilometer-notification-agent"
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.849425 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" containerName="ceilometer-notification-agent"
Dec 06 07:30:34 crc kubenswrapper[4895]: E1206 07:30:34.849447 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" containerName="proxy-httpd"
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.849455 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" containerName="proxy-httpd"
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.849751 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" containerName="proxy-httpd"
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.849775 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" containerName="ceilometer-notification-agent"
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.849820 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" containerName="ceilometer-central-agent"
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.849859 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" containerName="sg-core"
Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.850621 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Need to start a new one" pod="openstack/nova-api-0" Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.850866 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pgrzs" Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.855069 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.857011 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.889341 4895 scope.go:117] "RemoveContainer" containerID="a76736e0071323db965dba0d524eeb0bbd91f3f8a70ce3386efe9d50f346280a" Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.894548 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-pgrzs"] Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.921770 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de49f788-1d43-44bd-9d35-eb22835ba7d8-combined-ca-bundle\") pod \"de49f788-1d43-44bd-9d35-eb22835ba7d8\" (UID: \"de49f788-1d43-44bd-9d35-eb22835ba7d8\") " Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.921807 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hjtq\" (UniqueName: \"kubernetes.io/projected/de49f788-1d43-44bd-9d35-eb22835ba7d8-kube-api-access-9hjtq\") pod \"de49f788-1d43-44bd-9d35-eb22835ba7d8\" (UID: \"de49f788-1d43-44bd-9d35-eb22835ba7d8\") " Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.921852 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de49f788-1d43-44bd-9d35-eb22835ba7d8-config-data\") pod \"de49f788-1d43-44bd-9d35-eb22835ba7d8\" (UID: \"de49f788-1d43-44bd-9d35-eb22835ba7d8\") " Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.921918 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de49f788-1d43-44bd-9d35-eb22835ba7d8-logs\") pod \"de49f788-1d43-44bd-9d35-eb22835ba7d8\" (UID: \"de49f788-1d43-44bd-9d35-eb22835ba7d8\") " Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.922159 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pgrzs\" (UID: \"9d6f8ca1-f659-4ab6-86bf-c67f017c4166\") " pod="openstack/nova-cell1-cell-mapping-pgrzs" Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.922199 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-config-data\") pod \"nova-cell1-cell-mapping-pgrzs\" (UID: \"9d6f8ca1-f659-4ab6-86bf-c67f017c4166\") " pod="openstack/nova-cell1-cell-mapping-pgrzs" Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.922246 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg5jh\" (UniqueName: \"kubernetes.io/projected/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-kube-api-access-rg5jh\") pod \"nova-cell1-cell-mapping-pgrzs\" (UID: \"9d6f8ca1-f659-4ab6-86bf-c67f017c4166\") " 
pod="openstack/nova-cell1-cell-mapping-pgrzs" Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.922312 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-scripts\") pod \"nova-cell1-cell-mapping-pgrzs\" (UID: \"9d6f8ca1-f659-4ab6-86bf-c67f017c4166\") " pod="openstack/nova-cell1-cell-mapping-pgrzs" Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.922750 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de49f788-1d43-44bd-9d35-eb22835ba7d8-logs" (OuterVolumeSpecName: "logs") pod "de49f788-1d43-44bd-9d35-eb22835ba7d8" (UID: "de49f788-1d43-44bd-9d35-eb22835ba7d8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.927571 4895 scope.go:117] "RemoveContainer" containerID="089fa8ef2c9332afb6edff8f7a80d6d041ed815f0b68e929929f2b113483d6d4" Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.927575 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de49f788-1d43-44bd-9d35-eb22835ba7d8-kube-api-access-9hjtq" (OuterVolumeSpecName: "kube-api-access-9hjtq") pod "de49f788-1d43-44bd-9d35-eb22835ba7d8" (UID: "de49f788-1d43-44bd-9d35-eb22835ba7d8"). InnerVolumeSpecName "kube-api-access-9hjtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.964999 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de49f788-1d43-44bd-9d35-eb22835ba7d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de49f788-1d43-44bd-9d35-eb22835ba7d8" (UID: "de49f788-1d43-44bd-9d35-eb22835ba7d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:34 crc kubenswrapper[4895]: I1206 07:30:34.969027 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de49f788-1d43-44bd-9d35-eb22835ba7d8-config-data" (OuterVolumeSpecName: "config-data") pod "de49f788-1d43-44bd-9d35-eb22835ba7d8" (UID: "de49f788-1d43-44bd-9d35-eb22835ba7d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.009284 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.024316 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pgrzs\" (UID: \"9d6f8ca1-f659-4ab6-86bf-c67f017c4166\") " pod="openstack/nova-cell1-cell-mapping-pgrzs" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.024393 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-config-data\") pod \"nova-cell1-cell-mapping-pgrzs\" (UID: \"9d6f8ca1-f659-4ab6-86bf-c67f017c4166\") " pod="openstack/nova-cell1-cell-mapping-pgrzs" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.024452 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg5jh\" (UniqueName: \"kubernetes.io/projected/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-kube-api-access-rg5jh\") pod \"nova-cell1-cell-mapping-pgrzs\" (UID: \"9d6f8ca1-f659-4ab6-86bf-c67f017c4166\") " pod="openstack/nova-cell1-cell-mapping-pgrzs" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.024563 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-scripts\") pod \"nova-cell1-cell-mapping-pgrzs\" (UID: \"9d6f8ca1-f659-4ab6-86bf-c67f017c4166\") " pod="openstack/nova-cell1-cell-mapping-pgrzs" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.024838 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de49f788-1d43-44bd-9d35-eb22835ba7d8-logs\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.024863 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de49f788-1d43-44bd-9d35-eb22835ba7d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.024879 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hjtq\" (UniqueName: \"kubernetes.io/projected/de49f788-1d43-44bd-9d35-eb22835ba7d8-kube-api-access-9hjtq\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.024892 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de49f788-1d43-44bd-9d35-eb22835ba7d8-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.036485 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.039253 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pgrzs\" (UID: \"9d6f8ca1-f659-4ab6-86bf-c67f017c4166\") " pod="openstack/nova-cell1-cell-mapping-pgrzs" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.039942 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-scripts\") pod \"nova-cell1-cell-mapping-pgrzs\" (UID: \"9d6f8ca1-f659-4ab6-86bf-c67f017c4166\") " pod="openstack/nova-cell1-cell-mapping-pgrzs" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.040542 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-config-data\") pod \"nova-cell1-cell-mapping-pgrzs\" (UID: \"9d6f8ca1-f659-4ab6-86bf-c67f017c4166\") " pod="openstack/nova-cell1-cell-mapping-pgrzs" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.051824 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg5jh\" (UniqueName: \"kubernetes.io/projected/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-kube-api-access-rg5jh\") pod \"nova-cell1-cell-mapping-pgrzs\" (UID: \"9d6f8ca1-f659-4ab6-86bf-c67f017c4166\") " pod="openstack/nova-cell1-cell-mapping-pgrzs" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.051898 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:30:35 crc kubenswrapper[4895]: E1206 07:30:35.052365 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de49f788-1d43-44bd-9d35-eb22835ba7d8" containerName="nova-api-api" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.052381 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="de49f788-1d43-44bd-9d35-eb22835ba7d8" containerName="nova-api-api" Dec 06 07:30:35 crc kubenswrapper[4895]: E1206 07:30:35.052409 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de49f788-1d43-44bd-9d35-eb22835ba7d8" containerName="nova-api-log" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.052416 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="de49f788-1d43-44bd-9d35-eb22835ba7d8" containerName="nova-api-log" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.052669 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="de49f788-1d43-44bd-9d35-eb22835ba7d8" containerName="nova-api-api" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.052698 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="de49f788-1d43-44bd-9d35-eb22835ba7d8" containerName="nova-api-log" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.054741 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.059053 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.059702 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.059939 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.061328 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.126150 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.126429 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-log-httpd\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.126873 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-config-data\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.127022 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.127251 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-scripts\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.127452 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.127628 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-run-httpd\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.127810 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxbfx\" (UniqueName: 
\"kubernetes.io/projected/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-kube-api-access-mxbfx\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.191118 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pgrzs" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.229759 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.230160 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-scripts\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.230197 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.230247 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-run-httpd\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.230308 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxbfx\" (UniqueName: \"kubernetes.io/projected/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-kube-api-access-mxbfx\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.230376 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.230491 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-log-httpd\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.230714 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-config-data\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0" Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.231155 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-run-httpd\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0" 
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.231241 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-log-httpd\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0"
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.234433 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0"
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.235279 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0"
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.235957 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-scripts\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0"
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.236300 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0"
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.242207 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-config-data\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0"
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.250220 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxbfx\" (UniqueName: \"kubernetes.io/projected/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-kube-api-access-mxbfx\") pod \"ceilometer-0\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " pod="openstack/ceilometer-0"
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.374391 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.680146 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.681998 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de49f788-1d43-44bd-9d35-eb22835ba7d8","Type":"ContainerDied","Data":"559604f53db4a601900527f788f6bdc5ed7ee3b4406886f0574844fef4b1e8ca"}
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.682065 4895 scope.go:117] "RemoveContainer" containerID="da5bb5deff031ab415f53674debfe8d63c42e0184d2b305a5e7857690c3474d1"
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.721177 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-pgrzs"]
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.763562 4895 scope.go:117] "RemoveContainer" containerID="cf7361d370ce300f85b0b5f90931411dc56f23e1bd6f8b013adabe1f264d5f8e"
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.801566 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.833456 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.892558 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.894548 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.897661 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.906150 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.911336 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.911654 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.942492 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.950003 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " pod="openstack/nova-api-0"
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.950073 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68ef3f64-1c59-4280-8d57-59dceebdd183-logs\") pod \"nova-api-0\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " pod="openstack/nova-api-0"
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.950104 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9k8v\" (UniqueName: \"kubernetes.io/projected/68ef3f64-1c59-4280-8d57-59dceebdd183-kube-api-access-t9k8v\") pod \"nova-api-0\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " pod="openstack/nova-api-0"
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.950170 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-internal-tls-certs\") pod \"nova-api-0\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " pod="openstack/nova-api-0"
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.950238 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-config-data\") pod \"nova-api-0\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " pod="openstack/nova-api-0"
Dec 06 07:30:35 crc kubenswrapper[4895]: I1206 07:30:35.950282 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-public-tls-certs\") pod \"nova-api-0\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " pod="openstack/nova-api-0"
Dec 06 07:30:36 crc kubenswrapper[4895]: I1206 07:30:36.052148 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " pod="openstack/nova-api-0"
Dec 06 07:30:36 crc kubenswrapper[4895]: I1206 07:30:36.052210 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68ef3f64-1c59-4280-8d57-59dceebdd183-logs\") pod \"nova-api-0\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " pod="openstack/nova-api-0"
Dec 06 07:30:36 crc kubenswrapper[4895]: I1206 07:30:36.052240 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9k8v\" (UniqueName: \"kubernetes.io/projected/68ef3f64-1c59-4280-8d57-59dceebdd183-kube-api-access-t9k8v\") pod \"nova-api-0\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " pod="openstack/nova-api-0"
Dec 06 07:30:36 crc kubenswrapper[4895]: I1206 07:30:36.052300 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-internal-tls-certs\") pod \"nova-api-0\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " pod="openstack/nova-api-0"
Dec 06 07:30:36 crc kubenswrapper[4895]: I1206 07:30:36.052386 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-config-data\") pod \"nova-api-0\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " pod="openstack/nova-api-0"
Dec 06 07:30:36 crc kubenswrapper[4895]: I1206 07:30:36.052425 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-public-tls-certs\") pod \"nova-api-0\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " pod="openstack/nova-api-0"
Dec 06 07:30:36 crc kubenswrapper[4895]: I1206 07:30:36.053056 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68ef3f64-1c59-4280-8d57-59dceebdd183-logs\") pod \"nova-api-0\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " pod="openstack/nova-api-0"
Dec 06 07:30:36 crc kubenswrapper[4895]: I1206 07:30:36.057109 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-internal-tls-certs\") pod \"nova-api-0\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " pod="openstack/nova-api-0"
(UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-internal-tls-certs\") pod \"nova-api-0\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " pod="openstack/nova-api-0" Dec 06 07:30:36 crc kubenswrapper[4895]: I1206 07:30:36.057158 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-public-tls-certs\") pod \"nova-api-0\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " pod="openstack/nova-api-0" Dec 06 07:30:36 crc kubenswrapper[4895]: I1206 07:30:36.057164 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-config-data\") pod \"nova-api-0\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " pod="openstack/nova-api-0" Dec 06 07:30:36 crc kubenswrapper[4895]: I1206 07:30:36.057767 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " pod="openstack/nova-api-0" Dec 06 07:30:36 crc kubenswrapper[4895]: I1206 07:30:36.062463 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de49f788-1d43-44bd-9d35-eb22835ba7d8" path="/var/lib/kubelet/pods/de49f788-1d43-44bd-9d35-eb22835ba7d8/volumes" Dec 06 07:30:36 crc kubenswrapper[4895]: I1206 07:30:36.063092 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b" path="/var/lib/kubelet/pods/e2e4ec35-09c3-4a0e-ad81-4bb6d79f818b/volumes" Dec 06 07:30:36 crc kubenswrapper[4895]: I1206 07:30:36.072821 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9k8v\" (UniqueName: \"kubernetes.io/projected/68ef3f64-1c59-4280-8d57-59dceebdd183-kube-api-access-t9k8v\") pod \"nova-api-0\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " pod="openstack/nova-api-0" Dec 06 07:30:36 crc kubenswrapper[4895]: I1206 07:30:36.262944 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 07:30:36 crc kubenswrapper[4895]: I1206 07:30:36.691725 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970","Type":"ContainerStarted","Data":"11765b9ca5a2400328fdc886e0cdc6549f6e06b20e872e0db9a78ad0cc5af704"} Dec 06 07:30:36 crc kubenswrapper[4895]: I1206 07:30:36.695924 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pgrzs" event={"ID":"9d6f8ca1-f659-4ab6-86bf-c67f017c4166","Type":"ContainerStarted","Data":"6e1556b516a1334be67aa2e2b979bb696a89de71989f4de65e18076edd4df9f5"} Dec 06 07:30:36 crc kubenswrapper[4895]: I1206 07:30:36.695971 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pgrzs" event={"ID":"9d6f8ca1-f659-4ab6-86bf-c67f017c4166","Type":"ContainerStarted","Data":"803e0c8bd86a0e5d539861c153eedb6dad1b6c6905dfa2dd0ef219bd9eb8ef0e"} Dec 06 07:30:36 crc kubenswrapper[4895]: I1206 07:30:36.721761 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-pgrzs" podStartSLOduration=2.721708805 podStartE2EDuration="2.721708805s" podCreationTimestamp="2025-12-06 07:30:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:30:36.717062812 +0000 UTC m=+1999.118451692" watchObservedRunningTime="2025-12-06 07:30:36.721708805 +0000 UTC m=+1999.123097675" Dec 06 07:30:36 crc kubenswrapper[4895]: I1206 07:30:36.738447 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 07:30:36 crc kubenswrapper[4895]: W1206 07:30:36.749691 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68ef3f64_1c59_4280_8d57_59dceebdd183.slice/crio-7c87445c346a0ee35400509fa4b3f7be11e91feeb926b6a744f73047c89c7058 WatchSource:0}: Error finding container 7c87445c346a0ee35400509fa4b3f7be11e91feeb926b6a744f73047c89c7058: Status 404 returned error can't find the container with id 7c87445c346a0ee35400509fa4b3f7be11e91feeb926b6a744f73047c89c7058 Dec 06 07:30:37 crc kubenswrapper[4895]: I1206 07:30:37.711390 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970","Type":"ContainerStarted","Data":"bb9f938ac64e7d7cade7485c2af22ce637c88ad45a88b16d33b845f5c54d44ad"} Dec 06 07:30:37 crc kubenswrapper[4895]: I1206 07:30:37.711777 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970","Type":"ContainerStarted","Data":"1aad065597e380c56e43f02c6466794b5700558a3248e33cb15ece6837bdd069"} Dec 06 07:30:37 crc kubenswrapper[4895]: I1206 07:30:37.716030 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"68ef3f64-1c59-4280-8d57-59dceebdd183","Type":"ContainerStarted","Data":"2ef2e0a44efc98b5a1f45150f0fa73c1628e5c408df6e4e46ff280bd335831bc"} Dec 06 07:30:37 crc kubenswrapper[4895]: I1206 07:30:37.716066 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"68ef3f64-1c59-4280-8d57-59dceebdd183","Type":"ContainerStarted","Data":"ed2d74d500aefb2cafbe99683440da912a2fb3e6e064de0efe0ae7160db4e8dc"} Dec 06 07:30:37 crc kubenswrapper[4895]: I1206 07:30:37.716080 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"68ef3f64-1c59-4280-8d57-59dceebdd183","Type":"ContainerStarted","Data":"7c87445c346a0ee35400509fa4b3f7be11e91feeb926b6a744f73047c89c7058"} Dec 06 07:30:37 crc kubenswrapper[4895]: I1206 07:30:37.744812 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.744793833 podStartE2EDuration="2.744793833s" podCreationTimestamp="2025-12-06 07:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:30:37.742640085 +0000 UTC m=+2000.144028975" watchObservedRunningTime="2025-12-06 07:30:37.744793833 +0000 UTC m=+2000.146182713" Dec 06 07:30:38 crc kubenswrapper[4895]: I1206 07:30:38.012073 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:30:38 crc kubenswrapper[4895]: I1206 07:30:38.106221 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bd87576bf-vb7hb"] Dec 06 07:30:38 crc kubenswrapper[4895]: I1206 07:30:38.106521 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" podUID="3a782179-04fe-4b9c-a05a-27f14cb5ddf6" containerName="dnsmasq-dns" containerID="cri-o://6efe3cd9ffddc1dd6a5a17f97c93720554708e478eff343374cae1421f501af3" gracePeriod=10 Dec 06 07:30:38 crc kubenswrapper[4895]: I1206 07:30:38.725690 4895 generic.go:334] "Generic (PLEG): container finished" podID="3a782179-04fe-4b9c-a05a-27f14cb5ddf6" containerID="6efe3cd9ffddc1dd6a5a17f97c93720554708e478eff343374cae1421f501af3" exitCode=0 Dec 06 07:30:38 crc kubenswrapper[4895]: I1206 07:30:38.725775 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" event={"ID":"3a782179-04fe-4b9c-a05a-27f14cb5ddf6","Type":"ContainerDied","Data":"6efe3cd9ffddc1dd6a5a17f97c93720554708e478eff343374cae1421f501af3"} Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.423638 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.539354 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-dns-svc\") pod \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.539537 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-dns-swift-storage-0\") pod \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.539558 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7sbf\" (UniqueName: \"kubernetes.io/projected/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-kube-api-access-f7sbf\") pod \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.541446 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-ovsdbserver-nb\") pod \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.541518 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-config\") pod \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.541538 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-ovsdbserver-sb\") pod \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\" (UID: \"3a782179-04fe-4b9c-a05a-27f14cb5ddf6\") " Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.546225 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-kube-api-access-f7sbf" (OuterVolumeSpecName: "kube-api-access-f7sbf") pod "3a782179-04fe-4b9c-a05a-27f14cb5ddf6" (UID: "3a782179-04fe-4b9c-a05a-27f14cb5ddf6"). InnerVolumeSpecName "kube-api-access-f7sbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.595612 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3a782179-04fe-4b9c-a05a-27f14cb5ddf6" (UID: "3a782179-04fe-4b9c-a05a-27f14cb5ddf6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.596989 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3a782179-04fe-4b9c-a05a-27f14cb5ddf6" (UID: "3a782179-04fe-4b9c-a05a-27f14cb5ddf6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.597780 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3a782179-04fe-4b9c-a05a-27f14cb5ddf6" (UID: "3a782179-04fe-4b9c-a05a-27f14cb5ddf6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.598711 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3a782179-04fe-4b9c-a05a-27f14cb5ddf6" (UID: "3a782179-04fe-4b9c-a05a-27f14cb5ddf6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.623798 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-config" (OuterVolumeSpecName: "config") pod "3a782179-04fe-4b9c-a05a-27f14cb5ddf6" (UID: "3a782179-04fe-4b9c-a05a-27f14cb5ddf6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.644759 4895 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.644794 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7sbf\" (UniqueName: \"kubernetes.io/projected/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-kube-api-access-f7sbf\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.644806 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.644815 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.644824 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.644832 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a782179-04fe-4b9c-a05a-27f14cb5ddf6-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.749843 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970","Type":"ContainerStarted","Data":"24e243d683f4d1e457142494c7ae9f7b77e4d37cba6f0178e3fdb24f773dad15"} Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.752555 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" 
event={"ID":"3a782179-04fe-4b9c-a05a-27f14cb5ddf6","Type":"ContainerDied","Data":"fca4044c338114527389d898a677d162105b43c6612aa37374ee2e3110d4bd7f"} Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.752595 4895 scope.go:117] "RemoveContainer" containerID="6efe3cd9ffddc1dd6a5a17f97c93720554708e478eff343374cae1421f501af3" Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.752850 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd87576bf-vb7hb" Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.809401 4895 scope.go:117] "RemoveContainer" containerID="a0a6f82cbf80eac9d14afed35ceed4d1ff024109c579614d75c4c7bef85ed700" Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.826288 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bd87576bf-vb7hb"] Dec 06 07:30:39 crc kubenswrapper[4895]: I1206 07:30:39.837935 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bd87576bf-vb7hb"] Dec 06 07:30:40 crc kubenswrapper[4895]: I1206 07:30:40.062459 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a782179-04fe-4b9c-a05a-27f14cb5ddf6" path="/var/lib/kubelet/pods/3a782179-04fe-4b9c-a05a-27f14cb5ddf6/volumes" Dec 06 07:30:41 crc kubenswrapper[4895]: I1206 07:30:41.786926 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970","Type":"ContainerStarted","Data":"f608b8f2d91f359e66e47a2170b12aa2033c9aa3fd46ebe5c13e759c317a8d0b"} Dec 06 07:30:41 crc kubenswrapper[4895]: I1206 07:30:41.787726 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 07:30:41 crc kubenswrapper[4895]: I1206 07:30:41.822989 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.071287857 podStartE2EDuration="7.822960628s" podCreationTimestamp="2025-12-06 07:30:34 +0000 UTC" firstStartedPulling="2025-12-06 07:30:35.965846053 +0000 UTC m=+1998.367234933" lastFinishedPulling="2025-12-06 07:30:40.717518834 +0000 UTC m=+2003.118907704" observedRunningTime="2025-12-06 07:30:41.815768987 +0000 UTC m=+2004.217157857" watchObservedRunningTime="2025-12-06 07:30:41.822960628 +0000 UTC m=+2004.224349498" Dec 06 07:30:42 crc kubenswrapper[4895]: I1206 07:30:42.803962 4895 generic.go:334] "Generic (PLEG): container finished" podID="9d6f8ca1-f659-4ab6-86bf-c67f017c4166" containerID="6e1556b516a1334be67aa2e2b979bb696a89de71989f4de65e18076edd4df9f5" exitCode=0 Dec 06 07:30:42 crc kubenswrapper[4895]: I1206 07:30:42.804078 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pgrzs" event={"ID":"9d6f8ca1-f659-4ab6-86bf-c67f017c4166","Type":"ContainerDied","Data":"6e1556b516a1334be67aa2e2b979bb696a89de71989f4de65e18076edd4df9f5"} Dec 06 07:30:44 crc kubenswrapper[4895]: I1206 07:30:44.317911 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pgrzs" Dec 06 07:30:44 crc kubenswrapper[4895]: I1206 07:30:44.372160 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-combined-ca-bundle\") pod \"9d6f8ca1-f659-4ab6-86bf-c67f017c4166\" (UID: \"9d6f8ca1-f659-4ab6-86bf-c67f017c4166\") " Dec 06 07:30:44 crc kubenswrapper[4895]: I1206 07:30:44.421177 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg5jh\" (UniqueName: \"kubernetes.io/projected/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-kube-api-access-rg5jh\") pod \"9d6f8ca1-f659-4ab6-86bf-c67f017c4166\" (UID: \"9d6f8ca1-f659-4ab6-86bf-c67f017c4166\") " Dec 06 07:30:44 crc kubenswrapper[4895]: I1206 07:30:44.421320 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-config-data\") pod \"9d6f8ca1-f659-4ab6-86bf-c67f017c4166\" (UID: \"9d6f8ca1-f659-4ab6-86bf-c67f017c4166\") " Dec 06 07:30:44 crc kubenswrapper[4895]: I1206 07:30:44.421384 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-scripts\") pod \"9d6f8ca1-f659-4ab6-86bf-c67f017c4166\" (UID: \"9d6f8ca1-f659-4ab6-86bf-c67f017c4166\") " Dec 06 07:30:44 crc kubenswrapper[4895]: I1206 07:30:44.430838 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-scripts" (OuterVolumeSpecName: "scripts") pod "9d6f8ca1-f659-4ab6-86bf-c67f017c4166" (UID: "9d6f8ca1-f659-4ab6-86bf-c67f017c4166"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:44 crc kubenswrapper[4895]: I1206 07:30:44.445290 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-kube-api-access-rg5jh" (OuterVolumeSpecName: "kube-api-access-rg5jh") pod "9d6f8ca1-f659-4ab6-86bf-c67f017c4166" (UID: "9d6f8ca1-f659-4ab6-86bf-c67f017c4166"). InnerVolumeSpecName "kube-api-access-rg5jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:30:44 crc kubenswrapper[4895]: I1206 07:30:44.458695 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d6f8ca1-f659-4ab6-86bf-c67f017c4166" (UID: "9d6f8ca1-f659-4ab6-86bf-c67f017c4166"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:44 crc kubenswrapper[4895]: I1206 07:30:44.485059 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-config-data" (OuterVolumeSpecName: "config-data") pod "9d6f8ca1-f659-4ab6-86bf-c67f017c4166" (UID: "9d6f8ca1-f659-4ab6-86bf-c67f017c4166"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:44 crc kubenswrapper[4895]: I1206 07:30:44.525195 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:44 crc kubenswrapper[4895]: I1206 07:30:44.525243 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg5jh\" (UniqueName: \"kubernetes.io/projected/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-kube-api-access-rg5jh\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:44 crc kubenswrapper[4895]: I1206 07:30:44.525266 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:44 crc kubenswrapper[4895]: I1206 07:30:44.525280 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d6f8ca1-f659-4ab6-86bf-c67f017c4166-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:44 crc kubenswrapper[4895]: I1206 07:30:44.833294 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pgrzs" event={"ID":"9d6f8ca1-f659-4ab6-86bf-c67f017c4166","Type":"ContainerDied","Data":"803e0c8bd86a0e5d539861c153eedb6dad1b6c6905dfa2dd0ef219bd9eb8ef0e"} Dec 06 07:30:44 crc kubenswrapper[4895]: I1206 07:30:44.833341 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="803e0c8bd86a0e5d539861c153eedb6dad1b6c6905dfa2dd0ef219bd9eb8ef0e" Dec 06 07:30:44 crc kubenswrapper[4895]: I1206 07:30:44.833421 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pgrzs" Dec 06 07:30:45 crc kubenswrapper[4895]: I1206 07:30:45.015917 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 07:30:45 crc kubenswrapper[4895]: I1206 07:30:45.016372 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="68ef3f64-1c59-4280-8d57-59dceebdd183" containerName="nova-api-log" containerID="cri-o://ed2d74d500aefb2cafbe99683440da912a2fb3e6e064de0efe0ae7160db4e8dc" gracePeriod=30 Dec 06 07:30:45 crc kubenswrapper[4895]: I1206 07:30:45.016408 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="68ef3f64-1c59-4280-8d57-59dceebdd183" containerName="nova-api-api" containerID="cri-o://2ef2e0a44efc98b5a1f45150f0fa73c1628e5c408df6e4e46ff280bd335831bc" gracePeriod=30 Dec 06 07:30:45 crc kubenswrapper[4895]: I1206 07:30:45.033923 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 07:30:45 crc kubenswrapper[4895]: I1206 07:30:45.034235 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e13d1a77-c207-4334-b32f-f2befaf2768c" containerName="nova-scheduler-scheduler" containerID="cri-o://9e24febf64e3ffd54ae69d3eca6fabc0199bffda6a3ea66b2cdfff15ac22fde9" gracePeriod=30 Dec 06 07:30:45 crc kubenswrapper[4895]: I1206 07:30:45.055379 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:30:45 crc kubenswrapper[4895]: I1206 07:30:45.055819 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="91395243-4043-49b6-869c-05d21691a2f3" 
containerName="nova-metadata-log" containerID="cri-o://6a0f924af8e426fda2871fe8bfbefd9d0f577cdb9fbc5f6d3644eb208dfb0a2c" gracePeriod=30 Dec 06 07:30:45 crc kubenswrapper[4895]: I1206 07:30:45.055849 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="91395243-4043-49b6-869c-05d21691a2f3" containerName="nova-metadata-metadata" containerID="cri-o://9a95fcce3c2cab8baddaf03a621ac95f48d2a4bd2806244375ec84cc508300e8" gracePeriod=30 Dec 06 07:30:45 crc kubenswrapper[4895]: I1206 07:30:45.849173 4895 generic.go:334] "Generic (PLEG): container finished" podID="91395243-4043-49b6-869c-05d21691a2f3" containerID="6a0f924af8e426fda2871fe8bfbefd9d0f577cdb9fbc5f6d3644eb208dfb0a2c" exitCode=143 Dec 06 07:30:45 crc kubenswrapper[4895]: I1206 07:30:45.849276 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"91395243-4043-49b6-869c-05d21691a2f3","Type":"ContainerDied","Data":"6a0f924af8e426fda2871fe8bfbefd9d0f577cdb9fbc5f6d3644eb208dfb0a2c"} Dec 06 07:30:45 crc kubenswrapper[4895]: I1206 07:30:45.852100 4895 generic.go:334] "Generic (PLEG): container finished" podID="68ef3f64-1c59-4280-8d57-59dceebdd183" containerID="2ef2e0a44efc98b5a1f45150f0fa73c1628e5c408df6e4e46ff280bd335831bc" exitCode=0 Dec 06 07:30:45 crc kubenswrapper[4895]: I1206 07:30:45.852140 4895 generic.go:334] "Generic (PLEG): container finished" podID="68ef3f64-1c59-4280-8d57-59dceebdd183" containerID="ed2d74d500aefb2cafbe99683440da912a2fb3e6e064de0efe0ae7160db4e8dc" exitCode=143 Dec 06 07:30:45 crc kubenswrapper[4895]: I1206 07:30:45.852162 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"68ef3f64-1c59-4280-8d57-59dceebdd183","Type":"ContainerDied","Data":"2ef2e0a44efc98b5a1f45150f0fa73c1628e5c408df6e4e46ff280bd335831bc"} Dec 06 07:30:45 crc kubenswrapper[4895]: I1206 07:30:45.852225 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"68ef3f64-1c59-4280-8d57-59dceebdd183","Type":"ContainerDied","Data":"ed2d74d500aefb2cafbe99683440da912a2fb3e6e064de0efe0ae7160db4e8dc"} Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.162900 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.257663 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-internal-tls-certs\") pod \"68ef3f64-1c59-4280-8d57-59dceebdd183\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.259333 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-config-data\") pod \"68ef3f64-1c59-4280-8d57-59dceebdd183\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.259697 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-public-tls-certs\") pod \"68ef3f64-1c59-4280-8d57-59dceebdd183\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.260143 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-combined-ca-bundle\") pod \"68ef3f64-1c59-4280-8d57-59dceebdd183\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.260275 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9k8v\" (UniqueName: \"kubernetes.io/projected/68ef3f64-1c59-4280-8d57-59dceebdd183-kube-api-access-t9k8v\") pod \"68ef3f64-1c59-4280-8d57-59dceebdd183\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.260463 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68ef3f64-1c59-4280-8d57-59dceebdd183-logs\") pod \"68ef3f64-1c59-4280-8d57-59dceebdd183\" (UID: \"68ef3f64-1c59-4280-8d57-59dceebdd183\") " Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.260725 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68ef3f64-1c59-4280-8d57-59dceebdd183-logs" (OuterVolumeSpecName: "logs") pod "68ef3f64-1c59-4280-8d57-59dceebdd183" (UID: "68ef3f64-1c59-4280-8d57-59dceebdd183"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.261190 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68ef3f64-1c59-4280-8d57-59dceebdd183-logs\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.266127 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68ef3f64-1c59-4280-8d57-59dceebdd183-kube-api-access-t9k8v" (OuterVolumeSpecName: "kube-api-access-t9k8v") pod "68ef3f64-1c59-4280-8d57-59dceebdd183" (UID: "68ef3f64-1c59-4280-8d57-59dceebdd183"). InnerVolumeSpecName "kube-api-access-t9k8v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.293753 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-config-data" (OuterVolumeSpecName: "config-data") pod "68ef3f64-1c59-4280-8d57-59dceebdd183" (UID: "68ef3f64-1c59-4280-8d57-59dceebdd183"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.295654 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68ef3f64-1c59-4280-8d57-59dceebdd183" (UID: "68ef3f64-1c59-4280-8d57-59dceebdd183"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.325695 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "68ef3f64-1c59-4280-8d57-59dceebdd183" (UID: "68ef3f64-1c59-4280-8d57-59dceebdd183"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.326335 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "68ef3f64-1c59-4280-8d57-59dceebdd183" (UID: "68ef3f64-1c59-4280-8d57-59dceebdd183"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.363848 4895 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.363930 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.363945 4895 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.363979 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ef3f64-1c59-4280-8d57-59dceebdd183-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.363991 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9k8v\" (UniqueName: \"kubernetes.io/projected/68ef3f64-1c59-4280-8d57-59dceebdd183-kube-api-access-t9k8v\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:46 crc kubenswrapper[4895]: E1206 07:30:46.519968 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9e24febf64e3ffd54ae69d3eca6fabc0199bffda6a3ea66b2cdfff15ac22fde9" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 07:30:46 crc kubenswrapper[4895]: E1206 07:30:46.521967 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9e24febf64e3ffd54ae69d3eca6fabc0199bffda6a3ea66b2cdfff15ac22fde9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 07:30:46 crc kubenswrapper[4895]: E1206 07:30:46.530291 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9e24febf64e3ffd54ae69d3eca6fabc0199bffda6a3ea66b2cdfff15ac22fde9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 07:30:46 crc kubenswrapper[4895]: E1206 07:30:46.530384 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e13d1a77-c207-4334-b32f-f2befaf2768c" containerName="nova-scheduler-scheduler" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.868992 4895 generic.go:334] "Generic (PLEG): container finished" podID="e13d1a77-c207-4334-b32f-f2befaf2768c" containerID="9e24febf64e3ffd54ae69d3eca6fabc0199bffda6a3ea66b2cdfff15ac22fde9" exitCode=0 Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.869096 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e13d1a77-c207-4334-b32f-f2befaf2768c","Type":"ContainerDied","Data":"9e24febf64e3ffd54ae69d3eca6fabc0199bffda6a3ea66b2cdfff15ac22fde9"} Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.871697 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"68ef3f64-1c59-4280-8d57-59dceebdd183","Type":"ContainerDied","Data":"7c87445c346a0ee35400509fa4b3f7be11e91feeb926b6a744f73047c89c7058"} Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.871752 4895 scope.go:117] "RemoveContainer" containerID="2ef2e0a44efc98b5a1f45150f0fa73c1628e5c408df6e4e46ff280bd335831bc" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.871778 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.931920 4895 scope.go:117] "RemoveContainer" containerID="ed2d74d500aefb2cafbe99683440da912a2fb3e6e064de0efe0ae7160db4e8dc" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.934307 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.959145 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.972979 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 06 07:30:46 crc kubenswrapper[4895]: E1206 07:30:46.973509 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a782179-04fe-4b9c-a05a-27f14cb5ddf6" containerName="dnsmasq-dns" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.973528 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a782179-04fe-4b9c-a05a-27f14cb5ddf6" containerName="dnsmasq-dns" Dec 06 07:30:46 crc kubenswrapper[4895]: E1206 07:30:46.973542 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d6f8ca1-f659-4ab6-86bf-c67f017c4166" containerName="nova-manage" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.973549 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6f8ca1-f659-4ab6-86bf-c67f017c4166" containerName="nova-manage" Dec 06 07:30:46 crc kubenswrapper[4895]: E1206 07:30:46.973568 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a782179-04fe-4b9c-a05a-27f14cb5ddf6" containerName="init" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.973576 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a782179-04fe-4b9c-a05a-27f14cb5ddf6" containerName="init" Dec 06 07:30:46 crc kubenswrapper[4895]: E1206 07:30:46.973596 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68ef3f64-1c59-4280-8d57-59dceebdd183" containerName="nova-api-api" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.973605 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="68ef3f64-1c59-4280-8d57-59dceebdd183" containerName="nova-api-api" Dec 06 07:30:46 crc kubenswrapper[4895]: E1206 07:30:46.973625 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68ef3f64-1c59-4280-8d57-59dceebdd183" containerName="nova-api-log" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.973633 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="68ef3f64-1c59-4280-8d57-59dceebdd183" containerName="nova-api-log" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.973868 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="68ef3f64-1c59-4280-8d57-59dceebdd183" containerName="nova-api-log" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.973898 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="68ef3f64-1c59-4280-8d57-59dceebdd183" containerName="nova-api-api" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.973909 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a782179-04fe-4b9c-a05a-27f14cb5ddf6" containerName="dnsmasq-dns" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.973926 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d6f8ca1-f659-4ab6-86bf-c67f017c4166" containerName="nova-manage" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.975342 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.980740 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.981008 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.982775 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 06 07:30:46 crc kubenswrapper[4895]: I1206 07:30:46.991219 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.028253 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4vnb\" (UniqueName: \"kubernetes.io/projected/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-kube-api-access-m4vnb\") pod \"nova-api-0\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " pod="openstack/nova-api-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.028308 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-config-data\") pod \"nova-api-0\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " pod="openstack/nova-api-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.028361 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-logs\") pod \"nova-api-0\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " pod="openstack/nova-api-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.028406 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " pod="openstack/nova-api-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.028508 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " pod="openstack/nova-api-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.028538 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-public-tls-certs\") pod \"nova-api-0\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " pod="openstack/nova-api-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.130532 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4vnb\" (UniqueName: \"kubernetes.io/projected/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-kube-api-access-m4vnb\") pod \"nova-api-0\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " pod="openstack/nova-api-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.130656 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-config-data\") pod 
\"nova-api-0\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " pod="openstack/nova-api-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.131758 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-logs\") pod \"nova-api-0\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " pod="openstack/nova-api-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.131926 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " pod="openstack/nova-api-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.132052 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " pod="openstack/nova-api-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.132089 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-public-tls-certs\") pod \"nova-api-0\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " pod="openstack/nova-api-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.132551 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-logs\") pod \"nova-api-0\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " pod="openstack/nova-api-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.135972 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " pod="openstack/nova-api-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.138991 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " pod="openstack/nova-api-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.139068 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-public-tls-certs\") pod \"nova-api-0\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " pod="openstack/nova-api-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.144298 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-config-data\") pod \"nova-api-0\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " pod="openstack/nova-api-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.153568 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4vnb\" (UniqueName: \"kubernetes.io/projected/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-kube-api-access-m4vnb\") pod \"nova-api-0\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " 
pod="openstack/nova-api-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.295490 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.622668 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.745954 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13d1a77-c207-4334-b32f-f2befaf2768c-config-data\") pod \"e13d1a77-c207-4334-b32f-f2befaf2768c\" (UID: \"e13d1a77-c207-4334-b32f-f2befaf2768c\") " Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.746092 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jghm6\" (UniqueName: \"kubernetes.io/projected/e13d1a77-c207-4334-b32f-f2befaf2768c-kube-api-access-jghm6\") pod \"e13d1a77-c207-4334-b32f-f2befaf2768c\" (UID: \"e13d1a77-c207-4334-b32f-f2befaf2768c\") " Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.746286 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13d1a77-c207-4334-b32f-f2befaf2768c-combined-ca-bundle\") pod \"e13d1a77-c207-4334-b32f-f2befaf2768c\" (UID: \"e13d1a77-c207-4334-b32f-f2befaf2768c\") " Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.755873 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e13d1a77-c207-4334-b32f-f2befaf2768c-kube-api-access-jghm6" (OuterVolumeSpecName: "kube-api-access-jghm6") pod "e13d1a77-c207-4334-b32f-f2befaf2768c" (UID: "e13d1a77-c207-4334-b32f-f2befaf2768c"). InnerVolumeSpecName "kube-api-access-jghm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.825927 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13d1a77-c207-4334-b32f-f2befaf2768c-config-data" (OuterVolumeSpecName: "config-data") pod "e13d1a77-c207-4334-b32f-f2befaf2768c" (UID: "e13d1a77-c207-4334-b32f-f2befaf2768c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.827065 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13d1a77-c207-4334-b32f-f2befaf2768c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e13d1a77-c207-4334-b32f-f2befaf2768c" (UID: "e13d1a77-c207-4334-b32f-f2befaf2768c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.848719 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13d1a77-c207-4334-b32f-f2befaf2768c-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.848755 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jghm6\" (UniqueName: \"kubernetes.io/projected/e13d1a77-c207-4334-b32f-f2befaf2768c-kube-api-access-jghm6\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.848767 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13d1a77-c207-4334-b32f-f2befaf2768c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.868901 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.885068 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e13d1a77-c207-4334-b32f-f2befaf2768c","Type":"ContainerDied","Data":"c1803ae66da608d78f727c0297429d1ab93d8efa9441a5cc12aa3003a2267fa4"} Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.885095 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.885170 4895 scope.go:117] "RemoveContainer" containerID="9e24febf64e3ffd54ae69d3eca6fabc0199bffda6a3ea66b2cdfff15ac22fde9" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.886726 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124","Type":"ContainerStarted","Data":"a935bace671adbd26aaaae2fa45e0848dddd673cdbd0fbfdb49bc14c3952f0d6"} Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.926561 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.945505 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.959036 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 07:30:47 crc kubenswrapper[4895]: E1206 07:30:47.959588 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13d1a77-c207-4334-b32f-f2befaf2768c" containerName="nova-scheduler-scheduler" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.959612 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13d1a77-c207-4334-b32f-f2befaf2768c" containerName="nova-scheduler-scheduler" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.959861 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e13d1a77-c207-4334-b32f-f2befaf2768c" containerName="nova-scheduler-scheduler" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.960718 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.964282 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 06 07:30:47 crc kubenswrapper[4895]: I1206 07:30:47.969269 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.053354 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvsh5\" (UniqueName: \"kubernetes.io/projected/53540f68-ae58-4d76-870b-3cc4b77eb1e3-kube-api-access-gvsh5\") pod \"nova-scheduler-0\" (UID: \"53540f68-ae58-4d76-870b-3cc4b77eb1e3\") " pod="openstack/nova-scheduler-0" Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.053412 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53540f68-ae58-4d76-870b-3cc4b77eb1e3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"53540f68-ae58-4d76-870b-3cc4b77eb1e3\") " pod="openstack/nova-scheduler-0" Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.053451 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53540f68-ae58-4d76-870b-3cc4b77eb1e3-config-data\") pod \"nova-scheduler-0\" (UID: \"53540f68-ae58-4d76-870b-3cc4b77eb1e3\") " pod="openstack/nova-scheduler-0" Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.064246 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68ef3f64-1c59-4280-8d57-59dceebdd183" path="/var/lib/kubelet/pods/68ef3f64-1c59-4280-8d57-59dceebdd183/volumes" Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.065017 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e13d1a77-c207-4334-b32f-f2befaf2768c" path="/var/lib/kubelet/pods/e13d1a77-c207-4334-b32f-f2befaf2768c/volumes" Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.155281 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvsh5\" (UniqueName: \"kubernetes.io/projected/53540f68-ae58-4d76-870b-3cc4b77eb1e3-kube-api-access-gvsh5\") pod \"nova-scheduler-0\" (UID: \"53540f68-ae58-4d76-870b-3cc4b77eb1e3\") " pod="openstack/nova-scheduler-0" Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.155375 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53540f68-ae58-4d76-870b-3cc4b77eb1e3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"53540f68-ae58-4d76-870b-3cc4b77eb1e3\") " pod="openstack/nova-scheduler-0" Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.155504 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53540f68-ae58-4d76-870b-3cc4b77eb1e3-config-data\") pod \"nova-scheduler-0\" (UID: \"53540f68-ae58-4d76-870b-3cc4b77eb1e3\") " pod="openstack/nova-scheduler-0" Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.161976 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53540f68-ae58-4d76-870b-3cc4b77eb1e3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"53540f68-ae58-4d76-870b-3cc4b77eb1e3\") " pod="openstack/nova-scheduler-0" Dec 06 07:30:48 crc kubenswrapper[4895]: 
I1206 07:30:48.162015 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53540f68-ae58-4d76-870b-3cc4b77eb1e3-config-data\") pod \"nova-scheduler-0\" (UID: \"53540f68-ae58-4d76-870b-3cc4b77eb1e3\") " pod="openstack/nova-scheduler-0" Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.175824 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvsh5\" (UniqueName: \"kubernetes.io/projected/53540f68-ae58-4d76-870b-3cc4b77eb1e3-kube-api-access-gvsh5\") pod \"nova-scheduler-0\" (UID: \"53540f68-ae58-4d76-870b-3cc4b77eb1e3\") " pod="openstack/nova-scheduler-0" Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.300919 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.736454 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.947651 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/91395243-4043-49b6-869c-05d21691a2f3-nova-metadata-tls-certs\") pod \"91395243-4043-49b6-869c-05d21691a2f3\" (UID: \"91395243-4043-49b6-869c-05d21691a2f3\") " Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.947723 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91395243-4043-49b6-869c-05d21691a2f3-logs\") pod \"91395243-4043-49b6-869c-05d21691a2f3\" (UID: \"91395243-4043-49b6-869c-05d21691a2f3\") " Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.947810 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4vb5\" (UniqueName: \"kubernetes.io/projected/91395243-4043-49b6-869c-05d21691a2f3-kube-api-access-d4vb5\") pod \"91395243-4043-49b6-869c-05d21691a2f3\" (UID: \"91395243-4043-49b6-869c-05d21691a2f3\") " Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.947890 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91395243-4043-49b6-869c-05d21691a2f3-combined-ca-bundle\") pod \"91395243-4043-49b6-869c-05d21691a2f3\" (UID: \"91395243-4043-49b6-869c-05d21691a2f3\") " Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.947932 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91395243-4043-49b6-869c-05d21691a2f3-config-data\") pod \"91395243-4043-49b6-869c-05d21691a2f3\" (UID: \"91395243-4043-49b6-869c-05d21691a2f3\") " Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.948718 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91395243-4043-49b6-869c-05d21691a2f3-logs" (OuterVolumeSpecName: "logs") pod "91395243-4043-49b6-869c-05d21691a2f3" (UID: "91395243-4043-49b6-869c-05d21691a2f3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.968757 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91395243-4043-49b6-869c-05d21691a2f3-kube-api-access-d4vb5" (OuterVolumeSpecName: "kube-api-access-d4vb5") pod "91395243-4043-49b6-869c-05d21691a2f3" (UID: "91395243-4043-49b6-869c-05d21691a2f3"). InnerVolumeSpecName "kube-api-access-d4vb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.976506 4895 generic.go:334] "Generic (PLEG): container finished" podID="91395243-4043-49b6-869c-05d21691a2f3" containerID="9a95fcce3c2cab8baddaf03a621ac95f48d2a4bd2806244375ec84cc508300e8" exitCode=0 Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.976579 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"91395243-4043-49b6-869c-05d21691a2f3","Type":"ContainerDied","Data":"9a95fcce3c2cab8baddaf03a621ac95f48d2a4bd2806244375ec84cc508300e8"} Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.976609 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"91395243-4043-49b6-869c-05d21691a2f3","Type":"ContainerDied","Data":"5ad5014ecc573a53d8df125c530d4a7edf5640de9db614469a08454fbd38349e"} Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.976626 4895 scope.go:117] "RemoveContainer" containerID="9a95fcce3c2cab8baddaf03a621ac95f48d2a4bd2806244375ec84cc508300e8" Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.976726 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.988882 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124","Type":"ContainerStarted","Data":"b540a38897b5cccc650243ffed7745515a30b25bc0e32302ca392cf373ef7ac7"} Dec 06 07:30:48 crc kubenswrapper[4895]: I1206 07:30:48.988941 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124","Type":"ContainerStarted","Data":"1c15337273095c402baf16f57c52cac5a3445669c2517cbedf79ba80a4d78a94"} Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.004734 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91395243-4043-49b6-869c-05d21691a2f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91395243-4043-49b6-869c-05d21691a2f3" (UID: "91395243-4043-49b6-869c-05d21691a2f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.004832 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.016678 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91395243-4043-49b6-869c-05d21691a2f3-config-data" (OuterVolumeSpecName: "config-data") pod "91395243-4043-49b6-869c-05d21691a2f3" (UID: "91395243-4043-49b6-869c-05d21691a2f3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.035921 4895 scope.go:117] "RemoveContainer" containerID="6a0f924af8e426fda2871fe8bfbefd9d0f577cdb9fbc5f6d3644eb208dfb0a2c" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.040232 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.040208922 podStartE2EDuration="3.040208922s" podCreationTimestamp="2025-12-06 07:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:30:49.023911319 +0000 UTC m=+2011.425300189" watchObservedRunningTime="2025-12-06 07:30:49.040208922 +0000 UTC m=+2011.441597792" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.047222 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91395243-4043-49b6-869c-05d21691a2f3-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "91395243-4043-49b6-869c-05d21691a2f3" (UID: "91395243-4043-49b6-869c-05d21691a2f3"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:49 crc kubenswrapper[4895]: W1206 07:30:49.047255 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53540f68_ae58_4d76_870b_3cc4b77eb1e3.slice/crio-ca7341d7bd86a4d53abd5ee468943e8fc74430aadcf0d0bceaeee6c7c8cc3503 WatchSource:0}: Error finding container ca7341d7bd86a4d53abd5ee468943e8fc74430aadcf0d0bceaeee6c7c8cc3503: Status 404 returned error can't find the container with id ca7341d7bd86a4d53abd5ee468943e8fc74430aadcf0d0bceaeee6c7c8cc3503 Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.051756 4895 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/91395243-4043-49b6-869c-05d21691a2f3-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.052649 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91395243-4043-49b6-869c-05d21691a2f3-logs\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.053103 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4vb5\" (UniqueName: \"kubernetes.io/projected/91395243-4043-49b6-869c-05d21691a2f3-kube-api-access-d4vb5\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.053230 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91395243-4043-49b6-869c-05d21691a2f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.053252 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91395243-4043-49b6-869c-05d21691a2f3-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.069247 4895 scope.go:117] "RemoveContainer" containerID="9a95fcce3c2cab8baddaf03a621ac95f48d2a4bd2806244375ec84cc508300e8" Dec 06 07:30:49 crc kubenswrapper[4895]: E1206 07:30:49.071383 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9a95fcce3c2cab8baddaf03a621ac95f48d2a4bd2806244375ec84cc508300e8\": container with ID starting with 9a95fcce3c2cab8baddaf03a621ac95f48d2a4bd2806244375ec84cc508300e8 not found: ID does not exist" containerID="9a95fcce3c2cab8baddaf03a621ac95f48d2a4bd2806244375ec84cc508300e8" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.071525 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a95fcce3c2cab8baddaf03a621ac95f48d2a4bd2806244375ec84cc508300e8"} err="failed to get container status \"9a95fcce3c2cab8baddaf03a621ac95f48d2a4bd2806244375ec84cc508300e8\": rpc error: code = NotFound desc = could not find container \"9a95fcce3c2cab8baddaf03a621ac95f48d2a4bd2806244375ec84cc508300e8\": container with ID starting with 9a95fcce3c2cab8baddaf03a621ac95f48d2a4bd2806244375ec84cc508300e8 not found: ID does not exist" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.071642 4895 scope.go:117] "RemoveContainer" containerID="6a0f924af8e426fda2871fe8bfbefd9d0f577cdb9fbc5f6d3644eb208dfb0a2c" Dec 06 07:30:49 crc kubenswrapper[4895]: E1206 07:30:49.072233 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a0f924af8e426fda2871fe8bfbefd9d0f577cdb9fbc5f6d3644eb208dfb0a2c\": container with ID starting with 6a0f924af8e426fda2871fe8bfbefd9d0f577cdb9fbc5f6d3644eb208dfb0a2c not found: ID does not exist" containerID="6a0f924af8e426fda2871fe8bfbefd9d0f577cdb9fbc5f6d3644eb208dfb0a2c" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.072290 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a0f924af8e426fda2871fe8bfbefd9d0f577cdb9fbc5f6d3644eb208dfb0a2c"} err="failed to get container status \"6a0f924af8e426fda2871fe8bfbefd9d0f577cdb9fbc5f6d3644eb208dfb0a2c\": rpc error: code = NotFound desc = could not find container \"6a0f924af8e426fda2871fe8bfbefd9d0f577cdb9fbc5f6d3644eb208dfb0a2c\": container with ID starting with 6a0f924af8e426fda2871fe8bfbefd9d0f577cdb9fbc5f6d3644eb208dfb0a2c not found: ID does not exist" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.373063 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.382910 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.399989 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:30:49 crc kubenswrapper[4895]: E1206 07:30:49.400712 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91395243-4043-49b6-869c-05d21691a2f3" containerName="nova-metadata-log" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.400738 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="91395243-4043-49b6-869c-05d21691a2f3" containerName="nova-metadata-log" Dec 06 07:30:49 crc kubenswrapper[4895]: E1206 07:30:49.400763 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91395243-4043-49b6-869c-05d21691a2f3" containerName="nova-metadata-metadata" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.400773 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="91395243-4043-49b6-869c-05d21691a2f3" containerName="nova-metadata-metadata" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.401013 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="91395243-4043-49b6-869c-05d21691a2f3" 
containerName="nova-metadata-log" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.401045 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="91395243-4043-49b6-869c-05d21691a2f3" containerName="nova-metadata-metadata" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.403021 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.405692 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.405940 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.414588 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.464856 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04293385-9ad8-4686-a3d3-e39d586a7e6f-config-data\") pod \"nova-metadata-0\" (UID: \"04293385-9ad8-4686-a3d3-e39d586a7e6f\") " pod="openstack/nova-metadata-0" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.465057 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04293385-9ad8-4686-a3d3-e39d586a7e6f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"04293385-9ad8-4686-a3d3-e39d586a7e6f\") " pod="openstack/nova-metadata-0" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.465176 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04293385-9ad8-4686-a3d3-e39d586a7e6f-logs\") pod \"nova-metadata-0\" (UID: \"04293385-9ad8-4686-a3d3-e39d586a7e6f\") " pod="openstack/nova-metadata-0" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.465293 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/04293385-9ad8-4686-a3d3-e39d586a7e6f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"04293385-9ad8-4686-a3d3-e39d586a7e6f\") " pod="openstack/nova-metadata-0" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.465381 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vcwb\" (UniqueName: \"kubernetes.io/projected/04293385-9ad8-4686-a3d3-e39d586a7e6f-kube-api-access-5vcwb\") pod \"nova-metadata-0\" (UID: \"04293385-9ad8-4686-a3d3-e39d586a7e6f\") " pod="openstack/nova-metadata-0" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.566509 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04293385-9ad8-4686-a3d3-e39d586a7e6f-config-data\") pod \"nova-metadata-0\" (UID: \"04293385-9ad8-4686-a3d3-e39d586a7e6f\") " pod="openstack/nova-metadata-0" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.566659 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04293385-9ad8-4686-a3d3-e39d586a7e6f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"04293385-9ad8-4686-a3d3-e39d586a7e6f\") " pod="openstack/nova-metadata-0" Dec 
06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.566713 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04293385-9ad8-4686-a3d3-e39d586a7e6f-logs\") pod \"nova-metadata-0\" (UID: \"04293385-9ad8-4686-a3d3-e39d586a7e6f\") " pod="openstack/nova-metadata-0" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.566786 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/04293385-9ad8-4686-a3d3-e39d586a7e6f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"04293385-9ad8-4686-a3d3-e39d586a7e6f\") " pod="openstack/nova-metadata-0" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.566854 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vcwb\" (UniqueName: \"kubernetes.io/projected/04293385-9ad8-4686-a3d3-e39d586a7e6f-kube-api-access-5vcwb\") pod \"nova-metadata-0\" (UID: \"04293385-9ad8-4686-a3d3-e39d586a7e6f\") " pod="openstack/nova-metadata-0" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.567620 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04293385-9ad8-4686-a3d3-e39d586a7e6f-logs\") pod \"nova-metadata-0\" (UID: \"04293385-9ad8-4686-a3d3-e39d586a7e6f\") " pod="openstack/nova-metadata-0" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.578243 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04293385-9ad8-4686-a3d3-e39d586a7e6f-config-data\") pod \"nova-metadata-0\" (UID: \"04293385-9ad8-4686-a3d3-e39d586a7e6f\") " pod="openstack/nova-metadata-0" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.579230 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/04293385-9ad8-4686-a3d3-e39d586a7e6f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"04293385-9ad8-4686-a3d3-e39d586a7e6f\") " pod="openstack/nova-metadata-0" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.585627 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04293385-9ad8-4686-a3d3-e39d586a7e6f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"04293385-9ad8-4686-a3d3-e39d586a7e6f\") " pod="openstack/nova-metadata-0" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.592095 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vcwb\" (UniqueName: \"kubernetes.io/projected/04293385-9ad8-4686-a3d3-e39d586a7e6f-kube-api-access-5vcwb\") pod \"nova-metadata-0\" (UID: \"04293385-9ad8-4686-a3d3-e39d586a7e6f\") " pod="openstack/nova-metadata-0" Dec 06 07:30:49 crc kubenswrapper[4895]: I1206 07:30:49.725198 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 07:30:50 crc kubenswrapper[4895]: I1206 07:30:50.019920 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"53540f68-ae58-4d76-870b-3cc4b77eb1e3","Type":"ContainerStarted","Data":"ca7341d7bd86a4d53abd5ee468943e8fc74430aadcf0d0bceaeee6c7c8cc3503"} Dec 06 07:30:50 crc kubenswrapper[4895]: I1206 07:30:50.083827 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91395243-4043-49b6-869c-05d21691a2f3" path="/var/lib/kubelet/pods/91395243-4043-49b6-869c-05d21691a2f3/volumes" Dec 06 07:30:50 crc kubenswrapper[4895]: I1206 07:30:50.212039 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:30:51 crc kubenswrapper[4895]: I1206 07:30:51.033251 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"53540f68-ae58-4d76-870b-3cc4b77eb1e3","Type":"ContainerStarted","Data":"858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893"} Dec 06 07:30:51 crc kubenswrapper[4895]: I1206 07:30:51.038916 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04293385-9ad8-4686-a3d3-e39d586a7e6f","Type":"ContainerStarted","Data":"2909ed614d60dc31976f3585ee4c482157689a3dfd5c7fcce2758aee85e102e9"} Dec 06 07:30:52 crc kubenswrapper[4895]: I1206 07:30:52.065789 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04293385-9ad8-4686-a3d3-e39d586a7e6f","Type":"ContainerStarted","Data":"b91b7e983d4a1b355dc874f73fbaa1ce58e3e040d07274d2e14ec839fcb170dc"} Dec 06 07:30:53 crc kubenswrapper[4895]: I1206 07:30:53.067296 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04293385-9ad8-4686-a3d3-e39d586a7e6f","Type":"ContainerStarted","Data":"b777a5a301fd4f77108bae41580433834a4fcb0d280446b3355223fe60884897"} Dec 06 07:30:53 crc kubenswrapper[4895]: I1206 07:30:53.092976 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=6.092954122 podStartE2EDuration="6.092954122s" podCreationTimestamp="2025-12-06 07:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:30:51.06386234 +0000 UTC m=+2013.465251220" watchObservedRunningTime="2025-12-06 07:30:53.092954122 +0000 UTC m=+2015.494342992" Dec 06 07:30:53 crc kubenswrapper[4895]: I1206 07:30:53.099392 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.099380513 podStartE2EDuration="4.099380513s" podCreationTimestamp="2025-12-06 07:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:30:53.090273782 +0000 UTC m=+2015.491662652" watchObservedRunningTime="2025-12-06 07:30:53.099380513 +0000 UTC m=+2015.500769383" Dec 06 07:30:53 crc kubenswrapper[4895]: I1206 07:30:53.307138 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 06 07:30:54 crc kubenswrapper[4895]: I1206 07:30:54.725736 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 07:30:54 crc kubenswrapper[4895]: I1206 07:30:54.726012 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Dec 06 07:30:57 crc kubenswrapper[4895]: I1206 07:30:57.296712 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 07:30:57 crc kubenswrapper[4895]: I1206 07:30:57.297155 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 07:30:58 crc kubenswrapper[4895]: I1206 07:30:58.302151 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 06 07:30:58 crc kubenswrapper[4895]: I1206 07:30:58.309766 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 07:30:58 crc kubenswrapper[4895]: I1206 07:30:58.309839 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 07:30:58 crc kubenswrapper[4895]: I1206 07:30:58.337237 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 06 07:30:59 crc kubenswrapper[4895]: I1206 07:30:59.169375 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 06 07:30:59 crc kubenswrapper[4895]: I1206 07:30:59.696665 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:30:59 crc kubenswrapper[4895]: I1206 07:30:59.697010 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:30:59 crc kubenswrapper[4895]: I1206 07:30:59.726209 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 07:30:59 crc kubenswrapper[4895]: I1206 07:30:59.726300 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 07:31:00 crc kubenswrapper[4895]: I1206 07:31:00.740739 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="04293385-9ad8-4686-a3d3-e39d586a7e6f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 07:31:00 crc kubenswrapper[4895]: I1206 07:31:00.740749 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="04293385-9ad8-4686-a3d3-e39d586a7e6f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 07:31:05 crc kubenswrapper[4895]: I1206 07:31:05.953172 4895 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 06 07:31:07 crc kubenswrapper[4895]: I1206 07:31:07.303886 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 06 07:31:07 crc kubenswrapper[4895]: I1206 07:31:07.304715 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 06 07:31:07 crc kubenswrapper[4895]: I1206 07:31:07.306590 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 06 07:31:07 crc kubenswrapper[4895]: I1206 07:31:07.331293 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 06 07:31:08 crc kubenswrapper[4895]: I1206 07:31:08.225840 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 06 07:31:08 crc kubenswrapper[4895]: I1206 07:31:08.236608 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 06 07:31:09 crc kubenswrapper[4895]: I1206 07:31:09.733310 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 07:31:09 crc kubenswrapper[4895]: I1206 07:31:09.739847 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 07:31:09 crc kubenswrapper[4895]: I1206 07:31:09.743979 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 07:31:10 crc kubenswrapper[4895]: I1206 07:31:10.250506 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 07:31:29 crc kubenswrapper[4895]: I1206 07:31:29.696005 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:31:29 crc kubenswrapper[4895]: I1206 07:31:29.696634 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:31:29 crc kubenswrapper[4895]: I1206 07:31:29.976049 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Dec 06 07:31:29 crc kubenswrapper[4895]: I1206 07:31:29.976586 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="cf257472-30d7-4719-9b36-47c30f1db7ec" containerName="openstackclient" containerID="cri-o://592289ccf63d36f99de0c41ad4893019feb1c8238b8005599a976ecd7e6fd991" gracePeriod=2 Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.003535 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.244418 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 07:31:30 crc kubenswrapper[4895]: E1206 07:31:30.304395 4895 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 06 07:31:30 crc kubenswrapper[4895]: E1206 07:31:30.304502 4895 
Dec 06 07:31:30 crc kubenswrapper[4895]: E1206 07:31:30.304502 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-config-data podName:e963d73b-d3f2-4c70-8dbd-687b3fc1962d nodeName:}" failed. No retries permitted until 2025-12-06 07:31:30.804460961 +0000 UTC m=+2053.205849821 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-config-data") pod "rabbitmq-cell1-server-0" (UID: "e963d73b-d3f2-4c70-8dbd-687b3fc1962d") : configmap "rabbitmq-cell1-config-data" not found
Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.429005 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.429443 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="a617d6b4-721c-4087-bc16-70bcb58b9c69" containerName="openstack-network-exporter" containerID="cri-o://5e79ef9fb346fc367bb36aff8fa7a7e5ecd4ed177678c52a5893ae9529169abe" gracePeriod=300
Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.516652 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="a617d6b4-721c-4087-bc16-70bcb58b9c69" containerName="ovsdbserver-sb" containerID="cri-o://84e285fd54592923b82baf0ba0638e2645a70b6ae64c423fef3c15fbd472c99c" gracePeriod=300
Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.588566 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinderdb70-account-delete-lqhcq"]
Dec 06 07:31:30 crc kubenswrapper[4895]: E1206 07:31:30.589027 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf257472-30d7-4719-9b36-47c30f1db7ec" containerName="openstackclient"
Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.589040 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf257472-30d7-4719-9b36-47c30f1db7ec" containerName="openstackclient"
Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.589268 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf257472-30d7-4719-9b36-47c30f1db7ec" containerName="openstackclient"
Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.590124 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinderdb70-account-delete-lqhcq"
Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.649341 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinderdb70-account-delete-lqhcq"]
Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.712084 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c22fx\" (UniqueName: \"kubernetes.io/projected/faf28c62-87dc-461a-bf5a-4ae13d62e489-kube-api-access-c22fx\") pod \"cinderdb70-account-delete-lqhcq\" (UID: \"faf28c62-87dc-461a-bf5a-4ae13d62e489\") " pod="openstack/cinderdb70-account-delete-lqhcq"
Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.712299 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faf28c62-87dc-461a-bf5a-4ae13d62e489-operator-scripts\") pod \"cinderdb70-account-delete-lqhcq\" (UID: \"faf28c62-87dc-461a-bf5a-4ae13d62e489\") " pod="openstack/cinderdb70-account-delete-lqhcq"
Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.800578 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.804660 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" containerName="galera" probeResult="failure" output="command timed out"
Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.817584 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faf28c62-87dc-461a-bf5a-4ae13d62e489-operator-scripts\") pod \"cinderdb70-account-delete-lqhcq\" (UID: \"faf28c62-87dc-461a-bf5a-4ae13d62e489\") " pod="openstack/cinderdb70-account-delete-lqhcq"
Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.817700 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c22fx\" (UniqueName: \"kubernetes.io/projected/faf28c62-87dc-461a-bf5a-4ae13d62e489-kube-api-access-c22fx\") pod \"cinderdb70-account-delete-lqhcq\" (UID: \"faf28c62-87dc-461a-bf5a-4ae13d62e489\") " pod="openstack/cinderdb70-account-delete-lqhcq"
Dec 06 07:31:30 crc kubenswrapper[4895]: E1206 07:31:30.817927 4895 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Dec 06 07:31:30 crc kubenswrapper[4895]: E1206 07:31:30.817987 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-config-data podName:e963d73b-d3f2-4c70-8dbd-687b3fc1962d nodeName:}" failed. No retries permitted until 2025-12-06 07:31:31.817968897 +0000 UTC m=+2054.219357767 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-config-data") pod "rabbitmq-cell1-server-0" (UID: "e963d73b-d3f2-4c70-8dbd-687b3fc1962d") : configmap "rabbitmq-cell1-config-data" not found
Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.818743 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faf28c62-87dc-461a-bf5a-4ae13d62e489-operator-scripts\") pod \"cinderdb70-account-delete-lqhcq\" (UID: \"faf28c62-87dc-461a-bf5a-4ae13d62e489\") " pod="openstack/cinderdb70-account-delete-lqhcq"
Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.975446 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c22fx\" (UniqueName: \"kubernetes.io/projected/faf28c62-87dc-461a-bf5a-4ae13d62e489-kube-api-access-c22fx\") pod \"cinderdb70-account-delete-lqhcq\" (UID: \"faf28c62-87dc-461a-bf5a-4ae13d62e489\") " pod="openstack/cinderdb70-account-delete-lqhcq"
Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.989917 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"]
Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.990203 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="cccbd9be-50fa-413b-bb47-1af68ecdda2d" containerName="ovn-northd" containerID="cri-o://c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49" gracePeriod=30
Dec 06 07:31:30 crc kubenswrapper[4895]: I1206 07:31:30.990800 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="cccbd9be-50fa-413b-bb47-1af68ecdda2d" containerName="openstack-network-exporter" containerID="cri-o://2ce6192aa1275c19e07c7558055c8aacdb4300950e03766d48c588da1997c632" gracePeriod=30
Dec 06 07:31:31 crc kubenswrapper[4895]: E1206 07:31:31.028630 4895 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Dec 06 07:31:31 crc kubenswrapper[4895]: E1206 07:31:31.028697 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-config-data podName:8fa39160-bfb2-49ae-b2ca-12c0e5788996 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:31.528679393 +0000 UTC m=+2053.930068263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-config-data") pod "rabbitmq-server-0" (UID: "8fa39160-bfb2-49ae-b2ca-12c0e5788996") : configmap "rabbitmq-config-data" not found
Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.058324 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutronc074-account-delete-5dkk6"]
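The nestedpendingoperations entries for the rabbitmq-cell1 config-data volume show the retry delay doubling: durationBeforeRetry 500ms, then 1s (and 2s further on). A standalone sketch of that exponential backoff (the 500ms base and the doubling match the log; the cap is an assumption for illustration):

    package main

    import (
        "fmt"
        "time"
    )

    // Reproduces the retry spacing in the nestedpendingoperations entries
    // above: the delay before the next attempt for the same volume doubles
    // each time (durationBeforeRetry 500ms, 1s, 2s, ...). The base and the
    // doubling match the log; the upper bound here is an assumption.
    func main() {
        delay := 500 * time.Millisecond
        maxDelay := 2 * time.Minute // assumed cap, not taken from the log
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d: durationBeforeRetry %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }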
Need to start a new one" pod="openstack/neutronc074-account-delete-5dkk6" Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.075863 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutronc074-account-delete-5dkk6"] Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.129849 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6gtx\" (UniqueName: \"kubernetes.io/projected/a103ad6f-b726-4ad6-9aec-a689a74a4304-kube-api-access-p6gtx\") pod \"neutronc074-account-delete-5dkk6\" (UID: \"a103ad6f-b726-4ad6-9aec-a689a74a4304\") " pod="openstack/neutronc074-account-delete-5dkk6" Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.130322 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a103ad6f-b726-4ad6-9aec-a689a74a4304-operator-scripts\") pod \"neutronc074-account-delete-5dkk6\" (UID: \"a103ad6f-b726-4ad6-9aec-a689a74a4304\") " pod="openstack/neutronc074-account-delete-5dkk6" Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.182571 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-dq2gw"] Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.210170 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-dq2gw"] Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.239507 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a103ad6f-b726-4ad6-9aec-a689a74a4304-operator-scripts\") pod \"neutronc074-account-delete-5dkk6\" (UID: \"a103ad6f-b726-4ad6-9aec-a689a74a4304\") " pod="openstack/neutronc074-account-delete-5dkk6" Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.239746 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6gtx\" (UniqueName: \"kubernetes.io/projected/a103ad6f-b726-4ad6-9aec-a689a74a4304-kube-api-access-p6gtx\") pod \"neutronc074-account-delete-5dkk6\" (UID: \"a103ad6f-b726-4ad6-9aec-a689a74a4304\") " pod="openstack/neutronc074-account-delete-5dkk6" Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.242754 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-qnbdj"] Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.251454 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinderdb70-account-delete-lqhcq" Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.252936 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a103ad6f-b726-4ad6-9aec-a689a74a4304-operator-scripts\") pod \"neutronc074-account-delete-5dkk6\" (UID: \"a103ad6f-b726-4ad6-9aec-a689a74a4304\") " pod="openstack/neutronc074-account-delete-5dkk6" Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.294548 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement00ad-account-delete-6xhps"] Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.296259 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement00ad-account-delete-6xhps" Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.379527 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6fc6ccb-af32-472e-b0f5-11cb224b4885-operator-scripts\") pod \"placement00ad-account-delete-6xhps\" (UID: \"b6fc6ccb-af32-472e-b0f5-11cb224b4885\") " pod="openstack/placement00ad-account-delete-6xhps" Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.379823 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn8cr\" (UniqueName: \"kubernetes.io/projected/b6fc6ccb-af32-472e-b0f5-11cb224b4885-kube-api-access-sn8cr\") pod \"placement00ad-account-delete-6xhps\" (UID: \"b6fc6ccb-af32-472e-b0f5-11cb224b4885\") " pod="openstack/placement00ad-account-delete-6xhps" Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.489866 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6fc6ccb-af32-472e-b0f5-11cb224b4885-operator-scripts\") pod \"placement00ad-account-delete-6xhps\" (UID: \"b6fc6ccb-af32-472e-b0f5-11cb224b4885\") " pod="openstack/placement00ad-account-delete-6xhps" Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.490006 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn8cr\" (UniqueName: \"kubernetes.io/projected/b6fc6ccb-af32-472e-b0f5-11cb224b4885-kube-api-access-sn8cr\") pod \"placement00ad-account-delete-6xhps\" (UID: \"b6fc6ccb-af32-472e-b0f5-11cb224b4885\") " pod="openstack/placement00ad-account-delete-6xhps" Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.491585 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6fc6ccb-af32-472e-b0f5-11cb224b4885-operator-scripts\") pod \"placement00ad-account-delete-6xhps\" (UID: \"b6fc6ccb-af32-472e-b0f5-11cb224b4885\") " pod="openstack/placement00ad-account-delete-6xhps" Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.524576 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6gtx\" (UniqueName: \"kubernetes.io/projected/a103ad6f-b726-4ad6-9aec-a689a74a4304-kube-api-access-p6gtx\") pod \"neutronc074-account-delete-5dkk6\" (UID: \"a103ad6f-b726-4ad6-9aec-a689a74a4304\") " pod="openstack/neutronc074-account-delete-5dkk6" Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.532884 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-cwrlp"] Dec 06 07:31:31 crc kubenswrapper[4895]: E1206 07:31:31.595518 4895 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 06 07:31:31 crc kubenswrapper[4895]: E1206 07:31:31.595586 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-config-data podName:8fa39160-bfb2-49ae-b2ca-12c0e5788996 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:32.595570647 +0000 UTC m=+2054.996959517 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-config-data") pod "rabbitmq-server-0" (UID: "8fa39160-bfb2-49ae-b2ca-12c0e5788996") : configmap "rabbitmq-config-data" not found Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.651199 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn8cr\" (UniqueName: \"kubernetes.io/projected/b6fc6ccb-af32-472e-b0f5-11cb224b4885-kube-api-access-sn8cr\") pod \"placement00ad-account-delete-6xhps\" (UID: \"b6fc6ccb-af32-472e-b0f5-11cb224b4885\") " pod="openstack/placement00ad-account-delete-6xhps" Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.697619 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-mpfpb"] Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.697919 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-mpfpb" podUID="d54fb90e-29d7-4df9-b09f-bd992972dc88" containerName="openstack-network-exporter" containerID="cri-o://b6a7604abefe1444b026209f2779ebe71e0863f38495e270864aa2d5eaa61e31" gracePeriod=30 Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.710971 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement00ad-account-delete-6xhps" Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.768288 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement00ad-account-delete-6xhps"] Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.770142 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutronc074-account-delete-5dkk6" Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.791619 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-4cfbl"] Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.807535 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-4cfbl"] Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.831569 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican18b2-account-delete-zdlkh"] Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.833261 4895 util.go:30] "No sandbox for pod can be found. 
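Each of these account-delete pods mounts two volumes: a ConfigMap-backed operator-scripts volume and a projected kube-api-access-* token. A sketch of the ConfigMap half of that wiring in Go (the volume name follows the log; the ConfigMap name and mount path are hypothetical):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // Sketch of the volume wiring these account-delete pods show in the
    // mount logs: a ConfigMap-backed "operator-scripts" volume (the
    // projected kube-api-access-* token volume is injected automatically).
    // The volume name follows the log; the rest is assumed.
    func operatorScriptsVolume() (corev1.Volume, corev1.VolumeMount) {
        vol := corev1.Volume{
            Name: "operator-scripts",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{
                        Name: "mariadb-operator-scripts", // hypothetical name
                    },
                },
            },
        }
        mount := corev1.VolumeMount{
            Name:      "operator-scripts",
            MountPath: "/var/lib/operator-scripts", // assumed path
        }
        return vol, mount
    }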
Need to start a new one" pod="openstack/barbican18b2-account-delete-zdlkh" Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.874572 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican18b2-account-delete-zdlkh"] Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.924233 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbckh\" (UniqueName: \"kubernetes.io/projected/d5401d7f-627c-410f-ae61-d7653749a7d3-kube-api-access-rbckh\") pod \"barbican18b2-account-delete-zdlkh\" (UID: \"d5401d7f-627c-410f-ae61-d7653749a7d3\") " pod="openstack/barbican18b2-account-delete-zdlkh" Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.924344 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5401d7f-627c-410f-ae61-d7653749a7d3-operator-scripts\") pod \"barbican18b2-account-delete-zdlkh\" (UID: \"d5401d7f-627c-410f-ae61-d7653749a7d3\") " pod="openstack/barbican18b2-account-delete-zdlkh" Dec 06 07:31:31 crc kubenswrapper[4895]: E1206 07:31:31.928899 4895 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 06 07:31:31 crc kubenswrapper[4895]: E1206 07:31:31.929003 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-config-data podName:e963d73b-d3f2-4c70-8dbd-687b3fc1962d nodeName:}" failed. No retries permitted until 2025-12-06 07:31:33.928978951 +0000 UTC m=+2056.330367821 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-config-data") pod "rabbitmq-cell1-server-0" (UID: "e963d73b-d3f2-4c70-8dbd-687b3fc1962d") : configmap "rabbitmq-cell1-config-data" not found Dec 06 07:31:31 crc kubenswrapper[4895]: I1206 07:31:31.979852 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-w9qg6"] Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.008649 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-w9qg6"] Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.025228 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a617d6b4-721c-4087-bc16-70bcb58b9c69/ovsdbserver-sb/0.log" Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.025311 4895 generic.go:334] "Generic (PLEG): container finished" podID="a617d6b4-721c-4087-bc16-70bcb58b9c69" containerID="5e79ef9fb346fc367bb36aff8fa7a7e5ecd4ed177678c52a5893ae9529169abe" exitCode=2 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.025335 4895 generic.go:334] "Generic (PLEG): container finished" podID="a617d6b4-721c-4087-bc16-70bcb58b9c69" containerID="84e285fd54592923b82baf0ba0638e2645a70b6ae64c423fef3c15fbd472c99c" exitCode=143 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.025367 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-dtbpj"] Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.025398 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a617d6b4-721c-4087-bc16-70bcb58b9c69","Type":"ContainerDied","Data":"5e79ef9fb346fc367bb36aff8fa7a7e5ecd4ed177678c52a5893ae9529169abe"} Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.025422 4895 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a617d6b4-721c-4087-bc16-70bcb58b9c69","Type":"ContainerDied","Data":"84e285fd54592923b82baf0ba0638e2645a70b6ae64c423fef3c15fbd472c99c"} Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.028465 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbckh\" (UniqueName: \"kubernetes.io/projected/d5401d7f-627c-410f-ae61-d7653749a7d3-kube-api-access-rbckh\") pod \"barbican18b2-account-delete-zdlkh\" (UID: \"d5401d7f-627c-410f-ae61-d7653749a7d3\") " pod="openstack/barbican18b2-account-delete-zdlkh" Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.028575 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5401d7f-627c-410f-ae61-d7653749a7d3-operator-scripts\") pod \"barbican18b2-account-delete-zdlkh\" (UID: \"d5401d7f-627c-410f-ae61-d7653749a7d3\") " pod="openstack/barbican18b2-account-delete-zdlkh" Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.044924 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5401d7f-627c-410f-ae61-d7653749a7d3-operator-scripts\") pod \"barbican18b2-account-delete-zdlkh\" (UID: \"d5401d7f-627c-410f-ae61-d7653749a7d3\") " pod="openstack/barbican18b2-account-delete-zdlkh" Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.049286 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.050678 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="ec28d57e-8ecf-4415-b18f-69bfa0514187" containerName="openstack-network-exporter" containerID="cri-o://156f7c175f2582a5ecac54004c4a02a79357a47a8e30e1f567ebf0c9892821d7" gracePeriod=300 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.054895 4895 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/cinder-api-0" secret="" err="secret \"cinder-cinder-dockercfg-cr22b\" not found" Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.166407 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbckh\" (UniqueName: \"kubernetes.io/projected/d5401d7f-627c-410f-ae61-d7653749a7d3-kube-api-access-rbckh\") pod \"barbican18b2-account-delete-zdlkh\" (UID: \"d5401d7f-627c-410f-ae61-d7653749a7d3\") " pod="openstack/barbican18b2-account-delete-zdlkh" Dec 06 07:31:32 crc kubenswrapper[4895]: E1206 07:31:32.253216 4895 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Dec 06 07:31:32 crc kubenswrapper[4895]: E1206 07:31:32.253301 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data podName:52d7b5bf-eae3-4832-b13b-be5f0734e4bb nodeName:}" failed. No retries permitted until 2025-12-06 07:31:32.753276782 +0000 UTC m=+2055.154665652 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data") pod "cinder-api-0" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb") : secret "cinder-config-data" not found Dec 06 07:31:32 crc kubenswrapper[4895]: E1206 07:31:32.253891 4895 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Dec 06 07:31:32 crc kubenswrapper[4895]: E1206 07:31:32.253924 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-scripts podName:52d7b5bf-eae3-4832-b13b-be5f0734e4bb nodeName:}" failed. No retries permitted until 2025-12-06 07:31:32.753913089 +0000 UTC m=+2055.155301969 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-scripts") pod "cinder-api-0" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb") : secret "cinder-scripts" not found Dec 06 07:31:32 crc kubenswrapper[4895]: E1206 07:31:32.253980 4895 secret.go:188] Couldn't get secret openstack/cinder-api-config-data: secret "cinder-api-config-data" not found Dec 06 07:31:32 crc kubenswrapper[4895]: E1206 07:31:32.254008 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data-custom podName:52d7b5bf-eae3-4832-b13b-be5f0734e4bb nodeName:}" failed. No retries permitted until 2025-12-06 07:31:32.753999532 +0000 UTC m=+2055.155388402 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data-custom") pod "cinder-api-0" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb") : secret "cinder-api-config-data" not found Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.255196 4895 util.go:30] "No sandbox for pod can be found. 
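The PLEG "container finished" records above carry raw exit codes (143 for the SIGTERM'd ovsdbserver-sb, 2 for the exporter; 137 appears further on). By the usual 128+signal shell convention, which is assumed to apply to these CRI-O containers, 143 = SIGTERM and 137 = SIGKILL. A tiny sketch of that decoding:

    package main

    import "fmt"

    // Decodes the exit codes seen in the PLEG records above using the
    // common 128+signal convention (an assumption here): codes above 128
    // mean "killed by signal (code-128)"; small codes are the process's
    // own exit status.
    func describeExit(code int) string {
        if code > 128 {
            return fmt.Sprintf("terminated by signal %d", code-128)
        }
        return fmt.Sprintf("exited with status %d", code)
    }

    func main() {
        for _, c := range []int{0, 2, 137, 143} {
            fmt.Printf("exitCode=%d: %s\n", c, describeExit(c))
        }
    }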
Need to start a new one" pod="openstack/barbican18b2-account-delete-zdlkh" Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.383089 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85a8313b-5768-450d-bf40-3a3197e9b03f" path="/var/lib/kubelet/pods/85a8313b-5768-450d-bf40-3a3197e9b03f/volumes" Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.406245 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b37f8f-5d15-4d1b-aab9-c4852295dcd4" path="/var/lib/kubelet/pods/f4b37f8f-5d15-4d1b-aab9-c4852295dcd4/volumes" Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.407582 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb6a838b-c173-4455-b8b5-b152aeee463a" path="/var/lib/kubelet/pods/fb6a838b-c173-4455-b8b5-b152aeee463a/volumes" Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.408238 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance55c9-account-delete-4b8gl"] Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.418697 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-dtbpj"] Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.418785 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-kx92p"] Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.418804 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-kx92p"] Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.418857 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.418877 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.418895 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.418949 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-8ntsw"] Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.419191 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-8ntsw"] Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.420363 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c8969e2c-9cc0-40a6-8fee-65d93a9856b0" containerName="cinder-scheduler" containerID="cri-o://81a5ab0803db27cf4248b24bac25718805b76eff190f565fd41b120d159881aa" gracePeriod=30 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.420655 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c8969e2c-9cc0-40a6-8fee-65d93a9856b0" containerName="probe" containerID="cri-o://f8e6a3efd3e56d84034e7d038c15ea6608ff8bc466f71507822f323746904eb8" gracePeriod=30 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.420955 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance55c9-account-delete-4b8gl" Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.427965 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="container-server" containerID="cri-o://e031534957b9fccda0363a790f71a039ee246b5fbf68723177270eb631a9658b" gracePeriod=30 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.428179 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="account-reaper" containerID="cri-o://016b3b4ddfb130737f5910cdc3627785db79e92d28ff53c479c8a05d88f0d4bb" gracePeriod=30 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.428238 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="account-auditor" containerID="cri-o://6c0595c85ab20846664ac79d1f96e53f167ef4c98a6f2705bd28abc3d10e0b7d" gracePeriod=30 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.428290 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="account-replicator" containerID="cri-o://b330db62e15f40daaac157cd4c49b8c144883337b31335984bf5592ae231a59c" gracePeriod=30 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.428337 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="account-server" containerID="cri-o://2951f946e28728b5afab411ba269775aef6e893b3da37ae09e4c6ad9b2e2cd1d" gracePeriod=30 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.428389 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="swift-recon-cron" containerID="cri-o://180fd4a822736c6399b31cb1a67003fc90408e00dbbaac62ec926a3d268825ec" gracePeriod=30 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.428432 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="rsync" containerID="cri-o://12eb91bc2a51f4766807671be1ba08375fb07f1bfcf2b4debe6b359d5cf1ad3a" gracePeriod=30 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.428490 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="object-expirer" containerID="cri-o://468c36ddfa04c0375e91b38b9d03a4849fff2aae471e8fb65d8b36405a987438" gracePeriod=30 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.428544 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="object-updater" containerID="cri-o://fa9d83bbd1bfb2f7ebd0b8374526974f2e013372bc11e48787e237b85890d529" gracePeriod=30 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.428601 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="object-auditor" containerID="cri-o://9cbd0224b85c8a430882c76ef6c4ea96f23027717b65a3fd1800ebeee11b9ea6" 
gracePeriod=30 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.428654 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="object-replicator" containerID="cri-o://a45b30a9ac61253c7662e8944033d9348f16b13301f55a8a8a2040cd78bdd894" gracePeriod=30 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.428709 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="object-server" containerID="cri-o://95dadf2ac42ccfd2a7bccc9ed9a272bfdd8c736e08c731df6f9d73b086d6a880" gracePeriod=30 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.428759 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="container-updater" containerID="cri-o://5390e87e60eff5498d3563e5dccff27ede47a6a293471f0a7d9c2ca23354855c" gracePeriod=30 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.428809 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="container-auditor" containerID="cri-o://3ef3301fb5b94d56ebbbf77fe821db08595a72ce6dc8b57263d0355011539f31" gracePeriod=30 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.428863 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="container-replicator" containerID="cri-o://d88fdff0da3bb24a30ae1253952bff8962b2fd7e5173dd829fee80c77dc2670f" gracePeriod=30 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.455554 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-676f67bc8f-srz2n"] Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.455920 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-676f67bc8f-srz2n" podUID="275e5518-922b-455d-a5d5-7b072a12ab07" containerName="neutron-api" containerID="cri-o://c164dcb786933c905e3f3e8351f17e2bb2512e11081c2453a5584c61dbfedabc" gracePeriod=30 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.456104 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-676f67bc8f-srz2n" podUID="275e5518-922b-455d-a5d5-7b072a12ab07" containerName="neutron-httpd" containerID="cri-o://4f8e9ae1388bc7994b5365380a4bd5e84d80b90cafe1780718a2555d9c3d7e69" gracePeriod=30 Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.468383 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bmb2\" (UniqueName: \"kubernetes.io/projected/7d199b21-7519-4bbb-adac-07ad0b1e21d9-kube-api-access-9bmb2\") pod \"glance55c9-account-delete-4b8gl\" (UID: \"7d199b21-7519-4bbb-adac-07ad0b1e21d9\") " pod="openstack/glance55c9-account-delete-4b8gl" Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.468946 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d199b21-7519-4bbb-adac-07ad0b1e21d9-operator-scripts\") pod \"glance55c9-account-delete-4b8gl\" (UID: \"7d199b21-7519-4bbb-adac-07ad0b1e21d9\") " pod="openstack/glance55c9-account-delete-4b8gl" Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.488033 
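The burst of "Killing container with a grace period ... gracePeriod=30" lines is the kubelet acting on API-side pod deletions; 30 seconds is the pod spec's terminationGracePeriodSeconds unless the delete request overrides it. A sketch of issuing such a deletion with client-go (clientset construction is omitted; the namespace and pod name follow the log, and the explicit 30s override is illustrative):

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // Sketch of the API-side counterpart of the "SyncLoop DELETE" /
    // "Killing container with a grace period" pairs above: a pod delete
    // with an explicit grace period. The 30s value mirrors the
    // gracePeriod=30 entries in the log; passing it here is illustrative.
    func deleteWithGrace(ctx context.Context, cs kubernetes.Interface) error {
        grace := int64(30)
        return cs.CoreV1().Pods("openstack").Delete(ctx, "swift-storage-0",
            metav1.DeleteOptions{GracePeriodSeconds: &grace})
    }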
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.488033 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm"]
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.488299 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" podUID="f3e26c07-9fe5-4b2b-b7bc-1b008e904792" containerName="dnsmasq-dns" containerID="cri-o://30e02251ada110b0211ea1fdcacbceb80e38625540fb188e0c03ef63b318da58" gracePeriod=10
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.518252 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance55c9-account-delete-4b8gl"]
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.555789 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7f74647bf4-9dcq9"]
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.556117 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7f74647bf4-9dcq9" podUID="a2e55858-e444-489b-b573-aae00aa71f9b" containerName="placement-log" containerID="cri-o://63e32ce9cc5b3c386ddc984f3d5e3485a878384f7b60dac8c6d438b52980f3be" gracePeriod=30
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.556714 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7f74647bf4-9dcq9" podUID="a2e55858-e444-489b-b573-aae00aa71f9b" containerName="placement-api" containerID="cri-o://8673bbafab775ecc7fd8889b8d5591b54abb229821a4e6d6b5e7d9f3b443717d" gracePeriod=30
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.588389 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d199b21-7519-4bbb-adac-07ad0b1e21d9-operator-scripts\") pod \"glance55c9-account-delete-4b8gl\" (UID: \"7d199b21-7519-4bbb-adac-07ad0b1e21d9\") " pod="openstack/glance55c9-account-delete-4b8gl"
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.588498 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bmb2\" (UniqueName: \"kubernetes.io/projected/7d199b21-7519-4bbb-adac-07ad0b1e21d9-kube-api-access-9bmb2\") pod \"glance55c9-account-delete-4b8gl\" (UID: \"7d199b21-7519-4bbb-adac-07ad0b1e21d9\") " pod="openstack/glance55c9-account-delete-4b8gl"
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.589981 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d199b21-7519-4bbb-adac-07ad0b1e21d9-operator-scripts\") pod \"glance55c9-account-delete-4b8gl\" (UID: \"7d199b21-7519-4bbb-adac-07ad0b1e21d9\") " pod="openstack/glance55c9-account-delete-4b8gl"
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.642693 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-pgrzs"]
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.655852 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bmb2\" (UniqueName: \"kubernetes.io/projected/7d199b21-7519-4bbb-adac-07ad0b1e21d9-kube-api-access-9bmb2\") pod \"glance55c9-account-delete-4b8gl\" (UID: \"7d199b21-7519-4bbb-adac-07ad0b1e21d9\") " pod="openstack/glance55c9-account-delete-4b8gl"
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.668641 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-pgrzs"]
Dec 06 07:31:32 crc kubenswrapper[4895]: E1206 07:31:32.684167 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.702545 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novaapif7f2-account-delete-zxqsn"]
Dec 06 07:31:32 crc kubenswrapper[4895]: E1206 07:31:32.704220 4895 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Dec 06 07:31:32 crc kubenswrapper[4895]: E1206 07:31:32.704306 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-config-data podName:8fa39160-bfb2-49ae-b2ca-12c0e5788996 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:34.704286458 +0000 UTC m=+2057.105675328 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-config-data") pod "rabbitmq-server-0" (UID: "8fa39160-bfb2-49ae-b2ca-12c0e5788996") : configmap "rabbitmq-config-data" not found
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.704518 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapif7f2-account-delete-zxqsn"
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.723205 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 06 07:31:32 crc kubenswrapper[4895]: E1206 07:31:32.756857 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.758198 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-twdnd"]
Dec 06 07:31:32 crc kubenswrapper[4895]: E1206 07:31:32.787655 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Dec 06 07:31:32 crc kubenswrapper[4895]: E1206 07:31:32.787861 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="cccbd9be-50fa-413b-bb47-1af68ecdda2d" containerName="ovn-northd"
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.852105 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="e963d73b-d3f2-4c70-8dbd-687b3fc1962d" containerName="rabbitmq" containerID="cri-o://86a69fa460c2d02239e4fcca0e82a4cac5dc6968a1de1b5762253021bb623d96" gracePeriod=604800
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.861968 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-twdnd"]
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.919897 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="ec28d57e-8ecf-4415-b18f-69bfa0514187" containerName="ovsdbserver-nb" containerID="cri-o://214631b61aad2658bf3fd72e9c59668fc147ff6e83cf726ca8f32206cfcc972a" gracePeriod=300
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.943053 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.944406 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d5739e86-0fb8-4368-91ae-f2a09bb9848c" containerName="glance-log" containerID="cri-o://c0d8057c614bf57165265f6274705c1b72d6138ba82d542917a4bf68a493d896" gracePeriod=30
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.945338 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d5739e86-0fb8-4368-91ae-f2a09bb9848c" containerName="glance-httpd" containerID="cri-o://5c1298d7ec7ec06c2816c3c5d51d11ddd9f42ecf5beced52e8e430776e865dc9" gracePeriod=30
Dec 06 07:31:32 crc kubenswrapper[4895]: E1206 07:31:32.969648 4895 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found
Dec 06 07:31:32 crc kubenswrapper[4895]: E1206 07:31:32.969739 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-scripts podName:52d7b5bf-eae3-4832-b13b-be5f0734e4bb nodeName:}" failed. No retries permitted until 2025-12-06 07:31:33.969719337 +0000 UTC m=+2056.371108207 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-scripts") pod "cinder-api-0" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb") : secret "cinder-scripts" not found
Dec 06 07:31:32 crc kubenswrapper[4895]: E1206 07:31:32.970111 4895 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found
Dec 06 07:31:32 crc kubenswrapper[4895]: E1206 07:31:32.970136 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data podName:52d7b5bf-eae3-4832-b13b-be5f0734e4bb nodeName:}" failed. No retries permitted until 2025-12-06 07:31:33.970128188 +0000 UTC m=+2056.371517058 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data") pod "cinder-api-0" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb") : secret "cinder-config-data" not found
Dec 06 07:31:32 crc kubenswrapper[4895]: E1206 07:31:32.970181 4895 secret.go:188] Couldn't get secret openstack/cinder-api-config-data: secret "cinder-api-config-data" not found
Dec 06 07:31:32 crc kubenswrapper[4895]: E1206 07:31:32.970200 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data-custom podName:52d7b5bf-eae3-4832-b13b-be5f0734e4bb nodeName:}" failed. No retries permitted until 2025-12-06 07:31:33.97019431 +0000 UTC m=+2056.371583180 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data-custom") pod "cinder-api-0" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb") : secret "cinder-api-config-data" not found
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.971027 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell01ea0-account-delete-2sg72"]
Dec 06 07:31:32 crc kubenswrapper[4895]: I1206 07:31:32.975017 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell01ea0-account-delete-2sg72"
Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.006971 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.009326 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" podUID="f3e26c07-9fe5-4b2b-b7bc-1b008e904792" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.203:5353: connect: connection refused"
Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.018931 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.019206 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fa5d7561-2042-4dcc-8ddc-336475230720" containerName="glance-log" containerID="cri-o://396c5517a5377de34e58194ec2e688e2eb5546de17a7216da1f043b7e210861c" gracePeriod=30
Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.019357 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fa5d7561-2042-4dcc-8ddc-336475230720" containerName="glance-httpd" containerID="cri-o://5f86f88a0048b09a72677b652eb94fd21ab7d1447989850d3c2a784d667b1b12" gracePeriod=30
Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.077212 4895 generic.go:334] "Generic (PLEG): container finished" podID="f3e26c07-9fe5-4b2b-b7bc-1b008e904792" containerID="30e02251ada110b0211ea1fdcacbceb80e38625540fb188e0c03ef63b318da58" exitCode=0
Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.077303 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" event={"ID":"f3e26c07-9fe5-4b2b-b7bc-1b008e904792","Type":"ContainerDied","Data":"30e02251ada110b0211ea1fdcacbceb80e38625540fb188e0c03ef63b318da58"}
Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.078413 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k54cs\" (UniqueName: \"kubernetes.io/projected/5b854481-cd2a-4938-8b82-3288191b5bbe-kube-api-access-k54cs\") pod \"novaapif7f2-account-delete-zxqsn\" (UID: \"5b854481-cd2a-4938-8b82-3288191b5bbe\") " pod="openstack/novaapif7f2-account-delete-zxqsn"
Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.078523 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585dce37-3558-4a0c-8dfb-108c94c1047c-operator-scripts\") pod \"novacell01ea0-account-delete-2sg72\" (UID: \"585dce37-3558-4a0c-8dfb-108c94c1047c\") " pod="openstack/novacell01ea0-account-delete-2sg72"
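The repeated "ExecSync cmd from runtime service failed ... container is stopping" errors are an exec readiness probe racing the shutdown of ovn-northd: the kubelet keeps invoking status_check.sh until the container is gone. A sketch of the implied exec probe (the command path comes from the log; the timing values are assumptions):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // Sketch of the exec readiness probe implied by the ExecSync errors
    // above: the kubelet runs status_check.sh inside ovn-northd, which
    // necessarily fails once the container has begun stopping. Only the
    // command path follows the log; timing values are assumed.
    func ovnNorthdReadinessProbe() *corev1.Probe {
        return &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                Exec: &corev1.ExecAction{
                    Command: []string{"/usr/local/bin/container-scripts/status_check.sh"},
                },
            },
            PeriodSeconds:  10, // assumed
            TimeoutSeconds: 5,  // assumed
        }
    }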
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b854481-cd2a-4938-8b82-3288191b5bbe-operator-scripts\") pod \"novaapif7f2-account-delete-zxqsn\" (UID: \"5b854481-cd2a-4938-8b82-3288191b5bbe\") " pod="openstack/novaapif7f2-account-delete-zxqsn" Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.078619 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6hbt\" (UniqueName: \"kubernetes.io/projected/585dce37-3558-4a0c-8dfb-108c94c1047c-kube-api-access-l6hbt\") pod \"novacell01ea0-account-delete-2sg72\" (UID: \"585dce37-3558-4a0c-8dfb-108c94c1047c\") " pod="openstack/novacell01ea0-account-delete-2sg72" Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.081454 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="8fa39160-bfb2-49ae-b2ca-12c0e5788996" containerName="rabbitmq" containerID="cri-o://c65a6f976e5fa9af5eaf596cfeeff15bdcc75ba1010c01bacc5174bb022b9e1c" gracePeriod=604800 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.092803 4895 generic.go:334] "Generic (PLEG): container finished" podID="a2e55858-e444-489b-b573-aae00aa71f9b" containerID="63e32ce9cc5b3c386ddc984f3d5e3485a878384f7b60dac8c6d438b52980f3be" exitCode=143 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.092867 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f74647bf4-9dcq9" event={"ID":"a2e55858-e444-489b-b573-aae00aa71f9b","Type":"ContainerDied","Data":"63e32ce9cc5b3c386ddc984f3d5e3485a878384f7b60dac8c6d438b52980f3be"} Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.108928 4895 generic.go:334] "Generic (PLEG): container finished" podID="cccbd9be-50fa-413b-bb47-1af68ecdda2d" containerID="2ce6192aa1275c19e07c7558055c8aacdb4300950e03766d48c588da1997c632" exitCode=2 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.108989 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cccbd9be-50fa-413b-bb47-1af68ecdda2d","Type":"ContainerDied","Data":"2ce6192aa1275c19e07c7558055c8aacdb4300950e03766d48c588da1997c632"} Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.115521 4895 generic.go:334] "Generic (PLEG): container finished" podID="cf257472-30d7-4719-9b36-47c30f1db7ec" containerID="592289ccf63d36f99de0c41ad4893019feb1c8238b8005599a976ecd7e6fd991" exitCode=137 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.121458 4895 generic.go:334] "Generic (PLEG): container finished" podID="ec28d57e-8ecf-4415-b18f-69bfa0514187" containerID="156f7c175f2582a5ecac54004c4a02a79357a47a8e30e1f567ebf0c9892821d7" exitCode=2 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.121558 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ec28d57e-8ecf-4415-b18f-69bfa0514187","Type":"ContainerDied","Data":"156f7c175f2582a5ecac54004c4a02a79357a47a8e30e1f567ebf0c9892821d7"} Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.129649 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-584b845d6f-78cbh"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.129930 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-584b845d6f-78cbh" podUID="09e95af5-a2ad-42ee-83a9-25cef915d0dc" containerName="proxy-httpd" containerID="cri-o://01b7e75885151f8a28d8cb5829b199e20877666375bdf87e612aef852c6f6c46" gracePeriod=30 Dec 06 07:31:33 crc 
kubenswrapper[4895]: I1206 07:31:33.130129 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-584b845d6f-78cbh" podUID="09e95af5-a2ad-42ee-83a9-25cef915d0dc" containerName="proxy-server" containerID="cri-o://38bc1a720f1bec95645eb9c53e763a74280470a9f127df6f30b1613abb5aaad5" gracePeriod=30 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.135680 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance55c9-account-delete-4b8gl" Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.140784 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mpfpb_d54fb90e-29d7-4df9-b09f-bd992972dc88/openstack-network-exporter/0.log" Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.140842 4895 generic.go:334] "Generic (PLEG): container finished" podID="d54fb90e-29d7-4df9-b09f-bd992972dc88" containerID="b6a7604abefe1444b026209f2779ebe71e0863f38495e270864aa2d5eaa61e31" exitCode=2 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.140963 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell01ea0-account-delete-2sg72"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.140997 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mpfpb" event={"ID":"d54fb90e-29d7-4df9-b09f-bd992972dc88","Type":"ContainerDied","Data":"b6a7604abefe1444b026209f2779ebe71e0863f38495e270864aa2d5eaa61e31"} Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.157845 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.158171 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124" containerName="nova-api-log" containerID="cri-o://1c15337273095c402baf16f57c52cac5a3445669c2517cbedf79ba80a4d78a94" gracePeriod=30 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.158562 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124" containerName="nova-api-api" containerID="cri-o://b540a38897b5cccc650243ffed7745515a30b25bc0e32302ca392cf373ef7ac7" gracePeriod=30 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.175914 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-859c997494-lcf5z"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.176326 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-859c997494-lcf5z" podUID="0921ccd3-f346-46b9-88af-e165de8ff32b" containerName="barbican-keystone-listener-log" containerID="cri-o://f772918d22085feff7f986c973ad071936dcff2e69c1f8d298ba9eba11341e18" gracePeriod=30 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.176423 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-859c997494-lcf5z" podUID="0921ccd3-f346-46b9-88af-e165de8ff32b" containerName="barbican-keystone-listener" containerID="cri-o://9eef442bb2d8f458e303a110fade0f3f49397f1f834b6ae2ba2186b103907a9d" gracePeriod=30 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.180675 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6hbt\" (UniqueName: 
\"kubernetes.io/projected/585dce37-3558-4a0c-8dfb-108c94c1047c-kube-api-access-l6hbt\") pod \"novacell01ea0-account-delete-2sg72\" (UID: \"585dce37-3558-4a0c-8dfb-108c94c1047c\") " pod="openstack/novacell01ea0-account-delete-2sg72" Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.180978 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k54cs\" (UniqueName: \"kubernetes.io/projected/5b854481-cd2a-4938-8b82-3288191b5bbe-kube-api-access-k54cs\") pod \"novaapif7f2-account-delete-zxqsn\" (UID: \"5b854481-cd2a-4938-8b82-3288191b5bbe\") " pod="openstack/novaapif7f2-account-delete-zxqsn" Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.181139 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585dce37-3558-4a0c-8dfb-108c94c1047c-operator-scripts\") pod \"novacell01ea0-account-delete-2sg72\" (UID: \"585dce37-3558-4a0c-8dfb-108c94c1047c\") " pod="openstack/novacell01ea0-account-delete-2sg72" Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.181224 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b854481-cd2a-4938-8b82-3288191b5bbe-operator-scripts\") pod \"novaapif7f2-account-delete-zxqsn\" (UID: \"5b854481-cd2a-4938-8b82-3288191b5bbe\") " pod="openstack/novaapif7f2-account-delete-zxqsn" Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.182306 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585dce37-3558-4a0c-8dfb-108c94c1047c-operator-scripts\") pod \"novacell01ea0-account-delete-2sg72\" (UID: \"585dce37-3558-4a0c-8dfb-108c94c1047c\") " pod="openstack/novacell01ea0-account-delete-2sg72" Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.201802 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b854481-cd2a-4938-8b82-3288191b5bbe-operator-scripts\") pod \"novaapif7f2-account-delete-zxqsn\" (UID: \"5b854481-cd2a-4938-8b82-3288191b5bbe\") " pod="openstack/novaapif7f2-account-delete-zxqsn" Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.215550 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k54cs\" (UniqueName: \"kubernetes.io/projected/5b854481-cd2a-4938-8b82-3288191b5bbe-kube-api-access-k54cs\") pod \"novaapif7f2-account-delete-zxqsn\" (UID: \"5b854481-cd2a-4938-8b82-3288191b5bbe\") " pod="openstack/novaapif7f2-account-delete-zxqsn" Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.216208 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6hbt\" (UniqueName: \"kubernetes.io/projected/585dce37-3558-4a0c-8dfb-108c94c1047c-kube-api-access-l6hbt\") pod \"novacell01ea0-account-delete-2sg72\" (UID: \"585dce37-3558-4a0c-8dfb-108c94c1047c\") " pod="openstack/novacell01ea0-account-delete-2sg72" Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.231545 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-55c5754c9b-2g2cg"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.231918 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-55c5754c9b-2g2cg" podUID="24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" containerName="barbican-api-log" 
containerID="cri-o://f6e1772fc1e3b4a1e20cfa0b70cf73a68a084a2daea75a81ee655fc0fe7abc9e" gracePeriod=30 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.232537 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-55c5754c9b-2g2cg" podUID="24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" containerName="barbican-api" containerID="cri-o://22b5d520fb5b105ce641543b32b2d0e8817cc0873901f3804e0a3eecf575919f" gracePeriod=30 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.234083 4895 generic.go:334] "Generic (PLEG): container finished" podID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerID="468c36ddfa04c0375e91b38b9d03a4849fff2aae471e8fb65d8b36405a987438" exitCode=0 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.234130 4895 generic.go:334] "Generic (PLEG): container finished" podID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerID="9cbd0224b85c8a430882c76ef6c4ea96f23027717b65a3fd1800ebeee11b9ea6" exitCode=0 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.234140 4895 generic.go:334] "Generic (PLEG): container finished" podID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerID="5390e87e60eff5498d3563e5dccff27ede47a6a293471f0a7d9c2ca23354855c" exitCode=0 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.234149 4895 generic.go:334] "Generic (PLEG): container finished" podID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerID="d88fdff0da3bb24a30ae1253952bff8962b2fd7e5173dd829fee80c77dc2670f" exitCode=0 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.234194 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerDied","Data":"468c36ddfa04c0375e91b38b9d03a4849fff2aae471e8fb65d8b36405a987438"} Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.234246 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerDied","Data":"9cbd0224b85c8a430882c76ef6c4ea96f23027717b65a3fd1800ebeee11b9ea6"} Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.234259 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerDied","Data":"5390e87e60eff5498d3563e5dccff27ede47a6a293471f0a7d9c2ca23354855c"} Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.234268 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerDied","Data":"d88fdff0da3bb24a30ae1253952bff8962b2fd7e5173dd829fee80c77dc2670f"} Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.234382 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="52d7b5bf-eae3-4832-b13b-be5f0734e4bb" containerName="cinder-api-log" containerID="cri-o://87e10027541f9994e9a662ec95d1dc56f5cd26543c9ae06d62f955e26582390b" gracePeriod=30 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.234533 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="52d7b5bf-eae3-4832-b13b-be5f0734e4bb" containerName="cinder-api" containerID="cri-o://32263a683bb152fcade8d6ea711b2647b1cdd056c745a933576b1f81801ceca7" gracePeriod=30 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.272514 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapif7f2-account-delete-zxqsn" Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.297440 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell01ea0-account-delete-2sg72" Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.300528 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-698445c967-xk6g2"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.300787 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-698445c967-xk6g2" podUID="46664967-bc44-4dd5-8fa7-419d1f7741fd" containerName="barbican-worker-log" containerID="cri-o://9df6c3348da2931d78ecc446fbd4746175039dc755b08b1089a9a7913d9218b1" gracePeriod=30 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.301231 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-698445c967-xk6g2" podUID="46664967-bc44-4dd5-8fa7-419d1f7741fd" containerName="barbican-worker" containerID="cri-o://b02063e5faf708de34c2034eb08a56c2da17f968cfa6ea952d0d97dba87a4a30" gracePeriod=30 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.361070 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.361378 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="53540f68-ae58-4d76-870b-3cc4b77eb1e3" containerName="nova-scheduler-scheduler" containerID="cri-o://858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893" gracePeriod=30 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.402263 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapif7f2-account-delete-zxqsn"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.446782 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.478215 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-bz7dn"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.498952 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-bz7dn"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.536809 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-8afe-account-create-update-6m7xh"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.542928 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-8afe-account-create-update-6m7xh"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.553846 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.554037 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="33f761e3-7f6a-4c1b-b41d-32a14558a756" containerName="nova-cell1-conductor-conductor" containerID="cri-o://ab987bf3d17d2a4f80aabf5e63289da7ed99a8572c492b80f9f3397cd06b9b7f" gracePeriod=30 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.570540 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.570808 4895 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-cell0-conductor-0" podUID="eb9a60a7-bd12-495d-b0c3-feebe0f65bf8" containerName="nova-cell0-conductor-conductor" containerID="cri-o://095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc" gracePeriod=30 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.585131 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hmw66"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.593129 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hmw66"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.604252 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zt8v2"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.614830 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.615058 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="04293385-9ad8-4686-a3d3-e39d586a7e6f" containerName="nova-metadata-log" containerID="cri-o://b91b7e983d4a1b355dc874f73fbaa1ce58e3e040d07274d2e14ec839fcb170dc" gracePeriod=30 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.615520 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="04293385-9ad8-4686-a3d3-e39d586a7e6f" containerName="nova-metadata-metadata" containerID="cri-o://b777a5a301fd4f77108bae41580433834a4fcb0d280446b3355223fe60884897" gracePeriod=30 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.623587 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zt8v2"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.641540 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.641761 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="549e9969-79a0-45d9-a093-0b58ad1bc359" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://085b9f1f4088bb302b9ed0d8a72f3e2d171c7b741007adb8f5288a5c1833bb96" gracePeriod=30 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.665861 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinderdb70-account-delete-lqhcq"] Dec 06 07:31:33 crc kubenswrapper[4895]: E1206 07:31:33.736000 4895 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Dec 06 07:31:33 crc kubenswrapper[4895]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Dec 06 07:31:33 crc kubenswrapper[4895]: + source /usr/local/bin/container-scripts/functions Dec 06 07:31:33 crc kubenswrapper[4895]: ++ OVNBridge=br-int Dec 06 07:31:33 crc kubenswrapper[4895]: ++ OVNRemote=tcp:localhost:6642 Dec 06 07:31:33 crc kubenswrapper[4895]: ++ OVNEncapType=geneve Dec 06 07:31:33 crc kubenswrapper[4895]: ++ OVNAvailabilityZones= Dec 06 07:31:33 crc kubenswrapper[4895]: ++ EnableChassisAsGateway=true Dec 06 07:31:33 crc kubenswrapper[4895]: ++ PhysicalNetworks= Dec 06 07:31:33 crc kubenswrapper[4895]: ++ OVNHostName= Dec 06 07:31:33 crc kubenswrapper[4895]: ++ DB_FILE=/etc/openvswitch/conf.db Dec 06 07:31:33 crc kubenswrapper[4895]: ++ ovs_dir=/var/lib/openvswitch Dec 06 07:31:33 crc kubenswrapper[4895]: ++ 
FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Dec 06 07:31:33 crc kubenswrapper[4895]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Dec 06 07:31:33 crc kubenswrapper[4895]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 06 07:31:33 crc kubenswrapper[4895]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 06 07:31:33 crc kubenswrapper[4895]: + sleep 0.5 Dec 06 07:31:33 crc kubenswrapper[4895]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 06 07:31:33 crc kubenswrapper[4895]: + sleep 0.5 Dec 06 07:31:33 crc kubenswrapper[4895]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 06 07:31:33 crc kubenswrapper[4895]: + sleep 0.5 Dec 06 07:31:33 crc kubenswrapper[4895]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 06 07:31:33 crc kubenswrapper[4895]: + cleanup_ovsdb_server_semaphore Dec 06 07:31:33 crc kubenswrapper[4895]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 06 07:31:33 crc kubenswrapper[4895]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Dec 06 07:31:33 crc kubenswrapper[4895]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-qnbdj" message=< Dec 06 07:31:33 crc kubenswrapper[4895]: Exiting ovsdb-server (5) ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Dec 06 07:31:33 crc kubenswrapper[4895]: + source /usr/local/bin/container-scripts/functions Dec 06 07:31:33 crc kubenswrapper[4895]: ++ OVNBridge=br-int Dec 06 07:31:33 crc kubenswrapper[4895]: ++ OVNRemote=tcp:localhost:6642 Dec 06 07:31:33 crc kubenswrapper[4895]: ++ OVNEncapType=geneve Dec 06 07:31:33 crc kubenswrapper[4895]: ++ OVNAvailabilityZones= Dec 06 07:31:33 crc kubenswrapper[4895]: ++ EnableChassisAsGateway=true Dec 06 07:31:33 crc kubenswrapper[4895]: ++ PhysicalNetworks= Dec 06 07:31:33 crc kubenswrapper[4895]: ++ OVNHostName= Dec 06 07:31:33 crc kubenswrapper[4895]: ++ DB_FILE=/etc/openvswitch/conf.db Dec 06 07:31:33 crc kubenswrapper[4895]: ++ ovs_dir=/var/lib/openvswitch Dec 06 07:31:33 crc kubenswrapper[4895]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Dec 06 07:31:33 crc kubenswrapper[4895]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Dec 06 07:31:33 crc kubenswrapper[4895]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 06 07:31:33 crc kubenswrapper[4895]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 06 07:31:33 crc kubenswrapper[4895]: + sleep 0.5 Dec 06 07:31:33 crc kubenswrapper[4895]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 06 07:31:33 crc kubenswrapper[4895]: + sleep 0.5 Dec 06 07:31:33 crc kubenswrapper[4895]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 06 07:31:33 crc kubenswrapper[4895]: + sleep 0.5 Dec 06 07:31:33 crc kubenswrapper[4895]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 06 07:31:33 crc kubenswrapper[4895]: + cleanup_ovsdb_server_semaphore Dec 06 07:31:33 crc kubenswrapper[4895]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 06 07:31:33 crc kubenswrapper[4895]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Dec 06 07:31:33 crc kubenswrapper[4895]: > Dec 06 07:31:33 crc kubenswrapper[4895]: E1206 07:31:33.736055 4895 kuberuntime_container.go:691] "PreStop hook failed" err=< Dec 06 07:31:33 crc kubenswrapper[4895]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Dec 06 07:31:33 crc kubenswrapper[4895]: + source /usr/local/bin/container-scripts/functions Dec 06 07:31:33 crc kubenswrapper[4895]: ++ OVNBridge=br-int Dec 06 07:31:33 crc kubenswrapper[4895]: ++ OVNRemote=tcp:localhost:6642 Dec 06 07:31:33 crc kubenswrapper[4895]: ++ OVNEncapType=geneve Dec 06 07:31:33 crc kubenswrapper[4895]: ++ OVNAvailabilityZones= Dec 06 07:31:33 crc kubenswrapper[4895]: ++ EnableChassisAsGateway=true Dec 06 07:31:33 crc kubenswrapper[4895]: ++ PhysicalNetworks= Dec 06 07:31:33 crc kubenswrapper[4895]: ++ OVNHostName= Dec 06 07:31:33 crc kubenswrapper[4895]: ++ DB_FILE=/etc/openvswitch/conf.db Dec 06 07:31:33 crc kubenswrapper[4895]: ++ ovs_dir=/var/lib/openvswitch Dec 06 07:31:33 crc kubenswrapper[4895]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Dec 06 07:31:33 crc kubenswrapper[4895]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Dec 06 07:31:33 crc kubenswrapper[4895]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 06 07:31:33 crc kubenswrapper[4895]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 06 07:31:33 crc kubenswrapper[4895]: + sleep 0.5 Dec 06 07:31:33 crc kubenswrapper[4895]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 06 07:31:33 crc kubenswrapper[4895]: + sleep 0.5 Dec 06 07:31:33 crc kubenswrapper[4895]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 06 07:31:33 crc kubenswrapper[4895]: + sleep 0.5 Dec 06 07:31:33 crc kubenswrapper[4895]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 06 07:31:33 crc kubenswrapper[4895]: + cleanup_ovsdb_server_semaphore Dec 06 07:31:33 crc kubenswrapper[4895]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 06 07:31:33 crc kubenswrapper[4895]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Dec 06 07:31:33 crc kubenswrapper[4895]: > pod="openstack/ovn-controller-ovs-qnbdj" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovsdb-server" containerID="cri-o://84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.736112 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-qnbdj" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovsdb-server" containerID="cri-o://84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" gracePeriod=28 Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.790847 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="549e9969-79a0-45d9-a093-0b58ad1bc359" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.202:6080/vnc_lite.html\": dial tcp 10.217.0.202:6080: connect: connection refused" Dec 06 07:31:33 crc kubenswrapper[4895]: I1206 07:31:33.878232 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-qnbdj" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovs-vswitchd" containerID="cri-o://6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" gracePeriod=28 Dec 06 07:31:33 crc kubenswrapper[4895]: E1206 07:31:33.974319 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 06 07:31:33 crc kubenswrapper[4895]: E1206 07:31:33.997356 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 06 07:31:34 crc kubenswrapper[4895]: E1206 07:31:34.011586 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 06 07:31:34 crc kubenswrapper[4895]: E1206 07:31:34.011652 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="eb9a60a7-bd12-495d-b0c3-feebe0f65bf8" containerName="nova-cell0-conductor-conductor" Dec 06 07:31:34 crc kubenswrapper[4895]: E1206 07:31:34.062546 4895 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Dec 06 07:31:34 crc kubenswrapper[4895]: E1206 07:31:34.063164 4895 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-scripts podName:52d7b5bf-eae3-4832-b13b-be5f0734e4bb nodeName:}" failed. No retries permitted until 2025-12-06 07:31:36.063148363 +0000 UTC m=+2058.464537233 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-scripts") pod "cinder-api-0" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb") : secret "cinder-scripts" not found Dec 06 07:31:34 crc kubenswrapper[4895]: E1206 07:31:34.063329 4895 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 06 07:31:34 crc kubenswrapper[4895]: E1206 07:31:34.063406 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-config-data podName:e963d73b-d3f2-4c70-8dbd-687b3fc1962d nodeName:}" failed. No retries permitted until 2025-12-06 07:31:38.063383779 +0000 UTC m=+2060.464772639 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-config-data") pod "rabbitmq-cell1-server-0" (UID: "e963d73b-d3f2-4c70-8dbd-687b3fc1962d") : configmap "rabbitmq-cell1-config-data" not found Dec 06 07:31:34 crc kubenswrapper[4895]: E1206 07:31:34.063501 4895 secret.go:188] Couldn't get secret openstack/cinder-api-config-data: secret "cinder-api-config-data" not found Dec 06 07:31:34 crc kubenswrapper[4895]: E1206 07:31:34.063531 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data-custom podName:52d7b5bf-eae3-4832-b13b-be5f0734e4bb nodeName:}" failed. No retries permitted until 2025-12-06 07:31:36.063524123 +0000 UTC m=+2058.464912993 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data-custom") pod "cinder-api-0" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb") : secret "cinder-api-config-data" not found Dec 06 07:31:34 crc kubenswrapper[4895]: E1206 07:31:34.063947 4895 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Dec 06 07:31:34 crc kubenswrapper[4895]: E1206 07:31:34.064007 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data podName:52d7b5bf-eae3-4832-b13b-be5f0734e4bb nodeName:}" failed. No retries permitted until 2025-12-06 07:31:36.063991566 +0000 UTC m=+2058.465380436 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data") pod "cinder-api-0" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb") : secret "cinder-config-data" not found Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.107907 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04872869-17a2-4cb6-9222-3b265dddf350" path="/var/lib/kubelet/pods/04872869-17a2-4cb6-9222-3b265dddf350/volumes" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.109744 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34b001b3-7a17-444d-8dd9-5e296f84770b" path="/var/lib/kubelet/pods/34b001b3-7a17-444d-8dd9-5e296f84770b/volumes" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.110819 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4337be07-550a-4421-917f-5969980e230d" path="/var/lib/kubelet/pods/4337be07-550a-4421-917f-5969980e230d/volumes" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.116649 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f394cd1-aa14-48fa-8643-30d896f0823e" path="/var/lib/kubelet/pods/4f394cd1-aa14-48fa-8643-30d896f0823e/volumes" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.117673 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51d931e2-40e6-4bb5-8b4f-3252852effd0" path="/var/lib/kubelet/pods/51d931e2-40e6-4bb5-8b4f-3252852effd0/volumes" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.118524 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584b2782-7ea9-4697-8862-ae2090bc918c" path="/var/lib/kubelet/pods/584b2782-7ea9-4697-8862-ae2090bc918c/volumes" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.142749 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9351ace1-bec9-4251-866f-72d283f59ec3" path="/var/lib/kubelet/pods/9351ace1-bec9-4251-866f-72d283f59ec3/volumes" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.149085 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d6f8ca1-f659-4ab6-86bf-c67f017c4166" path="/var/lib/kubelet/pods/9d6f8ca1-f659-4ab6-86bf-c67f017c4166/volumes" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.156704 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faea84b4-6c7f-4d6c-b42e-14ba117920d1" path="/var/lib/kubelet/pods/faea84b4-6c7f-4d6c-b42e-14ba117920d1/volumes" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.210221 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" containerName="galera" containerID="cri-o://f907b06f8ee70e6e66e5862860a2218d5683f85f48108d1b129302c31f3a7602" gracePeriod=30 Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.244615 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutronc074-account-delete-5dkk6"] Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.288452 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a617d6b4-721c-4087-bc16-70bcb58b9c69/ovsdbserver-sb/0.log" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.288554 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.304375 4895 generic.go:334] "Generic (PLEG): container finished" podID="fa5d7561-2042-4dcc-8ddc-336475230720" containerID="396c5517a5377de34e58194ec2e688e2eb5546de17a7216da1f043b7e210861c" exitCode=143 Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.304509 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa5d7561-2042-4dcc-8ddc-336475230720","Type":"ContainerDied","Data":"396c5517a5377de34e58194ec2e688e2eb5546de17a7216da1f043b7e210861c"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.306680 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.333224 4895 generic.go:334] "Generic (PLEG): container finished" podID="0921ccd3-f346-46b9-88af-e165de8ff32b" containerID="f772918d22085feff7f986c973ad071936dcff2e69c1f8d298ba9eba11341e18" exitCode=143 Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.333352 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-859c997494-lcf5z" event={"ID":"0921ccd3-f346-46b9-88af-e165de8ff32b","Type":"ContainerDied","Data":"f772918d22085feff7f986c973ad071936dcff2e69c1f8d298ba9eba11341e18"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.333387 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement00ad-account-delete-6xhps"] Dec 06 07:31:34 crc kubenswrapper[4895]: E1206 07:31:34.347265 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod275e5518_922b_455d_a5d5_7b072a12ab07.slice/crio-4f8e9ae1388bc7994b5365380a4bd5e84d80b90cafe1780718a2555d9c3d7e69.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43a2bfd7_f0c6_4b55_b629_2e11d6b45a42.slice/crio-e031534957b9fccda0363a790f71a039ee246b5fbf68723177270eb631a9658b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43a2bfd7_f0c6_4b55_b629_2e11d6b45a42.slice/crio-conmon-12eb91bc2a51f4766807671be1ba08375fb07f1bfcf2b4debe6b359d5cf1ad3a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod588e7e7b_f1fb_4e68_846a_04c6a23bec39.slice/crio-84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24216683_0ca8_44dd_8bfa_a7d0a84cf3cc.slice/crio-conmon-f6e1772fc1e3b4a1e20cfa0b70cf73a68a084a2daea75a81ee655fc0fe7abc9e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43a2bfd7_f0c6_4b55_b629_2e11d6b45a42.slice/crio-conmon-3ef3301fb5b94d56ebbbf77fe821db08595a72ce6dc8b57263d0355011539f31.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43a2bfd7_f0c6_4b55_b629_2e11d6b45a42.slice/crio-conmon-95dadf2ac42ccfd2a7bccc9ed9a272bfdd8c736e08c731df6f9d73b086d6a880.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09e95af5_a2ad_42ee_83a9_25cef915d0dc.slice/crio-conmon-01b7e75885151f8a28d8cb5829b199e20877666375bdf87e612aef852c6f6c46.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43a2bfd7_f0c6_4b55_b629_2e11d6b45a42.slice/crio-conmon-2951f946e28728b5afab411ba269775aef6e893b3da37ae09e4c6ad9b2e2cd1d.scope\": RecentStats: unable to find data in memory cache]" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.358304 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"52d7b5bf-eae3-4832-b13b-be5f0734e4bb","Type":"ContainerDied","Data":"87e10027541f9994e9a662ec95d1dc56f5cd26543c9ae06d62f955e26582390b"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.358250 4895 generic.go:334] "Generic (PLEG): container finished" podID="52d7b5bf-eae3-4832-b13b-be5f0734e4bb" containerID="87e10027541f9994e9a662ec95d1dc56f5cd26543c9ae06d62f955e26582390b" exitCode=143 Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.363003 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican18b2-account-delete-zdlkh"] Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.422689 4895 generic.go:334] "Generic (PLEG): container finished" podID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerID="016b3b4ddfb130737f5910cdc3627785db79e92d28ff53c479c8a05d88f0d4bb" exitCode=0 Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.423056 4895 generic.go:334] "Generic (PLEG): container finished" podID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerID="6c0595c85ab20846664ac79d1f96e53f167ef4c98a6f2705bd28abc3d10e0b7d" exitCode=0 Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.423069 4895 generic.go:334] "Generic (PLEG): container finished" podID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerID="b330db62e15f40daaac157cd4c49b8c144883337b31335984bf5592ae231a59c" exitCode=0 Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.423079 4895 generic.go:334] "Generic (PLEG): container finished" podID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerID="2951f946e28728b5afab411ba269775aef6e893b3da37ae09e4c6ad9b2e2cd1d" exitCode=0 Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.423089 4895 generic.go:334] "Generic (PLEG): container finished" podID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerID="12eb91bc2a51f4766807671be1ba08375fb07f1bfcf2b4debe6b359d5cf1ad3a" exitCode=0 Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.423123 4895 generic.go:334] "Generic (PLEG): container finished" podID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerID="fa9d83bbd1bfb2f7ebd0b8374526974f2e013372bc11e48787e237b85890d529" exitCode=0 Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.423134 4895 generic.go:334] "Generic (PLEG): container finished" podID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerID="a45b30a9ac61253c7662e8944033d9348f16b13301f55a8a8a2040cd78bdd894" exitCode=0 Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.423143 4895 generic.go:334] "Generic (PLEG): container finished" podID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerID="95dadf2ac42ccfd2a7bccc9ed9a272bfdd8c736e08c731df6f9d73b086d6a880" exitCode=0 Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.423152 4895 generic.go:334] "Generic (PLEG): container finished" podID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerID="3ef3301fb5b94d56ebbbf77fe821db08595a72ce6dc8b57263d0355011539f31" exitCode=0 Dec 06 07:31:34 crc 
kubenswrapper[4895]: I1206 07:31:34.423240 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerDied","Data":"016b3b4ddfb130737f5910cdc3627785db79e92d28ff53c479c8a05d88f0d4bb"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.423302 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerDied","Data":"6c0595c85ab20846664ac79d1f96e53f167ef4c98a6f2705bd28abc3d10e0b7d"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.423319 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerDied","Data":"b330db62e15f40daaac157cd4c49b8c144883337b31335984bf5592ae231a59c"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.423332 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerDied","Data":"2951f946e28728b5afab411ba269775aef6e893b3da37ae09e4c6ad9b2e2cd1d"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.423373 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerDied","Data":"12eb91bc2a51f4766807671be1ba08375fb07f1bfcf2b4debe6b359d5cf1ad3a"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.423387 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerDied","Data":"fa9d83bbd1bfb2f7ebd0b8374526974f2e013372bc11e48787e237b85890d529"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.423400 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerDied","Data":"a45b30a9ac61253c7662e8944033d9348f16b13301f55a8a8a2040cd78bdd894"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.423412 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerDied","Data":"95dadf2ac42ccfd2a7bccc9ed9a272bfdd8c736e08c731df6f9d73b086d6a880"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.423446 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerDied","Data":"3ef3301fb5b94d56ebbbf77fe821db08595a72ce6dc8b57263d0355011539f31"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.425943 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinderdb70-account-delete-lqhcq" event={"ID":"faf28c62-87dc-461a-bf5a-4ae13d62e489","Type":"ContainerStarted","Data":"f9840e1f21720d6c72eeb1b347079b8a6d02c54c63afbf722c5c5534b7abd35d"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.448180 4895 generic.go:334] "Generic (PLEG): container finished" podID="d5739e86-0fb8-4368-91ae-f2a09bb9848c" containerID="c0d8057c614bf57165265f6274705c1b72d6138ba82d542917a4bf68a493d896" exitCode=143 Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.448277 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"d5739e86-0fb8-4368-91ae-f2a09bb9848c","Type":"ContainerDied","Data":"c0d8057c614bf57165265f6274705c1b72d6138ba82d542917a4bf68a493d896"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.459371 4895 generic.go:334] "Generic (PLEG): container finished" podID="46664967-bc44-4dd5-8fa7-419d1f7741fd" containerID="9df6c3348da2931d78ecc446fbd4746175039dc755b08b1089a9a7913d9218b1" exitCode=143 Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.459455 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-698445c967-xk6g2" event={"ID":"46664967-bc44-4dd5-8fa7-419d1f7741fd","Type":"ContainerDied","Data":"9df6c3348da2931d78ecc446fbd4746175039dc755b08b1089a9a7913d9218b1"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.480936 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican18b2-account-delete-zdlkh" event={"ID":"d5401d7f-627c-410f-ae61-d7653749a7d3","Type":"ContainerStarted","Data":"de67e65c38912fb61e33cf94871c7df1f8684a86a409bfd2de2ebde7a739852c"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.488351 4895 generic.go:334] "Generic (PLEG): container finished" podID="24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" containerID="f6e1772fc1e3b4a1e20cfa0b70cf73a68a084a2daea75a81ee655fc0fe7abc9e" exitCode=143 Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.488447 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55c5754c9b-2g2cg" event={"ID":"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc","Type":"ContainerDied","Data":"f6e1772fc1e3b4a1e20cfa0b70cf73a68a084a2daea75a81ee655fc0fe7abc9e"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.490047 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutronc074-account-delete-5dkk6" event={"ID":"a103ad6f-b726-4ad6-9aec-a689a74a4304","Type":"ContainerStarted","Data":"496eee51e515abd32fd065f53520f0f50ed06f94bd48098d3e1c6bbf036c1de0"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.493523 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a617d6b4-721c-4087-bc16-70bcb58b9c69-config\") pod \"a617d6b4-721c-4087-bc16-70bcb58b9c69\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.493636 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a617d6b4-721c-4087-bc16-70bcb58b9c69-metrics-certs-tls-certs\") pod \"a617d6b4-721c-4087-bc16-70bcb58b9c69\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.493735 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrnxm\" (UniqueName: \"kubernetes.io/projected/cf257472-30d7-4719-9b36-47c30f1db7ec-kube-api-access-qrnxm\") pod \"cf257472-30d7-4719-9b36-47c30f1db7ec\" (UID: \"cf257472-30d7-4719-9b36-47c30f1db7ec\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.493801 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a617d6b4-721c-4087-bc16-70bcb58b9c69-combined-ca-bundle\") pod \"a617d6b4-721c-4087-bc16-70bcb58b9c69\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.493835 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a617d6b4-721c-4087-bc16-70bcb58b9c69-ovsdbserver-sb-tls-certs\") pod \"a617d6b4-721c-4087-bc16-70bcb58b9c69\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.493977 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"a617d6b4-721c-4087-bc16-70bcb58b9c69\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.494038 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnthg\" (UniqueName: \"kubernetes.io/projected/a617d6b4-721c-4087-bc16-70bcb58b9c69-kube-api-access-fnthg\") pod \"a617d6b4-721c-4087-bc16-70bcb58b9c69\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.494087 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a617d6b4-721c-4087-bc16-70bcb58b9c69-ovsdb-rundir\") pod \"a617d6b4-721c-4087-bc16-70bcb58b9c69\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.494114 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf257472-30d7-4719-9b36-47c30f1db7ec-combined-ca-bundle\") pod \"cf257472-30d7-4719-9b36-47c30f1db7ec\" (UID: \"cf257472-30d7-4719-9b36-47c30f1db7ec\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.494177 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf257472-30d7-4719-9b36-47c30f1db7ec-openstack-config-secret\") pod \"cf257472-30d7-4719-9b36-47c30f1db7ec\" (UID: \"cf257472-30d7-4719-9b36-47c30f1db7ec\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.494255 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf257472-30d7-4719-9b36-47c30f1db7ec-openstack-config\") pod \"cf257472-30d7-4719-9b36-47c30f1db7ec\" (UID: \"cf257472-30d7-4719-9b36-47c30f1db7ec\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.494368 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a617d6b4-721c-4087-bc16-70bcb58b9c69-scripts\") pod \"a617d6b4-721c-4087-bc16-70bcb58b9c69\" (UID: \"a617d6b4-721c-4087-bc16-70bcb58b9c69\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.496085 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a617d6b4-721c-4087-bc16-70bcb58b9c69-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "a617d6b4-721c-4087-bc16-70bcb58b9c69" (UID: "a617d6b4-721c-4087-bc16-70bcb58b9c69"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.496366 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a617d6b4-721c-4087-bc16-70bcb58b9c69-scripts" (OuterVolumeSpecName: "scripts") pod "a617d6b4-721c-4087-bc16-70bcb58b9c69" (UID: "a617d6b4-721c-4087-bc16-70bcb58b9c69"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.496417 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a617d6b4-721c-4087-bc16-70bcb58b9c69-config" (OuterVolumeSpecName: "config") pod "a617d6b4-721c-4087-bc16-70bcb58b9c69" (UID: "a617d6b4-721c-4087-bc16-70bcb58b9c69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.511410 4895 generic.go:334] "Generic (PLEG): container finished" podID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerID="84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" exitCode=0 Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.511679 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qnbdj" event={"ID":"588e7e7b-f1fb-4e68-846a-04c6a23bec39","Type":"ContainerDied","Data":"84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.518689 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "a617d6b4-721c-4087-bc16-70bcb58b9c69" (UID: "a617d6b4-721c-4087-bc16-70bcb58b9c69"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.518784 4895 generic.go:334] "Generic (PLEG): container finished" podID="ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124" containerID="1c15337273095c402baf16f57c52cac5a3445669c2517cbedf79ba80a4d78a94" exitCode=143 Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.518870 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124","Type":"ContainerDied","Data":"1c15337273095c402baf16f57c52cac5a3445669c2517cbedf79ba80a4d78a94"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.523986 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf257472-30d7-4719-9b36-47c30f1db7ec-kube-api-access-qrnxm" (OuterVolumeSpecName: "kube-api-access-qrnxm") pod "cf257472-30d7-4719-9b36-47c30f1db7ec" (UID: "cf257472-30d7-4719-9b36-47c30f1db7ec"). InnerVolumeSpecName "kube-api-access-qrnxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.524228 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement00ad-account-delete-6xhps" event={"ID":"b6fc6ccb-af32-472e-b0f5-11cb224b4885","Type":"ContainerStarted","Data":"0e124448b5f0eed5933c837c60f0c79ffc31bec312f35fa6f6337c0587e199ba"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.526136 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a617d6b4-721c-4087-bc16-70bcb58b9c69-kube-api-access-fnthg" (OuterVolumeSpecName: "kube-api-access-fnthg") pod "a617d6b4-721c-4087-bc16-70bcb58b9c69" (UID: "a617d6b4-721c-4087-bc16-70bcb58b9c69"). InnerVolumeSpecName "kube-api-access-fnthg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.543115 4895 generic.go:334] "Generic (PLEG): container finished" podID="09e95af5-a2ad-42ee-83a9-25cef915d0dc" containerID="01b7e75885151f8a28d8cb5829b199e20877666375bdf87e612aef852c6f6c46" exitCode=0 Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.543177 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-584b845d6f-78cbh" event={"ID":"09e95af5-a2ad-42ee-83a9-25cef915d0dc","Type":"ContainerDied","Data":"01b7e75885151f8a28d8cb5829b199e20877666375bdf87e612aef852c6f6c46"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.543482 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mpfpb_d54fb90e-29d7-4df9-b09f-bd992972dc88/openstack-network-exporter/0.log" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.543535 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mpfpb" Dec 06 07:31:34 crc kubenswrapper[4895]: E1206 07:31:34.548992 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab987bf3d17d2a4f80aabf5e63289da7ed99a8572c492b80f9f3397cd06b9b7f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.553801 4895 scope.go:117] "RemoveContainer" containerID="592289ccf63d36f99de0c41ad4893019feb1c8238b8005599a976ecd7e6fd991" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.553898 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf257472-30d7-4719-9b36-47c30f1db7ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf257472-30d7-4719-9b36-47c30f1db7ec" (UID: "cf257472-30d7-4719-9b36-47c30f1db7ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.553981 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 06 07:31:34 crc kubenswrapper[4895]: E1206 07:31:34.561259 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab987bf3d17d2a4f80aabf5e63289da7ed99a8572c492b80f9f3397cd06b9b7f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.562440 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf257472-30d7-4719-9b36-47c30f1db7ec-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "cf257472-30d7-4719-9b36-47c30f1db7ec" (UID: "cf257472-30d7-4719-9b36-47c30f1db7ec"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:34 crc kubenswrapper[4895]: E1206 07:31:34.570856 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab987bf3d17d2a4f80aabf5e63289da7ed99a8572c492b80f9f3397cd06b9b7f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 06 07:31:34 crc kubenswrapper[4895]: E1206 07:31:34.570925 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="33f761e3-7f6a-4c1b-b41d-32a14558a756" containerName="nova-cell1-conductor-conductor" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.609493 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnthg\" (UniqueName: \"kubernetes.io/projected/a617d6b4-721c-4087-bc16-70bcb58b9c69-kube-api-access-fnthg\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.609541 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a617d6b4-721c-4087-bc16-70bcb58b9c69-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.609553 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf257472-30d7-4719-9b36-47c30f1db7ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.609565 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf257472-30d7-4719-9b36-47c30f1db7ec-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.609576 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a617d6b4-721c-4087-bc16-70bcb58b9c69-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.609585 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a617d6b4-721c-4087-bc16-70bcb58b9c69-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.609594 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrnxm\" (UniqueName: \"kubernetes.io/projected/cf257472-30d7-4719-9b36-47c30f1db7ec-kube-api-access-qrnxm\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.609622 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.622900 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ec28d57e-8ecf-4415-b18f-69bfa0514187/ovsdbserver-nb/0.log" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.623356 4895 generic.go:334] "Generic (PLEG): container finished" podID="ec28d57e-8ecf-4415-b18f-69bfa0514187" containerID="214631b61aad2658bf3fd72e9c59668fc147ff6e83cf726ca8f32206cfcc972a" exitCode=143 Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.623428 4895 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ec28d57e-8ecf-4415-b18f-69bfa0514187","Type":"ContainerDied","Data":"214631b61aad2658bf3fd72e9c59668fc147ff6e83cf726ca8f32206cfcc972a"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.623661 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.632545 4895 generic.go:334] "Generic (PLEG): container finished" podID="04293385-9ad8-4686-a3d3-e39d586a7e6f" containerID="b91b7e983d4a1b355dc874f73fbaa1ce58e3e040d07274d2e14ec839fcb170dc" exitCode=143 Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.632643 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04293385-9ad8-4686-a3d3-e39d586a7e6f","Type":"ContainerDied","Data":"b91b7e983d4a1b355dc874f73fbaa1ce58e3e040d07274d2e14ec839fcb170dc"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.636643 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a617d6b4-721c-4087-bc16-70bcb58b9c69/ovsdbserver-sb/0.log" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.636704 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a617d6b4-721c-4087-bc16-70bcb58b9c69","Type":"ContainerDied","Data":"4397b88d2b64bc2761866ba342fa4bfe20c8e256a4f354bc51bf1798cb2ed16b"} Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.636810 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.645865 4895 scope.go:117] "RemoveContainer" containerID="5e79ef9fb346fc367bb36aff8fa7a7e5ecd4ed177678c52a5893ae9529169abe" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.665197 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance55c9-account-delete-4b8gl"] Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.710357 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkzwp\" (UniqueName: \"kubernetes.io/projected/d54fb90e-29d7-4df9-b09f-bd992972dc88-kube-api-access-jkzwp\") pod \"d54fb90e-29d7-4df9-b09f-bd992972dc88\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.710397 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d54fb90e-29d7-4df9-b09f-bd992972dc88-ovs-rundir\") pod \"d54fb90e-29d7-4df9-b09f-bd992972dc88\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.710491 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d54fb90e-29d7-4df9-b09f-bd992972dc88-metrics-certs-tls-certs\") pod \"d54fb90e-29d7-4df9-b09f-bd992972dc88\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.710666 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d54fb90e-29d7-4df9-b09f-bd992972dc88-combined-ca-bundle\") pod \"d54fb90e-29d7-4df9-b09f-bd992972dc88\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.710772 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d54fb90e-29d7-4df9-b09f-bd992972dc88-ovn-rundir\") pod \"d54fb90e-29d7-4df9-b09f-bd992972dc88\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.710795 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d54fb90e-29d7-4df9-b09f-bd992972dc88-config\") pod \"d54fb90e-29d7-4df9-b09f-bd992972dc88\" (UID: \"d54fb90e-29d7-4df9-b09f-bd992972dc88\") " Dec 06 07:31:34 crc kubenswrapper[4895]: E1206 07:31:34.711462 4895 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 06 07:31:34 crc kubenswrapper[4895]: E1206 07:31:34.711541 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-config-data podName:8fa39160-bfb2-49ae-b2ca-12c0e5788996 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:38.711524831 +0000 UTC m=+2061.112913701 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-config-data") pod "rabbitmq-server-0" (UID: "8fa39160-bfb2-49ae-b2ca-12c0e5788996") : configmap "rabbitmq-config-data" not found Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.712464 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d54fb90e-29d7-4df9-b09f-bd992972dc88-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "d54fb90e-29d7-4df9-b09f-bd992972dc88" (UID: "d54fb90e-29d7-4df9-b09f-bd992972dc88"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.714700 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d54fb90e-29d7-4df9-b09f-bd992972dc88-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "d54fb90e-29d7-4df9-b09f-bd992972dc88" (UID: "d54fb90e-29d7-4df9-b09f-bd992972dc88"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.715245 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d54fb90e-29d7-4df9-b09f-bd992972dc88-config" (OuterVolumeSpecName: "config") pod "d54fb90e-29d7-4df9-b09f-bd992972dc88" (UID: "d54fb90e-29d7-4df9-b09f-bd992972dc88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.730052 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d54fb90e-29d7-4df9-b09f-bd992972dc88-kube-api-access-jkzwp" (OuterVolumeSpecName: "kube-api-access-jkzwp") pod "d54fb90e-29d7-4df9-b09f-bd992972dc88" (UID: "d54fb90e-29d7-4df9-b09f-bd992972dc88"). InnerVolumeSpecName "kube-api-access-jkzwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.753486 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-584b845d6f-78cbh" podUID="09e95af5-a2ad-42ee-83a9-25cef915d0dc" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.149:8080/healthcheck\": dial tcp 10.217.0.149:8080: connect: connection refused" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.753507 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-584b845d6f-78cbh" podUID="09e95af5-a2ad-42ee-83a9-25cef915d0dc" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.149:8080/healthcheck\": dial tcp 10.217.0.149:8080: connect: connection refused" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.812905 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnxvd\" (UniqueName: \"kubernetes.io/projected/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-kube-api-access-pnxvd\") pod \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.813085 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-config\") pod \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.814020 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-ovsdbserver-nb\") pod \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.814390 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-dns-svc\") pod \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.814413 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-dns-swift-storage-0\") pod \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.814431 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-ovsdbserver-sb\") pod \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\" (UID: \"f3e26c07-9fe5-4b2b-b7bc-1b008e904792\") " Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.820388 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-kube-api-access-pnxvd" (OuterVolumeSpecName: "kube-api-access-pnxvd") pod "f3e26c07-9fe5-4b2b-b7bc-1b008e904792" (UID: "f3e26c07-9fe5-4b2b-b7bc-1b008e904792"). InnerVolumeSpecName "kube-api-access-pnxvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.825542 4895 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d54fb90e-29d7-4df9-b09f-bd992972dc88-ovn-rundir\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.825574 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d54fb90e-29d7-4df9-b09f-bd992972dc88-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.825659 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkzwp\" (UniqueName: \"kubernetes.io/projected/d54fb90e-29d7-4df9-b09f-bd992972dc88-kube-api-access-jkzwp\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.825674 4895 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d54fb90e-29d7-4df9-b09f-bd992972dc88-ovs-rundir\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.862594 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a617d6b4-721c-4087-bc16-70bcb58b9c69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a617d6b4-721c-4087-bc16-70bcb58b9c69" (UID: "a617d6b4-721c-4087-bc16-70bcb58b9c69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.879678 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.935434 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnxvd\" (UniqueName: \"kubernetes.io/projected/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-kube-api-access-pnxvd\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.936922 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a617d6b4-721c-4087-bc16-70bcb58b9c69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.937041 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.955917 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell01ea0-account-delete-2sg72"] Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.956980 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d54fb90e-29d7-4df9-b09f-bd992972dc88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d54fb90e-29d7-4df9-b09f-bd992972dc88" (UID: "d54fb90e-29d7-4df9-b09f-bd992972dc88"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.981613 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f3e26c07-9fe5-4b2b-b7bc-1b008e904792" (UID: "f3e26c07-9fe5-4b2b-b7bc-1b008e904792"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:34 crc kubenswrapper[4895]: I1206 07:31:34.992152 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f3e26c07-9fe5-4b2b-b7bc-1b008e904792" (UID: "f3e26c07-9fe5-4b2b-b7bc-1b008e904792"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.000704 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf257472-30d7-4719-9b36-47c30f1db7ec-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "cf257472-30d7-4719-9b36-47c30f1db7ec" (UID: "cf257472-30d7-4719-9b36-47c30f1db7ec"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.011208 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f3e26c07-9fe5-4b2b-b7bc-1b008e904792" (UID: "f3e26c07-9fe5-4b2b-b7bc-1b008e904792"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.042248 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.042286 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d54fb90e-29d7-4df9-b09f-bd992972dc88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.042301 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf257472-30d7-4719-9b36-47c30f1db7ec-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.042314 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.042324 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.059654 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapif7f2-account-delete-zxqsn"] Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.105208 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q82ps"] Dec 06 07:31:35 crc kubenswrapper[4895]: E1206 07:31:35.105797 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d54fb90e-29d7-4df9-b09f-bd992972dc88" containerName="openstack-network-exporter" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.105814 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d54fb90e-29d7-4df9-b09f-bd992972dc88" containerName="openstack-network-exporter" Dec 06 07:31:35 crc 
kubenswrapper[4895]: E1206 07:31:35.106056 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a617d6b4-721c-4087-bc16-70bcb58b9c69" containerName="ovsdbserver-sb" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.106068 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a617d6b4-721c-4087-bc16-70bcb58b9c69" containerName="ovsdbserver-sb" Dec 06 07:31:35 crc kubenswrapper[4895]: E1206 07:31:35.106203 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e26c07-9fe5-4b2b-b7bc-1b008e904792" containerName="init" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.106239 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e26c07-9fe5-4b2b-b7bc-1b008e904792" containerName="init" Dec 06 07:31:35 crc kubenswrapper[4895]: E1206 07:31:35.106266 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a617d6b4-721c-4087-bc16-70bcb58b9c69" containerName="openstack-network-exporter" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.106275 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a617d6b4-721c-4087-bc16-70bcb58b9c69" containerName="openstack-network-exporter" Dec 06 07:31:35 crc kubenswrapper[4895]: E1206 07:31:35.106296 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e26c07-9fe5-4b2b-b7bc-1b008e904792" containerName="dnsmasq-dns" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.106304 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e26c07-9fe5-4b2b-b7bc-1b008e904792" containerName="dnsmasq-dns" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.106608 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a617d6b4-721c-4087-bc16-70bcb58b9c69" containerName="ovsdbserver-sb" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.106632 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e26c07-9fe5-4b2b-b7bc-1b008e904792" containerName="dnsmasq-dns" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.106649 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a617d6b4-721c-4087-bc16-70bcb58b9c69" containerName="openstack-network-exporter" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.106660 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d54fb90e-29d7-4df9-b09f-bd992972dc88" containerName="openstack-network-exporter" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.108347 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q82ps" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.111938 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a617d6b4-721c-4087-bc16-70bcb58b9c69-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "a617d6b4-721c-4087-bc16-70bcb58b9c69" (UID: "a617d6b4-721c-4087-bc16-70bcb58b9c69"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.141120 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a617d6b4-721c-4087-bc16-70bcb58b9c69-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "a617d6b4-721c-4087-bc16-70bcb58b9c69" (UID: "a617d6b4-721c-4087-bc16-70bcb58b9c69"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.143783 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d54fb90e-29d7-4df9-b09f-bd992972dc88-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "d54fb90e-29d7-4df9-b09f-bd992972dc88" (UID: "d54fb90e-29d7-4df9-b09f-bd992972dc88"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.149725 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t45cj\" (UniqueName: \"kubernetes.io/projected/a4b2fb89-5631-493f-9afe-51e41f81bdd2-kube-api-access-t45cj\") pod \"redhat-marketplace-q82ps\" (UID: \"a4b2fb89-5631-493f-9afe-51e41f81bdd2\") " pod="openshift-marketplace/redhat-marketplace-q82ps" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.149840 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4b2fb89-5631-493f-9afe-51e41f81bdd2-catalog-content\") pod \"redhat-marketplace-q82ps\" (UID: \"a4b2fb89-5631-493f-9afe-51e41f81bdd2\") " pod="openshift-marketplace/redhat-marketplace-q82ps" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.149980 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4b2fb89-5631-493f-9afe-51e41f81bdd2-utilities\") pod \"redhat-marketplace-q82ps\" (UID: \"a4b2fb89-5631-493f-9afe-51e41f81bdd2\") " pod="openshift-marketplace/redhat-marketplace-q82ps" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.150128 4895 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a617d6b4-721c-4087-bc16-70bcb58b9c69-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.150188 4895 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d54fb90e-29d7-4df9-b09f-bd992972dc88-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.150243 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a617d6b4-721c-4087-bc16-70bcb58b9c69-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.166823 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q82ps"] Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.198168 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f3e26c07-9fe5-4b2b-b7bc-1b008e904792" (UID: "f3e26c07-9fe5-4b2b-b7bc-1b008e904792"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.210707 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-config" (OuterVolumeSpecName: "config") pod "f3e26c07-9fe5-4b2b-b7bc-1b008e904792" (UID: "f3e26c07-9fe5-4b2b-b7bc-1b008e904792"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.252211 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t45cj\" (UniqueName: \"kubernetes.io/projected/a4b2fb89-5631-493f-9afe-51e41f81bdd2-kube-api-access-t45cj\") pod \"redhat-marketplace-q82ps\" (UID: \"a4b2fb89-5631-493f-9afe-51e41f81bdd2\") " pod="openshift-marketplace/redhat-marketplace-q82ps" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.252708 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4b2fb89-5631-493f-9afe-51e41f81bdd2-catalog-content\") pod \"redhat-marketplace-q82ps\" (UID: \"a4b2fb89-5631-493f-9afe-51e41f81bdd2\") " pod="openshift-marketplace/redhat-marketplace-q82ps" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.252919 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4b2fb89-5631-493f-9afe-51e41f81bdd2-utilities\") pod \"redhat-marketplace-q82ps\" (UID: \"a4b2fb89-5631-493f-9afe-51e41f81bdd2\") " pod="openshift-marketplace/redhat-marketplace-q82ps" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.253169 4895 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.253266 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e26c07-9fe5-4b2b-b7bc-1b008e904792-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.253634 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4b2fb89-5631-493f-9afe-51e41f81bdd2-catalog-content\") pod \"redhat-marketplace-q82ps\" (UID: \"a4b2fb89-5631-493f-9afe-51e41f81bdd2\") " pod="openshift-marketplace/redhat-marketplace-q82ps" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.253852 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4b2fb89-5631-493f-9afe-51e41f81bdd2-utilities\") pod \"redhat-marketplace-q82ps\" (UID: \"a4b2fb89-5631-493f-9afe-51e41f81bdd2\") " pod="openshift-marketplace/redhat-marketplace-q82ps" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.275002 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t45cj\" (UniqueName: \"kubernetes.io/projected/a4b2fb89-5631-493f-9afe-51e41f81bdd2-kube-api-access-t45cj\") pod \"redhat-marketplace-q82ps\" (UID: \"a4b2fb89-5631-493f-9afe-51e41f81bdd2\") " pod="openshift-marketplace/redhat-marketplace-q82ps" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.371761 4895 scope.go:117] "RemoveContainer" containerID="84e285fd54592923b82baf0ba0638e2645a70b6ae64c423fef3c15fbd472c99c" Dec 06 07:31:35 crc 
kubenswrapper[4895]: I1206 07:31:35.539515 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ec28d57e-8ecf-4415-b18f-69bfa0514187/ovsdbserver-nb/0.log" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.539649 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.563667 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g8w7\" (UniqueName: \"kubernetes.io/projected/ec28d57e-8ecf-4415-b18f-69bfa0514187-kube-api-access-5g8w7\") pod \"ec28d57e-8ecf-4415-b18f-69bfa0514187\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.563845 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec28d57e-8ecf-4415-b18f-69bfa0514187-scripts\") pod \"ec28d57e-8ecf-4415-b18f-69bfa0514187\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.563888 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec28d57e-8ecf-4415-b18f-69bfa0514187-combined-ca-bundle\") pod \"ec28d57e-8ecf-4415-b18f-69bfa0514187\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.563923 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ec28d57e-8ecf-4415-b18f-69bfa0514187\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.564021 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ec28d57e-8ecf-4415-b18f-69bfa0514187-ovsdb-rundir\") pod \"ec28d57e-8ecf-4415-b18f-69bfa0514187\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.564045 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec28d57e-8ecf-4415-b18f-69bfa0514187-metrics-certs-tls-certs\") pod \"ec28d57e-8ecf-4415-b18f-69bfa0514187\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.564109 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec28d57e-8ecf-4415-b18f-69bfa0514187-config\") pod \"ec28d57e-8ecf-4415-b18f-69bfa0514187\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.564152 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec28d57e-8ecf-4415-b18f-69bfa0514187-ovsdbserver-nb-tls-certs\") pod \"ec28d57e-8ecf-4415-b18f-69bfa0514187\" (UID: \"ec28d57e-8ecf-4415-b18f-69bfa0514187\") " Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.567699 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec28d57e-8ecf-4415-b18f-69bfa0514187-scripts" (OuterVolumeSpecName: "scripts") pod "ec28d57e-8ecf-4415-b18f-69bfa0514187" (UID: "ec28d57e-8ecf-4415-b18f-69bfa0514187"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.569535 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec28d57e-8ecf-4415-b18f-69bfa0514187-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.571772 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec28d57e-8ecf-4415-b18f-69bfa0514187-config" (OuterVolumeSpecName: "config") pod "ec28d57e-8ecf-4415-b18f-69bfa0514187" (UID: "ec28d57e-8ecf-4415-b18f-69bfa0514187"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.572758 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec28d57e-8ecf-4415-b18f-69bfa0514187-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "ec28d57e-8ecf-4415-b18f-69bfa0514187" (UID: "ec28d57e-8ecf-4415-b18f-69bfa0514187"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.596697 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "ec28d57e-8ecf-4415-b18f-69bfa0514187" (UID: "ec28d57e-8ecf-4415-b18f-69bfa0514187"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.598621 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec28d57e-8ecf-4415-b18f-69bfa0514187-kube-api-access-5g8w7" (OuterVolumeSpecName: "kube-api-access-5g8w7") pod "ec28d57e-8ecf-4415-b18f-69bfa0514187" (UID: "ec28d57e-8ecf-4415-b18f-69bfa0514187"). InnerVolumeSpecName "kube-api-access-5g8w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.620620 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.636442 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q82ps" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.642635 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.679428 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mpfpb_d54fb90e-29d7-4df9-b09f-bd992972dc88/openstack-network-exporter/0.log" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.679587 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mpfpb" event={"ID":"d54fb90e-29d7-4df9-b09f-bd992972dc88","Type":"ContainerDied","Data":"7dea9abf6c160a0fa3db2198607eebe06d2c62c98eab5afa76f348efad0c356b"} Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.679677 4895 scope.go:117] "RemoveContainer" containerID="b6a7604abefe1444b026209f2779ebe71e0863f38495e270864aa2d5eaa61e31" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.679850 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-mpfpb" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.688003 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g8w7\" (UniqueName: \"kubernetes.io/projected/ec28d57e-8ecf-4415-b18f-69bfa0514187-kube-api-access-5g8w7\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.688176 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.688256 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ec28d57e-8ecf-4415-b18f-69bfa0514187-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.688452 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec28d57e-8ecf-4415-b18f-69bfa0514187-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.698848 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinderdb70-account-delete-lqhcq" event={"ID":"faf28c62-87dc-461a-bf5a-4ae13d62e489","Type":"ContainerStarted","Data":"deb59b1cfce769efc25c95971ca8dcfd676719c828ae598d5652de64a7e5dd44"} Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.703813 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican18b2-account-delete-zdlkh" event={"ID":"d5401d7f-627c-410f-ae61-d7653749a7d3","Type":"ContainerStarted","Data":"c386e4310912ed61dec165e5697f183df34eb6e9e50d37a603ed1727727781c1"} Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.707776 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell01ea0-account-delete-2sg72" event={"ID":"585dce37-3558-4a0c-8dfb-108c94c1047c","Type":"ContainerStarted","Data":"ea0d3ff0841c376836de078a59dba48e7c0869c878e3f280f81880f03e35c791"} Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.710814 4895 generic.go:334] "Generic (PLEG): container finished" podID="09e95af5-a2ad-42ee-83a9-25cef915d0dc" containerID="38bc1a720f1bec95645eb9c53e763a74280470a9f127df6f30b1613abb5aaad5" exitCode=0 Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.710880 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-584b845d6f-78cbh" event={"ID":"09e95af5-a2ad-42ee-83a9-25cef915d0dc","Type":"ContainerDied","Data":"38bc1a720f1bec95645eb9c53e763a74280470a9f127df6f30b1613abb5aaad5"} Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.712697 4895 generic.go:334] "Generic (PLEG): container finished" podID="549e9969-79a0-45d9-a093-0b58ad1bc359" containerID="085b9f1f4088bb302b9ed0d8a72f3e2d171c7b741007adb8f5288a5c1833bb96" exitCode=0 Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.712759 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"549e9969-79a0-45d9-a093-0b58ad1bc359","Type":"ContainerDied","Data":"085b9f1f4088bb302b9ed0d8a72f3e2d171c7b741007adb8f5288a5c1833bb96"} Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.714768 4895 generic.go:334] "Generic (PLEG): container finished" podID="275e5518-922b-455d-a5d5-7b072a12ab07" containerID="4f8e9ae1388bc7994b5365380a4bd5e84d80b90cafe1780718a2555d9c3d7e69" exitCode=0 Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.714809 4895 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-676f67bc8f-srz2n" event={"ID":"275e5518-922b-455d-a5d5-7b072a12ab07","Type":"ContainerDied","Data":"4f8e9ae1388bc7994b5365380a4bd5e84d80b90cafe1780718a2555d9c3d7e69"} Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.721820 4895 generic.go:334] "Generic (PLEG): container finished" podID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerID="e031534957b9fccda0363a790f71a039ee246b5fbf68723177270eb631a9658b" exitCode=0 Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.721910 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerDied","Data":"e031534957b9fccda0363a790f71a039ee246b5fbf68723177270eb631a9658b"} Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.726024 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" event={"ID":"f3e26c07-9fe5-4b2b-b7bc-1b008e904792","Type":"ContainerDied","Data":"02976bc0f560ef01053b74fa3719b53859d6b6208b9ed06403281a79423b72ce"} Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.726049 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.729654 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapif7f2-account-delete-zxqsn" event={"ID":"5b854481-cd2a-4938-8b82-3288191b5bbe","Type":"ContainerStarted","Data":"6acf7d6ed2598e556c64b834076a5ee3e03373351f775d6679a6ea81ea3ef2bf"} Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.733357 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance55c9-account-delete-4b8gl" event={"ID":"7d199b21-7519-4bbb-adac-07ad0b1e21d9","Type":"ContainerStarted","Data":"0422bcdaaa315ce661fba9ee62b070d34fa4a4c6a29ab06357af15d623bea5c5"} Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.736813 4895 generic.go:334] "Generic (PLEG): container finished" podID="c8969e2c-9cc0-40a6-8fee-65d93a9856b0" containerID="f8e6a3efd3e56d84034e7d038c15ea6608ff8bc466f71507822f323746904eb8" exitCode=0 Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.736849 4895 generic.go:334] "Generic (PLEG): container finished" podID="c8969e2c-9cc0-40a6-8fee-65d93a9856b0" containerID="81a5ab0803db27cf4248b24bac25718805b76eff190f565fd41b120d159881aa" exitCode=0 Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.736941 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c8969e2c-9cc0-40a6-8fee-65d93a9856b0","Type":"ContainerDied","Data":"f8e6a3efd3e56d84034e7d038c15ea6608ff8bc466f71507822f323746904eb8"} Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.736977 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c8969e2c-9cc0-40a6-8fee-65d93a9856b0","Type":"ContainerDied","Data":"81a5ab0803db27cf4248b24bac25718805b76eff190f565fd41b120d159881aa"} Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.741404 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ec28d57e-8ecf-4415-b18f-69bfa0514187/ovsdbserver-nb/0.log" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.741447 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ec28d57e-8ecf-4415-b18f-69bfa0514187","Type":"ContainerDied","Data":"594536aae5c0a83a981308fc9927279c916f8b6760d5058163af37a69c11e9d5"} Dec 06 
07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.741822 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.925066 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 06 07:31:35 crc kubenswrapper[4895]: I1206 07:31:35.975712 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec28d57e-8ecf-4415-b18f-69bfa0514187-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec28d57e-8ecf-4415-b18f-69bfa0514187" (UID: "ec28d57e-8ecf-4415-b18f-69bfa0514187"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.005213 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec28d57e-8ecf-4415-b18f-69bfa0514187-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.005244 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.011135 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec28d57e-8ecf-4415-b18f-69bfa0514187-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "ec28d57e-8ecf-4415-b18f-69bfa0514187" (UID: "ec28d57e-8ecf-4415-b18f-69bfa0514187"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.035892 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec28d57e-8ecf-4415-b18f-69bfa0514187-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "ec28d57e-8ecf-4415-b18f-69bfa0514187" (UID: "ec28d57e-8ecf-4415-b18f-69bfa0514187"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.108349 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec28d57e-8ecf-4415-b18f-69bfa0514187-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.108385 4895 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec28d57e-8ecf-4415-b18f-69bfa0514187-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:36 crc kubenswrapper[4895]: E1206 07:31:36.108565 4895 secret.go:188] Couldn't get secret openstack/cinder-api-config-data: secret "cinder-api-config-data" not found Dec 06 07:31:36 crc kubenswrapper[4895]: E1206 07:31:36.108622 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data-custom podName:52d7b5bf-eae3-4832-b13b-be5f0734e4bb nodeName:}" failed. No retries permitted until 2025-12-06 07:31:40.108607361 +0000 UTC m=+2062.509996231 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data-custom") pod "cinder-api-0" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb") : secret "cinder-api-config-data" not found Dec 06 07:31:36 crc kubenswrapper[4895]: E1206 07:31:36.109105 4895 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Dec 06 07:31:36 crc kubenswrapper[4895]: E1206 07:31:36.109551 4895 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Dec 06 07:31:36 crc kubenswrapper[4895]: E1206 07:31:36.109628 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-scripts podName:52d7b5bf-eae3-4832-b13b-be5f0734e4bb nodeName:}" failed. No retries permitted until 2025-12-06 07:31:40.109606797 +0000 UTC m=+2062.510995667 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-scripts") pod "cinder-api-0" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb") : secret "cinder-scripts" not found Dec 06 07:31:36 crc kubenswrapper[4895]: E1206 07:31:36.136564 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data podName:52d7b5bf-eae3-4832-b13b-be5f0734e4bb nodeName:}" failed. No retries permitted until 2025-12-06 07:31:40.136489511 +0000 UTC m=+2062.537878381 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data") pod "cinder-api-0" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb") : secret "cinder-config-data" not found Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.150256 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a617d6b4-721c-4087-bc16-70bcb58b9c69" path="/var/lib/kubelet/pods/a617d6b4-721c-4087-bc16-70bcb58b9c69/volumes" Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.155082 4895 scope.go:117] "RemoveContainer" containerID="30e02251ada110b0211ea1fdcacbceb80e38625540fb188e0c03ef63b318da58" Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.173456 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf257472-30d7-4719-9b36-47c30f1db7ec" path="/var/lib/kubelet/pods/cf257472-30d7-4719-9b36-47c30f1db7ec/volumes" Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.237647 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q82ps"] Dec 06 07:31:36 crc kubenswrapper[4895]: W1206 07:31:36.242303 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4b2fb89_5631_493f_9afe_51e41f81bdd2.slice/crio-0ce4cddddf67740dae23606c52b92ef8c772ca700defed855eab9bafe621fbf9 WatchSource:0}: Error finding container 0ce4cddddf67740dae23606c52b92ef8c772ca700defed855eab9bafe621fbf9: Status 404 returned error can't find the container with id 0ce4cddddf67740dae23606c52b92ef8c772ca700defed855eab9bafe621fbf9 Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.491568 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm"] Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.506775 4895 scope.go:117] "RemoveContainer" 
containerID="06b67abac8970f1634818c6d524e4e49326436da8eb856e8228343ea54865a5d" Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.538456 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f9fbbf6f7-j5qpm"] Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.574722 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-mpfpb"] Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.584735 4895 scope.go:117] "RemoveContainer" containerID="156f7c175f2582a5ecac54004c4a02a79357a47a8e30e1f567ebf0c9892821d7" Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.585094 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-mpfpb"] Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.594381 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.604637 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.630802 4895 scope.go:117] "RemoveContainer" containerID="214631b61aad2658bf3fd72e9c59668fc147ff6e83cf726ca8f32206cfcc972a" Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.753777 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance55c9-account-delete-4b8gl" event={"ID":"7d199b21-7519-4bbb-adac-07ad0b1e21d9","Type":"ContainerStarted","Data":"dd8361e49bc19745decbaa9fab4265f3aba5ada82bfa6c483323c5b7995ec623"} Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.764849 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q82ps" event={"ID":"a4b2fb89-5631-493f-9afe-51e41f81bdd2","Type":"ContainerStarted","Data":"0ce4cddddf67740dae23606c52b92ef8c772ca700defed855eab9bafe621fbf9"} Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.766180 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell01ea0-account-delete-2sg72" event={"ID":"585dce37-3558-4a0c-8dfb-108c94c1047c","Type":"ContainerStarted","Data":"10df986128b44b72d83b806681282da55e82a82f513122e8ea557b6136a9bdb2"} Dec 06 07:31:36 crc kubenswrapper[4895]: E1206 07:31:36.823376 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" containerID="84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 06 07:31:36 crc kubenswrapper[4895]: E1206 07:31:36.824208 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" containerID="84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 06 07:31:36 crc kubenswrapper[4895]: E1206 07:31:36.832567 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" 
containerID="84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 06 07:31:36 crc kubenswrapper[4895]: E1206 07:31:36.832644 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qnbdj" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovsdb-server" Dec 06 07:31:36 crc kubenswrapper[4895]: E1206 07:31:36.832892 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.842890 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:31:36 crc kubenswrapper[4895]: E1206 07:31:36.842908 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.843384 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" containerName="ceilometer-central-agent" containerID="cri-o://1aad065597e380c56e43f02c6466794b5700558a3248e33cb15ece6837bdd069" gracePeriod=30 Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.844089 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" containerName="proxy-httpd" containerID="cri-o://f608b8f2d91f359e66e47a2170b12aa2033c9aa3fd46ebe5c13e759c317a8d0b" gracePeriod=30 Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.844147 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" containerName="sg-core" containerID="cri-o://24e243d683f4d1e457142494c7ae9f7b77e4d37cba6f0178e3fdb24f773dad15" gracePeriod=30 Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.844194 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" containerName="ceilometer-notification-agent" containerID="cri-o://bb9f938ac64e7d7cade7485c2af22ce637c88ad45a88b16d33b845f5c54d44ad" gracePeriod=30 Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.853792 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican18b2-account-delete-zdlkh" podStartSLOduration=5.853760168 podStartE2EDuration="5.853760168s" podCreationTimestamp="2025-12-06 07:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:31:36.798681406 +0000 UTC m=+2059.200070276" watchObservedRunningTime="2025-12-06 07:31:36.853760168 +0000 UTC m=+2059.255149038" Dec 06 07:31:36 crc 
kubenswrapper[4895]: I1206 07:31:36.913206 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.913449 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="0b21f280-7879-43c2-b1b0-92906707b4cd" containerName="kube-state-metrics" containerID="cri-o://e60969eee4c68df103db40b7065c6b53b4e367127a610a66a55e83f51a3a1f69" gracePeriod=30 Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.922681 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinderdb70-account-delete-lqhcq" podStartSLOduration=6.9226561570000005 podStartE2EDuration="6.922656157s" podCreationTimestamp="2025-12-06 07:31:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:31:36.82749963 +0000 UTC m=+2059.228888500" watchObservedRunningTime="2025-12-06 07:31:36.922656157 +0000 UTC m=+2059.324045027" Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.935607 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="0b21f280-7879-43c2-b1b0-92906707b4cd" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 06 07:31:36 crc kubenswrapper[4895]: E1206 07:31:36.953684 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 06 07:31:36 crc kubenswrapper[4895]: E1206 07:31:36.953751 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qnbdj" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovs-vswitchd" Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.960276 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="04293385-9ad8-4686-a3d3-e39d586a7e6f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": read tcp 10.217.0.2:43806->10.217.0.209:8775: read: connection reset by peer" Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.962338 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="04293385-9ad8-4686-a3d3-e39d586a7e6f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": read tcp 10.217.0.2:43818->10.217.0.209:8775: read: connection reset by peer" Dec 06 07:31:36 crc kubenswrapper[4895]: I1206 07:31:36.963968 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8fa39160-bfb2-49ae-b2ca-12c0e5788996" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.044932 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.045146 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" 
podUID="fba8bc40-d348-4f8f-aeb6-aa2e46d908d6" containerName="memcached" containerID="cri-o://9d2fe9419d1d71a0bda4ee42e39fc68443c6fa554ca6ce504dec36d80c892830" gracePeriod=30 Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.072693 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cwrlp" podUID="81e2c836-79af-46e7-8be8-a9b0ffdab060" containerName="ovn-controller" probeResult="failure" output=< Dec 06 07:31:37 crc kubenswrapper[4895]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Dec 06 07:31:37 crc kubenswrapper[4895]: > Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.173178 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-86j7g"] Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.208452 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mbg9p"] Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.215149 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-86j7g"] Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.223434 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mbg9p"] Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.241702 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.250806 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55c5754c9b-2g2cg" podUID="24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:52982->10.217.0.155:9311: read: connection reset by peer" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.250881 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55c5754c9b-2g2cg" podUID="24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:52996->10.217.0.155:9311: read: connection reset by peer" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.253329 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.255954 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5b7499d868-f7bk5"] Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.256266 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-5b7499d868-f7bk5" podUID="1ac7118f-27ae-4b40-bf45-56fb3f3b60e5" containerName="keystone-api" containerID="cri-o://b02b1557f296cc772b516824934a552a670fbd91ea02b44e19950ebf807e862a" gracePeriod=30 Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.334163 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.334324 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-nv6m9"] Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.354040 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="e963d73b-d3f2-4c70-8dbd-687b3fc1962d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.362584 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-nv6m9"] Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.368604 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-10a2-account-create-update-4nnhv"] Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.386414 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-10a2-account-create-update-4nnhv"] Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.400688 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="52d7b5bf-eae3-4832-b13b-be5f0734e4bb" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.161:8776/healthcheck\": read tcp 10.217.0.2:60348->10.217.0.161:8776: read: connection reset by peer" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.417976 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.425640 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb44w\" (UniqueName: \"kubernetes.io/projected/549e9969-79a0-45d9-a093-0b58ad1bc359-kube-api-access-fb44w\") pod \"549e9969-79a0-45d9-a093-0b58ad1bc359\" (UID: \"549e9969-79a0-45d9-a093-0b58ad1bc359\") " Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.425798 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-nova-novncproxy-tls-certs\") pod \"549e9969-79a0-45d9-a093-0b58ad1bc359\" (UID: \"549e9969-79a0-45d9-a093-0b58ad1bc359\") " Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.425881 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-vencrypt-tls-certs\") pod \"549e9969-79a0-45d9-a093-0b58ad1bc359\" (UID: \"549e9969-79a0-45d9-a093-0b58ad1bc359\") " Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.425925 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-config-data\") pod \"549e9969-79a0-45d9-a093-0b58ad1bc359\" (UID: \"549e9969-79a0-45d9-a093-0b58ad1bc359\") " Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.425998 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-combined-ca-bundle\") pod \"549e9969-79a0-45d9-a093-0b58ad1bc359\" (UID: \"549e9969-79a0-45d9-a093-0b58ad1bc359\") " Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.434572 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/549e9969-79a0-45d9-a093-0b58ad1bc359-kube-api-access-fb44w" (OuterVolumeSpecName: "kube-api-access-fb44w") pod "549e9969-79a0-45d9-a093-0b58ad1bc359" (UID: "549e9969-79a0-45d9-a093-0b58ad1bc359"). InnerVolumeSpecName "kube-api-access-fb44w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.504948 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "549e9969-79a0-45d9-a093-0b58ad1bc359" (UID: "549e9969-79a0-45d9-a093-0b58ad1bc359"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:37 crc kubenswrapper[4895]: E1206 07:31:37.516205 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.517312 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "549e9969-79a0-45d9-a093-0b58ad1bc359" (UID: "549e9969-79a0-45d9-a093-0b58ad1bc359"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:37 crc kubenswrapper[4895]: E1206 07:31:37.519263 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 06 07:31:37 crc kubenswrapper[4895]: E1206 07:31:37.523024 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 06 07:31:37 crc kubenswrapper[4895]: E1206 07:31:37.523201 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="cccbd9be-50fa-413b-bb47-1af68ecdda2d" containerName="ovn-northd" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.528253 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-combined-ca-bundle\") pod \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.528358 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqvs8\" (UniqueName: \"kubernetes.io/projected/09e95af5-a2ad-42ee-83a9-25cef915d0dc-kube-api-access-hqvs8\") pod \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " Dec 06 07:31:37 crc 
kubenswrapper[4895]: I1206 07:31:37.528385 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-config-data-custom\") pod \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.528427 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09e95af5-a2ad-42ee-83a9-25cef915d0dc-log-httpd\") pod \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.528577 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-config-data\") pod \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.528616 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-combined-ca-bundle\") pod \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.528668 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-etc-machine-id\") pod \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.528696 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-scripts\") pod \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.528742 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-internal-tls-certs\") pod \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.528817 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-config-data\") pod \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.528857 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09e95af5-a2ad-42ee-83a9-25cef915d0dc-run-httpd\") pod \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.528897 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c8969e2c-9cc0-40a6-8fee-65d93a9856b0" (UID: "c8969e2c-9cc0-40a6-8fee-65d93a9856b0"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.528930 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-public-tls-certs\") pod \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.528983 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/09e95af5-a2ad-42ee-83a9-25cef915d0dc-etc-swift\") pod \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\" (UID: \"09e95af5-a2ad-42ee-83a9-25cef915d0dc\") " Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.529000 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csmvs\" (UniqueName: \"kubernetes.io/projected/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-kube-api-access-csmvs\") pod \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\" (UID: \"c8969e2c-9cc0-40a6-8fee-65d93a9856b0\") " Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.529410 4895 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.529423 4895 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.529433 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.529442 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb44w\" (UniqueName: \"kubernetes.io/projected/549e9969-79a0-45d9-a093-0b58ad1bc359-kube-api-access-fb44w\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.532516 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09e95af5-a2ad-42ee-83a9-25cef915d0dc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "09e95af5-a2ad-42ee-83a9-25cef915d0dc" (UID: "09e95af5-a2ad-42ee-83a9-25cef915d0dc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.535561 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09e95af5-a2ad-42ee-83a9-25cef915d0dc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "09e95af5-a2ad-42ee-83a9-25cef915d0dc" (UID: "09e95af5-a2ad-42ee-83a9-25cef915d0dc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.548761 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09e95af5-a2ad-42ee-83a9-25cef915d0dc-kube-api-access-hqvs8" (OuterVolumeSpecName: "kube-api-access-hqvs8") pod "09e95af5-a2ad-42ee-83a9-25cef915d0dc" (UID: "09e95af5-a2ad-42ee-83a9-25cef915d0dc"). InnerVolumeSpecName "kube-api-access-hqvs8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.548783 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c8969e2c-9cc0-40a6-8fee-65d93a9856b0" (UID: "c8969e2c-9cc0-40a6-8fee-65d93a9856b0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.549053 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-kube-api-access-csmvs" (OuterVolumeSpecName: "kube-api-access-csmvs") pod "c8969e2c-9cc0-40a6-8fee-65d93a9856b0" (UID: "c8969e2c-9cc0-40a6-8fee-65d93a9856b0"). InnerVolumeSpecName "kube-api-access-csmvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.552033 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09e95af5-a2ad-42ee-83a9-25cef915d0dc-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "09e95af5-a2ad-42ee-83a9-25cef915d0dc" (UID: "09e95af5-a2ad-42ee-83a9-25cef915d0dc"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.563232 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-scripts" (OuterVolumeSpecName: "scripts") pod "c8969e2c-9cc0-40a6-8fee-65d93a9856b0" (UID: "c8969e2c-9cc0-40a6-8fee-65d93a9856b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.566789 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "549e9969-79a0-45d9-a093-0b58ad1bc359" (UID: "549e9969-79a0-45d9-a093-0b58ad1bc359"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.607340 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-config-data" (OuterVolumeSpecName: "config-data") pod "549e9969-79a0-45d9-a093-0b58ad1bc359" (UID: "549e9969-79a0-45d9-a093-0b58ad1bc359"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.631733 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqvs8\" (UniqueName: \"kubernetes.io/projected/09e95af5-a2ad-42ee-83a9-25cef915d0dc-kube-api-access-hqvs8\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.631866 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.631881 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09e95af5-a2ad-42ee-83a9-25cef915d0dc-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.631895 4895 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.631907 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.631921 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/549e9969-79a0-45d9-a093-0b58ad1bc359-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.631931 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09e95af5-a2ad-42ee-83a9-25cef915d0dc-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.631943 4895 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/09e95af5-a2ad-42ee-83a9-25cef915d0dc-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.631954 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csmvs\" (UniqueName: \"kubernetes.io/projected/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-kube-api-access-csmvs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.635854 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09e95af5-a2ad-42ee-83a9-25cef915d0dc" (UID: "09e95af5-a2ad-42ee-83a9-25cef915d0dc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.670178 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="4abb614a-de81-4c59-8c5b-27e6761f93c9" containerName="galera" containerID="cri-o://ec6651c03e585a366b7286c6c9b9b5a1379defbc33a3578d889176fd4d166811" gracePeriod=30 Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.734604 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.748359 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "09e95af5-a2ad-42ee-83a9-25cef915d0dc" (UID: "09e95af5-a2ad-42ee-83a9-25cef915d0dc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.759406 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8969e2c-9cc0-40a6-8fee-65d93a9856b0" (UID: "c8969e2c-9cc0-40a6-8fee-65d93a9856b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.782830 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-config-data" (OuterVolumeSpecName: "config-data") pod "09e95af5-a2ad-42ee-83a9-25cef915d0dc" (UID: "09e95af5-a2ad-42ee-83a9-25cef915d0dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.788454 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutronc074-account-delete-5dkk6" event={"ID":"a103ad6f-b726-4ad6-9aec-a689a74a4304","Type":"ContainerStarted","Data":"7c315d743ccddde23168adb0dbfeb519c3eeecd7d636fc309c38efb844cb926d"} Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.796621 4895 generic.go:334] "Generic (PLEG): container finished" podID="ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124" containerID="b540a38897b5cccc650243ffed7745515a30b25bc0e32302ca392cf373ef7ac7" exitCode=0 Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.796780 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124","Type":"ContainerDied","Data":"b540a38897b5cccc650243ffed7745515a30b25bc0e32302ca392cf373ef7ac7"} Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.802288 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "09e95af5-a2ad-42ee-83a9-25cef915d0dc" (UID: "09e95af5-a2ad-42ee-83a9-25cef915d0dc"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.817698 4895 generic.go:334] "Generic (PLEG): container finished" podID="0921ccd3-f346-46b9-88af-e165de8ff32b" containerID="9eef442bb2d8f458e303a110fade0f3f49397f1f834b6ae2ba2186b103907a9d" exitCode=0 Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.817834 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-859c997494-lcf5z" event={"ID":"0921ccd3-f346-46b9-88af-e165de8ff32b","Type":"ContainerDied","Data":"9eef442bb2d8f458e303a110fade0f3f49397f1f834b6ae2ba2186b103907a9d"} Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.823864 4895 generic.go:334] "Generic (PLEG): container finished" podID="a2e55858-e444-489b-b573-aae00aa71f9b" containerID="8673bbafab775ecc7fd8889b8d5591b54abb229821a4e6d6b5e7d9f3b443717d" exitCode=0 Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.823917 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f74647bf4-9dcq9" event={"ID":"a2e55858-e444-489b-b573-aae00aa71f9b","Type":"ContainerDied","Data":"8673bbafab775ecc7fd8889b8d5591b54abb229821a4e6d6b5e7d9f3b443717d"} Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.826795 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapif7f2-account-delete-zxqsn" event={"ID":"5b854481-cd2a-4938-8b82-3288191b5bbe","Type":"ContainerStarted","Data":"9da3b6aaa51a3296d5d8d42889f5a2f616e05c11f48f276b9f01a0a860f8d302"} Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.830184 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c8969e2c-9cc0-40a6-8fee-65d93a9856b0","Type":"ContainerDied","Data":"2534378b684637aebc6ff80110820bf74c2b365e2603446c2429642cda86afa4"} Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.830229 4895 scope.go:117] "RemoveContainer" containerID="f8e6a3efd3e56d84034e7d038c15ea6608ff8bc466f71507822f323746904eb8" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.830374 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.837567 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.837593 4895 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.837602 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.837613 4895 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09e95af5-a2ad-42ee-83a9-25cef915d0dc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.839127 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement00ad-account-delete-6xhps" event={"ID":"b6fc6ccb-af32-472e-b0f5-11cb224b4885","Type":"ContainerStarted","Data":"d685d94fb1c3d4e8614fa5f946378b11f947c0609dac3cdffe0954dbfa36810e"} Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.847568 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-584b845d6f-78cbh" event={"ID":"09e95af5-a2ad-42ee-83a9-25cef915d0dc","Type":"ContainerDied","Data":"6f2278c32b741a48d123950d9602d019b22c91d2122a0e9157e0ceb65ac69979"} Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.847694 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-584b845d6f-78cbh" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.859555 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"549e9969-79a0-45d9-a093-0b58ad1bc359","Type":"ContainerDied","Data":"0c065bf556905aa1adb384bc9e80da8881e65efb45a01f3131fd07885dbc677c"} Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.859703 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.863899 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-config-data" (OuterVolumeSpecName: "config-data") pod "c8969e2c-9cc0-40a6-8fee-65d93a9856b0" (UID: "c8969e2c-9cc0-40a6-8fee-65d93a9856b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.942847 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8969e2c-9cc0-40a6-8fee-65d93a9856b0-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:37 crc kubenswrapper[4895]: I1206 07:31:37.988061 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-584b845d6f-78cbh"] Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.009491 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-584b845d6f-78cbh"] Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.083735 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09e95af5-a2ad-42ee-83a9-25cef915d0dc" path="/var/lib/kubelet/pods/09e95af5-a2ad-42ee-83a9-25cef915d0dc/volumes" Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.084703 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="693e844e-e267-4ca9-b338-f5c4c709067f" path="/var/lib/kubelet/pods/693e844e-e267-4ca9-b338-f5c4c709067f/volumes" Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.085860 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8af04787-363a-4359-b668-61407d87da63" path="/var/lib/kubelet/pods/8af04787-363a-4359-b668-61407d87da63/volumes" Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.087319 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac89f8a-b835-4356-af05-a02cd8f079ea" path="/var/lib/kubelet/pods/cac89f8a-b835-4356-af05-a02cd8f079ea/volumes" Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.097539 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d54fb90e-29d7-4df9-b09f-bd992972dc88" path="/var/lib/kubelet/pods/d54fb90e-29d7-4df9-b09f-bd992972dc88/volumes" Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.098985 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea52d6be-93a0-445d-86a9-1061722b36b1" path="/var/lib/kubelet/pods/ea52d6be-93a0-445d-86a9-1061722b36b1/volumes" Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.101169 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec28d57e-8ecf-4415-b18f-69bfa0514187" path="/var/lib/kubelet/pods/ec28d57e-8ecf-4415-b18f-69bfa0514187/volumes" Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.103391 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e26c07-9fe5-4b2b-b7bc-1b008e904792" path="/var/lib/kubelet/pods/f3e26c07-9fe5-4b2b-b7bc-1b008e904792/volumes" Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.104519 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.112982 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 07:31:38 crc kubenswrapper[4895]: E1206 07:31:38.160398 4895 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 06 07:31:38 crc kubenswrapper[4895]: E1206 07:31:38.160481 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-config-data podName:e963d73b-d3f2-4c70-8dbd-687b3fc1962d nodeName:}" failed. No retries permitted until 2025-12-06 07:31:46.160453437 +0000 UTC m=+2068.561842307 (durationBeforeRetry 8s). 
Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.180189 4895 scope.go:117] "RemoveContainer" containerID="81a5ab0803db27cf4248b24bac25718805b76eff190f565fd41b120d159881aa"
Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.279678 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.284381 4895 scope.go:117] "RemoveContainer" containerID="38bc1a720f1bec95645eb9c53e763a74280470a9f127df6f30b1613abb5aaad5"
Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.288939 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 06 07:31:38 crc kubenswrapper[4895]: E1206 07:31:38.304952 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 06 07:31:38 crc kubenswrapper[4895]: E1206 07:31:38.307022 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 06 07:31:38 crc kubenswrapper[4895]: E1206 07:31:38.309464 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 06 07:31:38 crc kubenswrapper[4895]: E1206 07:31:38.309539 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="53540f68-ae58-4d76-870b-3cc4b77eb1e3" containerName="nova-scheduler-scheduler"
Dec 06 07:31:38 crc kubenswrapper[4895]: E1206 07:31:38.451675 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec6651c03e585a366b7286c6c9b9b5a1379defbc33a3578d889176fd4d166811" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.451991 4895 scope.go:117] "RemoveContainer" containerID="01b7e75885151f8a28d8cb5829b199e20877666375bdf87e612aef852c6f6c46"
Dec 06 07:31:38 crc kubenswrapper[4895]: E1206 07:31:38.454532 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec6651c03e585a366b7286c6c9b9b5a1379defbc33a3578d889176fd4d166811" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Dec 06 07:31:38 crc kubenswrapper[4895]: E1206 07:31:38.456132 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec6651c03e585a366b7286c6c9b9b5a1379defbc33a3578d889176fd4d166811" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Dec 06 07:31:38 crc kubenswrapper[4895]: E1206 07:31:38.456177 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="4abb614a-de81-4c59-8c5b-27e6761f93c9" containerName="galera"
Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.478204 4895 scope.go:117] "RemoveContainer" containerID="085b9f1f4088bb302b9ed0d8a72f3e2d171c7b741007adb8f5288a5c1833bb96"
Dec 06 07:31:38 crc kubenswrapper[4895]: E1206 07:31:38.790164 4895 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Dec 06 07:31:38 crc kubenswrapper[4895]: E1206 07:31:38.790466 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-config-data podName:8fa39160-bfb2-49ae-b2ca-12c0e5788996 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:46.790450736 +0000 UTC m=+2069.191839616 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-config-data") pod "rabbitmq-server-0" (UID: "8fa39160-bfb2-49ae-b2ca-12c0e5788996") : configmap "rabbitmq-config-data" not found
Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.877937 4895 generic.go:334] "Generic (PLEG): container finished" podID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" containerID="24e243d683f4d1e457142494c7ae9f7b77e4d37cba6f0178e3fdb24f773dad15" exitCode=2
Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.878012 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970","Type":"ContainerDied","Data":"24e243d683f4d1e457142494c7ae9f7b77e4d37cba6f0178e3fdb24f773dad15"}
Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.881166 4895 generic.go:334] "Generic (PLEG): container finished" podID="0b21f280-7879-43c2-b1b0-92906707b4cd" containerID="e60969eee4c68df103db40b7065c6b53b4e367127a610a66a55e83f51a3a1f69" exitCode=2
Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.881249 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0b21f280-7879-43c2-b1b0-92906707b4cd","Type":"ContainerDied","Data":"e60969eee4c68df103db40b7065c6b53b4e367127a610a66a55e83f51a3a1f69"}
Dec 06 07:31:38 crc kubenswrapper[4895]: I1206 07:31:38.883862 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q82ps" event={"ID":"a4b2fb89-5631-493f-9afe-51e41f81bdd2","Type":"ContainerStarted","Data":"d05aa72fcf022c23779d886143de215b691ee850f2b0cbebda35bb5a7d8b8e59"}
Dec 06 07:31:38 crc kubenswrapper[4895]: E1206 07:31:38.914943 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Dec 06 07:31:38 crc kubenswrapper[4895]: E1206 07:31:38.916646 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Dec 06 07:31:38 crc kubenswrapper[4895]: E1206 07:31:38.918075 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Dec 06 07:31:38 crc kubenswrapper[4895]: E1206 07:31:38.918204 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="eb9a60a7-bd12-495d-b0c3-feebe0f65bf8" containerName="nova-cell0-conductor-conductor"
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.168724 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55c5754c9b-2g2cg" podUID="24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.155:9311/healthcheck\": dial tcp 10.217.0.155:9311: connect: connection refused"
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.169049 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55c5754c9b-2g2cg" podUID="24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.155:9311/healthcheck\": dial tcp 10.217.0.155:9311: connect: connection refused"
Dec 06 07:31:39 crc kubenswrapper[4895]: E1206 07:31:39.567448 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab987bf3d17d2a4f80aabf5e63289da7ed99a8572c492b80f9f3397cd06b9b7f is running failed: container process not found" containerID="ab987bf3d17d2a4f80aabf5e63289da7ed99a8572c492b80f9f3397cd06b9b7f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Dec 06 07:31:39 crc kubenswrapper[4895]: E1206 07:31:39.568632 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab987bf3d17d2a4f80aabf5e63289da7ed99a8572c492b80f9f3397cd06b9b7f is running failed: container process not found" containerID="ab987bf3d17d2a4f80aabf5e63289da7ed99a8572c492b80f9f3397cd06b9b7f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Dec 06 07:31:39 crc kubenswrapper[4895]: E1206 07:31:39.569142 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab987bf3d17d2a4f80aabf5e63289da7ed99a8572c492b80f9f3397cd06b9b7f is running failed: container process not found" containerID="ab987bf3d17d2a4f80aabf5e63289da7ed99a8572c492b80f9f3397cd06b9b7f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Dec 06 07:31:39 crc kubenswrapper[4895]: E1206 07:31:39.569224 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab987bf3d17d2a4f80aabf5e63289da7ed99a8572c492b80f9f3397cd06b9b7f is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="33f761e3-7f6a-4c1b-b41d-32a14558a756" containerName="nova-cell1-conductor-conductor"
Dec 06 07:31:39 crc kubenswrapper[4895]: E1206 07:31:39.699635 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f907b06f8ee70e6e66e5862860a2218d5683f85f48108d1b129302c31f3a7602" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Dec 06 07:31:39 crc kubenswrapper[4895]: E1206 07:31:39.714427 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f907b06f8ee70e6e66e5862860a2218d5683f85f48108d1b129302c31f3a7602" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Dec 06 07:31:39 crc kubenswrapper[4895]: E1206 07:31:39.715895 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f907b06f8ee70e6e66e5862860a2218d5683f85f48108d1b129302c31f3a7602" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Dec 06 07:31:39 crc kubenswrapper[4895]: E1206 07:31:39.715986 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" containerName="galera"
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.726681 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="04293385-9ad8-4686-a3d3-e39d586a7e6f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": dial tcp 10.217.0.209:8775: connect: connection refused"
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.726928 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="04293385-9ad8-4686-a3d3-e39d586a7e6f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": dial tcp 10.217.0.209:8775: connect: connection refused"
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.851560 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="fba8bc40-d348-4f8f-aeb6-aa2e46d908d6" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.105:11211: connect: connection refused"
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.915339 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f74647bf4-9dcq9"
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.915953 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-859c997494-lcf5z"
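[Editorial note: the repeated "ExecSync cmd from runtime service failed ... container is stopping" errors above are exec-type readiness probes (pgrep -r DRST nova-conductor, mysql_probe.sh readiness) that the runtime rejects once a container has begun its graceful stop, so they are expected noise during teardown rather than new failures. A minimal hedged sketch of an exec-style probe follows: success iff the command exits 0. In-cluster the command runs inside the container via CRI ExecSync; the sketch runs it on the local host purely for illustration.]

    # Sketch of an exec-style readiness probe: ready iff the command exits 0.
    # The command mirrors the nova-conductor probe from the log entries above.
    import subprocess

    def exec_probe(cmd: list, timeout: float = 5.0) -> bool:
        try:
            result = subprocess.run(cmd, capture_output=True, timeout=timeout)
            return result.returncode == 0
        except (subprocess.TimeoutExpired, FileNotFoundError) as err:
            print(f"probe error: {err}")
            return False

    if __name__ == "__main__":
        ready = exec_probe(["/usr/bin/pgrep", "-r", "DRST", "nova-conductor"])
        print("ready" if ready else "not ready")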
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.916575 4895 generic.go:334] "Generic (PLEG): container finished" podID="d5739e86-0fb8-4368-91ae-f2a09bb9848c" containerID="5c1298d7ec7ec06c2816c3c5d51d11ddd9f42ecf5beced52e8e430776e865dc9" exitCode=0
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.916703 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d5739e86-0fb8-4368-91ae-f2a09bb9848c","Type":"ContainerDied","Data":"5c1298d7ec7ec06c2816c3c5d51d11ddd9f42ecf5beced52e8e430776e865dc9"}
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.924900 4895 generic.go:334] "Generic (PLEG): container finished" podID="24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" containerID="22b5d520fb5b105ce641543b32b2d0e8817cc0873901f3804e0a3eecf575919f" exitCode=0
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.925011 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55c5754c9b-2g2cg" event={"ID":"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc","Type":"ContainerDied","Data":"22b5d520fb5b105ce641543b32b2d0e8817cc0873901f3804e0a3eecf575919f"}
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.927163 4895 generic.go:334] "Generic (PLEG): container finished" podID="d5401d7f-627c-410f-ae61-d7653749a7d3" containerID="c386e4310912ed61dec165e5697f183df34eb6e9e50d37a603ed1727727781c1" exitCode=1
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.927221 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican18b2-account-delete-zdlkh" event={"ID":"d5401d7f-627c-410f-ae61-d7653749a7d3","Type":"ContainerDied","Data":"c386e4310912ed61dec165e5697f183df34eb6e9e50d37a603ed1727727781c1"}
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.927677 4895 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/barbican18b2-account-delete-zdlkh" secret="" err="secret \"galera-openstack-dockercfg-bd26w\" not found"
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.927721 4895 scope.go:117] "RemoveContainer" containerID="c386e4310912ed61dec165e5697f183df34eb6e9e50d37a603ed1727727781c1"
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.931264 4895 generic.go:334] "Generic (PLEG): container finished" podID="04293385-9ad8-4686-a3d3-e39d586a7e6f" containerID="b777a5a301fd4f77108bae41580433834a4fcb0d280446b3355223fe60884897" exitCode=0
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.931314 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04293385-9ad8-4686-a3d3-e39d586a7e6f","Type":"ContainerDied","Data":"b777a5a301fd4f77108bae41580433834a4fcb0d280446b3355223fe60884897"}
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.937220 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-859c997494-lcf5z" event={"ID":"0921ccd3-f346-46b9-88af-e165de8ff32b","Type":"ContainerDied","Data":"d081981a6580c0efb5613bcfab20f4197943160d8b01ee30f1f22f1cebb943c2"}
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.937274 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-859c997494-lcf5z"
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.937647 4895 scope.go:117] "RemoveContainer" containerID="9eef442bb2d8f458e303a110fade0f3f49397f1f834b6ae2ba2186b103907a9d"
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.943176 4895 generic.go:334] "Generic (PLEG): container finished" podID="fba8bc40-d348-4f8f-aeb6-aa2e46d908d6" containerID="9d2fe9419d1d71a0bda4ee42e39fc68443c6fa554ca6ce504dec36d80c892830" exitCode=0
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.943246 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6","Type":"ContainerDied","Data":"9d2fe9419d1d71a0bda4ee42e39fc68443c6fa554ca6ce504dec36d80c892830"}
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.956965 4895 generic.go:334] "Generic (PLEG): container finished" podID="faf28c62-87dc-461a-bf5a-4ae13d62e489" containerID="deb59b1cfce769efc25c95971ca8dcfd676719c828ae598d5652de64a7e5dd44" exitCode=1
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.957057 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinderdb70-account-delete-lqhcq" event={"ID":"faf28c62-87dc-461a-bf5a-4ae13d62e489","Type":"ContainerDied","Data":"deb59b1cfce769efc25c95971ca8dcfd676719c828ae598d5652de64a7e5dd44"}
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.957808 4895 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/cinderdb70-account-delete-lqhcq" secret="" err="secret \"galera-openstack-dockercfg-bd26w\" not found"
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.957852 4895 scope.go:117] "RemoveContainer" containerID="deb59b1cfce769efc25c95971ca8dcfd676719c828ae598d5652de64a7e5dd44"
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.961242 4895 generic.go:334] "Generic (PLEG): container finished" podID="a4b2fb89-5631-493f-9afe-51e41f81bdd2" containerID="d05aa72fcf022c23779d886143de215b691ee850f2b0cbebda35bb5a7d8b8e59" exitCode=0
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.961306 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q82ps" event={"ID":"a4b2fb89-5631-493f-9afe-51e41f81bdd2","Type":"ContainerDied","Data":"d05aa72fcf022c23779d886143de215b691ee850f2b0cbebda35bb5a7d8b8e59"}
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.969568 4895 generic.go:334] "Generic (PLEG): container finished" podID="52d7b5bf-eae3-4832-b13b-be5f0734e4bb" containerID="32263a683bb152fcade8d6ea711b2647b1cdd056c745a933576b1f81801ceca7" exitCode=0
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.969633 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"52d7b5bf-eae3-4832-b13b-be5f0734e4bb","Type":"ContainerDied","Data":"32263a683bb152fcade8d6ea711b2647b1cdd056c745a933576b1f81801ceca7"}
Dec 06 07:31:39 crc kubenswrapper[4895]: I1206 07:31:39.999126 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.022559 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0921ccd3-f346-46b9-88af-e165de8ff32b-logs\") pod \"0921ccd3-f346-46b9-88af-e165de8ff32b\" (UID: \"0921ccd3-f346-46b9-88af-e165de8ff32b\") "
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.022759 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-scripts\") pod \"a2e55858-e444-489b-b573-aae00aa71f9b\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") "
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.022845 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0921ccd3-f346-46b9-88af-e165de8ff32b-combined-ca-bundle\") pod \"0921ccd3-f346-46b9-88af-e165de8ff32b\" (UID: \"0921ccd3-f346-46b9-88af-e165de8ff32b\") "
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.022968 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-public-tls-certs\") pod \"a2e55858-e444-489b-b573-aae00aa71f9b\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") "
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.023046 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-config-data\") pod \"a2e55858-e444-489b-b573-aae00aa71f9b\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") "
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.023108 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-combined-ca-bundle\") pod \"a2e55858-e444-489b-b573-aae00aa71f9b\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") "
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.023199 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0921ccd3-f346-46b9-88af-e165de8ff32b-config-data\") pod \"0921ccd3-f346-46b9-88af-e165de8ff32b\" (UID: \"0921ccd3-f346-46b9-88af-e165de8ff32b\") "
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.023264 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n65q\" (UniqueName: \"kubernetes.io/projected/0921ccd3-f346-46b9-88af-e165de8ff32b-kube-api-access-6n65q\") pod \"0921ccd3-f346-46b9-88af-e165de8ff32b\" (UID: \"0921ccd3-f346-46b9-88af-e165de8ff32b\") "
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.023386 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2e55858-e444-489b-b573-aae00aa71f9b-logs\") pod \"a2e55858-e444-489b-b573-aae00aa71f9b\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") "
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.023484 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-internal-tls-certs\") pod \"a2e55858-e444-489b-b573-aae00aa71f9b\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") "
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.023553 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0921ccd3-f346-46b9-88af-e165de8ff32b-config-data-custom\") pod \"0921ccd3-f346-46b9-88af-e165de8ff32b\" (UID: \"0921ccd3-f346-46b9-88af-e165de8ff32b\") "
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.023630 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh7f6\" (UniqueName: \"kubernetes.io/projected/a2e55858-e444-489b-b573-aae00aa71f9b-kube-api-access-zh7f6\") pod \"a2e55858-e444-489b-b573-aae00aa71f9b\" (UID: \"a2e55858-e444-489b-b573-aae00aa71f9b\") "
Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.024186 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.024289 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d5401d7f-627c-410f-ae61-d7653749a7d3-operator-scripts podName:d5401d7f-627c-410f-ae61-d7653749a7d3 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:40.52427631 +0000 UTC m=+2062.925665180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/d5401d7f-627c-410f-ae61-d7653749a7d3-operator-scripts") pod "barbican18b2-account-delete-zdlkh" (UID: "d5401d7f-627c-410f-ae61-d7653749a7d3") : configmap "openstack-scripts" not found
Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.028162 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.028302 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/faf28c62-87dc-461a-bf5a-4ae13d62e489-operator-scripts podName:faf28c62-87dc-461a-bf5a-4ae13d62e489 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:40.528280467 +0000 UTC m=+2062.929669337 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/faf28c62-87dc-461a-bf5a-4ae13d62e489-operator-scripts") pod "cinderdb70-account-delete-lqhcq" (UID: "faf28c62-87dc-461a-bf5a-4ae13d62e489") : configmap "openstack-scripts" not found
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.026248 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.024870 4895 generic.go:334] "Generic (PLEG): container finished" podID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" containerID="f608b8f2d91f359e66e47a2170b12aa2033c9aa3fd46ebe5c13e759c317a8d0b" exitCode=0
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.030090 4895 generic.go:334] "Generic (PLEG): container finished" podID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" containerID="1aad065597e380c56e43f02c6466794b5700558a3248e33cb15ece6837bdd069" exitCode=0
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.026970 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0921ccd3-f346-46b9-88af-e165de8ff32b-logs" (OuterVolumeSpecName: "logs") pod "0921ccd3-f346-46b9-88af-e165de8ff32b" (UID: "0921ccd3-f346-46b9-88af-e165de8ff32b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.024898 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970","Type":"ContainerDied","Data":"f608b8f2d91f359e66e47a2170b12aa2033c9aa3fd46ebe5c13e759c317a8d0b"}
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.030357 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970","Type":"ContainerDied","Data":"1aad065597e380c56e43f02c6466794b5700558a3248e33cb15ece6837bdd069"}
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.033104 4895 scope.go:117] "RemoveContainer" containerID="f772918d22085feff7f986c973ad071936dcff2e69c1f8d298ba9eba11341e18"
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.033591 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2e55858-e444-489b-b573-aae00aa71f9b-logs" (OuterVolumeSpecName: "logs") pod "a2e55858-e444-489b-b573-aae00aa71f9b" (UID: "a2e55858-e444-489b-b573-aae00aa71f9b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.035659 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0921ccd3-f346-46b9-88af-e165de8ff32b-kube-api-access-6n65q" (OuterVolumeSpecName: "kube-api-access-6n65q") pod "0921ccd3-f346-46b9-88af-e165de8ff32b" (UID: "0921ccd3-f346-46b9-88af-e165de8ff32b"). InnerVolumeSpecName "kube-api-access-6n65q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.048260 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-scripts" (OuterVolumeSpecName: "scripts") pod "a2e55858-e444-489b-b573-aae00aa71f9b" (UID: "a2e55858-e444-489b-b573-aae00aa71f9b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.052989 4895 generic.go:334] "Generic (PLEG): container finished" podID="33f761e3-7f6a-4c1b-b41d-32a14558a756" containerID="ab987bf3d17d2a4f80aabf5e63289da7ed99a8572c492b80f9f3397cd06b9b7f" exitCode=0
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.056428 4895 generic.go:334] "Generic (PLEG): container finished" podID="fa5d7561-2042-4dcc-8ddc-336475230720" containerID="5f86f88a0048b09a72677b652eb94fd21ab7d1447989850d3c2a784d667b1b12" exitCode=0
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.068610 4895 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/neutronc074-account-delete-5dkk6" secret="" err="secret \"galera-openstack-dockercfg-bd26w\" not found"
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.069095 4895 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/placement00ad-account-delete-6xhps" secret="" err="secret \"galera-openstack-dockercfg-bd26w\" not found"
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.069258 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f74647bf4-9dcq9"
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.069681 4895 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/novaapif7f2-account-delete-zxqsn" secret="" err="secret \"galera-openstack-dockercfg-bd26w\" not found"
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.069885 4895 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/glance55c9-account-delete-4b8gl" secret="" err="secret \"galera-openstack-dockercfg-bd26w\" not found"
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.069988 4895 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/novacell01ea0-account-delete-2sg72" secret="" err="secret \"galera-openstack-dockercfg-bd26w\" not found"
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.090157 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2e55858-e444-489b-b573-aae00aa71f9b-kube-api-access-zh7f6" (OuterVolumeSpecName: "kube-api-access-zh7f6") pod "a2e55858-e444-489b-b573-aae00aa71f9b" (UID: "a2e55858-e444-489b-b573-aae00aa71f9b"). InnerVolumeSpecName "kube-api-access-zh7f6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.096504 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="549e9969-79a0-45d9-a093-0b58ad1bc359" path="/var/lib/kubelet/pods/549e9969-79a0-45d9-a093-0b58ad1bc359/volumes"
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.097601 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8969e2c-9cc0-40a6-8fee-65d93a9856b0" path="/var/lib/kubelet/pods/c8969e2c-9cc0-40a6-8fee-65d93a9856b0/volumes"
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.115145 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"33f761e3-7f6a-4c1b-b41d-32a14558a756","Type":"ContainerDied","Data":"ab987bf3d17d2a4f80aabf5e63289da7ed99a8572c492b80f9f3397cd06b9b7f"}
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.115187 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa5d7561-2042-4dcc-8ddc-336475230720","Type":"ContainerDied","Data":"5f86f88a0048b09a72677b652eb94fd21ab7d1447989850d3c2a784d667b1b12"}
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.115211 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f74647bf4-9dcq9" event={"ID":"a2e55858-e444-489b-b573-aae00aa71f9b","Type":"ContainerDied","Data":"339faa25e0fb858105c5c9e060e0c57850eba2eeb8824b7cc565400eb544287b"}
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.118718 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0921ccd3-f346-46b9-88af-e165de8ff32b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0921ccd3-f346-46b9-88af-e165de8ff32b" (UID: "0921ccd3-f346-46b9-88af-e165de8ff32b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.130519 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b21f280-7879-43c2-b1b0-92906707b4cd-combined-ca-bundle\") pod \"0b21f280-7879-43c2-b1b0-92906707b4cd\" (UID: \"0b21f280-7879-43c2-b1b0-92906707b4cd\") "
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.130581 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn59c\" (UniqueName: \"kubernetes.io/projected/0b21f280-7879-43c2-b1b0-92906707b4cd-kube-api-access-rn59c\") pod \"0b21f280-7879-43c2-b1b0-92906707b4cd\" (UID: \"0b21f280-7879-43c2-b1b0-92906707b4cd\") "
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.130744 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b21f280-7879-43c2-b1b0-92906707b4cd-kube-state-metrics-tls-certs\") pod \"0b21f280-7879-43c2-b1b0-92906707b4cd\" (UID: \"0b21f280-7879-43c2-b1b0-92906707b4cd\") "
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.130884 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0b21f280-7879-43c2-b1b0-92906707b4cd-kube-state-metrics-tls-config\") pod \"0b21f280-7879-43c2-b1b0-92906707b4cd\" (UID: \"0b21f280-7879-43c2-b1b0-92906707b4cd\") "
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.132865 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n65q\" (UniqueName: \"kubernetes.io/projected/0921ccd3-f346-46b9-88af-e165de8ff32b-kube-api-access-6n65q\") on node \"crc\" DevicePath \"\""
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.132884 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2e55858-e444-489b-b573-aae00aa71f9b-logs\") on node \"crc\" DevicePath \"\""
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.132899 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0921ccd3-f346-46b9-88af-e165de8ff32b-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.132960 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh7f6\" (UniqueName: \"kubernetes.io/projected/a2e55858-e444-489b-b573-aae00aa71f9b-kube-api-access-zh7f6\") on node \"crc\" DevicePath \"\""
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.132972 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0921ccd3-f346-46b9-88af-e165de8ff32b-logs\") on node \"crc\" DevicePath \"\""
Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.132984 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.133062 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.133112 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a103ad6f-b726-4ad6-9aec-a689a74a4304-operator-scripts podName:a103ad6f-b726-4ad6-9aec-a689a74a4304
nodeName:}" failed. No retries permitted until 2025-12-06 07:31:40.63309743 +0000 UTC m=+2063.034486300 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a103ad6f-b726-4ad6-9aec-a689a74a4304-operator-scripts") pod "neutronc074-account-delete-5dkk6" (UID: "a103ad6f-b726-4ad6-9aec-a689a74a4304") : configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.134282 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.134317 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b6fc6ccb-af32-472e-b0f5-11cb224b4885-operator-scripts podName:b6fc6ccb-af32-472e-b0f5-11cb224b4885 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:40.634307653 +0000 UTC m=+2063.035696513 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b6fc6ccb-af32-472e-b0f5-11cb224b4885-operator-scripts") pod "placement00ad-account-delete-6xhps" (UID: "b6fc6ccb-af32-472e-b0f5-11cb224b4885") : configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.134366 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.134390 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/585dce37-3558-4a0c-8dfb-108c94c1047c-operator-scripts podName:585dce37-3558-4a0c-8dfb-108c94c1047c nodeName:}" failed. No retries permitted until 2025-12-06 07:31:40.634381265 +0000 UTC m=+2063.035770135 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/585dce37-3558-4a0c-8dfb-108c94c1047c-operator-scripts") pod "novacell01ea0-account-delete-2sg72" (UID: "585dce37-3558-4a0c-8dfb-108c94c1047c") : configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.134429 4895 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.134450 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-scripts podName:52d7b5bf-eae3-4832-b13b-be5f0734e4bb nodeName:}" failed. No retries permitted until 2025-12-06 07:31:48.134444666 +0000 UTC m=+2070.535833536 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-scripts") pod "cinder-api-0" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb") : secret "cinder-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.134561 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.134588 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d199b21-7519-4bbb-adac-07ad0b1e21d9-operator-scripts podName:7d199b21-7519-4bbb-adac-07ad0b1e21d9 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:40.63457856 +0000 UTC m=+2063.035967430 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7d199b21-7519-4bbb-adac-07ad0b1e21d9-operator-scripts") pod "glance55c9-account-delete-4b8gl" (UID: "7d199b21-7519-4bbb-adac-07ad0b1e21d9") : configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.134501 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.135534 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b854481-cd2a-4938-8b82-3288191b5bbe-operator-scripts podName:5b854481-cd2a-4938-8b82-3288191b5bbe nodeName:}" failed. No retries permitted until 2025-12-06 07:31:40.635521325 +0000 UTC m=+2063.036910195 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5b854481-cd2a-4938-8b82-3288191b5bbe-operator-scripts") pod "novaapif7f2-account-delete-zxqsn" (UID: "5b854481-cd2a-4938-8b82-3288191b5bbe") : configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.135813 4895 secret.go:188] Couldn't get secret openstack/cinder-api-config-data: secret "cinder-api-config-data" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.136504 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data-custom podName:52d7b5bf-eae3-4832-b13b-be5f0734e4bb nodeName:}" failed. No retries permitted until 2025-12-06 07:31:48.136494571 +0000 UTC m=+2070.537883441 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data-custom") pod "cinder-api-0" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb") : secret "cinder-api-config-data" not found Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.145682 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutronc074-account-delete-5dkk6" podStartSLOduration=10.145652704 podStartE2EDuration="10.145652704s" podCreationTimestamp="2025-12-06 07:31:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:31:40.090974072 +0000 UTC m=+2062.492362942" watchObservedRunningTime="2025-12-06 07:31:40.145652704 +0000 UTC m=+2062.547041574" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.151055 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b21f280-7879-43c2-b1b0-92906707b4cd-kube-api-access-rn59c" (OuterVolumeSpecName: "kube-api-access-rn59c") pod "0b21f280-7879-43c2-b1b0-92906707b4cd" (UID: "0b21f280-7879-43c2-b1b0-92906707b4cd"). InnerVolumeSpecName "kube-api-access-rn59c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.198812 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2e55858-e444-489b-b573-aae00aa71f9b" (UID: "a2e55858-e444-489b-b573-aae00aa71f9b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.199647 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement00ad-account-delete-6xhps" podStartSLOduration=9.199622958 podStartE2EDuration="9.199622958s" podCreationTimestamp="2025-12-06 07:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:31:40.109309569 +0000 UTC m=+2062.510698459" watchObservedRunningTime="2025-12-06 07:31:40.199622958 +0000 UTC m=+2062.601011828" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.219451 4895 scope.go:117] "RemoveContainer" containerID="8673bbafab775ecc7fd8889b8d5591b54abb229821a4e6d6b5e7d9f3b443717d" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.220663 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.222676 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novaapif7f2-account-delete-zxqsn" podStartSLOduration=9.222649909 podStartE2EDuration="9.222649909s" podCreationTimestamp="2025-12-06 07:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:31:40.12252893 +0000 UTC m=+2062.523917800" watchObservedRunningTime="2025-12-06 07:31:40.222649909 +0000 UTC m=+2062.624038779" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.224162 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.234456 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn59c\" (UniqueName: \"kubernetes.io/projected/0b21f280-7879-43c2-b1b0-92906707b4cd-kube-api-access-rn59c\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.234506 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.234674 4895 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.234797 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data podName:52d7b5bf-eae3-4832-b13b-be5f0734e4bb nodeName:}" failed. No retries permitted until 2025-12-06 07:31:48.234768211 +0000 UTC m=+2070.636157261 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data") pod "cinder-api-0" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb") : secret "cinder-config-data" not found Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.244532 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance55c9-account-delete-4b8gl" podStartSLOduration=9.244508509 podStartE2EDuration="9.244508509s" podCreationTimestamp="2025-12-06 07:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:31:40.14774125 +0000 UTC m=+2062.549130120" watchObservedRunningTime="2025-12-06 07:31:40.244508509 +0000 UTC m=+2062.645897379" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.252119 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novacell01ea0-account-delete-2sg72" podStartSLOduration=8.252098681 podStartE2EDuration="8.252098681s" podCreationTimestamp="2025-12-06 07:31:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:31:40.183844528 +0000 UTC m=+2062.585233398" watchObservedRunningTime="2025-12-06 07:31:40.252098681 +0000 UTC m=+2062.653487551" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.259710 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-config-data" (OuterVolumeSpecName: "config-data") pod "a2e55858-e444-489b-b573-aae00aa71f9b" (UID: "a2e55858-e444-489b-b573-aae00aa71f9b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.275597 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b21f280-7879-43c2-b1b0-92906707b4cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b21f280-7879-43c2-b1b0-92906707b4cd" (UID: "0b21f280-7879-43c2-b1b0-92906707b4cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.284690 4895 scope.go:117] "RemoveContainer" containerID="63e32ce9cc5b3c386ddc984f3d5e3485a878384f7b60dac8c6d438b52980f3be" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.317506 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="52d7b5bf-eae3-4832-b13b-be5f0734e4bb" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.161:8776/healthcheck\": dial tcp 10.217.0.161:8776: connect: connection refused" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.335803 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4vnb\" (UniqueName: \"kubernetes.io/projected/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-kube-api-access-m4vnb\") pod \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.335963 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-combined-ca-bundle\") pod \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.336035 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-config-data\") pod \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.336102 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-config-data\") pod \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.336152 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-logs\") pod \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.336243 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9lph\" (UniqueName: \"kubernetes.io/projected/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-kube-api-access-h9lph\") pod \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.336293 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-public-tls-certs\") pod \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.336335 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-logs\") pod \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.336391 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-config-data-custom\") pod \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.336462 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-public-tls-certs\") pod \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.336974 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-internal-tls-certs\") pod \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\" (UID: \"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124\") " Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.337029 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-internal-tls-certs\") pod \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.337097 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-combined-ca-bundle\") pod \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\" (UID: \"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc\") " Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.337927 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b21f280-7879-43c2-b1b0-92906707b4cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.337941 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.343005 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0921ccd3-f346-46b9-88af-e165de8ff32b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0921ccd3-f346-46b9-88af-e165de8ff32b" (UID: "0921ccd3-f346-46b9-88af-e165de8ff32b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.343860 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b21f280-7879-43c2-b1b0-92906707b4cd-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "0b21f280-7879-43c2-b1b0-92906707b4cd" (UID: "0b21f280-7879-43c2-b1b0-92906707b4cd"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.344205 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-logs" (OuterVolumeSpecName: "logs") pod "24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" (UID: "24216683-0ca8-44dd-8bfa-a7d0a84cf3cc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.344708 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0921ccd3-f346-46b9-88af-e165de8ff32b-config-data" (OuterVolumeSpecName: "config-data") pod "0921ccd3-f346-46b9-88af-e165de8ff32b" (UID: "0921ccd3-f346-46b9-88af-e165de8ff32b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.344758 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-logs" (OuterVolumeSpecName: "logs") pod "ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124" (UID: "ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.362757 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-kube-api-access-h9lph" (OuterVolumeSpecName: "kube-api-access-h9lph") pod "24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" (UID: "24216683-0ca8-44dd-8bfa-a7d0a84cf3cc"). InnerVolumeSpecName "kube-api-access-h9lph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.367101 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" (UID: "24216683-0ca8-44dd-8bfa-a7d0a84cf3cc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.371628 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-kube-api-access-m4vnb" (OuterVolumeSpecName: "kube-api-access-m4vnb") pod "ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124" (UID: "ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124"). InnerVolumeSpecName "kube-api-access-m4vnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.417326 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b21f280-7879-43c2-b1b0-92906707b4cd-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "0b21f280-7879-43c2-b1b0-92906707b4cd" (UID: "0b21f280-7879-43c2-b1b0-92906707b4cd"). InnerVolumeSpecName "kube-state-metrics-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.440110 4895 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b21f280-7879-43c2-b1b0-92906707b4cd-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.440153 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-logs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.440197 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9lph\" (UniqueName: \"kubernetes.io/projected/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-kube-api-access-h9lph\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.440211 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-logs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.440221 4895 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0b21f280-7879-43c2-b1b0-92906707b4cd-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.440230 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.440242 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0921ccd3-f346-46b9-88af-e165de8ff32b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.440271 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4vnb\" (UniqueName: \"kubernetes.io/projected/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-kube-api-access-m4vnb\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.440283 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0921ccd3-f346-46b9-88af-e165de8ff32b-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.479947 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" (UID: "24216683-0ca8-44dd-8bfa-a7d0a84cf3cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.485423 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" (UID: "24216683-0ca8-44dd-8bfa-a7d0a84cf3cc"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.488739 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-config-data" (OuterVolumeSpecName: "config-data") pod "ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124" (UID: "ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.499805 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a2e55858-e444-489b-b573-aae00aa71f9b" (UID: "a2e55858-e444-489b-b573-aae00aa71f9b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.501141 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124" (UID: "ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.510583 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124" (UID: "ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.510987 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a2e55858-e444-489b-b573-aae00aa71f9b" (UID: "a2e55858-e444-489b-b573-aae00aa71f9b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.512329 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-config-data" (OuterVolumeSpecName: "config-data") pod "24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" (UID: "24216683-0ca8-44dd-8bfa-a7d0a84cf3cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.517585 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124" (UID: "ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.534029 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" (UID: "24216683-0ca8-44dd-8bfa-a7d0a84cf3cc"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.542495 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.542579 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d5401d7f-627c-410f-ae61-d7653749a7d3-operator-scripts podName:d5401d7f-627c-410f-ae61-d7653749a7d3 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:41.542559734 +0000 UTC m=+2063.943948654 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/d5401d7f-627c-410f-ae61-d7653749a7d3-operator-scripts") pod "barbican18b2-account-delete-zdlkh" (UID: "d5401d7f-627c-410f-ae61-d7653749a7d3") : configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.542657 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.542728 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/faf28c62-87dc-461a-bf5a-4ae13d62e489-operator-scripts podName:faf28c62-87dc-461a-bf5a-4ae13d62e489 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:41.542707708 +0000 UTC m=+2063.944096668 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/faf28c62-87dc-461a-bf5a-4ae13d62e489-operator-scripts") pod "cinderdb70-account-delete-lqhcq" (UID: "faf28c62-87dc-461a-bf5a-4ae13d62e489") : configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.543094 4895 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.543117 4895 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.543149 4895 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.543163 4895 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.543172 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.543181 4895 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.543191 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.543201 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.543232 4895 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e55858-e444-489b-b573-aae00aa71f9b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.543241 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.587214 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-859c997494-lcf5z"] Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.595340 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-859c997494-lcf5z"] Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.636506 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-7l2zn"] Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.645676 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.645759 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a103ad6f-b726-4ad6-9aec-a689a74a4304-operator-scripts podName:a103ad6f-b726-4ad6-9aec-a689a74a4304 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:41.645739363 +0000 UTC m=+2064.047128233 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a103ad6f-b726-4ad6-9aec-a689a74a4304-operator-scripts") pod "neutronc074-account-delete-5dkk6" (UID: "a103ad6f-b726-4ad6-9aec-a689a74a4304") : configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.646114 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.646146 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b6fc6ccb-af32-472e-b0f5-11cb224b4885-operator-scripts podName:b6fc6ccb-af32-472e-b0f5-11cb224b4885 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:41.646139145 +0000 UTC m=+2064.047528015 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b6fc6ccb-af32-472e-b0f5-11cb224b4885-operator-scripts") pod "placement00ad-account-delete-6xhps" (UID: "b6fc6ccb-af32-472e-b0f5-11cb224b4885") : configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.646174 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.646194 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/585dce37-3558-4a0c-8dfb-108c94c1047c-operator-scripts podName:585dce37-3558-4a0c-8dfb-108c94c1047c nodeName:}" failed. No retries permitted until 2025-12-06 07:31:41.646188826 +0000 UTC m=+2064.047577696 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/585dce37-3558-4a0c-8dfb-108c94c1047c-operator-scripts") pod "novacell01ea0-account-delete-2sg72" (UID: "585dce37-3558-4a0c-8dfb-108c94c1047c") : configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.646215 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.646231 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b854481-cd2a-4938-8b82-3288191b5bbe-operator-scripts podName:5b854481-cd2a-4938-8b82-3288191b5bbe nodeName:}" failed. No retries permitted until 2025-12-06 07:31:41.646226267 +0000 UTC m=+2064.047615137 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5b854481-cd2a-4938-8b82-3288191b5bbe-operator-scripts") pod "novaapif7f2-account-delete-zxqsn" (UID: "5b854481-cd2a-4938-8b82-3288191b5bbe") : configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.646251 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.646268 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d199b21-7519-4bbb-adac-07ad0b1e21d9-operator-scripts podName:7d199b21-7519-4bbb-adac-07ad0b1e21d9 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:41.646262118 +0000 UTC m=+2064.047650988 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7d199b21-7519-4bbb-adac-07ad0b1e21d9-operator-scripts") pod "glance55c9-account-delete-4b8gl" (UID: "7d199b21-7519-4bbb-adac-07ad0b1e21d9") : configmap "openstack-scripts" not found Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.656599 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-7l2zn"] Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.665615 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinderdb70-account-delete-lqhcq"] Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.669114 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db70-account-create-update-7v5lv"] Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.676463 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db70-account-create-update-7v5lv"] Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.826466 4895 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Dec 06 07:31:40 crc kubenswrapper[4895]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2025-12-06T07:31:33Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Dec 06 07:31:40 crc kubenswrapper[4895]: /etc/init.d/functions: line 589: 816 Alarm clock "$@" Dec 06 07:31:40 crc kubenswrapper[4895]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-cwrlp" message=< Dec 06 07:31:40 crc kubenswrapper[4895]: Exiting ovn-controller (1) [FAILED] Dec 06 07:31:40 crc kubenswrapper[4895]: Killing ovn-controller (1) [ OK ] Dec 06 07:31:40 crc kubenswrapper[4895]: Killing ovn-controller (1) with SIGKILL [ OK ] Dec 06 07:31:40 crc kubenswrapper[4895]: 2025-12-06T07:31:33Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Dec 06 07:31:40 crc kubenswrapper[4895]: /etc/init.d/functions: line 589: 816 Alarm clock "$@" Dec 06 07:31:40 crc kubenswrapper[4895]: > Dec 06 07:31:40 crc kubenswrapper[4895]: E1206 07:31:40.826539 4895 kuberuntime_container.go:691] "PreStop hook failed" err=< Dec 06 07:31:40 crc kubenswrapper[4895]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2025-12-06T07:31:33Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Dec 06 07:31:40 crc kubenswrapper[4895]: /etc/init.d/functions: line 589: 816 Alarm clock "$@" Dec 06 07:31:40 crc kubenswrapper[4895]: > pod="openstack/ovn-controller-cwrlp" podUID="81e2c836-79af-46e7-8be8-a9b0ffdab060" containerName="ovn-controller" containerID="cri-o://edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475" Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.826576 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-cwrlp" podUID="81e2c836-79af-46e7-8be8-a9b0ffdab060" containerName="ovn-controller" containerID="cri-o://edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475" gracePeriod=21 Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.909076 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7f74647bf4-9dcq9"] Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.916709 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7f74647bf4-9dcq9"] Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.987706 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-db-create-dsfwz"] Dec 06 07:31:40 crc kubenswrapper[4895]: I1206 07:31:40.999856 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-dsfwz"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.029128 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutronc074-account-delete-5dkk6"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.040766 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c074-account-create-update-925t5"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.047343 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c074-account-create-update-925t5"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.083645 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55c5754c9b-2g2cg" event={"ID":"24216683-0ca8-44dd-8bfa-a7d0a84cf3cc","Type":"ContainerDied","Data":"b078dcddcc26910fa28559e80886b0745b977897d5c5be7e16dbd09f5ce2e0cc"} Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.083671 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55c5754c9b-2g2cg" Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.083700 4895 scope.go:117] "RemoveContainer" containerID="22b5d520fb5b105ce641543b32b2d0e8817cc0873901f3804e0a3eecf575919f" Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.087215 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.087225 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124","Type":"ContainerDied","Data":"a935bace671adbd26aaaae2fa45e0848dddd673cdbd0fbfdb49bc14c3952f0d6"} Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.099943 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutronc074-account-delete-5dkk6" podUID="a103ad6f-b726-4ad6-9aec-a689a74a4304" containerName="mariadb-account-delete" containerID="cri-o://7c315d743ccddde23168adb0dbfeb519c3eeecd7d636fc309c38efb844cb926d" gracePeriod=30 Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.100124 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.100898 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0b21f280-7879-43c2-b1b0-92906707b4cd","Type":"ContainerDied","Data":"57a206edc6557032eb035863456d10e7c0ec8a546f7c46c17d0fc8babb96f3e9"} Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.130016 4895 scope.go:117] "RemoveContainer" containerID="f6e1772fc1e3b4a1e20cfa0b70cf73a68a084a2daea75a81ee655fc0fe7abc9e" Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.148433 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-55c5754c9b-2g2cg"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.158124 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-55c5754c9b-2g2cg"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.158835 4895 scope.go:117] "RemoveContainer" containerID="b540a38897b5cccc650243ffed7745515a30b25bc0e32302ca392cf373ef7ac7" Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.166629 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.173301 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.193128 4895 scope.go:117] "RemoveContainer" containerID="1c15337273095c402baf16f57c52cac5a3445669c2517cbedf79ba80a4d78a94" Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.196226 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.211856 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.227173 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-7zmvs"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.235569 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-7zmvs"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.237125 4895 scope.go:117] "RemoveContainer" containerID="e60969eee4c68df103db40b7065c6b53b4e367127a610a66a55e83f51a3a1f69" Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.243509 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-00ad-account-create-update-qswlt"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.251799 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement00ad-account-delete-6xhps"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.252036 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement00ad-account-delete-6xhps" podUID="b6fc6ccb-af32-472e-b0f5-11cb224b4885" containerName="mariadb-account-delete" containerID="cri-o://d685d94fb1c3d4e8614fa5f946378b11f947c0609dac3cdffe0954dbfa36810e" gracePeriod=30 Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.268099 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-00ad-account-create-update-qswlt"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.395852 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-k87l7"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.417565 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-db-create-k87l7"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.434913 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-18b2-account-create-update-wdx9r"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.444109 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-18b2-account-create-update-wdx9r"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.450844 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican18b2-account-delete-zdlkh"] Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.631765 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.631818 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.631857 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d5401d7f-627c-410f-ae61-d7653749a7d3-operator-scripts podName:d5401d7f-627c-410f-ae61-d7653749a7d3 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:43.63183511 +0000 UTC m=+2066.033224020 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/d5401d7f-627c-410f-ae61-d7653749a7d3-operator-scripts") pod "barbican18b2-account-delete-zdlkh" (UID: "d5401d7f-627c-410f-ae61-d7653749a7d3") : configmap "openstack-scripts" not found Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.631885 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/faf28c62-87dc-461a-bf5a-4ae13d62e489-operator-scripts podName:faf28c62-87dc-461a-bf5a-4ae13d62e489 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:43.631869481 +0000 UTC m=+2066.033258341 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/faf28c62-87dc-461a-bf5a-4ae13d62e489-operator-scripts") pod "cinderdb70-account-delete-lqhcq" (UID: "faf28c62-87dc-461a-bf5a-4ae13d62e489") : configmap "openstack-scripts" not found Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.721853 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-vtjtg"] Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.770116 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.770181 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/585dce37-3558-4a0c-8dfb-108c94c1047c-operator-scripts podName:585dce37-3558-4a0c-8dfb-108c94c1047c nodeName:}" failed. No retries permitted until 2025-12-06 07:31:43.770164723 +0000 UTC m=+2066.171553593 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/585dce37-3558-4a0c-8dfb-108c94c1047c-operator-scripts") pod "novacell01ea0-account-delete-2sg72" (UID: "585dce37-3558-4a0c-8dfb-108c94c1047c") : configmap "openstack-scripts" not found Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.770226 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.770306 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a103ad6f-b726-4ad6-9aec-a689a74a4304-operator-scripts podName:a103ad6f-b726-4ad6-9aec-a689a74a4304 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:43.770288466 +0000 UTC m=+2066.171677366 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a103ad6f-b726-4ad6-9aec-a689a74a4304-operator-scripts") pod "neutronc074-account-delete-5dkk6" (UID: "a103ad6f-b726-4ad6-9aec-a689a74a4304") : configmap "openstack-scripts" not found Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.770354 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.770528 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.770554 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b854481-cd2a-4938-8b82-3288191b5bbe-operator-scripts podName:5b854481-cd2a-4938-8b82-3288191b5bbe nodeName:}" failed. No retries permitted until 2025-12-06 07:31:43.770376369 +0000 UTC m=+2066.171765299 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5b854481-cd2a-4938-8b82-3288191b5bbe-operator-scripts") pod "novaapif7f2-account-delete-zxqsn" (UID: "5b854481-cd2a-4938-8b82-3288191b5bbe") : configmap "openstack-scripts" not found Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.770615 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d199b21-7519-4bbb-adac-07ad0b1e21d9-operator-scripts podName:7d199b21-7519-4bbb-adac-07ad0b1e21d9 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:43.770594075 +0000 UTC m=+2066.171982995 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7d199b21-7519-4bbb-adac-07ad0b1e21d9-operator-scripts") pod "glance55c9-account-delete-4b8gl" (UID: "7d199b21-7519-4bbb-adac-07ad0b1e21d9") : configmap "openstack-scripts" not found Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.772906 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.772988 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b6fc6ccb-af32-472e-b0f5-11cb224b4885-operator-scripts podName:b6fc6ccb-af32-472e-b0f5-11cb224b4885 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:43.772967777 +0000 UTC m=+2066.174356647 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b6fc6ccb-af32-472e-b0f5-11cb224b4885-operator-scripts") pod "placement00ad-account-delete-6xhps" (UID: "b6fc6ccb-af32-472e-b0f5-11cb224b4885") : configmap "openstack-scripts" not found Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.775245 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-vtjtg"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.782111 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance55c9-account-delete-4b8gl"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.782412 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance55c9-account-delete-4b8gl" podUID="7d199b21-7519-4bbb-adac-07ad0b1e21d9" containerName="mariadb-account-delete" containerID="cri-o://dd8361e49bc19745decbaa9fab4265f3aba5ada82bfa6c483323c5b7995ec623" gracePeriod=30 Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.789256 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-55c9-account-create-update-g9g8t"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.798682 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-55c9-account-create-update-g9g8t"] Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.802420 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475 is running failed: container process not found" containerID="edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.802849 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475 is running failed: container process not found" containerID="edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.803159 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475 is running failed: container process not found" containerID="edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.803214 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-cwrlp" podUID="81e2c836-79af-46e7-8be8-a9b0ffdab060" containerName="ovn-controller" Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.819070 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" 
containerID="84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.819445 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" containerID="84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.819753 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" containerID="84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.819842 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qnbdj" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovsdb-server" Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.820093 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.825001 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.826289 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 06 07:31:41 crc kubenswrapper[4895]: E1206 07:31:41.826322 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qnbdj" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovs-vswitchd" Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.952870 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-vphq2"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.977941 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-vphq2"] Dec 06 07:31:41 crc kubenswrapper[4895]: I1206 07:31:41.997412 4895 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-api-f7f2-account-create-update-jvgrz"] Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.003767 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapif7f2-account-delete-zxqsn"] Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.004050 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/novaapif7f2-account-delete-zxqsn" podUID="5b854481-cd2a-4938-8b82-3288191b5bbe" containerName="mariadb-account-delete" containerID="cri-o://9da3b6aaa51a3296d5d8d42889f5a2f616e05c11f48f276b9f01a0a860f8d302" gracePeriod=30 Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.010025 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-f7f2-account-create-update-jvgrz"] Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.074045 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0921ccd3-f346-46b9-88af-e165de8ff32b" path="/var/lib/kubelet/pods/0921ccd3-f346-46b9-88af-e165de8ff32b/volumes" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.074729 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b21f280-7879-43c2-b1b0-92906707b4cd" path="/var/lib/kubelet/pods/0b21f280-7879-43c2-b1b0-92906707b4cd/volumes" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.080961 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" path="/var/lib/kubelet/pods/24216683-0ca8-44dd-8bfa-a7d0a84cf3cc/volumes" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.083101 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b2fc6ab-79ba-4a70-905f-2b3f87437296" path="/var/lib/kubelet/pods/4b2fc6ab-79ba-4a70-905f-2b3f87437296/volumes" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.083929 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ccf0195-32f5-499f-95d0-ee3996f78016" path="/var/lib/kubelet/pods/4ccf0195-32f5-499f-95d0-ee3996f78016/volumes" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.087749 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5067344e-572d-4b94-af89-552ce31e0f1f" path="/var/lib/kubelet/pods/5067344e-572d-4b94-af89-552ce31e0f1f/volumes" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.095216 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cdfee21-9315-4df0-9ac9-7f02483a05e3" path="/var/lib/kubelet/pods/6cdfee21-9315-4df0-9ac9-7f02483a05e3/volumes" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.095814 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75e09ed8-5a57-4021-bdf8-f4e260aabac9" path="/var/lib/kubelet/pods/75e09ed8-5a57-4021-bdf8-f4e260aabac9/volumes" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.096429 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="946d5931-677d-44a2-8fa9-ab695a62ce1e" path="/var/lib/kubelet/pods/946d5931-677d-44a2-8fa9-ab695a62ce1e/volumes" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.097146 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/keystone-5b7499d868-f7bk5" podUID="1ac7118f-27ae-4b40-bf45-56fb3f3b60e5" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.143:5000/v3\": read tcp 10.217.0.2:34694->10.217.0.143:5000: read: connection reset by peer" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.097844 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="a2e55858-e444-489b-b573-aae00aa71f9b" path="/var/lib/kubelet/pods/a2e55858-e444-489b-b573-aae00aa71f9b/volumes" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.098413 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad446d6d-ed17-4ce4-8cae-0d570aa84483" path="/var/lib/kubelet/pods/ad446d6d-ed17-4ce4-8cae-0d570aa84483/volumes" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.099032 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c822196a-b31e-4f00-9f88-8749f5394fc2" path="/var/lib/kubelet/pods/c822196a-b31e-4f00-9f88-8749f5394fc2/volumes" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.099733 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3922fe8-d6ed-4204-90b7-e90cbde97e1b" path="/var/lib/kubelet/pods/d3922fe8-d6ed-4204-90b7-e90cbde97e1b/volumes" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.104709 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e181a816-272c-4eb5-8ff3-0b920d27d996" path="/var/lib/kubelet/pods/e181a816-272c-4eb5-8ff3-0b920d27d996/volumes" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.105339 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2a14678-48ea-424e-9a50-dd28f69b82a3" path="/var/lib/kubelet/pods/e2a14678-48ea-424e-9a50-dd28f69b82a3/volumes" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.106191 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef61367b-1e67-4933-9dad-c02352f97789" path="/var/lib/kubelet/pods/ef61367b-1e67-4933-9dad-c02352f97789/volumes" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.107456 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124" path="/var/lib/kubelet/pods/ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124/volumes" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.108186 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-8xgdx"] Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.108224 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-8xgdx"] Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.108245 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell01ea0-account-delete-2sg72"] Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.108274 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1ea0-account-create-update-mgbzn"] Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.108551 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/novacell01ea0-account-delete-2sg72" podUID="585dce37-3558-4a0c-8dfb-108c94c1047c" containerName="mariadb-account-delete" containerID="cri-o://10df986128b44b72d83b806681282da55e82a82f513122e8ea557b6136a9bdb2" gracePeriod=30 Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.111999 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1ea0-account-create-update-mgbzn"] Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.141917 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-cwrlp_81e2c836-79af-46e7-8be8-a9b0ffdab060/ovn-controller/0.log" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.141960 4895 generic.go:334] "Generic (PLEG): container finished" podID="81e2c836-79af-46e7-8be8-a9b0ffdab060" 
containerID="edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475" exitCode=137 Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.142011 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cwrlp" event={"ID":"81e2c836-79af-46e7-8be8-a9b0ffdab060","Type":"ContainerDied","Data":"edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475"} Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.145425 4895 generic.go:334] "Generic (PLEG): container finished" podID="46664967-bc44-4dd5-8fa7-419d1f7741fd" containerID="b02063e5faf708de34c2034eb08a56c2da17f968cfa6ea952d0d97dba87a4a30" exitCode=0 Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.145450 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-698445c967-xk6g2" event={"ID":"46664967-bc44-4dd5-8fa7-419d1f7741fd","Type":"ContainerDied","Data":"b02063e5faf708de34c2034eb08a56c2da17f968cfa6ea952d0d97dba87a4a30"} Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.230210 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="fa5d7561-2042-4dcc-8ddc-336475230720" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.175:9292/healthcheck\": dial tcp 10.217.0.175:9292: connect: connection refused" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.231892 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="fa5d7561-2042-4dcc-8ddc-336475230720" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.175:9292/healthcheck\": dial tcp 10.217.0.175:9292: connect: connection refused" Dec 06 07:31:42 crc kubenswrapper[4895]: E1206 07:31:42.507398 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.508581 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 06 07:31:42 crc kubenswrapper[4895]: E1206 07:31:42.509708 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 06 07:31:42 crc kubenswrapper[4895]: E1206 07:31:42.523874 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 06 07:31:42 crc kubenswrapper[4895]: E1206 07:31:42.523945 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="cccbd9be-50fa-413b-bb47-1af68ecdda2d" containerName="ovn-northd" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.550936 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.687067 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqwmc\" (UniqueName: \"kubernetes.io/projected/33f761e3-7f6a-4c1b-b41d-32a14558a756-kube-api-access-bqwmc\") pod \"33f761e3-7f6a-4c1b-b41d-32a14558a756\" (UID: \"33f761e3-7f6a-4c1b-b41d-32a14558a756\") " Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.687226 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-logs\") pod \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.687256 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f761e3-7f6a-4c1b-b41d-32a14558a756-config-data\") pod \"33f761e3-7f6a-4c1b-b41d-32a14558a756\" (UID: \"33f761e3-7f6a-4c1b-b41d-32a14558a756\") " Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.687278 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-public-tls-certs\") pod \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.687331 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data-custom\") pod \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.687354 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-scripts\") pod \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.687378 
4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f761e3-7f6a-4c1b-b41d-32a14558a756-combined-ca-bundle\") pod \"33f761e3-7f6a-4c1b-b41d-32a14558a756\" (UID: \"33f761e3-7f6a-4c1b-b41d-32a14558a756\") " Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.687400 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-combined-ca-bundle\") pod \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.687442 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgmm9\" (UniqueName: \"kubernetes.io/projected/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-kube-api-access-lgmm9\") pod \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.687537 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-etc-machine-id\") pod \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.687580 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-internal-tls-certs\") pod \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.687601 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data\") pod \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\" (UID: \"52d7b5bf-eae3-4832-b13b-be5f0734e4bb\") " Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.688328 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-logs" (OuterVolumeSpecName: "logs") pod "52d7b5bf-eae3-4832-b13b-be5f0734e4bb" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.688713 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "52d7b5bf-eae3-4832-b13b-be5f0734e4bb" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.695107 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-kube-api-access-lgmm9" (OuterVolumeSpecName: "kube-api-access-lgmm9") pod "52d7b5bf-eae3-4832-b13b-be5f0734e4bb" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb"). InnerVolumeSpecName "kube-api-access-lgmm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.696591 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "52d7b5bf-eae3-4832-b13b-be5f0734e4bb" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.696984 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33f761e3-7f6a-4c1b-b41d-32a14558a756-kube-api-access-bqwmc" (OuterVolumeSpecName: "kube-api-access-bqwmc") pod "33f761e3-7f6a-4c1b-b41d-32a14558a756" (UID: "33f761e3-7f6a-4c1b-b41d-32a14558a756"). InnerVolumeSpecName "kube-api-access-bqwmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.708148 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-scripts" (OuterVolumeSpecName: "scripts") pod "52d7b5bf-eae3-4832-b13b-be5f0734e4bb" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.723419 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f761e3-7f6a-4c1b-b41d-32a14558a756-config-data" (OuterVolumeSpecName: "config-data") pod "33f761e3-7f6a-4c1b-b41d-32a14558a756" (UID: "33f761e3-7f6a-4c1b-b41d-32a14558a756"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.725622 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f761e3-7f6a-4c1b-b41d-32a14558a756-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33f761e3-7f6a-4c1b-b41d-32a14558a756" (UID: "33f761e3-7f6a-4c1b-b41d-32a14558a756"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.730124 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52d7b5bf-eae3-4832-b13b-be5f0734e4bb" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.738515 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "52d7b5bf-eae3-4832-b13b-be5f0734e4bb" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.741239 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data" (OuterVolumeSpecName: "config-data") pod "52d7b5bf-eae3-4832-b13b-be5f0734e4bb" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.748436 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "52d7b5bf-eae3-4832-b13b-be5f0734e4bb" (UID: "52d7b5bf-eae3-4832-b13b-be5f0734e4bb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.789789 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgmm9\" (UniqueName: \"kubernetes.io/projected/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-kube-api-access-lgmm9\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.789821 4895 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.789830 4895 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.789838 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.789847 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqwmc\" (UniqueName: \"kubernetes.io/projected/33f761e3-7f6a-4c1b-b41d-32a14558a756-kube-api-access-bqwmc\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.789855 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-logs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.789863 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f761e3-7f6a-4c1b-b41d-32a14558a756-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.789870 4895 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.789879 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.789888 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.789896 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f761e3-7f6a-4c1b-b41d-32a14558a756-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:42 crc kubenswrapper[4895]: I1206 07:31:42.789904 4895 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d7b5bf-eae3-4832-b13b-be5f0734e4bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:43 crc kubenswrapper[4895]: I1206 07:31:43.154168 4895 generic.go:334] "Generic (PLEG): container finished" podID="1ac7118f-27ae-4b40-bf45-56fb3f3b60e5" containerID="b02b1557f296cc772b516824934a552a670fbd91ea02b44e19950ebf807e862a" exitCode=0 Dec 06 07:31:43 crc kubenswrapper[4895]: I1206 07:31:43.154258 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b7499d868-f7bk5" event={"ID":"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5","Type":"ContainerDied","Data":"b02b1557f296cc772b516824934a552a670fbd91ea02b44e19950ebf807e862a"} Dec 06 07:31:43 crc kubenswrapper[4895]: I1206 07:31:43.156939 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"52d7b5bf-eae3-4832-b13b-be5f0734e4bb","Type":"ContainerDied","Data":"bf66d834e339ee12b0e2afb0490cb60ad70c9104a2ffc19c5925aac61e6835ac"} Dec 06 07:31:43 crc kubenswrapper[4895]: I1206 07:31:43.156979 4895 scope.go:117] "RemoveContainer" containerID="32263a683bb152fcade8d6ea711b2647b1cdd056c745a933576b1f81801ceca7" Dec 06 07:31:43 crc kubenswrapper[4895]: I1206 07:31:43.156977 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 06 07:31:43 crc kubenswrapper[4895]: I1206 07:31:43.160876 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"33f761e3-7f6a-4c1b-b41d-32a14558a756","Type":"ContainerDied","Data":"a64d778bdc8c44ff303328a5f385c0fbe2b30f14b75877fc19a0a95d9d843239"} Dec 06 07:31:43 crc kubenswrapper[4895]: I1206 07:31:43.160931 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 06 07:31:43 crc kubenswrapper[4895]: I1206 07:31:43.206079 4895 scope.go:117] "RemoveContainer" containerID="87e10027541f9994e9a662ec95d1dc56f5cd26543c9ae06d62f955e26582390b" Dec 06 07:31:43 crc kubenswrapper[4895]: I1206 07:31:43.212336 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 06 07:31:43 crc kubenswrapper[4895]: I1206 07:31:43.218783 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 06 07:31:43 crc kubenswrapper[4895]: I1206 07:31:43.256117 4895 scope.go:117] "RemoveContainer" containerID="ab987bf3d17d2a4f80aabf5e63289da7ed99a8572c492b80f9f3397cd06b9b7f" Dec 06 07:31:43 crc kubenswrapper[4895]: I1206 07:31:43.260436 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.317661 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893 is running failed: container process not found" containerID="858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.324735 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893 is running failed: container process not found" containerID="858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.330998 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893 is running failed: container process not found" containerID="858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.331157 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="53540f68-ae58-4d76-870b-3cc4b77eb1e3" containerName="nova-scheduler-scheduler" Dec 06 07:31:43 crc kubenswrapper[4895]: I1206 07:31:43.331892 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.718308 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.718783 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/faf28c62-87dc-461a-bf5a-4ae13d62e489-operator-scripts podName:faf28c62-87dc-461a-bf5a-4ae13d62e489 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:47.718764948 +0000 UTC m=+2070.120153818 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/faf28c62-87dc-461a-bf5a-4ae13d62e489-operator-scripts") pod "cinderdb70-account-delete-lqhcq" (UID: "faf28c62-87dc-461a-bf5a-4ae13d62e489") : configmap "openstack-scripts" not found Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.718352 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.719070 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d5401d7f-627c-410f-ae61-d7653749a7d3-operator-scripts podName:d5401d7f-627c-410f-ae61-d7653749a7d3 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:47.719060376 +0000 UTC m=+2070.120449246 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/d5401d7f-627c-410f-ae61-d7653749a7d3-operator-scripts") pod "barbican18b2-account-delete-zdlkh" (UID: "d5401d7f-627c-410f-ae61-d7653749a7d3") : configmap "openstack-scripts" not found Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.819496 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.819812 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b6fc6ccb-af32-472e-b0f5-11cb224b4885-operator-scripts podName:b6fc6ccb-af32-472e-b0f5-11cb224b4885 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:47.819793901 +0000 UTC m=+2070.221182771 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b6fc6ccb-af32-472e-b0f5-11cb224b4885-operator-scripts") pod "placement00ad-account-delete-6xhps" (UID: "b6fc6ccb-af32-472e-b0f5-11cb224b4885") : configmap "openstack-scripts" not found Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.820038 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.820719 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/585dce37-3558-4a0c-8dfb-108c94c1047c-operator-scripts podName:585dce37-3558-4a0c-8dfb-108c94c1047c nodeName:}" failed. No retries permitted until 2025-12-06 07:31:47.820709605 +0000 UTC m=+2070.222098475 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/585dce37-3558-4a0c-8dfb-108c94c1047c-operator-scripts") pod "novacell01ea0-account-delete-2sg72" (UID: "585dce37-3558-4a0c-8dfb-108c94c1047c") : configmap "openstack-scripts" not found Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.820367 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.820934 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b854481-cd2a-4938-8b82-3288191b5bbe-operator-scripts podName:5b854481-cd2a-4938-8b82-3288191b5bbe nodeName:}" failed. No retries permitted until 2025-12-06 07:31:47.820926061 +0000 UTC m=+2070.222314931 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5b854481-cd2a-4938-8b82-3288191b5bbe-operator-scripts") pod "novaapif7f2-account-delete-zxqsn" (UID: "5b854481-cd2a-4938-8b82-3288191b5bbe") : configmap "openstack-scripts" not found Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.821209 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.821314 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a103ad6f-b726-4ad6-9aec-a689a74a4304-operator-scripts podName:a103ad6f-b726-4ad6-9aec-a689a74a4304 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:47.82130423 +0000 UTC m=+2070.222693100 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a103ad6f-b726-4ad6-9aec-a689a74a4304-operator-scripts") pod "neutronc074-account-delete-5dkk6" (UID: "a103ad6f-b726-4ad6-9aec-a689a74a4304") : configmap "openstack-scripts" not found Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.821316 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.821511 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d199b21-7519-4bbb-adac-07ad0b1e21d9-operator-scripts podName:7d199b21-7519-4bbb-adac-07ad0b1e21d9 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:47.821501136 +0000 UTC m=+2070.222890006 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7d199b21-7519-4bbb-adac-07ad0b1e21d9-operator-scripts") pod "glance55c9-account-delete-4b8gl" (UID: "7d199b21-7519-4bbb-adac-07ad0b1e21d9") : configmap "openstack-scripts" not found Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.896998 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc is running failed: container process not found" containerID="095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.897730 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc is running failed: container process not found" containerID="095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.898084 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc is running failed: container process not found" containerID="095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 06 07:31:43 crc kubenswrapper[4895]: E1206 07:31:43.898212 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="eb9a60a7-bd12-495d-b0c3-feebe0f65bf8" containerName="nova-cell0-conductor-conductor" Dec 06 07:31:44 crc kubenswrapper[4895]: I1206 07:31:44.063162 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33f761e3-7f6a-4c1b-b41d-32a14558a756" path="/var/lib/kubelet/pods/33f761e3-7f6a-4c1b-b41d-32a14558a756/volumes" Dec 06 07:31:44 crc kubenswrapper[4895]: I1206 07:31:44.064400 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52d7b5bf-eae3-4832-b13b-be5f0734e4bb" path="/var/lib/kubelet/pods/52d7b5bf-eae3-4832-b13b-be5f0734e4bb/volumes" Dec 06 07:31:44 crc kubenswrapper[4895]: I1206 07:31:44.065117 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c958c44b-3580-4d17-9b18-65c93cd7d0bf" path="/var/lib/kubelet/pods/c958c44b-3580-4d17-9b18-65c93cd7d0bf/volumes" Dec 06 07:31:44 crc kubenswrapper[4895]: I1206 07:31:44.066408 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb0479a8-2861-4d5e-a0ae-e7629dede891" path="/var/lib/kubelet/pods/fb0479a8-2861-4d5e-a0ae-e7629dede891/volumes" Dec 06 07:31:44 crc kubenswrapper[4895]: I1206 07:31:44.174305 4895 generic.go:334] "Generic (PLEG): container finished" podID="e963d73b-d3f2-4c70-8dbd-687b3fc1962d" containerID="86a69fa460c2d02239e4fcca0e82a4cac5dc6968a1de1b5762253021bb623d96" exitCode=0 Dec 06 07:31:44 crc kubenswrapper[4895]: I1206 07:31:44.174375 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e963d73b-d3f2-4c70-8dbd-687b3fc1962d","Type":"ContainerDied","Data":"86a69fa460c2d02239e4fcca0e82a4cac5dc6968a1de1b5762253021bb623d96"} Dec 06 07:31:44 crc kubenswrapper[4895]: I1206 07:31:44.726181 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="04293385-9ad8-4686-a3d3-e39d586a7e6f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": dial tcp 10.217.0.209:8775: connect: connection refused" Dec 06 07:31:44 crc kubenswrapper[4895]: I1206 07:31:44.726302 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 07:31:44 crc kubenswrapper[4895]: I1206 07:31:44.726182 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="04293385-9ad8-4686-a3d3-e39d586a7e6f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": dial tcp 10.217.0.209:8775: connect: connection refused" Dec 06 07:31:44 crc kubenswrapper[4895]: I1206 07:31:44.726383 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 07:31:44 crc kubenswrapper[4895]: I1206 07:31:44.851412 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="fba8bc40-d348-4f8f-aeb6-aa2e46d908d6" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.105:11211: connect: connection refused" Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.189375 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican18b2-account-delete-zdlkh" event={"ID":"d5401d7f-627c-410f-ae61-d7653749a7d3","Type":"ContainerStarted","Data":"5df099b69b140c2bfc00dcd20dcf9cc910e3fcf8d06e8e51a6f93094d065fa12"} Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 
07:31:45.193631 4895 generic.go:334] "Generic (PLEG): container finished" podID="53540f68-ae58-4d76-870b-3cc4b77eb1e3" containerID="858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893" exitCode=0 Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.193694 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"53540f68-ae58-4d76-870b-3cc4b77eb1e3","Type":"ContainerDied","Data":"858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893"} Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.199086 4895 generic.go:334] "Generic (PLEG): container finished" podID="eb9a60a7-bd12-495d-b0c3-feebe0f65bf8" containerID="095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc" exitCode=0 Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.199130 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"eb9a60a7-bd12-495d-b0c3-feebe0f65bf8","Type":"ContainerDied","Data":"095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc"} Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.201713 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_cccbd9be-50fa-413b-bb47-1af68ecdda2d/ovn-northd/0.log" Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.201745 4895 generic.go:334] "Generic (PLEG): container finished" podID="cccbd9be-50fa-413b-bb47-1af68ecdda2d" containerID="c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49" exitCode=139 Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.201769 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cccbd9be-50fa-413b-bb47-1af68ecdda2d","Type":"ContainerDied","Data":"c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49"} Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.297058 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.468573 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-kolla-config\") pod \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\" (UID: \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\") " Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.469072 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-memcached-tls-certs\") pod \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\" (UID: \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\") " Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.469160 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zll96\" (UniqueName: \"kubernetes.io/projected/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-kube-api-access-zll96\") pod \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\" (UID: \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\") " Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.469206 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-combined-ca-bundle\") pod \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\" (UID: \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\") " Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.469238 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-config-data\") pod \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\" (UID: \"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6\") " Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.470287 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "fba8bc40-d348-4f8f-aeb6-aa2e46d908d6" (UID: "fba8bc40-d348-4f8f-aeb6-aa2e46d908d6"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.470390 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-config-data" (OuterVolumeSpecName: "config-data") pod "fba8bc40-d348-4f8f-aeb6-aa2e46d908d6" (UID: "fba8bc40-d348-4f8f-aeb6-aa2e46d908d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.471454 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.471754 4895 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-kolla-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.484372 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-kube-api-access-zll96" (OuterVolumeSpecName: "kube-api-access-zll96") pod "fba8bc40-d348-4f8f-aeb6-aa2e46d908d6" (UID: "fba8bc40-d348-4f8f-aeb6-aa2e46d908d6"). InnerVolumeSpecName "kube-api-access-zll96". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.500734 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fba8bc40-d348-4f8f-aeb6-aa2e46d908d6" (UID: "fba8bc40-d348-4f8f-aeb6-aa2e46d908d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.515327 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "fba8bc40-d348-4f8f-aeb6-aa2e46d908d6" (UID: "fba8bc40-d348-4f8f-aeb6-aa2e46d908d6"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.573904 4895 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.573936 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zll96\" (UniqueName: \"kubernetes.io/projected/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-kube-api-access-zll96\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:45.573950 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 07:31:46.184897 4895 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 07:31:46.185013 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-config-data podName:e963d73b-d3f2-4c70-8dbd-687b3fc1962d nodeName:}" failed. No retries permitted until 2025-12-06 07:32:02.184989154 +0000 UTC m=+2084.586378084 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-config-data") pod "rabbitmq-cell1-server-0" (UID: "e963d73b-d3f2-4c70-8dbd-687b3fc1962d") : configmap "rabbitmq-cell1-config-data" not found Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:46.216826 4895 generic.go:334] "Generic (PLEG): container finished" podID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" containerID="bb9f938ac64e7d7cade7485c2af22ce637c88ad45a88b16d33b845f5c54d44ad" exitCode=0 Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:46.216902 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970","Type":"ContainerDied","Data":"bb9f938ac64e7d7cade7485c2af22ce637c88ad45a88b16d33b845f5c54d44ad"} Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:46.219140 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"fba8bc40-d348-4f8f-aeb6-aa2e46d908d6","Type":"ContainerDied","Data":"afecd2b271bf0233ba1afba101d50f61eef3adfd809474ed6c74434cf5d0edd5"} Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:46.219170 4895 scope.go:117] "RemoveContainer" containerID="9d2fe9419d1d71a0bda4ee42e39fc68443c6fa554ca6ce504dec36d80c892830" Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:46.219350 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:46.229414 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinderdb70-account-delete-lqhcq" event={"ID":"faf28c62-87dc-461a-bf5a-4ae13d62e489","Type":"ContainerStarted","Data":"f5afe00d76e2637d5bb0ccee5cacbb0b0c661099f2511ec7f6983572c9b66fbd"} Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:46.232708 4895 generic.go:334] "Generic (PLEG): container finished" podID="8fa39160-bfb2-49ae-b2ca-12c0e5788996" containerID="c65a6f976e5fa9af5eaf596cfeeff15bdcc75ba1010c01bacc5174bb022b9e1c" exitCode=0 Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:46.232746 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8fa39160-bfb2-49ae-b2ca-12c0e5788996","Type":"ContainerDied","Data":"c65a6f976e5fa9af5eaf596cfeeff15bdcc75ba1010c01bacc5174bb022b9e1c"} Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:46.246101 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:46.252350 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 07:31:46.796594 4895 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 07:31:46.796668 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-config-data podName:8fa39160-bfb2-49ae-b2ca-12c0e5788996 nodeName:}" failed. No retries permitted until 2025-12-06 07:32:02.796648281 +0000 UTC m=+2085.198037151 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-config-data") pod "rabbitmq-server-0" (UID: "8fa39160-bfb2-49ae-b2ca-12c0e5788996") : configmap "rabbitmq-config-data" not found Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 07:31:46.840859 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475 is running failed: container process not found" containerID="edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 07:31:46.841089 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" containerID="84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 07:31:46.842534 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 07:31:46.846157 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" containerID="84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 07:31:46.846326 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475 is running failed: container process not found" containerID="edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 07:31:46.847021 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" containerID="84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 07:31:46.847119 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qnbdj" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovsdb-server" Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 
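The MountVolume.SetUp failures above follow the kubelet's per-volume exponential backoff: nestedpendingoperations doubles the delay on each failed attempt (durationBeforeRetry 16s here), and the mounts cannot succeed until the missing ConfigMaps (rabbitmq-config-data, rabbitmq-cell1-config-data in namespace openstack) exist again. A minimal client-go sketch for checking whether those objects are really absent, as opposed to an RBAC or connectivity problem, might look like the following; the kubeconfig location and everything else not named in the log is an assumption:

// Sketch: confirm whether the ConfigMaps the kubelet reports as missing
// are actually gone from the API server. Assumes a reachable cluster and
// a default kubeconfig; namespace and names come from the entries above.
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for _, name := range []string{"rabbitmq-config-data", "rabbitmq-cell1-config-data"} {
		_, err := cs.CoreV1().ConfigMaps("openstack").Get(context.TODO(), name, metav1.GetOptions{})
		switch {
		case apierrors.IsNotFound(err):
			fmt.Printf("%s: not found, matching the kubelet error\n", name)
		case err != nil:
			fmt.Printf("%s: other error (RBAC or connectivity?): %v\n", name, err)
		default:
			fmt.Printf("%s: present; the next mount retry should succeed\n", name)
		}
	}
}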
Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 07:31:46.847695 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475 is running failed: container process not found" containerID="edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 07:31:46.847724 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-cwrlp" podUID="81e2c836-79af-46e7-8be8-a9b0ffdab060" containerName="ovn-controller" Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 07:31:46.851969 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 07:31:46.854088 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 07:31:46.854163 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qnbdj" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovs-vswitchd" Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:46.963643 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8fa39160-bfb2-49ae-b2ca-12c0e5788996" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:47.248372 4895 generic.go:334] "Generic (PLEG): container finished" podID="d5401d7f-627c-410f-ae61-d7653749a7d3" containerID="5df099b69b140c2bfc00dcd20dcf9cc910e3fcf8d06e8e51a6f93094d065fa12" exitCode=1 Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:47.248432 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican18b2-account-delete-zdlkh" event={"ID":"d5401d7f-627c-410f-ae61-d7653749a7d3","Type":"ContainerDied","Data":"5df099b69b140c2bfc00dcd20dcf9cc910e3fcf8d06e8e51a6f93094d065fa12"} Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:47.248566 4895 scope.go:117] "RemoveContainer" containerID="c386e4310912ed61dec165e5697f183df34eb6e9e50d37a603ed1727727781c1" Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:47.253800 4895 generic.go:334] "Generic (PLEG): container finished" podID="faf28c62-87dc-461a-bf5a-4ae13d62e489" containerID="f5afe00d76e2637d5bb0ccee5cacbb0b0c661099f2511ec7f6983572c9b66fbd" exitCode=1
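The ExecSync and "Probe errored" entries above are exec readiness probes racing container teardown: the kubelet asks the runtime to run a script inside the container, and once the process is gone (NotFound) or still stopping ("cannot register an exec PID"), the probe errors rather than merely failing; the rabbitmq "Probe failed ... connection refused" lines, by contrast, are an ordinary failure from what appears to be a TCP-level check against a dead listener. A sketch of how such an exec probe is declared on a container spec, where only the script path is taken from the log and all timing values are illustrative assumptions:

// Sketch (assumed shape, not taken from the cluster's manifests): an exec
// readiness probe like the ones erroring above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	readiness := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				// Script path from the log entries above.
				Command: []string{"/usr/local/bin/container-scripts/ovn_controller_readiness.sh"},
			},
		},
		PeriodSeconds:    10, // assumed: kubelet re-runs the exec at this interval
		TimeoutSeconds:   5,  // assumed
		FailureThreshold: 3,  // assumed: errored runs count against readiness as well
	}
	fmt.Printf("probe: %+v\n", readiness)
}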
event={"ID":"faf28c62-87dc-461a-bf5a-4ae13d62e489","Type":"ContainerDied","Data":"f5afe00d76e2637d5bb0ccee5cacbb0b0c661099f2511ec7f6983572c9b66fbd"} Dec 06 07:31:47 crc kubenswrapper[4895]: I1206 07:31:47.355188 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="e963d73b-d3f2-4c70-8dbd-687b3fc1962d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 07:31:47.504728 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49 is running failed: container process not found" containerID="c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 07:31:47.504990 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49 is running failed: container process not found" containerID="c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 07:31:47.505194 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49 is running failed: container process not found" containerID="c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 06 07:31:47 crc kubenswrapper[4895]: E1206 07:31:47.505218 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="cccbd9be-50fa-413b-bb47-1af68ecdda2d" containerName="ovn-northd" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:47.762059 4895 scope.go:117] "RemoveContainer" containerID="deb59b1cfce769efc25c95971ca8dcfd676719c828ae598d5652de64a7e5dd44" Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:47.818292 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:47.818358 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d5401d7f-627c-410f-ae61-d7653749a7d3-operator-scripts podName:d5401d7f-627c-410f-ae61-d7653749a7d3 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:55.818340741 +0000 UTC m=+2078.219729601 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/d5401d7f-627c-410f-ae61-d7653749a7d3-operator-scripts") pod "barbican18b2-account-delete-zdlkh" (UID: "d5401d7f-627c-410f-ae61-d7653749a7d3") : configmap "openstack-scripts" not found Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:47.818449 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:47.818556 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/faf28c62-87dc-461a-bf5a-4ae13d62e489-operator-scripts podName:faf28c62-87dc-461a-bf5a-4ae13d62e489 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:55.818535956 +0000 UTC m=+2078.219924826 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/faf28c62-87dc-461a-bf5a-4ae13d62e489-operator-scripts") pod "cinderdb70-account-delete-lqhcq" (UID: "faf28c62-87dc-461a-bf5a-4ae13d62e489") : configmap "openstack-scripts" not found Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:47.920346 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:47.920394 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:47.920438 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/585dce37-3558-4a0c-8dfb-108c94c1047c-operator-scripts podName:585dce37-3558-4a0c-8dfb-108c94c1047c nodeName:}" failed. No retries permitted until 2025-12-06 07:31:55.920419301 +0000 UTC m=+2078.321808161 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/585dce37-3558-4a0c-8dfb-108c94c1047c-operator-scripts") pod "novacell01ea0-account-delete-2sg72" (UID: "585dce37-3558-4a0c-8dfb-108c94c1047c") : configmap "openstack-scripts" not found Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:47.920455 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:47.920499 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b854481-cd2a-4938-8b82-3288191b5bbe-operator-scripts podName:5b854481-cd2a-4938-8b82-3288191b5bbe nodeName:}" failed. No retries permitted until 2025-12-06 07:31:55.920456581 +0000 UTC m=+2078.321845491 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5b854481-cd2a-4938-8b82-3288191b5bbe-operator-scripts") pod "novaapif7f2-account-delete-zxqsn" (UID: "5b854481-cd2a-4938-8b82-3288191b5bbe") : configmap "openstack-scripts" not found Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:47.920507 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:47.920571 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:47.920522 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b6fc6ccb-af32-472e-b0f5-11cb224b4885-operator-scripts podName:b6fc6ccb-af32-472e-b0f5-11cb224b4885 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:55.920512333 +0000 UTC m=+2078.321901293 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b6fc6ccb-af32-472e-b0f5-11cb224b4885-operator-scripts") pod "placement00ad-account-delete-6xhps" (UID: "b6fc6ccb-af32-472e-b0f5-11cb224b4885") : configmap "openstack-scripts" not found Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:47.920707 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d199b21-7519-4bbb-adac-07ad0b1e21d9-operator-scripts podName:7d199b21-7519-4bbb-adac-07ad0b1e21d9 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:55.920680667 +0000 UTC m=+2078.322069617 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7d199b21-7519-4bbb-adac-07ad0b1e21d9-operator-scripts") pod "glance55c9-account-delete-4b8gl" (UID: "7d199b21-7519-4bbb-adac-07ad0b1e21d9") : configmap "openstack-scripts" not found Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:47.920744 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a103ad6f-b726-4ad6-9aec-a689a74a4304-operator-scripts podName:a103ad6f-b726-4ad6-9aec-a689a74a4304 nodeName:}" failed. No retries permitted until 2025-12-06 07:31:55.920736778 +0000 UTC m=+2078.322125758 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a103ad6f-b726-4ad6-9aec-a689a74a4304-operator-scripts") pod "neutronc074-account-delete-5dkk6" (UID: "a103ad6f-b726-4ad6-9aec-a689a74a4304") : configmap "openstack-scripts" not found Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.066548 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fba8bc40-d348-4f8f-aeb6-aa2e46d908d6" path="/var/lib/kubelet/pods/fba8bc40-d348-4f8f-aeb6-aa2e46d908d6/volumes" Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:48.302921 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893 is running failed: container process not found" containerID="858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:48.303410 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893 is running failed: container process not found" containerID="858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:48.303725 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893 is running failed: container process not found" containerID="858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:48.303764 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="53540f68-ae58-4d76-870b-3cc4b77eb1e3" containerName="nova-scheduler-scheduler" Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:48.452094 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec6651c03e585a366b7286c6c9b9b5a1379defbc33a3578d889176fd4d166811" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:48.456219 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec6651c03e585a366b7286c6c9b9b5a1379defbc33a3578d889176fd4d166811" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:48.457504 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec6651c03e585a366b7286c6c9b9b5a1379defbc33a3578d889176fd4d166811" 
cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:48.457578 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="4abb614a-de81-4c59-8c5b-27e6761f93c9" containerName="galera" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.729507 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.738818 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.745800 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5739e86-0fb8-4368-91ae-f2a09bb9848c-httpd-run\") pod \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.745833 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5739e86-0fb8-4368-91ae-f2a09bb9848c-logs\") pod \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.745868 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/04293385-9ad8-4686-a3d3-e39d586a7e6f-nova-metadata-tls-certs\") pod \"04293385-9ad8-4686-a3d3-e39d586a7e6f\" (UID: \"04293385-9ad8-4686-a3d3-e39d586a7e6f\") " Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.745888 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04293385-9ad8-4686-a3d3-e39d586a7e6f-config-data\") pod \"04293385-9ad8-4686-a3d3-e39d586a7e6f\" (UID: \"04293385-9ad8-4686-a3d3-e39d586a7e6f\") " Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.745948 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.745978 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wjvg\" (UniqueName: \"kubernetes.io/projected/d5739e86-0fb8-4368-91ae-f2a09bb9848c-kube-api-access-5wjvg\") pod \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.745996 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04293385-9ad8-4686-a3d3-e39d586a7e6f-combined-ca-bundle\") pod \"04293385-9ad8-4686-a3d3-e39d586a7e6f\" (UID: \"04293385-9ad8-4686-a3d3-e39d586a7e6f\") " Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.746012 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-scripts\") pod \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\" (UID: 
\"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.746031 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04293385-9ad8-4686-a3d3-e39d586a7e6f-logs\") pod \"04293385-9ad8-4686-a3d3-e39d586a7e6f\" (UID: \"04293385-9ad8-4686-a3d3-e39d586a7e6f\") " Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.746100 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-config-data\") pod \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.746137 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vcwb\" (UniqueName: \"kubernetes.io/projected/04293385-9ad8-4686-a3d3-e39d586a7e6f-kube-api-access-5vcwb\") pod \"04293385-9ad8-4686-a3d3-e39d586a7e6f\" (UID: \"04293385-9ad8-4686-a3d3-e39d586a7e6f\") " Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.746163 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-combined-ca-bundle\") pod \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.746191 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-internal-tls-certs\") pod \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\" (UID: \"d5739e86-0fb8-4368-91ae-f2a09bb9848c\") " Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.748793 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5739e86-0fb8-4368-91ae-f2a09bb9848c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d5739e86-0fb8-4368-91ae-f2a09bb9848c" (UID: "d5739e86-0fb8-4368-91ae-f2a09bb9848c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.748922 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5739e86-0fb8-4368-91ae-f2a09bb9848c-logs" (OuterVolumeSpecName: "logs") pod "d5739e86-0fb8-4368-91ae-f2a09bb9848c" (UID: "d5739e86-0fb8-4368-91ae-f2a09bb9848c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.748976 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04293385-9ad8-4686-a3d3-e39d586a7e6f-logs" (OuterVolumeSpecName: "logs") pod "04293385-9ad8-4686-a3d3-e39d586a7e6f" (UID: "04293385-9ad8-4686-a3d3-e39d586a7e6f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.761748 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "d5739e86-0fb8-4368-91ae-f2a09bb9848c" (UID: "d5739e86-0fb8-4368-91ae-f2a09bb9848c"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.772734 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5739e86-0fb8-4368-91ae-f2a09bb9848c-kube-api-access-5wjvg" (OuterVolumeSpecName: "kube-api-access-5wjvg") pod "d5739e86-0fb8-4368-91ae-f2a09bb9848c" (UID: "d5739e86-0fb8-4368-91ae-f2a09bb9848c"). InnerVolumeSpecName "kube-api-access-5wjvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.791653 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-scripts" (OuterVolumeSpecName: "scripts") pod "d5739e86-0fb8-4368-91ae-f2a09bb9848c" (UID: "d5739e86-0fb8-4368-91ae-f2a09bb9848c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.801812 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04293385-9ad8-4686-a3d3-e39d586a7e6f-kube-api-access-5vcwb" (OuterVolumeSpecName: "kube-api-access-5vcwb") pod "04293385-9ad8-4686-a3d3-e39d586a7e6f" (UID: "04293385-9ad8-4686-a3d3-e39d586a7e6f"). InnerVolumeSpecName "kube-api-access-5vcwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.808484 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04293385-9ad8-4686-a3d3-e39d586a7e6f-config-data" (OuterVolumeSpecName: "config-data") pod "04293385-9ad8-4686-a3d3-e39d586a7e6f" (UID: "04293385-9ad8-4686-a3d3-e39d586a7e6f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.851246 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04293385-9ad8-4686-a3d3-e39d586a7e6f-logs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.851285 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vcwb\" (UniqueName: \"kubernetes.io/projected/04293385-9ad8-4686-a3d3-e39d586a7e6f-kube-api-access-5vcwb\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.851298 4895 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5739e86-0fb8-4368-91ae-f2a09bb9848c-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.851309 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5739e86-0fb8-4368-91ae-f2a09bb9848c-logs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.851320 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04293385-9ad8-4686-a3d3-e39d586a7e6f-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.851346 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.851356 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wjvg\" (UniqueName: 
\"kubernetes.io/projected/d5739e86-0fb8-4368-91ae-f2a09bb9848c-kube-api-access-5wjvg\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.851365 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.866972 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5739e86-0fb8-4368-91ae-f2a09bb9848c" (UID: "d5739e86-0fb8-4368-91ae-f2a09bb9848c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:48.904429 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc is running failed: container process not found" containerID="095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:48.907451 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc is running failed: container process not found" containerID="095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:48.915115 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc is running failed: container process not found" containerID="095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 06 07:31:48 crc kubenswrapper[4895]: E1206 07:31:48.915218 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="eb9a60a7-bd12-495d-b0c3-feebe0f65bf8" containerName="nova-cell0-conductor-conductor" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.919816 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-config-data" (OuterVolumeSpecName: "config-data") pod "d5739e86-0fb8-4368-91ae-f2a09bb9848c" (UID: "d5739e86-0fb8-4368-91ae-f2a09bb9848c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.921944 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04293385-9ad8-4686-a3d3-e39d586a7e6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04293385-9ad8-4686-a3d3-e39d586a7e6f" (UID: "04293385-9ad8-4686-a3d3-e39d586a7e6f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.931454 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.954391 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.954444 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04293385-9ad8-4686-a3d3-e39d586a7e6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.954456 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.954484 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:48 crc kubenswrapper[4895]: I1206 07:31:48.984804 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d5739e86-0fb8-4368-91ae-f2a09bb9848c" (UID: "d5739e86-0fb8-4368-91ae-f2a09bb9848c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.012517 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04293385-9ad8-4686-a3d3-e39d586a7e6f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "04293385-9ad8-4686-a3d3-e39d586a7e6f" (UID: "04293385-9ad8-4686-a3d3-e39d586a7e6f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.056247 4895 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5739e86-0fb8-4368-91ae-f2a09bb9848c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.056288 4895 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/04293385-9ad8-4686-a3d3-e39d586a7e6f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.113913 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican18b2-account-delete-zdlkh" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.157174 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5401d7f-627c-410f-ae61-d7653749a7d3-operator-scripts\") pod \"d5401d7f-627c-410f-ae61-d7653749a7d3\" (UID: \"d5401d7f-627c-410f-ae61-d7653749a7d3\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.157274 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbckh\" (UniqueName: \"kubernetes.io/projected/d5401d7f-627c-410f-ae61-d7653749a7d3-kube-api-access-rbckh\") pod \"d5401d7f-627c-410f-ae61-d7653749a7d3\" (UID: \"d5401d7f-627c-410f-ae61-d7653749a7d3\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.159335 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5401d7f-627c-410f-ae61-d7653749a7d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d5401d7f-627c-410f-ae61-d7653749a7d3" (UID: "d5401d7f-627c-410f-ae61-d7653749a7d3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.169078 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5401d7f-627c-410f-ae61-d7653749a7d3-kube-api-access-rbckh" (OuterVolumeSpecName: "kube-api-access-rbckh") pod "d5401d7f-627c-410f-ae61-d7653749a7d3" (UID: "d5401d7f-627c-410f-ae61-d7653749a7d3"). InnerVolumeSpecName "kube-api-access-rbckh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.199118 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.206133 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.218036 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-698445c967-xk6g2" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.248312 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.259346 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-plugins\") pod \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.259426 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-confd\") pod \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.259493 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-credential-keys\") pod \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.259620 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-pod-info\") pod \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.259650 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmcsb\" (UniqueName: \"kubernetes.io/projected/46664967-bc44-4dd5-8fa7-419d1f7741fd-kube-api-access-hmcsb\") pod \"46664967-bc44-4dd5-8fa7-419d1f7741fd\" (UID: \"46664967-bc44-4dd5-8fa7-419d1f7741fd\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.259684 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-tls\") pod \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.259745 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-public-tls-certs\") pod \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.259771 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46664967-bc44-4dd5-8fa7-419d1f7741fd-combined-ca-bundle\") pod \"46664967-bc44-4dd5-8fa7-419d1f7741fd\" (UID: \"46664967-bc44-4dd5-8fa7-419d1f7741fd\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.259815 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-config-data\") pod \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.259847 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njqb5\" (UniqueName: \"kubernetes.io/projected/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-kube-api-access-njqb5\") pod 
\"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.259883 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd2ks\" (UniqueName: \"kubernetes.io/projected/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-kube-api-access-zd2ks\") pod \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.259909 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvsh5\" (UniqueName: \"kubernetes.io/projected/53540f68-ae58-4d76-870b-3cc4b77eb1e3-kube-api-access-gvsh5\") pod \"53540f68-ae58-4d76-870b-3cc4b77eb1e3\" (UID: \"53540f68-ae58-4d76-870b-3cc4b77eb1e3\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.259931 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46664967-bc44-4dd5-8fa7-419d1f7741fd-config-data\") pod \"46664967-bc44-4dd5-8fa7-419d1f7741fd\" (UID: \"46664967-bc44-4dd5-8fa7-419d1f7741fd\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.259964 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.259996 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46664967-bc44-4dd5-8fa7-419d1f7741fd-logs\") pod \"46664967-bc44-4dd5-8fa7-419d1f7741fd\" (UID: \"46664967-bc44-4dd5-8fa7-419d1f7741fd\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.260038 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-plugins-conf\") pod \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.260057 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-fernet-keys\") pod \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.260080 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-erlang-cookie\") pod \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.260119 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46664967-bc44-4dd5-8fa7-419d1f7741fd-config-data-custom\") pod \"46664967-bc44-4dd5-8fa7-419d1f7741fd\" (UID: \"46664967-bc44-4dd5-8fa7-419d1f7741fd\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.260160 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-config-data\") pod \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\" 
(UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.260191 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-internal-tls-certs\") pod \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.260220 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-server-conf\") pod \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.260254 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-combined-ca-bundle\") pod \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.260276 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-erlang-cookie-secret\") pod \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\" (UID: \"e963d73b-d3f2-4c70-8dbd-687b3fc1962d\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.260299 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-scripts\") pod \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\" (UID: \"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.260322 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53540f68-ae58-4d76-870b-3cc4b77eb1e3-config-data\") pod \"53540f68-ae58-4d76-870b-3cc4b77eb1e3\" (UID: \"53540f68-ae58-4d76-870b-3cc4b77eb1e3\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.260349 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53540f68-ae58-4d76-870b-3cc4b77eb1e3-combined-ca-bundle\") pod \"53540f68-ae58-4d76-870b-3cc4b77eb1e3\" (UID: \"53540f68-ae58-4d76-870b-3cc4b77eb1e3\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.261333 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbckh\" (UniqueName: \"kubernetes.io/projected/d5401d7f-627c-410f-ae61-d7653749a7d3-kube-api-access-rbckh\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.261356 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5401d7f-627c-410f-ae61-d7653749a7d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.263376 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e963d73b-d3f2-4c70-8dbd-687b3fc1962d" (UID: "e963d73b-d3f2-4c70-8dbd-687b3fc1962d"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.267915 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-scripts" (OuterVolumeSpecName: "scripts") pod "1ac7118f-27ae-4b40-bf45-56fb3f3b60e5" (UID: "1ac7118f-27ae-4b40-bf45-56fb3f3b60e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.268540 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46664967-bc44-4dd5-8fa7-419d1f7741fd-logs" (OuterVolumeSpecName: "logs") pod "46664967-bc44-4dd5-8fa7-419d1f7741fd" (UID: "46664967-bc44-4dd5-8fa7-419d1f7741fd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.282706 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e963d73b-d3f2-4c70-8dbd-687b3fc1962d" (UID: "e963d73b-d3f2-4c70-8dbd-687b3fc1962d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.285577 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e963d73b-d3f2-4c70-8dbd-687b3fc1962d" (UID: "e963d73b-d3f2-4c70-8dbd-687b3fc1962d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.286176 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e963d73b-d3f2-4c70-8dbd-687b3fc1962d" (UID: "e963d73b-d3f2-4c70-8dbd-687b3fc1962d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.289884 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-pod-info" (OuterVolumeSpecName: "pod-info") pod "e963d73b-d3f2-4c70-8dbd-687b3fc1962d" (UID: "e963d73b-d3f2-4c70-8dbd-687b3fc1962d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.293540 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.299287 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1ac7118f-27ae-4b40-bf45-56fb3f3b60e5" (UID: "1ac7118f-27ae-4b40-bf45-56fb3f3b60e5"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.312959 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e963d73b-d3f2-4c70-8dbd-687b3fc1962d" (UID: "e963d73b-d3f2-4c70-8dbd-687b3fc1962d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.313646 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46664967-bc44-4dd5-8fa7-419d1f7741fd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "46664967-bc44-4dd5-8fa7-419d1f7741fd" (UID: "46664967-bc44-4dd5-8fa7-419d1f7741fd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.313685 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1ac7118f-27ae-4b40-bf45-56fb3f3b60e5" (UID: "1ac7118f-27ae-4b40-bf45-56fb3f3b60e5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.315747 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e963d73b-d3f2-4c70-8dbd-687b3fc1962d","Type":"ContainerDied","Data":"2b011e6c246ba4d129721d6e7f5a32b485b5cc59be1e5282f80fad17c8855132"} Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.315818 4895 scope.go:117] "RemoveContainer" containerID="86a69fa460c2d02239e4fcca0e82a4cac5dc6968a1de1b5762253021bb623d96" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.316157 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.322085 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.324275 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46664967-bc44-4dd5-8fa7-419d1f7741fd-kube-api-access-hmcsb" (OuterVolumeSpecName: "kube-api-access-hmcsb") pod "46664967-bc44-4dd5-8fa7-419d1f7741fd" (UID: "46664967-bc44-4dd5-8fa7-419d1f7741fd"). InnerVolumeSpecName "kube-api-access-hmcsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.324659 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "e963d73b-d3f2-4c70-8dbd-687b3fc1962d" (UID: "e963d73b-d3f2-4c70-8dbd-687b3fc1962d"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.326948 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-kube-api-access-njqb5" (OuterVolumeSpecName: "kube-api-access-njqb5") pod "1ac7118f-27ae-4b40-bf45-56fb3f3b60e5" (UID: "1ac7118f-27ae-4b40-bf45-56fb3f3b60e5"). InnerVolumeSpecName "kube-api-access-njqb5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.335901 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-kube-api-access-zd2ks" (OuterVolumeSpecName: "kube-api-access-zd2ks") pod "e963d73b-d3f2-4c70-8dbd-687b3fc1962d" (UID: "e963d73b-d3f2-4c70-8dbd-687b3fc1962d"). InnerVolumeSpecName "kube-api-access-zd2ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.355395 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d5739e86-0fb8-4368-91ae-f2a09bb9848c","Type":"ContainerDied","Data":"45496466bd5c7bda08224530009d560688965989a55b34509e9183973ed62f6a"} Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.355545 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.367119 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.367161 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-tls\") pod \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.367182 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-erlang-cookie\") pod \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.367213 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr2wx\" (UniqueName: \"kubernetes.io/projected/8fa39160-bfb2-49ae-b2ca-12c0e5788996-kube-api-access-tr2wx\") pod \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.367563 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-config-data\") pod \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.367644 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb9a60a7-bd12-495d-b0c3-feebe0f65bf8-config-data\") pod \"eb9a60a7-bd12-495d-b0c3-feebe0f65bf8\" (UID: \"eb9a60a7-bd12-495d-b0c3-feebe0f65bf8\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.367672 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-plugins-conf\") pod \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.367720 4895 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9a60a7-bd12-495d-b0c3-feebe0f65bf8-combined-ca-bundle\") pod \"eb9a60a7-bd12-495d-b0c3-feebe0f65bf8\" (UID: \"eb9a60a7-bd12-495d-b0c3-feebe0f65bf8\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.367742 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n6sg\" (UniqueName: \"kubernetes.io/projected/eb9a60a7-bd12-495d-b0c3-feebe0f65bf8-kube-api-access-4n6sg\") pod \"eb9a60a7-bd12-495d-b0c3-feebe0f65bf8\" (UID: \"eb9a60a7-bd12-495d-b0c3-feebe0f65bf8\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.367784 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-plugins\") pod \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.367839 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8fa39160-bfb2-49ae-b2ca-12c0e5788996-pod-info\") pod \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.367882 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-server-conf\") pod \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.367916 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-confd\") pod \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.368008 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8fa39160-bfb2-49ae-b2ca-12c0e5788996-erlang-cookie-secret\") pod \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\" (UID: \"8fa39160-bfb2-49ae-b2ca-12c0e5788996\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.368379 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njqb5\" (UniqueName: \"kubernetes.io/projected/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-kube-api-access-njqb5\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.368400 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd2ks\" (UniqueName: \"kubernetes.io/projected/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-kube-api-access-zd2ks\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.368420 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.368429 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46664967-bc44-4dd5-8fa7-419d1f7741fd-logs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.368438 
4895 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.368446 4895 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.368455 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.368464 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46664967-bc44-4dd5-8fa7-419d1f7741fd-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.368582 4895 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.368595 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.368609 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.368634 4895 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.368782 4895 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-pod-info\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.368799 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmcsb\" (UniqueName: \"kubernetes.io/projected/46664967-bc44-4dd5-8fa7-419d1f7741fd-kube-api-access-hmcsb\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.368808 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.376980 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8fa39160-bfb2-49ae-b2ca-12c0e5788996" (UID: "8fa39160-bfb2-49ae-b2ca-12c0e5788996"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.377733 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8fa39160-bfb2-49ae-b2ca-12c0e5788996" (UID: "8fa39160-bfb2-49ae-b2ca-12c0e5788996"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.379007 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53540f68-ae58-4d76-870b-3cc4b77eb1e3-kube-api-access-gvsh5" (OuterVolumeSpecName: "kube-api-access-gvsh5") pod "53540f68-ae58-4d76-870b-3cc4b77eb1e3" (UID: "53540f68-ae58-4d76-870b-3cc4b77eb1e3"). InnerVolumeSpecName "kube-api-access-gvsh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.379444 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8fa39160-bfb2-49ae-b2ca-12c0e5788996" (UID: "8fa39160-bfb2-49ae-b2ca-12c0e5788996"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.383793 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa39160-bfb2-49ae-b2ca-12c0e5788996-kube-api-access-tr2wx" (OuterVolumeSpecName: "kube-api-access-tr2wx") pod "8fa39160-bfb2-49ae-b2ca-12c0e5788996" (UID: "8fa39160-bfb2-49ae-b2ca-12c0e5788996"). InnerVolumeSpecName "kube-api-access-tr2wx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.384764 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "8fa39160-bfb2-49ae-b2ca-12c0e5788996" (UID: "8fa39160-bfb2-49ae-b2ca-12c0e5788996"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.385354 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8fa39160-bfb2-49ae-b2ca-12c0e5788996-pod-info" (OuterVolumeSpecName: "pod-info") pod "8fa39160-bfb2-49ae-b2ca-12c0e5788996" (UID: "8fa39160-bfb2-49ae-b2ca-12c0e5788996"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.385735 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8fa39160-bfb2-49ae-b2ca-12c0e5788996" (UID: "8fa39160-bfb2-49ae-b2ca-12c0e5788996"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.389069 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8fa39160-bfb2-49ae-b2ca-12c0e5788996","Type":"ContainerDied","Data":"3e4d540ae4e537163c24343f17c06d066bffd73daa70d6bb742d75a4b127b246"} Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.389154 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.406114 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fa39160-bfb2-49ae-b2ca-12c0e5788996-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8fa39160-bfb2-49ae-b2ca-12c0e5788996" (UID: "8fa39160-bfb2-49ae-b2ca-12c0e5788996"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.406268 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb9a60a7-bd12-495d-b0c3-feebe0f65bf8-kube-api-access-4n6sg" (OuterVolumeSpecName: "kube-api-access-4n6sg") pod "eb9a60a7-bd12-495d-b0c3-feebe0f65bf8" (UID: "eb9a60a7-bd12-495d-b0c3-feebe0f65bf8"). InnerVolumeSpecName "kube-api-access-4n6sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.409027 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"eb9a60a7-bd12-495d-b0c3-feebe0f65bf8","Type":"ContainerDied","Data":"edfc2eecd9d867b0e6306a947b9bc9010fa85d4d32675ac6bd6c3a9d131c6de3"} Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.409291 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.420425 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-cwrlp_81e2c836-79af-46e7-8be8-a9b0ffdab060/ovn-controller/0.log" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.420533 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cwrlp" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.425672 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-698445c967-xk6g2" event={"ID":"46664967-bc44-4dd5-8fa7-419d1f7741fd","Type":"ContainerDied","Data":"df2c2628c8c415822da1079ad92f9bc8f4d64f7fc0fe08c1cdb4c7031f860004"} Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.425948 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-698445c967-xk6g2" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.466380 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.479850 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican18b2-account-delete-zdlkh" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.480603 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican18b2-account-delete-zdlkh" event={"ID":"d5401d7f-627c-410f-ae61-d7653749a7d3","Type":"ContainerDied","Data":"de67e65c38912fb61e33cf94871c7df1f8684a86a409bfd2de2ebde7a739852c"} Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.494754 4895 scope.go:117] "RemoveContainer" containerID="4bd51d355b0c80b5ee327f7a9d32abed17e794deedf336792232c641ae56041e" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.501016 4895 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.501055 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n6sg\" (UniqueName: \"kubernetes.io/projected/eb9a60a7-bd12-495d-b0c3-feebe0f65bf8-kube-api-access-4n6sg\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.501074 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.501089 4895 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8fa39160-bfb2-49ae-b2ca-12c0e5788996-pod-info\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.501109 4895 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8fa39160-bfb2-49ae-b2ca-12c0e5788996-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.501152 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.501162 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.501180 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.501190 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr2wx\" (UniqueName: \"kubernetes.io/projected/8fa39160-bfb2-49ae-b2ca-12c0e5788996-kube-api-access-tr2wx\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.501202 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvsh5\" (UniqueName: \"kubernetes.io/projected/53540f68-ae58-4d76-870b-3cc4b77eb1e3-kube-api-access-gvsh5\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.533690 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.533931 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"53540f68-ae58-4d76-870b-3cc4b77eb1e3","Type":"ContainerDied","Data":"ca7341d7bd86a4d53abd5ee468943e8fc74430aadcf0d0bceaeee6c7c8cc3503"} Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.534040 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.544926 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53540f68-ae58-4d76-870b-3cc4b77eb1e3-config-data" (OuterVolumeSpecName: "config-data") pod "53540f68-ae58-4d76-870b-3cc4b77eb1e3" (UID: "53540f68-ae58-4d76-870b-3cc4b77eb1e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.546357 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-config-data" (OuterVolumeSpecName: "config-data") pod "e963d73b-d3f2-4c70-8dbd-687b3fc1962d" (UID: "e963d73b-d3f2-4c70-8dbd-687b3fc1962d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.550747 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinderdb70-account-delete-lqhcq" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.552019 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.552020 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04293385-9ad8-4686-a3d3-e39d586a7e6f","Type":"ContainerDied","Data":"2909ed614d60dc31976f3585ee4c482157689a3dfd5c7fcce2758aee85e102e9"} Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.557196 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b7499d868-f7bk5" event={"ID":"1ac7118f-27ae-4b40-bf45-56fb3f3b60e5","Type":"ContainerDied","Data":"66e1afc5306a7cbe67eb069657aafbd32d1a3854985b14c26af11a8b2a51fd9e"} Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.557304 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5b7499d868-f7bk5" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.572378 4895 scope.go:117] "RemoveContainer" containerID="5c1298d7ec7ec06c2816c3c5d51d11ddd9f42ecf5beced52e8e430776e865dc9" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.582995 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53540f68-ae58-4d76-870b-3cc4b77eb1e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53540f68-ae58-4d76-870b-3cc4b77eb1e3" (UID: "53540f68-ae58-4d76-870b-3cc4b77eb1e3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.605423 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"fa5d7561-2042-4dcc-8ddc-336475230720\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.605500 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgnpj\" (UniqueName: \"kubernetes.io/projected/fa5d7561-2042-4dcc-8ddc-336475230720-kube-api-access-kgnpj\") pod \"fa5d7561-2042-4dcc-8ddc-336475230720\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.605645 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s89q6\" (UniqueName: \"kubernetes.io/projected/81e2c836-79af-46e7-8be8-a9b0ffdab060-kube-api-access-s89q6\") pod \"81e2c836-79af-46e7-8be8-a9b0ffdab060\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.605705 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-config-data\") pod \"fa5d7561-2042-4dcc-8ddc-336475230720\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.605739 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/81e2c836-79af-46e7-8be8-a9b0ffdab060-var-run\") pod \"81e2c836-79af-46e7-8be8-a9b0ffdab060\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.605778 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa5d7561-2042-4dcc-8ddc-336475230720-httpd-run\") pod \"fa5d7561-2042-4dcc-8ddc-336475230720\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.605822 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-public-tls-certs\") pod \"fa5d7561-2042-4dcc-8ddc-336475230720\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.605846 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81e2c836-79af-46e7-8be8-a9b0ffdab060-scripts\") pod \"81e2c836-79af-46e7-8be8-a9b0ffdab060\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.605913 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/81e2c836-79af-46e7-8be8-a9b0ffdab060-ovn-controller-tls-certs\") pod \"81e2c836-79af-46e7-8be8-a9b0ffdab060\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.605931 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-scripts\") pod \"fa5d7561-2042-4dcc-8ddc-336475230720\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " Dec 06 07:31:49 
crc kubenswrapper[4895]: I1206 07:31:49.605959 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/81e2c836-79af-46e7-8be8-a9b0ffdab060-var-run-ovn\") pod \"81e2c836-79af-46e7-8be8-a9b0ffdab060\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.605982 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-combined-ca-bundle\") pod \"fa5d7561-2042-4dcc-8ddc-336475230720\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.606015 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81e2c836-79af-46e7-8be8-a9b0ffdab060-combined-ca-bundle\") pod \"81e2c836-79af-46e7-8be8-a9b0ffdab060\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.606039 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa5d7561-2042-4dcc-8ddc-336475230720-logs\") pod \"fa5d7561-2042-4dcc-8ddc-336475230720\" (UID: \"fa5d7561-2042-4dcc-8ddc-336475230720\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.606115 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/81e2c836-79af-46e7-8be8-a9b0ffdab060-var-log-ovn\") pod \"81e2c836-79af-46e7-8be8-a9b0ffdab060\" (UID: \"81e2c836-79af-46e7-8be8-a9b0ffdab060\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.606449 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81e2c836-79af-46e7-8be8-a9b0ffdab060-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "81e2c836-79af-46e7-8be8-a9b0ffdab060" (UID: "81e2c836-79af-46e7-8be8-a9b0ffdab060"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.606507 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81e2c836-79af-46e7-8be8-a9b0ffdab060-var-run" (OuterVolumeSpecName: "var-run") pod "81e2c836-79af-46e7-8be8-a9b0ffdab060" (UID: "81e2c836-79af-46e7-8be8-a9b0ffdab060"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.607962 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e2c836-79af-46e7-8be8-a9b0ffdab060-scripts" (OuterVolumeSpecName: "scripts") pod "81e2c836-79af-46e7-8be8-a9b0ffdab060" (UID: "81e2c836-79af-46e7-8be8-a9b0ffdab060"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.612521 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa5d7561-2042-4dcc-8ddc-336475230720-logs" (OuterVolumeSpecName: "logs") pod "fa5d7561-2042-4dcc-8ddc-336475230720" (UID: "fa5d7561-2042-4dcc-8ddc-336475230720"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.612904 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81e2c836-79af-46e7-8be8-a9b0ffdab060-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "81e2c836-79af-46e7-8be8-a9b0ffdab060" (UID: "81e2c836-79af-46e7-8be8-a9b0ffdab060"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.614682 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa5d7561-2042-4dcc-8ddc-336475230720-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fa5d7561-2042-4dcc-8ddc-336475230720" (UID: "fa5d7561-2042-4dcc-8ddc-336475230720"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.625657 4895 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/81e2c836-79af-46e7-8be8-a9b0ffdab060-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.625718 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa5d7561-2042-4dcc-8ddc-336475230720-logs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.625730 4895 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/81e2c836-79af-46e7-8be8-a9b0ffdab060-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.625739 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.625749 4895 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/81e2c836-79af-46e7-8be8-a9b0ffdab060-var-run\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.625761 4895 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa5d7561-2042-4dcc-8ddc-336475230720-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.626212 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53540f68-ae58-4d76-870b-3cc4b77eb1e3-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.626225 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53540f68-ae58-4d76-870b-3cc4b77eb1e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.626237 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81e2c836-79af-46e7-8be8-a9b0ffdab060-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.652991 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46664967-bc44-4dd5-8fa7-419d1f7741fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46664967-bc44-4dd5-8fa7-419d1f7741fd" 
(UID: "46664967-bc44-4dd5-8fa7-419d1f7741fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.653593 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.654278 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "fa5d7561-2042-4dcc-8ddc-336475230720" (UID: "fa5d7561-2042-4dcc-8ddc-336475230720"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.654445 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-scripts" (OuterVolumeSpecName: "scripts") pod "fa5d7561-2042-4dcc-8ddc-336475230720" (UID: "fa5d7561-2042-4dcc-8ddc-336475230720"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.657342 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e2c836-79af-46e7-8be8-a9b0ffdab060-kube-api-access-s89q6" (OuterVolumeSpecName: "kube-api-access-s89q6") pod "81e2c836-79af-46e7-8be8-a9b0ffdab060" (UID: "81e2c836-79af-46e7-8be8-a9b0ffdab060"). InnerVolumeSpecName "kube-api-access-s89q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.659390 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa5d7561-2042-4dcc-8ddc-336475230720-kube-api-access-kgnpj" (OuterVolumeSpecName: "kube-api-access-kgnpj") pod "fa5d7561-2042-4dcc-8ddc-336475230720" (UID: "fa5d7561-2042-4dcc-8ddc-336475230720"). InnerVolumeSpecName "kube-api-access-kgnpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.668142 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.685913 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 06 07:31:49 crc kubenswrapper[4895]: E1206 07:31:49.706616 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f907b06f8ee70e6e66e5862860a2218d5683f85f48108d1b129302c31f3a7602" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 06 07:31:49 crc kubenswrapper[4895]: E1206 07:31:49.709784 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f907b06f8ee70e6e66e5862860a2218d5683f85f48108d1b129302c31f3a7602" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 06 07:31:49 crc kubenswrapper[4895]: E1206 07:31:49.711248 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f907b06f8ee70e6e66e5862860a2218d5683f85f48108d1b129302c31f3a7602" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 06 07:31:49 crc kubenswrapper[4895]: E1206 07:31:49.711289 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" containerName="galera" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.731915 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-log-httpd\") pod \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.731985 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-combined-ca-bundle\") pod \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.732031 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-sg-core-conf-yaml\") pod \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.732053 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c22fx\" (UniqueName: \"kubernetes.io/projected/faf28c62-87dc-461a-bf5a-4ae13d62e489-kube-api-access-c22fx\") pod \"faf28c62-87dc-461a-bf5a-4ae13d62e489\" (UID: \"faf28c62-87dc-461a-bf5a-4ae13d62e489\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.732075 
4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-scripts\") pod \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.732095 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-ceilometer-tls-certs\") pod \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.732140 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-config-data\") pod \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.732218 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxbfx\" (UniqueName: \"kubernetes.io/projected/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-kube-api-access-mxbfx\") pod \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.732254 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faf28c62-87dc-461a-bf5a-4ae13d62e489-operator-scripts\") pod \"faf28c62-87dc-461a-bf5a-4ae13d62e489\" (UID: \"faf28c62-87dc-461a-bf5a-4ae13d62e489\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.732269 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-run-httpd\") pod \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\" (UID: \"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970\") " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.732525 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.732536 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46664967-bc44-4dd5-8fa7-419d1f7741fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.732547 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.732565 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.732575 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgnpj\" (UniqueName: \"kubernetes.io/projected/fa5d7561-2042-4dcc-8ddc-336475230720-kube-api-access-kgnpj\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.732584 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s89q6\" (UniqueName: 
\"kubernetes.io/projected/81e2c836-79af-46e7-8be8-a9b0ffdab060-kube-api-access-s89q6\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.734005 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faf28c62-87dc-461a-bf5a-4ae13d62e489-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "faf28c62-87dc-461a-bf5a-4ae13d62e489" (UID: "faf28c62-87dc-461a-bf5a-4ae13d62e489"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.734096 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" (UID: "e7e0fcb8-7c6d-423f-b90e-a0a184fe1970"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.734312 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" (UID: "e7e0fcb8-7c6d-423f-b90e-a0a184fe1970"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.740642 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-config-data" (OuterVolumeSpecName: "config-data") pod "1ac7118f-27ae-4b40-bf45-56fb3f3b60e5" (UID: "1ac7118f-27ae-4b40-bf45-56fb3f3b60e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.743080 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-scripts" (OuterVolumeSpecName: "scripts") pod "e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" (UID: "e7e0fcb8-7c6d-423f-b90e-a0a184fe1970"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.757398 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ac7118f-27ae-4b40-bf45-56fb3f3b60e5" (UID: "1ac7118f-27ae-4b40-bf45-56fb3f3b60e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.757613 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-kube-api-access-mxbfx" (OuterVolumeSpecName: "kube-api-access-mxbfx") pod "e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" (UID: "e7e0fcb8-7c6d-423f-b90e-a0a184fe1970"). InnerVolumeSpecName "kube-api-access-mxbfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.778190 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf28c62-87dc-461a-bf5a-4ae13d62e489-kube-api-access-c22fx" (OuterVolumeSpecName: "kube-api-access-c22fx") pod "faf28c62-87dc-461a-bf5a-4ae13d62e489" (UID: "faf28c62-87dc-461a-bf5a-4ae13d62e489"). 
InnerVolumeSpecName "kube-api-access-c22fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.783685 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb9a60a7-bd12-495d-b0c3-feebe0f65bf8-config-data" (OuterVolumeSpecName: "config-data") pod "eb9a60a7-bd12-495d-b0c3-feebe0f65bf8" (UID: "eb9a60a7-bd12-495d-b0c3-feebe0f65bf8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.824727 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb9a60a7-bd12-495d-b0c3-feebe0f65bf8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb9a60a7-bd12-495d-b0c3-feebe0f65bf8" (UID: "eb9a60a7-bd12-495d-b0c3-feebe0f65bf8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.834693 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-config-data" (OuterVolumeSpecName: "config-data") pod "8fa39160-bfb2-49ae-b2ca-12c0e5788996" (UID: "8fa39160-bfb2-49ae-b2ca-12c0e5788996"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.835036 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.835071 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.835081 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.835091 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c22fx\" (UniqueName: \"kubernetes.io/projected/faf28c62-87dc-461a-bf5a-4ae13d62e489-kube-api-access-c22fx\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.835536 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.835552 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb9a60a7-bd12-495d-b0c3-feebe0f65bf8-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.835561 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9a60a7-bd12-495d-b0c3-feebe0f65bf8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.835571 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc 
kubenswrapper[4895]: I1206 07:31:49.835580 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxbfx\" (UniqueName: \"kubernetes.io/projected/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-kube-api-access-mxbfx\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.835590 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faf28c62-87dc-461a-bf5a-4ae13d62e489-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.835598 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.849584 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa5d7561-2042-4dcc-8ddc-336475230720" (UID: "fa5d7561-2042-4dcc-8ddc-336475230720"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.893681 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81e2c836-79af-46e7-8be8-a9b0ffdab060-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81e2c836-79af-46e7-8be8-a9b0ffdab060" (UID: "81e2c836-79af-46e7-8be8-a9b0ffdab060"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.900236 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" (UID: "e7e0fcb8-7c6d-423f-b90e-a0a184fe1970"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.910464 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.915321 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-server-conf" (OuterVolumeSpecName: "server-conf") pod "e963d73b-d3f2-4c70-8dbd-687b3fc1962d" (UID: "e963d73b-d3f2-4c70-8dbd-687b3fc1962d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.921666 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46664967-bc44-4dd5-8fa7-419d1f7741fd-config-data" (OuterVolumeSpecName: "config-data") pod "46664967-bc44-4dd5-8fa7-419d1f7741fd" (UID: "46664967-bc44-4dd5-8fa7-419d1f7741fd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.939939 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46664967-bc44-4dd5-8fa7-419d1f7741fd-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.939981 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.939993 4895 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-server-conf\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.940005 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.940015 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.940025 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81e2c836-79af-46e7-8be8-a9b0ffdab060-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.949389 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.960700 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1ac7118f-27ae-4b40-bf45-56fb3f3b60e5" (UID: "1ac7118f-27ae-4b40-bf45-56fb3f3b60e5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.980926 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fa5d7561-2042-4dcc-8ddc-336475230720" (UID: "fa5d7561-2042-4dcc-8ddc-336475230720"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:49 crc kubenswrapper[4895]: I1206 07:31:49.987740 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1ac7118f-27ae-4b40-bf45-56fb3f3b60e5" (UID: "1ac7118f-27ae-4b40-bf45-56fb3f3b60e5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.014995 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-server-conf" (OuterVolumeSpecName: "server-conf") pod "8fa39160-bfb2-49ae-b2ca-12c0e5788996" (UID: "8fa39160-bfb2-49ae-b2ca-12c0e5788996"). 
InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.017887 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" (UID: "e7e0fcb8-7c6d-423f-b90e-a0a184fe1970"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.045278 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.045323 4895 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.045337 4895 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.045348 4895 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8fa39160-bfb2-49ae-b2ca-12c0e5788996-server-conf\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.045359 4895 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.045369 4895 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.062218 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_cccbd9be-50fa-413b-bb47-1af68ecdda2d/ovn-northd/0.log" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.062378 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.068458 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-config-data" (OuterVolumeSpecName: "config-data") pod "fa5d7561-2042-4dcc-8ddc-336475230720" (UID: "fa5d7561-2042-4dcc-8ddc-336475230720"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.069184 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8fa39160-bfb2-49ae-b2ca-12c0e5788996" (UID: "8fa39160-bfb2-49ae-b2ca-12c0e5788996"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.097137 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e963d73b-d3f2-4c70-8dbd-687b3fc1962d" (UID: "e963d73b-d3f2-4c70-8dbd-687b3fc1962d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.106590 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5739e86-0fb8-4368-91ae-f2a09bb9848c" path="/var/lib/kubelet/pods/d5739e86-0fb8-4368-91ae-f2a09bb9848c/volumes" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.108810 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" (UID: "e7e0fcb8-7c6d-423f-b90e-a0a184fe1970"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.108892 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81e2c836-79af-46e7-8be8-a9b0ffdab060-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "81e2c836-79af-46e7-8be8-a9b0ffdab060" (UID: "81e2c836-79af-46e7-8be8-a9b0ffdab060"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.128694 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.136111 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-config-data" (OuterVolumeSpecName: "config-data") pod "e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" (UID: "e7e0fcb8-7c6d-423f-b90e-a0a184fe1970"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.148214 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cccbd9be-50fa-413b-bb47-1af68ecdda2d-metrics-certs-tls-certs\") pod \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.148270 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cccbd9be-50fa-413b-bb47-1af68ecdda2d-ovn-northd-tls-certs\") pod \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.148332 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cccbd9be-50fa-413b-bb47-1af68ecdda2d-ovn-rundir\") pod \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.148401 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6bc4\" (UniqueName: \"kubernetes.io/projected/cccbd9be-50fa-413b-bb47-1af68ecdda2d-kube-api-access-l6bc4\") pod \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.148437 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cccbd9be-50fa-413b-bb47-1af68ecdda2d-scripts\") pod \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.148509 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cccbd9be-50fa-413b-bb47-1af68ecdda2d-config\") pod \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.148568 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cccbd9be-50fa-413b-bb47-1af68ecdda2d-combined-ca-bundle\") pod \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.148797 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.148856 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cccbd9be-50fa-413b-bb47-1af68ecdda2d-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "cccbd9be-50fa-413b-bb47-1af68ecdda2d" (UID: "cccbd9be-50fa-413b-bb47-1af68ecdda2d"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.152558 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cccbd9be-50fa-413b-bb47-1af68ecdda2d-scripts" (OuterVolumeSpecName: "scripts") pod "cccbd9be-50fa-413b-bb47-1af68ecdda2d" (UID: "cccbd9be-50fa-413b-bb47-1af68ecdda2d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.152680 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cccbd9be-50fa-413b-bb47-1af68ecdda2d-config" (OuterVolumeSpecName: "config") pod "cccbd9be-50fa-413b-bb47-1af68ecdda2d" (UID: "cccbd9be-50fa-413b-bb47-1af68ecdda2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.155523 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.155578 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.155591 4895 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cccbd9be-50fa-413b-bb47-1af68ecdda2d-ovn-rundir\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.155604 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa5d7561-2042-4dcc-8ddc-336475230720-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.155640 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8fa39160-bfb2-49ae-b2ca-12c0e5788996-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.155652 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cccbd9be-50fa-413b-bb47-1af68ecdda2d-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.155665 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e963d73b-d3f2-4c70-8dbd-687b3fc1962d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.155676 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cccbd9be-50fa-413b-bb47-1af68ecdda2d-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.155688 4895 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/81e2c836-79af-46e7-8be8-a9b0ffdab060-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.188560 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-676f67bc8f-srz2n" podUID="275e5518-922b-455d-a5d5-7b072a12ab07" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.181:9696/\": dial tcp 10.217.0.181:9696: connect: connection refused" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.193210 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican18b2-account-delete-zdlkh"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.204630 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/cccbd9be-50fa-413b-bb47-1af68ecdda2d-kube-api-access-l6bc4" (OuterVolumeSpecName: "kube-api-access-l6bc4") pod "cccbd9be-50fa-413b-bb47-1af68ecdda2d" (UID: "cccbd9be-50fa-413b-bb47-1af68ecdda2d"). InnerVolumeSpecName "kube-api-access-l6bc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.204689 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican18b2-account-delete-zdlkh"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.228006 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-698445c967-xk6g2"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.238281 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cccbd9be-50fa-413b-bb47-1af68ecdda2d-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "cccbd9be-50fa-413b-bb47-1af68ecdda2d" (UID: "cccbd9be-50fa-413b-bb47-1af68ecdda2d"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.240768 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-698445c967-xk6g2"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.253744 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.256806 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cccbd9be-50fa-413b-bb47-1af68ecdda2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cccbd9be-50fa-413b-bb47-1af68ecdda2d" (UID: "cccbd9be-50fa-413b-bb47-1af68ecdda2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.257284 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cccbd9be-50fa-413b-bb47-1af68ecdda2d-combined-ca-bundle\") pod \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\" (UID: \"cccbd9be-50fa-413b-bb47-1af68ecdda2d\") " Dec 06 07:31:50 crc kubenswrapper[4895]: W1206 07:31:50.257785 4895 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/cccbd9be-50fa-413b-bb47-1af68ecdda2d/volumes/kubernetes.io~secret/combined-ca-bundle Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.257912 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cccbd9be-50fa-413b-bb47-1af68ecdda2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cccbd9be-50fa-413b-bb47-1af68ecdda2d" (UID: "cccbd9be-50fa-413b-bb47-1af68ecdda2d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.258162 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6bc4\" (UniqueName: \"kubernetes.io/projected/cccbd9be-50fa-413b-bb47-1af68ecdda2d-kube-api-access-l6bc4\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.258447 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cccbd9be-50fa-413b-bb47-1af68ecdda2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.258579 4895 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cccbd9be-50fa-413b-bb47-1af68ecdda2d-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.263556 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.278998 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.290703 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.303021 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5b7499d868-f7bk5"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.313058 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cccbd9be-50fa-413b-bb47-1af68ecdda2d-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "cccbd9be-50fa-413b-bb47-1af68ecdda2d" (UID: "cccbd9be-50fa-413b-bb47-1af68ecdda2d"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.313933 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5b7499d868-f7bk5"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.323121 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.334388 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.343772 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.353206 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.360899 4895 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cccbd9be-50fa-413b-bb47-1af68ecdda2d-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.570047 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7e0fcb8-7c6d-423f-b90e-a0a184fe1970","Type":"ContainerDied","Data":"11765b9ca5a2400328fdc886e0cdc6549f6e06b20e872e0db9a78ad0cc5af704"} Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.570449 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.579252 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.579245 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa5d7561-2042-4dcc-8ddc-336475230720","Type":"ContainerDied","Data":"e4b017b40cb9870c53cbc8882248226d312f74744a71419cefab7226316f489d"} Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.592369 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinderdb70-account-delete-lqhcq" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.592866 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinderdb70-account-delete-lqhcq" event={"ID":"faf28c62-87dc-461a-bf5a-4ae13d62e489","Type":"ContainerDied","Data":"f9840e1f21720d6c72eeb1b347079b8a6d02c54c63afbf722c5c5534b7abd35d"} Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.600493 4895 scope.go:117] "RemoveContainer" containerID="c0d8057c614bf57165265f6274705c1b72d6138ba82d542917a4bf68a493d896" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.605060 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-cwrlp_81e2c836-79af-46e7-8be8-a9b0ffdab060/ovn-controller/0.log" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.605250 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cwrlp" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.605307 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cwrlp" event={"ID":"81e2c836-79af-46e7-8be8-a9b0ffdab060","Type":"ContainerDied","Data":"7caeda06fed1ebef38d062bd202da4cb174b8733061b87620d3f54ff5648b873"} Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.617275 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_cccbd9be-50fa-413b-bb47-1af68ecdda2d/ovn-northd/0.log" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.617343 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cccbd9be-50fa-413b-bb47-1af68ecdda2d","Type":"ContainerDied","Data":"ed4aa267c4cf7e430d22bb6216dca25df5479195c139d204ede2101777a9b21d"} Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.617424 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.651809 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinderdb70-account-delete-lqhcq"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.707363 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinderdb70-account-delete-lqhcq"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.724486 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.743298 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.756768 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.767082 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.780236 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-cwrlp"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.789667 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-cwrlp"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.797991 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:31:50 crc kubenswrapper[4895]: I1206 07:31:50.808127 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 07:31:51 crc kubenswrapper[4895]: E1206 07:31:51.818989 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" containerID="84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 06 07:31:51 crc kubenswrapper[4895]: E1206 07:31:51.820314 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" containerID="84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 06 07:31:51 crc kubenswrapper[4895]: E1206 07:31:51.820317 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 06 07:31:51 crc kubenswrapper[4895]: E1206 07:31:51.822055 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" containerID="84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 06 07:31:51 crc kubenswrapper[4895]: E1206 07:31:51.822178 4895 prober.go:104] "Probe errored" 
err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qnbdj" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovsdb-server" Dec 06 07:31:51 crc kubenswrapper[4895]: E1206 07:31:51.823117 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 06 07:31:51 crc kubenswrapper[4895]: E1206 07:31:51.825078 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 06 07:31:51 crc kubenswrapper[4895]: E1206 07:31:51.825144 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qnbdj" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovs-vswitchd" Dec 06 07:31:52 crc kubenswrapper[4895]: I1206 07:31:52.062216 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04293385-9ad8-4686-a3d3-e39d586a7e6f" path="/var/lib/kubelet/pods/04293385-9ad8-4686-a3d3-e39d586a7e6f/volumes" Dec 06 07:31:52 crc kubenswrapper[4895]: I1206 07:31:52.064124 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ac7118f-27ae-4b40-bf45-56fb3f3b60e5" path="/var/lib/kubelet/pods/1ac7118f-27ae-4b40-bf45-56fb3f3b60e5/volumes" Dec 06 07:31:52 crc kubenswrapper[4895]: I1206 07:31:52.064879 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46664967-bc44-4dd5-8fa7-419d1f7741fd" path="/var/lib/kubelet/pods/46664967-bc44-4dd5-8fa7-419d1f7741fd/volumes" Dec 06 07:31:52 crc kubenswrapper[4895]: I1206 07:31:52.066236 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53540f68-ae58-4d76-870b-3cc4b77eb1e3" path="/var/lib/kubelet/pods/53540f68-ae58-4d76-870b-3cc4b77eb1e3/volumes" Dec 06 07:31:52 crc kubenswrapper[4895]: I1206 07:31:52.073229 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e2c836-79af-46e7-8be8-a9b0ffdab060" path="/var/lib/kubelet/pods/81e2c836-79af-46e7-8be8-a9b0ffdab060/volumes" Dec 06 07:31:52 crc kubenswrapper[4895]: I1206 07:31:52.074193 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fa39160-bfb2-49ae-b2ca-12c0e5788996" path="/var/lib/kubelet/pods/8fa39160-bfb2-49ae-b2ca-12c0e5788996/volumes" Dec 06 07:31:52 crc kubenswrapper[4895]: I1206 07:31:52.074928 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cccbd9be-50fa-413b-bb47-1af68ecdda2d" path="/var/lib/kubelet/pods/cccbd9be-50fa-413b-bb47-1af68ecdda2d/volumes" Dec 06 07:31:52 crc kubenswrapper[4895]: I1206 07:31:52.075559 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5401d7f-627c-410f-ae61-d7653749a7d3" path="/var/lib/kubelet/pods/d5401d7f-627c-410f-ae61-d7653749a7d3/volumes" Dec 06 07:31:52 crc 
kubenswrapper[4895]: I1206 07:31:52.076107 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" path="/var/lib/kubelet/pods/e7e0fcb8-7c6d-423f-b90e-a0a184fe1970/volumes" Dec 06 07:31:52 crc kubenswrapper[4895]: I1206 07:31:52.078168 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e963d73b-d3f2-4c70-8dbd-687b3fc1962d" path="/var/lib/kubelet/pods/e963d73b-d3f2-4c70-8dbd-687b3fc1962d/volumes" Dec 06 07:31:52 crc kubenswrapper[4895]: I1206 07:31:52.078816 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb9a60a7-bd12-495d-b0c3-feebe0f65bf8" path="/var/lib/kubelet/pods/eb9a60a7-bd12-495d-b0c3-feebe0f65bf8/volumes" Dec 06 07:31:52 crc kubenswrapper[4895]: I1206 07:31:52.080040 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa5d7561-2042-4dcc-8ddc-336475230720" path="/var/lib/kubelet/pods/fa5d7561-2042-4dcc-8ddc-336475230720/volumes" Dec 06 07:31:52 crc kubenswrapper[4895]: I1206 07:31:52.080720 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faf28c62-87dc-461a-bf5a-4ae13d62e489" path="/var/lib/kubelet/pods/faf28c62-87dc-461a-bf5a-4ae13d62e489/volumes" Dec 06 07:31:52 crc kubenswrapper[4895]: I1206 07:31:52.614618 4895 scope.go:117] "RemoveContainer" containerID="c65a6f976e5fa9af5eaf596cfeeff15bdcc75ba1010c01bacc5174bb022b9e1c" Dec 06 07:31:55 crc kubenswrapper[4895]: E1206 07:31:55.963183 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:55 crc kubenswrapper[4895]: E1206 07:31:55.963710 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b854481-cd2a-4938-8b82-3288191b5bbe-operator-scripts podName:5b854481-cd2a-4938-8b82-3288191b5bbe nodeName:}" failed. No retries permitted until 2025-12-06 07:32:11.9636806 +0000 UTC m=+2094.365069490 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5b854481-cd2a-4938-8b82-3288191b5bbe-operator-scripts") pod "novaapif7f2-account-delete-zxqsn" (UID: "5b854481-cd2a-4938-8b82-3288191b5bbe") : configmap "openstack-scripts" not found Dec 06 07:31:55 crc kubenswrapper[4895]: E1206 07:31:55.963266 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:55 crc kubenswrapper[4895]: E1206 07:31:55.963841 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d199b21-7519-4bbb-adac-07ad0b1e21d9-operator-scripts podName:7d199b21-7519-4bbb-adac-07ad0b1e21d9 nodeName:}" failed. No retries permitted until 2025-12-06 07:32:11.963807843 +0000 UTC m=+2094.365196773 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7d199b21-7519-4bbb-adac-07ad0b1e21d9-operator-scripts") pod "glance55c9-account-delete-4b8gl" (UID: "7d199b21-7519-4bbb-adac-07ad0b1e21d9") : configmap "openstack-scripts" not found Dec 06 07:31:55 crc kubenswrapper[4895]: E1206 07:31:55.963300 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:55 crc kubenswrapper[4895]: E1206 07:31:55.963931 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/585dce37-3558-4a0c-8dfb-108c94c1047c-operator-scripts podName:585dce37-3558-4a0c-8dfb-108c94c1047c nodeName:}" failed. 
No retries permitted until 2025-12-06 07:32:11.963912966 +0000 UTC m=+2094.365301846 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/585dce37-3558-4a0c-8dfb-108c94c1047c-operator-scripts") pod "novacell01ea0-account-delete-2sg72" (UID: "585dce37-3558-4a0c-8dfb-108c94c1047c") : configmap "openstack-scripts" not found Dec 06 07:31:55 crc kubenswrapper[4895]: E1206 07:31:55.963309 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:55 crc kubenswrapper[4895]: E1206 07:31:55.963973 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a103ad6f-b726-4ad6-9aec-a689a74a4304-operator-scripts podName:a103ad6f-b726-4ad6-9aec-a689a74a4304 nodeName:}" failed. No retries permitted until 2025-12-06 07:32:11.963964577 +0000 UTC m=+2094.365353457 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a103ad6f-b726-4ad6-9aec-a689a74a4304-operator-scripts") pod "neutronc074-account-delete-5dkk6" (UID: "a103ad6f-b726-4ad6-9aec-a689a74a4304") : configmap "openstack-scripts" not found Dec 06 07:31:55 crc kubenswrapper[4895]: E1206 07:31:55.963321 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:31:55 crc kubenswrapper[4895]: E1206 07:31:55.964010 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b6fc6ccb-af32-472e-b0f5-11cb224b4885-operator-scripts podName:b6fc6ccb-af32-472e-b0f5-11cb224b4885 nodeName:}" failed. No retries permitted until 2025-12-06 07:32:11.964002818 +0000 UTC m=+2094.365391698 (durationBeforeRetry 16s). 
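
The "No retries permitted until ... (durationBeforeRetry 16s)" errors above are the volume manager's per-operation exponential backoff: each failed MountVolume.SetUp for the missing "openstack-scripts" ConfigMap doubles the wait before the next attempt. A 16s step is consistent with a ladder that starts at 500ms and doubles on every consecutive failure (roughly the sixth failure in a row), though the exact constants below are assumptions, not the kubelet's. A runnable sketch:

package main

import (
	"errors"
	"fmt"
	"time"
)

// backoff tracks retry state for one volume operation; the durations are
// assumed values, not the kubelet's exact constants.
type backoff struct {
	delay   time.Duration
	max     time.Duration
	nextTry time.Time
}

func (b *backoff) fail(now time.Time) {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond
	} else if b.delay < b.max {
		b.delay *= 2
	}
	b.nextTry = now.Add(b.delay)
	fmt.Printf("failed. No retries permitted until %s (durationBeforeRetry %s)\n",
		b.nextTry.Format(time.RFC3339), b.delay)
}

// mountConfigMap stands in for MountVolume.SetUp against a missing ConfigMap.
func mountConfigMap() error {
	return errors.New(`configmap "openstack-scripts" not found`)
}

func main() {
	b := backoff{max: 2 * time.Minute}
	now := time.Now()
	for i := 0; i < 6; i++ {
		if now.Before(b.nextTry) {
			now = b.nextTry // wait out the current backoff window
		}
		if err := mountConfigMap(); err != nil {
			fmt.Printf("MountVolume.SetUp failed for volume \"operator-scripts\": %v\n", err)
			b.fail(now)
		}
	}
}

Across six failures this prints durationBeforeRetry 500ms, 1s, 2s, 4s, 8s and finally 16s, the last matching the records above.
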
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b6fc6ccb-af32-472e-b0f5-11cb224b4885-operator-scripts") pod "placement00ad-account-delete-6xhps" (UID: "b6fc6ccb-af32-472e-b0f5-11cb224b4885") : configmap "openstack-scripts" not found Dec 06 07:31:56 crc kubenswrapper[4895]: E1206 07:31:56.818103 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" containerID="84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 06 07:31:56 crc kubenswrapper[4895]: E1206 07:31:56.818989 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" containerID="84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 06 07:31:56 crc kubenswrapper[4895]: E1206 07:31:56.819550 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" containerID="84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 06 07:31:56 crc kubenswrapper[4895]: E1206 07:31:56.819652 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 06 07:31:56 crc kubenswrapper[4895]: E1206 07:31:56.819701 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qnbdj" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovsdb-server" Dec 06 07:31:56 crc kubenswrapper[4895]: E1206 07:31:56.820655 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 06 07:31:56 crc kubenswrapper[4895]: E1206 07:31:56.822201 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 06 07:31:56 crc kubenswrapper[4895]: E1206 07:31:56.822289 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qnbdj" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovs-vswitchd" Dec 06 07:31:58 crc kubenswrapper[4895]: E1206 07:31:58.450533 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec6651c03e585a366b7286c6c9b9b5a1379defbc33a3578d889176fd4d166811" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 06 07:31:58 crc kubenswrapper[4895]: E1206 07:31:58.451893 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec6651c03e585a366b7286c6c9b9b5a1379defbc33a3578d889176fd4d166811" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 06 07:31:58 crc kubenswrapper[4895]: E1206 07:31:58.453300 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec6651c03e585a366b7286c6c9b9b5a1379defbc33a3578d889176fd4d166811" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 06 07:31:58 crc kubenswrapper[4895]: E1206 07:31:58.453345 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="4abb614a-de81-4c59-8c5b-27e6761f93c9" containerName="galera" Dec 06 07:31:59 crc kubenswrapper[4895]: I1206 07:31:59.696504 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:31:59 crc kubenswrapper[4895]: I1206 07:31:59.696564 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:31:59 crc kubenswrapper[4895]: I1206 07:31:59.696612 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 07:31:59 crc kubenswrapper[4895]: I1206 07:31:59.697252 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1afcdb14b177bc99b4d67f898f37ab6806e81e208403fb192e71d61458db3cfa"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:31:59 crc kubenswrapper[4895]: I1206 07:31:59.697306 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" 
containerID="cri-o://1afcdb14b177bc99b4d67f898f37ab6806e81e208403fb192e71d61458db3cfa" gracePeriod=600 Dec 06 07:31:59 crc kubenswrapper[4895]: E1206 07:31:59.697810 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f907b06f8ee70e6e66e5862860a2218d5683f85f48108d1b129302c31f3a7602" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 06 07:31:59 crc kubenswrapper[4895]: E1206 07:31:59.700496 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f907b06f8ee70e6e66e5862860a2218d5683f85f48108d1b129302c31f3a7602" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 06 07:31:59 crc kubenswrapper[4895]: E1206 07:31:59.702591 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f907b06f8ee70e6e66e5862860a2218d5683f85f48108d1b129302c31f3a7602" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 06 07:31:59 crc kubenswrapper[4895]: E1206 07:31:59.702644 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" containerName="galera" Dec 06 07:32:00 crc kubenswrapper[4895]: I1206 07:32:00.652317 4895 scope.go:117] "RemoveContainer" containerID="a69a5c75210b542e25ce1c72e591c0e062baf33da7df98918d86673278bac167" Dec 06 07:32:01 crc kubenswrapper[4895]: I1206 07:32:01.741083 4895 generic.go:334] "Generic (PLEG): container finished" podID="275e5518-922b-455d-a5d5-7b072a12ab07" containerID="c164dcb786933c905e3f3e8351f17e2bb2512e11081c2453a5584c61dbfedabc" exitCode=0 Dec 06 07:32:01 crc kubenswrapper[4895]: I1206 07:32:01.741142 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-676f67bc8f-srz2n" event={"ID":"275e5518-922b-455d-a5d5-7b072a12ab07","Type":"ContainerDied","Data":"c164dcb786933c905e3f3e8351f17e2bb2512e11081c2453a5584c61dbfedabc"} Dec 06 07:32:01 crc kubenswrapper[4895]: E1206 07:32:01.817821 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" containerID="84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 06 07:32:01 crc kubenswrapper[4895]: E1206 07:32:01.818320 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" containerID="84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 06 07:32:01 crc kubenswrapper[4895]: E1206 07:32:01.818886 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = 
NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" containerID="84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 06 07:32:01 crc kubenswrapper[4895]: E1206 07:32:01.818959 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qnbdj" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovsdb-server" Dec 06 07:32:01 crc kubenswrapper[4895]: E1206 07:32:01.820250 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 06 07:32:01 crc kubenswrapper[4895]: E1206 07:32:01.840639 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 06 07:32:01 crc kubenswrapper[4895]: E1206 07:32:01.843001 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 06 07:32:01 crc kubenswrapper[4895]: E1206 07:32:01.843045 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qnbdj" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovs-vswitchd" Dec 06 07:32:02 crc kubenswrapper[4895]: I1206 07:32:02.612869 4895 scope.go:117] "RemoveContainer" containerID="095169f5f9fa6c06694e3f9a70e904f671bb5f62da219801747bc88ed69e39fc" Dec 06 07:32:02 crc kubenswrapper[4895]: I1206 07:32:02.744732 4895 scope.go:117] "RemoveContainer" containerID="b02063e5faf708de34c2034eb08a56c2da17f968cfa6ea952d0d97dba87a4a30" Dec 06 07:32:02 crc kubenswrapper[4895]: I1206 07:32:02.759089 4895 generic.go:334] "Generic (PLEG): container finished" podID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerID="180fd4a822736c6399b31cb1a67003fc90408e00dbbaac62ec926a3d268825ec" exitCode=137 Dec 06 07:32:02 crc kubenswrapper[4895]: I1206 07:32:02.759176 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerDied","Data":"180fd4a822736c6399b31cb1a67003fc90408e00dbbaac62ec926a3d268825ec"} Dec 06 07:32:02 crc kubenswrapper[4895]: I1206 07:32:02.761543 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qnbdj_588e7e7b-f1fb-4e68-846a-04c6a23bec39/ovs-vswitchd/0.log" Dec 06 07:32:02 crc kubenswrapper[4895]: I1206 
07:32:02.762412 4895 generic.go:334] "Generic (PLEG): container finished" podID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" exitCode=137 Dec 06 07:32:02 crc kubenswrapper[4895]: I1206 07:32:02.762527 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qnbdj" event={"ID":"588e7e7b-f1fb-4e68-846a-04c6a23bec39","Type":"ContainerDied","Data":"6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be"} Dec 06 07:32:02 crc kubenswrapper[4895]: I1206 07:32:02.767428 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="1afcdb14b177bc99b4d67f898f37ab6806e81e208403fb192e71d61458db3cfa" exitCode=0 Dec 06 07:32:02 crc kubenswrapper[4895]: I1206 07:32:02.767503 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"1afcdb14b177bc99b4d67f898f37ab6806e81e208403fb192e71d61458db3cfa"} Dec 06 07:32:02 crc kubenswrapper[4895]: I1206 07:32:02.768134 4895 scope.go:117] "RemoveContainer" containerID="9df6c3348da2931d78ecc446fbd4746175039dc755b08b1089a9a7913d9218b1" Dec 06 07:32:02 crc kubenswrapper[4895]: I1206 07:32:02.799923 4895 scope.go:117] "RemoveContainer" containerID="5df099b69b140c2bfc00dcd20dcf9cc910e3fcf8d06e8e51a6f93094d065fa12" Dec 06 07:32:02 crc kubenswrapper[4895]: I1206 07:32:02.821321 4895 scope.go:117] "RemoveContainer" containerID="858d69c67fcb7cf887899ad453b3fe21cb32c7264ceb76a64a55b5a587145893" Dec 06 07:32:02 crc kubenswrapper[4895]: I1206 07:32:02.844830 4895 scope.go:117] "RemoveContainer" containerID="0ba8a7725e3e059cced115d45bb53a3395ed65609a2182433586613416231fdd" Dec 06 07:32:02 crc kubenswrapper[4895]: I1206 07:32:02.897555 4895 scope.go:117] "RemoveContainer" containerID="b777a5a301fd4f77108bae41580433834a4fcb0d280446b3355223fe60884897" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.361848 4895 scope.go:117] "RemoveContainer" containerID="d412c2ee11d8f4f926bebc48cbf18089f618c5e089a94582d14e647031554d1a" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.380024 4895 scope.go:117] "RemoveContainer" containerID="b91b7e983d4a1b355dc874f73fbaa1ce58e3e040d07274d2e14ec839fcb170dc" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.400721 4895 util.go:48] "No ready sandbox for pod can be found. 
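
The "Generic (PLEG): container finished ... exitCode=137" / "SyncLoop (PLEG): event for pod ... ContainerDied" pairs come from the pod lifecycle event generator: it periodically relists containers, diffs the snapshot against the previous one, and turns each transition into an event for the sync loop. Exit code 137 is 128+SIGKILL, i.e. these containers were killed after their termination grace period expired. A sketch of the relist diff, with invented types:

package main

import "fmt"

// containerState is the per-container snapshot a PLEG-style relist compares.
type containerState struct {
	Running  bool
	ExitCode int
}

// relist diffs two snapshots and emits a ContainerDied event for anything
// that stopped in between.
func relist(prev, cur map[string]containerState, pod string) {
	for id, old := range prev {
		if now, ok := cur[id]; ok && old.Running && !now.Running {
			fmt.Printf("Generic (PLEG): container finished containerID=%q exitCode=%d\n",
				id, now.ExitCode)
			fmt.Printf("SyncLoop (PLEG): event for pod %q ContainerDied %q\n", pod, id)
		}
	}
}

func main() {
	id := "6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be"
	relist(
		map[string]containerState{id: {Running: true}},
		map[string]containerState{id: {Running: false, ExitCode: 137}},
		"openstack/ovn-controller-ovs-qnbdj",
	)
}
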
Need to start a new one" pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.434773 4895 scope.go:117] "RemoveContainer" containerID="33d31e9aedcfefe23175b6c6b234217c7c44b103b5a898c80f6cefb0800cc0d3" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.445839 4895 scope.go:117] "RemoveContainer" containerID="b02b1557f296cc772b516824934a552a670fbd91ea02b44e19950ebf807e862a" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.463357 4895 scope.go:117] "RemoveContainer" containerID="2ce6192aa1275c19e07c7558055c8aacdb4300950e03766d48c588da1997c632" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.483064 4895 scope.go:117] "RemoveContainer" containerID="f608b8f2d91f359e66e47a2170b12aa2033c9aa3fd46ebe5c13e759c317a8d0b" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.496712 4895 scope.go:117] "RemoveContainer" containerID="f86240a4c5102f3ed5bdfa5a65cd0b3f6262f647bcf567c98c4795854511e25d" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.506501 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-public-tls-certs\") pod \"275e5518-922b-455d-a5d5-7b072a12ab07\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.506544 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-ovndb-tls-certs\") pod \"275e5518-922b-455d-a5d5-7b072a12ab07\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.506599 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-internal-tls-certs\") pod \"275e5518-922b-455d-a5d5-7b072a12ab07\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.506631 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-combined-ca-bundle\") pod \"275e5518-922b-455d-a5d5-7b072a12ab07\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.506655 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-httpd-config\") pod \"275e5518-922b-455d-a5d5-7b072a12ab07\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.506681 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-config\") pod \"275e5518-922b-455d-a5d5-7b072a12ab07\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.506733 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtscj\" (UniqueName: \"kubernetes.io/projected/275e5518-922b-455d-a5d5-7b072a12ab07-kube-api-access-jtscj\") pod \"275e5518-922b-455d-a5d5-7b072a12ab07\" (UID: \"275e5518-922b-455d-a5d5-7b072a12ab07\") " Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.510456 4895 scope.go:117] "RemoveContainer" 
containerID="24e243d683f4d1e457142494c7ae9f7b77e4d37cba6f0178e3fdb24f773dad15" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.513132 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "275e5518-922b-455d-a5d5-7b072a12ab07" (UID: "275e5518-922b-455d-a5d5-7b072a12ab07"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.513672 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/275e5518-922b-455d-a5d5-7b072a12ab07-kube-api-access-jtscj" (OuterVolumeSpecName: "kube-api-access-jtscj") pod "275e5518-922b-455d-a5d5-7b072a12ab07" (UID: "275e5518-922b-455d-a5d5-7b072a12ab07"). InnerVolumeSpecName "kube-api-access-jtscj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.543427 4895 scope.go:117] "RemoveContainer" containerID="8a1b5763ec47db56d8a3d960d8777c8db8c3fc314a29bacd5c4abccec0f31148" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.549513 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "275e5518-922b-455d-a5d5-7b072a12ab07" (UID: "275e5518-922b-455d-a5d5-7b072a12ab07"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.553569 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-config" (OuterVolumeSpecName: "config") pod "275e5518-922b-455d-a5d5-7b072a12ab07" (UID: "275e5518-922b-455d-a5d5-7b072a12ab07"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.556604 4895 scope.go:117] "RemoveContainer" containerID="bb9f938ac64e7d7cade7485c2af22ce637c88ad45a88b16d33b845f5c54d44ad" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.564381 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "275e5518-922b-455d-a5d5-7b072a12ab07" (UID: "275e5518-922b-455d-a5d5-7b072a12ab07"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.564636 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "275e5518-922b-455d-a5d5-7b072a12ab07" (UID: "275e5518-922b-455d-a5d5-7b072a12ab07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.588891 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "275e5518-922b-455d-a5d5-7b072a12ab07" (UID: "275e5518-922b-455d-a5d5-7b072a12ab07"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.611446 4895 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.611771 4895 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.611831 4895 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.611888 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.611941 4895 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.611993 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/275e5518-922b-455d-a5d5-7b072a12ab07-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.612052 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtscj\" (UniqueName: \"kubernetes.io/projected/275e5518-922b-455d-a5d5-7b072a12ab07-kube-api-access-jtscj\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.668069 4895 scope.go:117] "RemoveContainer" containerID="f99e8cb999be03cf9e185900bd38c8c7207dd178e7aa039c48d115277b98f6d7" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.693790 4895 scope.go:117] "RemoveContainer" containerID="1aad065597e380c56e43f02c6466794b5700558a3248e33cb15ece6837bdd069" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.708406 4895 scope.go:117] "RemoveContainer" containerID="903a61307cb2aad6db35d5637fc506893147f1633d2d088458f6893942be9522" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.738950 4895 scope.go:117] "RemoveContainer" containerID="5f86f88a0048b09a72677b652eb94fd21ab7d1447989850d3c2a784d667b1b12" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.755635 4895 scope.go:117] "RemoveContainer" containerID="8acd647173ea9348de51669e8f55bf35c74c189bbfe0a82f8badbb80b5baef39" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.799603 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q82ps" event={"ID":"a4b2fb89-5631-493f-9afe-51e41f81bdd2","Type":"ContainerStarted","Data":"f483625c638253fff9c0e5662e64ae60aea8726178901a1e0ab6019f217e234f"} Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.803633 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_cccbd9be-50fa-413b-bb47-1af68ecdda2d/ovn-northd/0.log" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.821726 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-676f67bc8f-srz2n" 
event={"ID":"275e5518-922b-455d-a5d5-7b072a12ab07","Type":"ContainerDied","Data":"9cff65a3cb292fe14e6571f1dafdab1f6f8cac1b0731b381f0565469386b2c12"} Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.821762 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-676f67bc8f-srz2n" Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.858027 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-676f67bc8f-srz2n"] Dec 06 07:32:03 crc kubenswrapper[4895]: I1206 07:32:03.865762 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-676f67bc8f-srz2n"] Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.072208 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="275e5518-922b-455d-a5d5-7b072a12ab07" path="/var/lib/kubelet/pods/275e5518-922b-455d-a5d5-7b072a12ab07/volumes" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.072866 4895 scope.go:117] "RemoveContainer" containerID="98b44f0f1e2a073a15d9551f71f5c355998b03a99d9436ed949aa6d7fed91a0d" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.083327 4895 scope.go:117] "RemoveContainer" containerID="396c5517a5377de34e58194ec2e688e2eb5546de17a7216da1f043b7e210861c" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.114759 4895 scope.go:117] "RemoveContainer" containerID="b02b1557f296cc772b516824934a552a670fbd91ea02b44e19950ebf807e862a" Dec 06 07:32:04 crc kubenswrapper[4895]: E1206 07:32:04.115375 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b02b1557f296cc772b516824934a552a670fbd91ea02b44e19950ebf807e862a\": container with ID starting with b02b1557f296cc772b516824934a552a670fbd91ea02b44e19950ebf807e862a not found: ID does not exist" containerID="b02b1557f296cc772b516824934a552a670fbd91ea02b44e19950ebf807e862a" Dec 06 07:32:04 crc kubenswrapper[4895]: E1206 07:32:04.115413 4895 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"b02b1557f296cc772b516824934a552a670fbd91ea02b44e19950ebf807e862a\": rpc error: code = NotFound desc = could not find container \"b02b1557f296cc772b516824934a552a670fbd91ea02b44e19950ebf807e862a\": container with ID starting with b02b1557f296cc772b516824934a552a670fbd91ea02b44e19950ebf807e862a not found: ID does not exist" containerID="b02b1557f296cc772b516824934a552a670fbd91ea02b44e19950ebf807e862a" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.115436 4895 scope.go:117] "RemoveContainer" containerID="7322288de69173a46c9c5d01fd459b6bd7190e029716431816a2e04cfcdda2fe" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.135012 4895 scope.go:117] "RemoveContainer" containerID="f5afe00d76e2637d5bb0ccee5cacbb0b0c661099f2511ec7f6983572c9b66fbd" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.145463 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qnbdj_588e7e7b-f1fb-4e68-846a-04c6a23bec39/ovs-vswitchd/0.log" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.146969 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.149543 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.167389 4895 scope.go:117] "RemoveContainer" containerID="43dc6067180e3f65623c69b4994dd075ce8e1c72869263fa6d55a6b7dce89050" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.176307 4895 scope.go:117] "RemoveContainer" containerID="edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.321681 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-etc-ovs\") pod \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.321750 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcs95\" (UniqueName: \"kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-kube-api-access-pcs95\") pod \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.321778 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/588e7e7b-f1fb-4e68-846a-04c6a23bec39-scripts\") pod \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.321801 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-lock\") pod \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.321902 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.321933 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift\") pod \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.321959 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-var-run\") pod \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.321961 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "588e7e7b-f1fb-4e68-846a-04c6a23bec39" (UID: "588e7e7b-f1fb-4e68-846a-04c6a23bec39"). InnerVolumeSpecName "etc-ovs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.321981 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vctn\" (UniqueName: \"kubernetes.io/projected/588e7e7b-f1fb-4e68-846a-04c6a23bec39-kube-api-access-8vctn\") pod \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.322122 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-cache\") pod \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\" (UID: \"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42\") " Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.322152 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-var-lib\") pod \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.322170 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-var-log\") pod \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\" (UID: \"588e7e7b-f1fb-4e68-846a-04c6a23bec39\") " Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.322419 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-var-log" (OuterVolumeSpecName: "var-log") pod "588e7e7b-f1fb-4e68-846a-04c6a23bec39" (UID: "588e7e7b-f1fb-4e68-846a-04c6a23bec39"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.322959 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-cache" (OuterVolumeSpecName: "cache") pod "43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" (UID: "43a2bfd7-f0c6-4b55-b629-2e11d6b45a42"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.323007 4895 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-var-log\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.323023 4895 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-etc-ovs\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.323049 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-var-lib" (OuterVolumeSpecName: "var-lib") pod "588e7e7b-f1fb-4e68-846a-04c6a23bec39" (UID: "588e7e7b-f1fb-4e68-846a-04c6a23bec39"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.323430 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-lock" (OuterVolumeSpecName: "lock") pod "43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" (UID: "43a2bfd7-f0c6-4b55-b629-2e11d6b45a42"). 
InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.323510 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-var-run" (OuterVolumeSpecName: "var-run") pod "588e7e7b-f1fb-4e68-846a-04c6a23bec39" (UID: "588e7e7b-f1fb-4e68-846a-04c6a23bec39"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.323859 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/588e7e7b-f1fb-4e68-846a-04c6a23bec39-scripts" (OuterVolumeSpecName: "scripts") pod "588e7e7b-f1fb-4e68-846a-04c6a23bec39" (UID: "588e7e7b-f1fb-4e68-846a-04c6a23bec39"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.325371 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "swift") pod "43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" (UID: "43a2bfd7-f0c6-4b55-b629-2e11d6b45a42"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.325438 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-kube-api-access-pcs95" (OuterVolumeSpecName: "kube-api-access-pcs95") pod "43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" (UID: "43a2bfd7-f0c6-4b55-b629-2e11d6b45a42"). InnerVolumeSpecName "kube-api-access-pcs95". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.325612 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/588e7e7b-f1fb-4e68-846a-04c6a23bec39-kube-api-access-8vctn" (OuterVolumeSpecName: "kube-api-access-8vctn") pod "588e7e7b-f1fb-4e68-846a-04c6a23bec39" (UID: "588e7e7b-f1fb-4e68-846a-04c6a23bec39"). InnerVolumeSpecName "kube-api-access-8vctn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.328185 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" (UID: "43a2bfd7-f0c6-4b55-b629-2e11d6b45a42"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.424711 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcs95\" (UniqueName: \"kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-kube-api-access-pcs95\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.424753 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/588e7e7b-f1fb-4e68-846a-04c6a23bec39-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.424762 4895 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-lock\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.424790 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.424801 4895 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.424810 4895 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-var-run\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.424821 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vctn\" (UniqueName: \"kubernetes.io/projected/588e7e7b-f1fb-4e68-846a-04c6a23bec39-kube-api-access-8vctn\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.424830 4895 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42-cache\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.424838 4895 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/588e7e7b-f1fb-4e68-846a-04c6a23bec39-var-lib\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.440605 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.526288 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.610711 4895 scope.go:117] "RemoveContainer" containerID="c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.620919 4895 scope.go:117] "RemoveContainer" containerID="2ce6192aa1275c19e07c7558055c8aacdb4300950e03766d48c588da1997c632" Dec 06 07:32:04 crc kubenswrapper[4895]: E1206 07:32:04.621451 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ce6192aa1275c19e07c7558055c8aacdb4300950e03766d48c588da1997c632\": container with ID starting with 
2ce6192aa1275c19e07c7558055c8aacdb4300950e03766d48c588da1997c632 not found: ID does not exist" containerID="2ce6192aa1275c19e07c7558055c8aacdb4300950e03766d48c588da1997c632" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.621503 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce6192aa1275c19e07c7558055c8aacdb4300950e03766d48c588da1997c632"} err="failed to get container status \"2ce6192aa1275c19e07c7558055c8aacdb4300950e03766d48c588da1997c632\": rpc error: code = NotFound desc = could not find container \"2ce6192aa1275c19e07c7558055c8aacdb4300950e03766d48c588da1997c632\": container with ID starting with 2ce6192aa1275c19e07c7558055c8aacdb4300950e03766d48c588da1997c632 not found: ID does not exist" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.621531 4895 scope.go:117] "RemoveContainer" containerID="c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.728012 4895 scope.go:117] "RemoveContainer" containerID="edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475" Dec 06 07:32:04 crc kubenswrapper[4895]: E1206 07:32:04.728491 4895 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_ovn-northd_ovn-northd-0_openstack_cccbd9be-50fa-413b-bb47-1af68ecdda2d_0 in pod sandbox ed4aa267c4cf7e430d22bb6216dca25df5479195c139d204ede2101777a9b21d from index: no such id: 'c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49'" containerID="c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.728567 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49"} err="rpc error: code = Unknown desc = failed to delete container k8s_ovn-northd_ovn-northd-0_openstack_cccbd9be-50fa-413b-bb47-1af68ecdda2d_0 in pod sandbox ed4aa267c4cf7e430d22bb6216dca25df5479195c139d204ede2101777a9b21d from index: no such id: 'c51617a844758cecbc273a4aabd1f93b529b9b6b05588863194730aa93dbdd49'" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.728609 4895 scope.go:117] "RemoveContainer" containerID="9e9260070c603a52aa877c9a7ce23031c8d463153ec9176552574b111ef1c115" Dec 06 07:32:04 crc kubenswrapper[4895]: E1206 07:32:04.729754 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475\": container with ID starting with edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475 not found: ID does not exist" containerID="edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475" Dec 06 07:32:04 crc kubenswrapper[4895]: E1206 07:32:04.729854 4895 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475\": rpc error: code = NotFound desc = could not find container \"edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475\": container with ID starting with edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475 not found: ID does not exist" containerID="edbf03fee947283200b3ec587a0a4532c864416e6e71357bf6e74d19ac416475" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.729933 4895 scope.go:117] "RemoveContainer" 
containerID="bdd2cdbec2e42c278fef4643e911a6317453915ea279a580dff8d34441275ca6" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.767766 4895 scope.go:117] "RemoveContainer" containerID="af1f9c0dc7c7f332389b729b0dbd7801804e28f7a5866ffe130f4e337a03960e" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.842242 4895 generic.go:334] "Generic (PLEG): container finished" podID="a4b2fb89-5631-493f-9afe-51e41f81bdd2" containerID="f483625c638253fff9c0e5662e64ae60aea8726178901a1e0ab6019f217e234f" exitCode=0 Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.842344 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q82ps" event={"ID":"a4b2fb89-5631-493f-9afe-51e41f81bdd2","Type":"ContainerDied","Data":"f483625c638253fff9c0e5662e64ae60aea8726178901a1e0ab6019f217e234f"} Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.848424 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qnbdj_588e7e7b-f1fb-4e68-846a-04c6a23bec39/ovs-vswitchd/0.log" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.854489 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-qnbdj" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.855295 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qnbdj" event={"ID":"588e7e7b-f1fb-4e68-846a-04c6a23bec39","Type":"ContainerDied","Data":"d3669cae0eb9b06d41cf7b5e39ddf41b4ba898dfcaf5121ad003bb69b6722906"} Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.866076 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02"} Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.873375 4895 generic.go:334] "Generic (PLEG): container finished" podID="31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" containerID="f907b06f8ee70e6e66e5862860a2218d5683f85f48108d1b129302c31f3a7602" exitCode=0 Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.873405 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5","Type":"ContainerDied","Data":"f907b06f8ee70e6e66e5862860a2218d5683f85f48108d1b129302c31f3a7602"} Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.884359 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"43a2bfd7-f0c6-4b55-b629-2e11d6b45a42","Type":"ContainerDied","Data":"ac8270e5055231150a31c4b897240a38a8753257ad1e267cf066326afe6f8ce5"} Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.884544 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.899057 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-qnbdj"] Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.905813 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-qnbdj"] Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.938592 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Dec 06 07:32:04 crc kubenswrapper[4895]: I1206 07:32:04.955757 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Dec 06 07:32:05 crc kubenswrapper[4895]: I1206 07:32:05.006633 4895 scope.go:117] "RemoveContainer" containerID="c86bf6e0b8a7f3b3d6f687082507be81f68d6591166955ceee6b1d2dca543c4d" Dec 06 07:32:05 crc kubenswrapper[4895]: I1206 07:32:05.052724 4895 scope.go:117] "RemoveContainer" containerID="b6998f94fefd3637ae3fc950603c2fcc0900a3d69d0e7e9061355de8045469a1" Dec 06 07:32:05 crc kubenswrapper[4895]: I1206 07:32:05.074816 4895 scope.go:117] "RemoveContainer" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" Dec 06 07:32:05 crc kubenswrapper[4895]: E1206 07:32:05.076639 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43a2bfd7_f0c6_4b55_b629_2e11d6b45a42.slice/crio-ac8270e5055231150a31c4b897240a38a8753257ad1e267cf066326afe6f8ce5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43a2bfd7_f0c6_4b55_b629_2e11d6b45a42.slice\": RecentStats: unable to find data in memory cache]" Dec 06 07:32:05 crc kubenswrapper[4895]: I1206 07:32:05.103057 4895 scope.go:117] "RemoveContainer" containerID="016b3b4ddfb130737f5910cdc3627785db79e92d28ff53c479c8a05d88f0d4bb" Dec 06 07:32:05 crc kubenswrapper[4895]: I1206 07:32:05.122206 4895 scope.go:117] "RemoveContainer" containerID="10652523d65e949d4742cc50fe660d3d9ed6a9316cf28e1771a8aca093c7773a" Dec 06 07:32:05 crc kubenswrapper[4895]: I1206 07:32:05.151697 4895 scope.go:117] "RemoveContainer" containerID="4f8e9ae1388bc7994b5365380a4bd5e84d80b90cafe1780718a2555d9c3d7e69" Dec 06 07:32:05 crc kubenswrapper[4895]: I1206 07:32:05.183527 4895 scope.go:117] "RemoveContainer" containerID="e0eab4e7e98f956fc203ece89fc3349c56301eb127b1a7da0e17f47ea8ecc398" Dec 06 07:32:05 crc kubenswrapper[4895]: I1206 07:32:05.194058 4895 scope.go:117] "RemoveContainer" containerID="c164dcb786933c905e3f3e8351f17e2bb2512e11081c2453a5584c61dbfedabc" Dec 06 07:32:05 crc kubenswrapper[4895]: I1206 07:32:05.224172 4895 scope.go:117] "RemoveContainer" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" Dec 06 07:32:05 crc kubenswrapper[4895]: E1206 07:32:05.224667 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be\": container with ID starting with 6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be not found: ID does not exist" containerID="6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be" Dec 06 07:32:05 crc kubenswrapper[4895]: I1206 07:32:05.224701 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be"} err="failed to get container status \"6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be\": rpc error: code = NotFound desc = could not find container \"6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be\": container with ID starting with 6c878df2b65ce98f3847cc3465d9299fcf843dc0c07d8b964b7ab2fdcc0581be not found: ID does not exist" Dec 06 07:32:05 crc kubenswrapper[4895]: I1206 07:32:05.224721 4895 scope.go:117] "RemoveContainer" containerID="84939b0efbb319250fb07c616623dfbdee0c55480b96f224a1cb8e8cb8aa5863" Dec 06 07:32:06 crc kubenswrapper[4895]: I1206 07:32:06.061930 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" path="/var/lib/kubelet/pods/43a2bfd7-f0c6-4b55-b629-2e11d6b45a42/volumes" Dec 06 07:32:06 crc kubenswrapper[4895]: I1206 07:32:06.063957 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" path="/var/lib/kubelet/pods/588e7e7b-f1fb-4e68-846a-04c6a23bec39/volumes" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.298704 4895 scope.go:117] "RemoveContainer" containerID="646c7cf440f5622553df0b11a1660ff28b88b59df2381deaedd5c44b26c3a8a9" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.324025 4895 scope.go:117] "RemoveContainer" containerID="250d7b8ff11407089b4da6523ff53810b8d0ae52221f94509678cf3767a8a85d" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.375612 4895 scope.go:117] "RemoveContainer" containerID="1cc0732ee9960229ad2e7c33a85923ad4eec361fd616faf483d86023b612af30" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.402831 4895 scope.go:117] "RemoveContainer" containerID="016b3b4ddfb130737f5910cdc3627785db79e92d28ff53c479c8a05d88f0d4bb" Dec 06 07:32:08 crc kubenswrapper[4895]: E1206 07:32:08.403591 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"016b3b4ddfb130737f5910cdc3627785db79e92d28ff53c479c8a05d88f0d4bb\": container with ID starting with 016b3b4ddfb130737f5910cdc3627785db79e92d28ff53c479c8a05d88f0d4bb not found: ID does not exist" containerID="016b3b4ddfb130737f5910cdc3627785db79e92d28ff53c479c8a05d88f0d4bb" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.403658 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"016b3b4ddfb130737f5910cdc3627785db79e92d28ff53c479c8a05d88f0d4bb"} err="failed to get container status \"016b3b4ddfb130737f5910cdc3627785db79e92d28ff53c479c8a05d88f0d4bb\": rpc error: code = NotFound desc = could not find container \"016b3b4ddfb130737f5910cdc3627785db79e92d28ff53c479c8a05d88f0d4bb\": container with ID starting with 016b3b4ddfb130737f5910cdc3627785db79e92d28ff53c479c8a05d88f0d4bb not found: ID does not exist" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.403693 4895 scope.go:117] "RemoveContainer" containerID="6c0595c85ab20846664ac79d1f96e53f167ef4c98a6f2705bd28abc3d10e0b7d" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.423167 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 06 07:32:08 crc kubenswrapper[4895]: E1206 07:32:08.459348 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec6651c03e585a366b7286c6c9b9b5a1379defbc33a3578d889176fd4d166811" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 06 07:32:08 crc kubenswrapper[4895]: E1206 07:32:08.461106 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec6651c03e585a366b7286c6c9b9b5a1379defbc33a3578d889176fd4d166811" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 06 07:32:08 crc kubenswrapper[4895]: E1206 07:32:08.462992 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec6651c03e585a366b7286c6c9b9b5a1379defbc33a3578d889176fd4d166811" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 06 07:32:08 crc kubenswrapper[4895]: E1206 07:32:08.463044 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="4abb614a-de81-4c59-8c5b-27e6761f93c9" containerName="galera" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.489442 4895 scope.go:117] "RemoveContainer" containerID="bfb674c6a5da2b3cedce99fd5b05c4ff83c3d7fbbbb50b8baa1146520aa3235d" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.500432 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-kolla-config\") pod \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.500557 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-config-data-generated\") pod \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.500664 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.500689 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-operator-scripts\") pod \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.500778 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xczxs\" (UniqueName: \"kubernetes.io/projected/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-kube-api-access-xczxs\") pod 
\"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.500799 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-combined-ca-bundle\") pod \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.500841 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-config-data-default\") pod \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.500863 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-galera-tls-certs\") pod \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\" (UID: \"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5\") " Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.501874 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" (UID: "31a6771e-d46b-42cc-bbca-9d2ddbf24bb5"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.501953 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" (UID: "31a6771e-d46b-42cc-bbca-9d2ddbf24bb5"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.502317 4895 scope.go:117] "RemoveContainer" containerID="b330db62e15f40daaac157cd4c49b8c144883337b31335984bf5592ae231a59c" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.502334 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" (UID: "31a6771e-d46b-42cc-bbca-9d2ddbf24bb5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.502968 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" (UID: "31a6771e-d46b-42cc-bbca-9d2ddbf24bb5"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.506829 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-kube-api-access-xczxs" (OuterVolumeSpecName: "kube-api-access-xczxs") pod "31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" (UID: "31a6771e-d46b-42cc-bbca-9d2ddbf24bb5"). 
InnerVolumeSpecName "kube-api-access-xczxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.513846 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "mysql-db") pod "31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" (UID: "31a6771e-d46b-42cc-bbca-9d2ddbf24bb5"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.517525 4895 scope.go:117] "RemoveContainer" containerID="6c0595c85ab20846664ac79d1f96e53f167ef4c98a6f2705bd28abc3d10e0b7d" Dec 06 07:32:08 crc kubenswrapper[4895]: E1206 07:32:08.519676 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c0595c85ab20846664ac79d1f96e53f167ef4c98a6f2705bd28abc3d10e0b7d\": container with ID starting with 6c0595c85ab20846664ac79d1f96e53f167ef4c98a6f2705bd28abc3d10e0b7d not found: ID does not exist" containerID="6c0595c85ab20846664ac79d1f96e53f167ef4c98a6f2705bd28abc3d10e0b7d" Dec 06 07:32:08 crc kubenswrapper[4895]: E1206 07:32:08.519741 4895 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"6c0595c85ab20846664ac79d1f96e53f167ef4c98a6f2705bd28abc3d10e0b7d\": rpc error: code = NotFound desc = could not find container \"6c0595c85ab20846664ac79d1f96e53f167ef4c98a6f2705bd28abc3d10e0b7d\": container with ID starting with 6c0595c85ab20846664ac79d1f96e53f167ef4c98a6f2705bd28abc3d10e0b7d not found: ID does not exist" containerID="6c0595c85ab20846664ac79d1f96e53f167ef4c98a6f2705bd28abc3d10e0b7d" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.527353 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" (UID: "31a6771e-d46b-42cc-bbca-9d2ddbf24bb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.534863 4895 scope.go:117] "RemoveContainer" containerID="2951f946e28728b5afab411ba269775aef6e893b3da37ae09e4c6ad9b2e2cd1d" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.548203 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" (UID: "31a6771e-d46b-42cc-bbca-9d2ddbf24bb5"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.565759 4895 scope.go:117] "RemoveContainer" containerID="180fd4a822736c6399b31cb1a67003fc90408e00dbbaac62ec926a3d268825ec" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.586652 4895 scope.go:117] "RemoveContainer" containerID="12eb91bc2a51f4766807671be1ba08375fb07f1bfcf2b4debe6b359d5cf1ad3a" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.602751 4895 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-kolla-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.602774 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-config-data-generated\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.602805 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.602814 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.602824 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xczxs\" (UniqueName: \"kubernetes.io/projected/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-kube-api-access-xczxs\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.602833 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.602841 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-config-data-default\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.602849 4895 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.620150 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.629713 4895 scope.go:117] "RemoveContainer" containerID="468c36ddfa04c0375e91b38b9d03a4849fff2aae471e8fb65d8b36405a987438" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.704378 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.961306 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"31a6771e-d46b-42cc-bbca-9d2ddbf24bb5","Type":"ContainerDied","Data":"621698424d5fe4a179660d3c7fcacb49d9f383e8e276df690c9ad31e17e8fa8d"} Dec 06 
07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.961327 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.994341 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 06 07:32:08 crc kubenswrapper[4895]: I1206 07:32:08.999592 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 06 07:32:09 crc kubenswrapper[4895]: I1206 07:32:09.103158 4895 scope.go:117] "RemoveContainer" containerID="fa9d83bbd1bfb2f7ebd0b8374526974f2e013372bc11e48787e237b85890d529" Dec 06 07:32:09 crc kubenswrapper[4895]: I1206 07:32:09.149467 4895 scope.go:117] "RemoveContainer" containerID="9cbd0224b85c8a430882c76ef6c4ea96f23027717b65a3fd1800ebeee11b9ea6" Dec 06 07:32:09 crc kubenswrapper[4895]: I1206 07:32:09.764681 4895 scope.go:117] "RemoveContainer" containerID="a45b30a9ac61253c7662e8944033d9348f16b13301f55a8a8a2040cd78bdd894" Dec 06 07:32:09 crc kubenswrapper[4895]: I1206 07:32:09.791927 4895 scope.go:117] "RemoveContainer" containerID="95dadf2ac42ccfd2a7bccc9ed9a272bfdd8c736e08c731df6f9d73b086d6a880" Dec 06 07:32:09 crc kubenswrapper[4895]: I1206 07:32:09.813427 4895 scope.go:117] "RemoveContainer" containerID="5390e87e60eff5498d3563e5dccff27ede47a6a293471f0a7d9c2ca23354855c" Dec 06 07:32:09 crc kubenswrapper[4895]: I1206 07:32:09.837251 4895 scope.go:117] "RemoveContainer" containerID="3ef3301fb5b94d56ebbbf77fe821db08595a72ce6dc8b57263d0355011539f31" Dec 06 07:32:09 crc kubenswrapper[4895]: I1206 07:32:09.857153 4895 scope.go:117] "RemoveContainer" containerID="d88fdff0da3bb24a30ae1253952bff8962b2fd7e5173dd829fee80c77dc2670f" Dec 06 07:32:10 crc kubenswrapper[4895]: I1206 07:32:10.065058 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" path="/var/lib/kubelet/pods/31a6771e-d46b-42cc-bbca-9d2ddbf24bb5/volumes" Dec 06 07:32:10 crc kubenswrapper[4895]: I1206 07:32:10.133659 4895 scope.go:117] "RemoveContainer" containerID="e031534957b9fccda0363a790f71a039ee246b5fbf68723177270eb631a9658b" Dec 06 07:32:10 crc kubenswrapper[4895]: I1206 07:32:10.170758 4895 scope.go:117] "RemoveContainer" containerID="f907b06f8ee70e6e66e5862860a2218d5683f85f48108d1b129302c31f3a7602" Dec 06 07:32:10 crc kubenswrapper[4895]: I1206 07:32:10.199150 4895 scope.go:117] "RemoveContainer" containerID="b92b460ccf694f6a4184f0124b88195172ac8c58d597a8666b73801a8c04c66e" Dec 06 07:32:12 crc kubenswrapper[4895]: I1206 07:32:12.017273 4895 generic.go:334] "Generic (PLEG): container finished" podID="7d199b21-7519-4bbb-adac-07ad0b1e21d9" containerID="dd8361e49bc19745decbaa9fab4265f3aba5ada82bfa6c483323c5b7995ec623" exitCode=137 Dec 06 07:32:12 crc kubenswrapper[4895]: I1206 07:32:12.017387 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance55c9-account-delete-4b8gl" event={"ID":"7d199b21-7519-4bbb-adac-07ad0b1e21d9","Type":"ContainerDied","Data":"dd8361e49bc19745decbaa9fab4265f3aba5ada82bfa6c483323c5b7995ec623"} Dec 06 07:32:12 crc kubenswrapper[4895]: I1206 07:32:12.019930 4895 generic.go:334] "Generic (PLEG): container finished" podID="b6fc6ccb-af32-472e-b0f5-11cb224b4885" containerID="d685d94fb1c3d4e8614fa5f946378b11f947c0609dac3cdffe0954dbfa36810e" exitCode=137 Dec 06 07:32:12 crc kubenswrapper[4895]: I1206 07:32:12.019986 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement00ad-account-delete-6xhps" 
event={"ID":"b6fc6ccb-af32-472e-b0f5-11cb224b4885","Type":"ContainerDied","Data":"d685d94fb1c3d4e8614fa5f946378b11f947c0609dac3cdffe0954dbfa36810e"} Dec 06 07:32:12 crc kubenswrapper[4895]: I1206 07:32:12.022673 4895 generic.go:334] "Generic (PLEG): container finished" podID="a103ad6f-b726-4ad6-9aec-a689a74a4304" containerID="7c315d743ccddde23168adb0dbfeb519c3eeecd7d636fc309c38efb844cb926d" exitCode=137 Dec 06 07:32:12 crc kubenswrapper[4895]: I1206 07:32:12.022710 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutronc074-account-delete-5dkk6" event={"ID":"a103ad6f-b726-4ad6-9aec-a689a74a4304","Type":"ContainerDied","Data":"7c315d743ccddde23168adb0dbfeb519c3eeecd7d636fc309c38efb844cb926d"} Dec 06 07:32:12 crc kubenswrapper[4895]: E1206 07:32:12.057290 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:32:12 crc kubenswrapper[4895]: E1206 07:32:12.057327 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:32:12 crc kubenswrapper[4895]: E1206 07:32:12.057343 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:32:12 crc kubenswrapper[4895]: E1206 07:32:12.057355 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/585dce37-3558-4a0c-8dfb-108c94c1047c-operator-scripts podName:585dce37-3558-4a0c-8dfb-108c94c1047c nodeName:}" failed. No retries permitted until 2025-12-06 07:32:44.057341644 +0000 UTC m=+2126.458730504 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/585dce37-3558-4a0c-8dfb-108c94c1047c-operator-scripts") pod "novacell01ea0-account-delete-2sg72" (UID: "585dce37-3558-4a0c-8dfb-108c94c1047c") : configmap "openstack-scripts" not found Dec 06 07:32:12 crc kubenswrapper[4895]: E1206 07:32:12.057366 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:32:12 crc kubenswrapper[4895]: E1206 07:32:12.057385 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b6fc6ccb-af32-472e-b0f5-11cb224b4885-operator-scripts podName:b6fc6ccb-af32-472e-b0f5-11cb224b4885 nodeName:}" failed. No retries permitted until 2025-12-06 07:32:44.057367595 +0000 UTC m=+2126.458756465 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b6fc6ccb-af32-472e-b0f5-11cb224b4885-operator-scripts") pod "placement00ad-account-delete-6xhps" (UID: "b6fc6ccb-af32-472e-b0f5-11cb224b4885") : configmap "openstack-scripts" not found Dec 06 07:32:12 crc kubenswrapper[4895]: E1206 07:32:12.057368 4895 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 06 07:32:12 crc kubenswrapper[4895]: E1206 07:32:12.057403 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b854481-cd2a-4938-8b82-3288191b5bbe-operator-scripts podName:5b854481-cd2a-4938-8b82-3288191b5bbe nodeName:}" failed. No retries permitted until 2025-12-06 07:32:44.057396366 +0000 UTC m=+2126.458785236 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5b854481-cd2a-4938-8b82-3288191b5bbe-operator-scripts") pod "novaapif7f2-account-delete-zxqsn" (UID: "5b854481-cd2a-4938-8b82-3288191b5bbe") : configmap "openstack-scripts" not found Dec 06 07:32:12 crc kubenswrapper[4895]: E1206 07:32:12.057444 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d199b21-7519-4bbb-adac-07ad0b1e21d9-operator-scripts podName:7d199b21-7519-4bbb-adac-07ad0b1e21d9 nodeName:}" failed. No retries permitted until 2025-12-06 07:32:44.057432487 +0000 UTC m=+2126.458821357 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7d199b21-7519-4bbb-adac-07ad0b1e21d9-operator-scripts") pod "glance55c9-account-delete-4b8gl" (UID: "7d199b21-7519-4bbb-adac-07ad0b1e21d9") : configmap "openstack-scripts" not found Dec 06 07:32:12 crc kubenswrapper[4895]: E1206 07:32:12.057458 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a103ad6f-b726-4ad6-9aec-a689a74a4304-operator-scripts podName:a103ad6f-b726-4ad6-9aec-a689a74a4304 nodeName:}" failed. No retries permitted until 2025-12-06 07:32:44.057451307 +0000 UTC m=+2126.458840177 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a103ad6f-b726-4ad6-9aec-a689a74a4304-operator-scripts") pod "neutronc074-account-delete-5dkk6" (UID: "a103ad6f-b726-4ad6-9aec-a689a74a4304") : configmap "openstack-scripts" not found Dec 06 07:32:13 crc kubenswrapper[4895]: I1206 07:32:13.037042 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4abb614a-de81-4c59-8c5b-27e6761f93c9","Type":"ContainerDied","Data":"ec6651c03e585a366b7286c6c9b9b5a1379defbc33a3578d889176fd4d166811"} Dec 06 07:32:13 crc kubenswrapper[4895]: I1206 07:32:13.037193 4895 generic.go:334] "Generic (PLEG): container finished" podID="4abb614a-de81-4c59-8c5b-27e6761f93c9" containerID="ec6651c03e585a366b7286c6c9b9b5a1379defbc33a3578d889176fd4d166811" exitCode=137 Dec 06 07:32:15 crc kubenswrapper[4895]: E1206 07:32:15.288581 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b854481_cd2a_4938_8b82_3288191b5bbe.slice/crio-conmon-9da3b6aaa51a3296d5d8d42889f5a2f616e05c11f48f276b9f01a0a860f8d302.scope\": RecentStats: unable to find data in memory cache]" Dec 06 07:32:15 crc kubenswrapper[4895]: I1206 07:32:15.538208 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement00ad-account-delete-6xhps" Dec 06 07:32:15 crc kubenswrapper[4895]: I1206 07:32:15.711099 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn8cr\" (UniqueName: \"kubernetes.io/projected/b6fc6ccb-af32-472e-b0f5-11cb224b4885-kube-api-access-sn8cr\") pod \"b6fc6ccb-af32-472e-b0f5-11cb224b4885\" (UID: \"b6fc6ccb-af32-472e-b0f5-11cb224b4885\") " Dec 06 07:32:15 crc kubenswrapper[4895]: I1206 07:32:15.711163 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6fc6ccb-af32-472e-b0f5-11cb224b4885-operator-scripts\") pod \"b6fc6ccb-af32-472e-b0f5-11cb224b4885\" (UID: \"b6fc6ccb-af32-472e-b0f5-11cb224b4885\") " Dec 06 07:32:15 crc kubenswrapper[4895]: I1206 07:32:15.712031 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6fc6ccb-af32-472e-b0f5-11cb224b4885-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b6fc6ccb-af32-472e-b0f5-11cb224b4885" (UID: "b6fc6ccb-af32-472e-b0f5-11cb224b4885"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:32:15 crc kubenswrapper[4895]: I1206 07:32:15.716820 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6fc6ccb-af32-472e-b0f5-11cb224b4885-kube-api-access-sn8cr" (OuterVolumeSpecName: "kube-api-access-sn8cr") pod "b6fc6ccb-af32-472e-b0f5-11cb224b4885" (UID: "b6fc6ccb-af32-472e-b0f5-11cb224b4885"). InnerVolumeSpecName "kube-api-access-sn8cr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:32:15 crc kubenswrapper[4895]: I1206 07:32:15.812945 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn8cr\" (UniqueName: \"kubernetes.io/projected/b6fc6ccb-af32-472e-b0f5-11cb224b4885-kube-api-access-sn8cr\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:15 crc kubenswrapper[4895]: I1206 07:32:15.813006 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6fc6ccb-af32-472e-b0f5-11cb224b4885-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.044801 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance55c9-account-delete-4b8gl" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.070811 4895 generic.go:334] "Generic (PLEG): container finished" podID="5b854481-cd2a-4938-8b82-3288191b5bbe" containerID="9da3b6aaa51a3296d5d8d42889f5a2f616e05c11f48f276b9f01a0a860f8d302" exitCode=137 Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.070888 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapif7f2-account-delete-zxqsn" event={"ID":"5b854481-cd2a-4938-8b82-3288191b5bbe","Type":"ContainerDied","Data":"9da3b6aaa51a3296d5d8d42889f5a2f616e05c11f48f276b9f01a0a860f8d302"} Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.072646 4895 generic.go:334] "Generic (PLEG): container finished" podID="585dce37-3558-4a0c-8dfb-108c94c1047c" containerID="10df986128b44b72d83b806681282da55e82a82f513122e8ea557b6136a9bdb2" exitCode=137 Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.072694 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell01ea0-account-delete-2sg72" event={"ID":"585dce37-3558-4a0c-8dfb-108c94c1047c","Type":"ContainerDied","Data":"10df986128b44b72d83b806681282da55e82a82f513122e8ea557b6136a9bdb2"} Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.074373 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance55c9-account-delete-4b8gl" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.074572 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance55c9-account-delete-4b8gl" event={"ID":"7d199b21-7519-4bbb-adac-07ad0b1e21d9","Type":"ContainerDied","Data":"0422bcdaaa315ce661fba9ee62b070d34fa4a4c6a29ab06357af15d623bea5c5"} Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.074603 4895 scope.go:117] "RemoveContainer" containerID="dd8361e49bc19745decbaa9fab4265f3aba5ada82bfa6c483323c5b7995ec623" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.076615 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement00ad-account-delete-6xhps" event={"ID":"b6fc6ccb-af32-472e-b0f5-11cb224b4885","Type":"ContainerDied","Data":"0e124448b5f0eed5933c837c60f0c79ffc31bec312f35fa6f6337c0587e199ba"} Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.076685 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement00ad-account-delete-6xhps" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.076943 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutronc074-account-delete-5dkk6" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.126046 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement00ad-account-delete-6xhps"] Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.133352 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement00ad-account-delete-6xhps"] Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.220145 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d199b21-7519-4bbb-adac-07ad0b1e21d9-operator-scripts\") pod \"7d199b21-7519-4bbb-adac-07ad0b1e21d9\" (UID: \"7d199b21-7519-4bbb-adac-07ad0b1e21d9\") " Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.220229 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6gtx\" (UniqueName: \"kubernetes.io/projected/a103ad6f-b726-4ad6-9aec-a689a74a4304-kube-api-access-p6gtx\") pod \"a103ad6f-b726-4ad6-9aec-a689a74a4304\" (UID: \"a103ad6f-b726-4ad6-9aec-a689a74a4304\") " Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.220335 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a103ad6f-b726-4ad6-9aec-a689a74a4304-operator-scripts\") pod \"a103ad6f-b726-4ad6-9aec-a689a74a4304\" (UID: \"a103ad6f-b726-4ad6-9aec-a689a74a4304\") " Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.220401 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bmb2\" (UniqueName: \"kubernetes.io/projected/7d199b21-7519-4bbb-adac-07ad0b1e21d9-kube-api-access-9bmb2\") pod \"7d199b21-7519-4bbb-adac-07ad0b1e21d9\" (UID: \"7d199b21-7519-4bbb-adac-07ad0b1e21d9\") " Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.220799 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d199b21-7519-4bbb-adac-07ad0b1e21d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d199b21-7519-4bbb-adac-07ad0b1e21d9" (UID: "7d199b21-7519-4bbb-adac-07ad0b1e21d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.221166 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a103ad6f-b726-4ad6-9aec-a689a74a4304-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a103ad6f-b726-4ad6-9aec-a689a74a4304" (UID: "a103ad6f-b726-4ad6-9aec-a689a74a4304"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.222681 4895 scope.go:117] "RemoveContainer" containerID="d685d94fb1c3d4e8614fa5f946378b11f947c0609dac3cdffe0954dbfa36810e" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.224783 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a103ad6f-b726-4ad6-9aec-a689a74a4304-kube-api-access-p6gtx" (OuterVolumeSpecName: "kube-api-access-p6gtx") pod "a103ad6f-b726-4ad6-9aec-a689a74a4304" (UID: "a103ad6f-b726-4ad6-9aec-a689a74a4304"). InnerVolumeSpecName "kube-api-access-p6gtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.224846 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d199b21-7519-4bbb-adac-07ad0b1e21d9-kube-api-access-9bmb2" (OuterVolumeSpecName: "kube-api-access-9bmb2") pod "7d199b21-7519-4bbb-adac-07ad0b1e21d9" (UID: "7d199b21-7519-4bbb-adac-07ad0b1e21d9"). InnerVolumeSpecName "kube-api-access-9bmb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.322804 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a103ad6f-b726-4ad6-9aec-a689a74a4304-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.324578 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bmb2\" (UniqueName: \"kubernetes.io/projected/7d199b21-7519-4bbb-adac-07ad0b1e21d9-kube-api-access-9bmb2\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.324628 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d199b21-7519-4bbb-adac-07ad0b1e21d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.324644 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6gtx\" (UniqueName: \"kubernetes.io/projected/a103ad6f-b726-4ad6-9aec-a689a74a4304-kube-api-access-p6gtx\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.410036 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance55c9-account-delete-4b8gl"] Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.414855 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance55c9-account-delete-4b8gl"] Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.456513 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapif7f2-account-delete-zxqsn" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.534283 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b854481-cd2a-4938-8b82-3288191b5bbe-operator-scripts\") pod \"5b854481-cd2a-4938-8b82-3288191b5bbe\" (UID: \"5b854481-cd2a-4938-8b82-3288191b5bbe\") " Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.534348 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k54cs\" (UniqueName: \"kubernetes.io/projected/5b854481-cd2a-4938-8b82-3288191b5bbe-kube-api-access-k54cs\") pod \"5b854481-cd2a-4938-8b82-3288191b5bbe\" (UID: \"5b854481-cd2a-4938-8b82-3288191b5bbe\") " Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.535420 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b854481-cd2a-4938-8b82-3288191b5bbe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b854481-cd2a-4938-8b82-3288191b5bbe" (UID: "5b854481-cd2a-4938-8b82-3288191b5bbe"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.539533 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b854481-cd2a-4938-8b82-3288191b5bbe-kube-api-access-k54cs" (OuterVolumeSpecName: "kube-api-access-k54cs") pod "5b854481-cd2a-4938-8b82-3288191b5bbe" (UID: "5b854481-cd2a-4938-8b82-3288191b5bbe"). InnerVolumeSpecName "kube-api-access-k54cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.601157 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.635743 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b854481-cd2a-4938-8b82-3288191b5bbe-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.635781 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k54cs\" (UniqueName: \"kubernetes.io/projected/5b854481-cd2a-4938-8b82-3288191b5bbe-kube-api-access-k54cs\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.736873 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lksz9\" (UniqueName: \"kubernetes.io/projected/4abb614a-de81-4c59-8c5b-27e6761f93c9-kube-api-access-lksz9\") pod \"4abb614a-de81-4c59-8c5b-27e6761f93c9\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.737427 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4abb614a-de81-4c59-8c5b-27e6761f93c9-galera-tls-certs\") pod \"4abb614a-de81-4c59-8c5b-27e6761f93c9\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.737514 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abb614a-de81-4c59-8c5b-27e6761f93c9-combined-ca-bundle\") pod \"4abb614a-de81-4c59-8c5b-27e6761f93c9\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.737536 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4abb614a-de81-4c59-8c5b-27e6761f93c9-config-data-default\") pod \"4abb614a-de81-4c59-8c5b-27e6761f93c9\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.737600 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4abb614a-de81-4c59-8c5b-27e6761f93c9-config-data-generated\") pod \"4abb614a-de81-4c59-8c5b-27e6761f93c9\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.737720 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"4abb614a-de81-4c59-8c5b-27e6761f93c9\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.737749 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/4abb614a-de81-4c59-8c5b-27e6761f93c9-kolla-config\") pod \"4abb614a-de81-4c59-8c5b-27e6761f93c9\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.737779 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4abb614a-de81-4c59-8c5b-27e6761f93c9-operator-scripts\") pod \"4abb614a-de81-4c59-8c5b-27e6761f93c9\" (UID: \"4abb614a-de81-4c59-8c5b-27e6761f93c9\") " Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.738765 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4abb614a-de81-4c59-8c5b-27e6761f93c9-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "4abb614a-de81-4c59-8c5b-27e6761f93c9" (UID: "4abb614a-de81-4c59-8c5b-27e6761f93c9"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.739365 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4abb614a-de81-4c59-8c5b-27e6761f93c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4abb614a-de81-4c59-8c5b-27e6761f93c9" (UID: "4abb614a-de81-4c59-8c5b-27e6761f93c9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.739848 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4abb614a-de81-4c59-8c5b-27e6761f93c9-kube-api-access-lksz9" (OuterVolumeSpecName: "kube-api-access-lksz9") pod "4abb614a-de81-4c59-8c5b-27e6761f93c9" (UID: "4abb614a-de81-4c59-8c5b-27e6761f93c9"). InnerVolumeSpecName "kube-api-access-lksz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.739866 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4abb614a-de81-4c59-8c5b-27e6761f93c9-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "4abb614a-de81-4c59-8c5b-27e6761f93c9" (UID: "4abb614a-de81-4c59-8c5b-27e6761f93c9"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.740072 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4abb614a-de81-4c59-8c5b-27e6761f93c9-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "4abb614a-de81-4c59-8c5b-27e6761f93c9" (UID: "4abb614a-de81-4c59-8c5b-27e6761f93c9"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.757026 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "mysql-db") pod "4abb614a-de81-4c59-8c5b-27e6761f93c9" (UID: "4abb614a-de81-4c59-8c5b-27e6761f93c9"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.764314 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4abb614a-de81-4c59-8c5b-27e6761f93c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4abb614a-de81-4c59-8c5b-27e6761f93c9" (UID: "4abb614a-de81-4c59-8c5b-27e6761f93c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.793664 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="d5739e86-0fb8-4368-91ae-f2a09bb9848c" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.177:9292/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.793680 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="d5739e86-0fb8-4368-91ae-f2a09bb9848c" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.177:9292/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.809287 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4abb614a-de81-4c59-8c5b-27e6761f93c9-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "4abb614a-de81-4c59-8c5b-27e6761f93c9" (UID: "4abb614a-de81-4c59-8c5b-27e6761f93c9"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.839846 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.839913 4895 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4abb614a-de81-4c59-8c5b-27e6761f93c9-kolla-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.839928 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4abb614a-de81-4c59-8c5b-27e6761f93c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.839941 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lksz9\" (UniqueName: \"kubernetes.io/projected/4abb614a-de81-4c59-8c5b-27e6761f93c9-kube-api-access-lksz9\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.839952 4895 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4abb614a-de81-4c59-8c5b-27e6761f93c9-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.839961 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abb614a-de81-4c59-8c5b-27e6761f93c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.839970 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" 
(UniqueName: \"kubernetes.io/configmap/4abb614a-de81-4c59-8c5b-27e6761f93c9-config-data-default\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.839979 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4abb614a-de81-4c59-8c5b-27e6761f93c9-config-data-generated\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.853279 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 06 07:32:16 crc kubenswrapper[4895]: I1206 07:32:16.943665 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:17 crc kubenswrapper[4895]: I1206 07:32:17.088696 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapif7f2-account-delete-zxqsn" event={"ID":"5b854481-cd2a-4938-8b82-3288191b5bbe","Type":"ContainerDied","Data":"6acf7d6ed2598e556c64b834076a5ee3e03373351f775d6679a6ea81ea3ef2bf"} Dec 06 07:32:17 crc kubenswrapper[4895]: I1206 07:32:17.088790 4895 scope.go:117] "RemoveContainer" containerID="9da3b6aaa51a3296d5d8d42889f5a2f616e05c11f48f276b9f01a0a860f8d302" Dec 06 07:32:17 crc kubenswrapper[4895]: I1206 07:32:17.088787 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapif7f2-account-delete-zxqsn" Dec 06 07:32:17 crc kubenswrapper[4895]: I1206 07:32:17.092696 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 06 07:32:17 crc kubenswrapper[4895]: I1206 07:32:17.092676 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4abb614a-de81-4c59-8c5b-27e6761f93c9","Type":"ContainerDied","Data":"c2603757c5c38237952c060bc1ee8fb4b69347282ed65082ce00f1d2840856f5"} Dec 06 07:32:17 crc kubenswrapper[4895]: I1206 07:32:17.094549 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutronc074-account-delete-5dkk6" event={"ID":"a103ad6f-b726-4ad6-9aec-a689a74a4304","Type":"ContainerDied","Data":"496eee51e515abd32fd065f53520f0f50ed06f94bd48098d3e1c6bbf036c1de0"} Dec 06 07:32:17 crc kubenswrapper[4895]: I1206 07:32:17.094663 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutronc074-account-delete-5dkk6" Dec 06 07:32:17 crc kubenswrapper[4895]: I1206 07:32:17.113041 4895 scope.go:117] "RemoveContainer" containerID="ec6651c03e585a366b7286c6c9b9b5a1379defbc33a3578d889176fd4d166811" Dec 06 07:32:17 crc kubenswrapper[4895]: I1206 07:32:17.136522 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Dec 06 07:32:17 crc kubenswrapper[4895]: I1206 07:32:17.148992 4895 scope.go:117] "RemoveContainer" containerID="c628e12ea50228621f1e41f4485c674a0036ba8ca8c24cb7cfcef246a700dd15" Dec 06 07:32:17 crc kubenswrapper[4895]: I1206 07:32:17.150026 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Dec 06 07:32:17 crc kubenswrapper[4895]: I1206 07:32:17.159945 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapif7f2-account-delete-zxqsn"] Dec 06 07:32:17 crc kubenswrapper[4895]: I1206 07:32:17.166624 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novaapif7f2-account-delete-zxqsn"] Dec 06 07:32:17 crc kubenswrapper[4895]: I1206 07:32:17.173288 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutronc074-account-delete-5dkk6"] Dec 06 07:32:17 crc kubenswrapper[4895]: I1206 07:32:17.176977 4895 scope.go:117] "RemoveContainer" containerID="7c315d743ccddde23168adb0dbfeb519c3eeecd7d636fc309c38efb844cb926d" Dec 06 07:32:17 crc kubenswrapper[4895]: I1206 07:32:17.179045 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutronc074-account-delete-5dkk6"] Dec 06 07:32:18 crc kubenswrapper[4895]: I1206 07:32:18.079931 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4abb614a-de81-4c59-8c5b-27e6761f93c9" path="/var/lib/kubelet/pods/4abb614a-de81-4c59-8c5b-27e6761f93c9/volumes" Dec 06 07:32:18 crc kubenswrapper[4895]: I1206 07:32:18.080582 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b854481-cd2a-4938-8b82-3288191b5bbe" path="/var/lib/kubelet/pods/5b854481-cd2a-4938-8b82-3288191b5bbe/volumes" Dec 06 07:32:18 crc kubenswrapper[4895]: I1206 07:32:18.081157 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d199b21-7519-4bbb-adac-07ad0b1e21d9" path="/var/lib/kubelet/pods/7d199b21-7519-4bbb-adac-07ad0b1e21d9/volumes" Dec 06 07:32:18 crc kubenswrapper[4895]: I1206 07:32:18.083179 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a103ad6f-b726-4ad6-9aec-a689a74a4304" path="/var/lib/kubelet/pods/a103ad6f-b726-4ad6-9aec-a689a74a4304/volumes" Dec 06 07:32:18 crc kubenswrapper[4895]: I1206 07:32:18.083735 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6fc6ccb-af32-472e-b0f5-11cb224b4885" path="/var/lib/kubelet/pods/b6fc6ccb-af32-472e-b0f5-11cb224b4885/volumes" Dec 06 07:32:18 crc kubenswrapper[4895]: I1206 07:32:18.431912 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell01ea0-account-delete-2sg72" Dec 06 07:32:18 crc kubenswrapper[4895]: I1206 07:32:18.566450 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585dce37-3558-4a0c-8dfb-108c94c1047c-operator-scripts\") pod \"585dce37-3558-4a0c-8dfb-108c94c1047c\" (UID: \"585dce37-3558-4a0c-8dfb-108c94c1047c\") " Dec 06 07:32:18 crc kubenswrapper[4895]: I1206 07:32:18.566594 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6hbt\" (UniqueName: \"kubernetes.io/projected/585dce37-3558-4a0c-8dfb-108c94c1047c-kube-api-access-l6hbt\") pod \"585dce37-3558-4a0c-8dfb-108c94c1047c\" (UID: \"585dce37-3558-4a0c-8dfb-108c94c1047c\") " Dec 06 07:32:18 crc kubenswrapper[4895]: I1206 07:32:18.567542 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/585dce37-3558-4a0c-8dfb-108c94c1047c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "585dce37-3558-4a0c-8dfb-108c94c1047c" (UID: "585dce37-3558-4a0c-8dfb-108c94c1047c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:32:18 crc kubenswrapper[4895]: I1206 07:32:18.568400 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585dce37-3558-4a0c-8dfb-108c94c1047c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:18 crc kubenswrapper[4895]: I1206 07:32:18.573703 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/585dce37-3558-4a0c-8dfb-108c94c1047c-kube-api-access-l6hbt" (OuterVolumeSpecName: "kube-api-access-l6hbt") pod "585dce37-3558-4a0c-8dfb-108c94c1047c" (UID: "585dce37-3558-4a0c-8dfb-108c94c1047c"). InnerVolumeSpecName "kube-api-access-l6hbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:32:18 crc kubenswrapper[4895]: I1206 07:32:18.669140 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6hbt\" (UniqueName: \"kubernetes.io/projected/585dce37-3558-4a0c-8dfb-108c94c1047c-kube-api-access-l6hbt\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:19 crc kubenswrapper[4895]: I1206 07:32:19.131079 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell01ea0-account-delete-2sg72" event={"ID":"585dce37-3558-4a0c-8dfb-108c94c1047c","Type":"ContainerDied","Data":"ea0d3ff0841c376836de078a59dba48e7c0869c878e3f280f81880f03e35c791"} Dec 06 07:32:19 crc kubenswrapper[4895]: I1206 07:32:19.131140 4895 scope.go:117] "RemoveContainer" containerID="10df986128b44b72d83b806681282da55e82a82f513122e8ea557b6136a9bdb2" Dec 06 07:32:19 crc kubenswrapper[4895]: I1206 07:32:19.131238 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell01ea0-account-delete-2sg72" Dec 06 07:32:19 crc kubenswrapper[4895]: I1206 07:32:19.166706 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell01ea0-account-delete-2sg72"] Dec 06 07:32:19 crc kubenswrapper[4895]: I1206 07:32:19.173386 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell01ea0-account-delete-2sg72"] Dec 06 07:32:20 crc kubenswrapper[4895]: I1206 07:32:20.060951 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="585dce37-3558-4a0c-8dfb-108c94c1047c" path="/var/lib/kubelet/pods/585dce37-3558-4a0c-8dfb-108c94c1047c/volumes" Dec 06 07:32:36 crc kubenswrapper[4895]: I1206 07:32:36.298817 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q82ps" event={"ID":"a4b2fb89-5631-493f-9afe-51e41f81bdd2","Type":"ContainerStarted","Data":"b1745ff1f82f29068c209ea221411ddf109ae51948814297e1ee7b7297fa4bc2"} Dec 06 07:32:37 crc kubenswrapper[4895]: I1206 07:32:37.334032 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q82ps" podStartSLOduration=7.065107946 podStartE2EDuration="1m2.33401355s" podCreationTimestamp="2025-12-06 07:31:35 +0000 UTC" firstStartedPulling="2025-12-06 07:31:39.998917217 +0000 UTC m=+2062.400306087" lastFinishedPulling="2025-12-06 07:32:35.267822831 +0000 UTC m=+2117.669211691" observedRunningTime="2025-12-06 07:32:37.330800477 +0000 UTC m=+2119.732189357" watchObservedRunningTime="2025-12-06 07:32:37.33401355 +0000 UTC m=+2119.735402420" Dec 06 07:32:45 crc kubenswrapper[4895]: I1206 07:32:45.637511 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q82ps" Dec 06 07:32:45 crc kubenswrapper[4895]: I1206 07:32:45.638362 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q82ps" Dec 06 07:32:45 crc kubenswrapper[4895]: I1206 07:32:45.718862 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q82ps" Dec 06 07:32:46 crc kubenswrapper[4895]: I1206 07:32:46.419787 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q82ps" Dec 06 07:32:46 crc kubenswrapper[4895]: I1206 07:32:46.466314 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q82ps"] Dec 06 07:32:48 crc kubenswrapper[4895]: I1206 07:32:48.400730 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q82ps" podUID="a4b2fb89-5631-493f-9afe-51e41f81bdd2" containerName="registry-server" containerID="cri-o://b1745ff1f82f29068c209ea221411ddf109ae51948814297e1ee7b7297fa4bc2" gracePeriod=2 Dec 06 07:32:50 crc kubenswrapper[4895]: I1206 07:32:50.420608 4895 generic.go:334] "Generic (PLEG): container finished" podID="a4b2fb89-5631-493f-9afe-51e41f81bdd2" containerID="b1745ff1f82f29068c209ea221411ddf109ae51948814297e1ee7b7297fa4bc2" exitCode=0 Dec 06 07:32:50 crc kubenswrapper[4895]: I1206 07:32:50.420630 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q82ps" event={"ID":"a4b2fb89-5631-493f-9afe-51e41f81bdd2","Type":"ContainerDied","Data":"b1745ff1f82f29068c209ea221411ddf109ae51948814297e1ee7b7297fa4bc2"} Dec 06 07:32:51 crc kubenswrapper[4895]: I1206 
07:32:51.180089 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q82ps" Dec 06 07:32:51 crc kubenswrapper[4895]: I1206 07:32:51.272738 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4b2fb89-5631-493f-9afe-51e41f81bdd2-catalog-content\") pod \"a4b2fb89-5631-493f-9afe-51e41f81bdd2\" (UID: \"a4b2fb89-5631-493f-9afe-51e41f81bdd2\") " Dec 06 07:32:51 crc kubenswrapper[4895]: I1206 07:32:51.273290 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4b2fb89-5631-493f-9afe-51e41f81bdd2-utilities\") pod \"a4b2fb89-5631-493f-9afe-51e41f81bdd2\" (UID: \"a4b2fb89-5631-493f-9afe-51e41f81bdd2\") " Dec 06 07:32:51 crc kubenswrapper[4895]: I1206 07:32:51.273359 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t45cj\" (UniqueName: \"kubernetes.io/projected/a4b2fb89-5631-493f-9afe-51e41f81bdd2-kube-api-access-t45cj\") pod \"a4b2fb89-5631-493f-9afe-51e41f81bdd2\" (UID: \"a4b2fb89-5631-493f-9afe-51e41f81bdd2\") " Dec 06 07:32:51 crc kubenswrapper[4895]: I1206 07:32:51.274301 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4b2fb89-5631-493f-9afe-51e41f81bdd2-utilities" (OuterVolumeSpecName: "utilities") pod "a4b2fb89-5631-493f-9afe-51e41f81bdd2" (UID: "a4b2fb89-5631-493f-9afe-51e41f81bdd2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:32:51 crc kubenswrapper[4895]: I1206 07:32:51.281280 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4b2fb89-5631-493f-9afe-51e41f81bdd2-kube-api-access-t45cj" (OuterVolumeSpecName: "kube-api-access-t45cj") pod "a4b2fb89-5631-493f-9afe-51e41f81bdd2" (UID: "a4b2fb89-5631-493f-9afe-51e41f81bdd2"). InnerVolumeSpecName "kube-api-access-t45cj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:32:51 crc kubenswrapper[4895]: I1206 07:32:51.294460 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4b2fb89-5631-493f-9afe-51e41f81bdd2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4b2fb89-5631-493f-9afe-51e41f81bdd2" (UID: "a4b2fb89-5631-493f-9afe-51e41f81bdd2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:32:51 crc kubenswrapper[4895]: I1206 07:32:51.375160 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4b2fb89-5631-493f-9afe-51e41f81bdd2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:51 crc kubenswrapper[4895]: I1206 07:32:51.375194 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4b2fb89-5631-493f-9afe-51e41f81bdd2-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:51 crc kubenswrapper[4895]: I1206 07:32:51.375203 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t45cj\" (UniqueName: \"kubernetes.io/projected/a4b2fb89-5631-493f-9afe-51e41f81bdd2-kube-api-access-t45cj\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:51 crc kubenswrapper[4895]: I1206 07:32:51.434148 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q82ps" event={"ID":"a4b2fb89-5631-493f-9afe-51e41f81bdd2","Type":"ContainerDied","Data":"0ce4cddddf67740dae23606c52b92ef8c772ca700defed855eab9bafe621fbf9"} Dec 06 07:32:51 crc kubenswrapper[4895]: I1206 07:32:51.434204 4895 scope.go:117] "RemoveContainer" containerID="b1745ff1f82f29068c209ea221411ddf109ae51948814297e1ee7b7297fa4bc2" Dec 06 07:32:51 crc kubenswrapper[4895]: I1206 07:32:51.434219 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q82ps" Dec 06 07:32:51 crc kubenswrapper[4895]: I1206 07:32:51.461660 4895 scope.go:117] "RemoveContainer" containerID="f483625c638253fff9c0e5662e64ae60aea8726178901a1e0ab6019f217e234f" Dec 06 07:32:51 crc kubenswrapper[4895]: I1206 07:32:51.477331 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q82ps"] Dec 06 07:32:51 crc kubenswrapper[4895]: I1206 07:32:51.486634 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q82ps"] Dec 06 07:32:51 crc kubenswrapper[4895]: I1206 07:32:51.497668 4895 scope.go:117] "RemoveContainer" containerID="d05aa72fcf022c23779d886143de215b691ee850f2b0cbebda35bb5a7d8b8e59" Dec 06 07:32:52 crc kubenswrapper[4895]: I1206 07:32:52.060717 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4b2fb89-5631-493f-9afe-51e41f81bdd2" path="/var/lib/kubelet/pods/a4b2fb89-5631-493f-9afe-51e41f81bdd2/volumes" Dec 06 07:33:11 crc kubenswrapper[4895]: I1206 07:33:11.005200 4895 scope.go:117] "RemoveContainer" containerID="b7cee3836c8818bab967eb0ec3d34659eb88c58ccc8fe7684f9b435c78fc9799" Dec 06 07:33:11 crc kubenswrapper[4895]: I1206 07:33:11.038990 4895 scope.go:117] "RemoveContainer" containerID="397c7d40c84d73a555670d7ed3f53e1a02ac92739a5de17f75e1fae47255a519" Dec 06 07:33:11 crc kubenswrapper[4895]: I1206 07:33:11.066035 4895 scope.go:117] "RemoveContainer" containerID="3ac20753e54c465eb6b6f8c1c10daf7d4a84a3bfb7673fb4687c12d0f81745ae" Dec 06 07:33:11 crc kubenswrapper[4895]: I1206 07:33:11.096323 4895 scope.go:117] "RemoveContainer" containerID="24e68144ea0002544c8fad2dbb864f3c9770ec21bd3a4a84fa07c7a03cadecb8" Dec 06 07:33:11 crc kubenswrapper[4895]: I1206 07:33:11.113051 4895 scope.go:117] "RemoveContainer" containerID="12ba578dc91e0be4f30c486027c7b59eb3488b7699264c9ec79d472e1fe47671" Dec 06 07:33:11 crc kubenswrapper[4895]: I1206 07:33:11.139058 4895 scope.go:117] "RemoveContainer" 
containerID="c91ea1e35e44cdff3019fc5353d6467a90b4b06b9fa2db2fe0b8a87c56043ab5" Dec 06 07:33:11 crc kubenswrapper[4895]: I1206 07:33:11.161823 4895 scope.go:117] "RemoveContainer" containerID="bf1680a564e39a2f8114136574b32fb4a7481bf890a4653e7cd26f7fcd065e6b" Dec 06 07:33:11 crc kubenswrapper[4895]: I1206 07:33:11.184295 4895 scope.go:117] "RemoveContainer" containerID="f570d031351040cb2fe03dc3851e1c34de085a50efeef7dec9fb4b7808929814" Dec 06 07:33:11 crc kubenswrapper[4895]: I1206 07:33:11.253906 4895 scope.go:117] "RemoveContainer" containerID="a217507f1b7892189cb3a36cc06623b313c5a0733e526b91471aa615aa818384" Dec 06 07:33:11 crc kubenswrapper[4895]: I1206 07:33:11.278760 4895 scope.go:117] "RemoveContainer" containerID="0c6d6ffb9d69585f1ab2801d397401bee11daad79a6a1f0f4af67e18c69a321f" Dec 06 07:33:11 crc kubenswrapper[4895]: I1206 07:33:11.300979 4895 scope.go:117] "RemoveContainer" containerID="206ce64428b3322b057949fbf79e35f8ecf1a3997fd19513309ae7d4151a96b1" Dec 06 07:33:11 crc kubenswrapper[4895]: I1206 07:33:11.322220 4895 scope.go:117] "RemoveContainer" containerID="292daefa98d3df02d99150321d8e332021999a759f4f93641a98fd9843975bb0" Dec 06 07:33:11 crc kubenswrapper[4895]: I1206 07:33:11.340892 4895 scope.go:117] "RemoveContainer" containerID="67972da996c538e53cbe0e9fcfd03a8b37dc808fd647f57c5aad61c4e1cd181a" Dec 06 07:34:29 crc kubenswrapper[4895]: I1206 07:34:29.696266 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:34:29 crc kubenswrapper[4895]: I1206 07:34:29.696945 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:34:59 crc kubenswrapper[4895]: I1206 07:34:59.696000 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:34:59 crc kubenswrapper[4895]: I1206 07:34:59.696711 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:35:29 crc kubenswrapper[4895]: I1206 07:35:29.696405 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:35:29 crc kubenswrapper[4895]: I1206 07:35:29.697017 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Dec 06 07:35:29 crc kubenswrapper[4895]: I1206 07:35:29.697068 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 07:35:29 crc kubenswrapper[4895]: I1206 07:35:29.697791 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:35:29 crc kubenswrapper[4895]: I1206 07:35:29.697865 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" gracePeriod=600 Dec 06 07:35:31 crc kubenswrapper[4895]: E1206 07:35:31.078303 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:35:31 crc kubenswrapper[4895]: I1206 07:35:31.293376 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" exitCode=0 Dec 06 07:35:31 crc kubenswrapper[4895]: I1206 07:35:31.293419 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02"} Dec 06 07:35:31 crc kubenswrapper[4895]: I1206 07:35:31.293452 4895 scope.go:117] "RemoveContainer" containerID="1afcdb14b177bc99b4d67f898f37ab6806e81e208403fb192e71d61458db3cfa" Dec 06 07:35:31 crc kubenswrapper[4895]: I1206 07:35:31.294109 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:35:31 crc kubenswrapper[4895]: E1206 07:35:31.294412 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:35:45 crc kubenswrapper[4895]: I1206 07:35:45.051371 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:35:45 crc kubenswrapper[4895]: E1206 07:35:45.056191 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:35:56 crc kubenswrapper[4895]: I1206 07:35:56.053334 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:35:56 crc kubenswrapper[4895]: E1206 07:35:56.055300 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:36:08 crc kubenswrapper[4895]: I1206 07:36:08.061792 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:36:08 crc kubenswrapper[4895]: E1206 07:36:08.062997 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:36:11 crc kubenswrapper[4895]: I1206 07:36:11.624658 4895 scope.go:117] "RemoveContainer" containerID="140b2e29bd8d69af6c9e4cdff4cf64a89681dd2e9b425d6a583968de0bca4fd3" Dec 06 07:36:11 crc kubenswrapper[4895]: I1206 07:36:11.687000 4895 scope.go:117] "RemoveContainer" containerID="6d8f3f62634430e83f765a55c482bfd6b56e3df61d8617990a8f46860d0e2b70" Dec 06 07:36:11 crc kubenswrapper[4895]: I1206 07:36:11.752367 4895 scope.go:117] "RemoveContainer" containerID="5eadfe2764d050fe0186bb34d4cc39ceb232a5ce908d1aa67d320f256d70a84d" Dec 06 07:36:11 crc kubenswrapper[4895]: I1206 07:36:11.775539 4895 scope.go:117] "RemoveContainer" containerID="4f541993755a72606ca0f59b0dea7ba63171b12d8882d47758aa0e8359d77d56" Dec 06 07:36:11 crc kubenswrapper[4895]: I1206 07:36:11.797226 4895 scope.go:117] "RemoveContainer" containerID="42ebd0184d953a80833cf3b4844a18b03e9c8b9035f897a4855f8171e3468533" Dec 06 07:36:11 crc kubenswrapper[4895]: I1206 07:36:11.816992 4895 scope.go:117] "RemoveContainer" containerID="613a73c4e6e23364cc9af04ec4bd246e0f2658fbe2ac03e2783159ec09c7e85e" Dec 06 07:36:11 crc kubenswrapper[4895]: I1206 07:36:11.844957 4895 scope.go:117] "RemoveContainer" containerID="9db93c1e996e3ccb6b5059dbab5a7012a493a6270eba5dec25039d70251334b2" Dec 06 07:36:23 crc kubenswrapper[4895]: I1206 07:36:23.051090 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:36:23 crc kubenswrapper[4895]: E1206 07:36:23.053389 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.679273 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jrgkd"] Dec 
06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680210 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a103ad6f-b726-4ad6-9aec-a689a74a4304" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680229 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a103ad6f-b726-4ad6-9aec-a689a74a4304" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680274 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b2fb89-5631-493f-9afe-51e41f81bdd2" containerName="extract-content" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680284 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b2fb89-5631-493f-9afe-51e41f81bdd2" containerName="extract-content" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680295 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cccbd9be-50fa-413b-bb47-1af68ecdda2d" containerName="openstack-network-exporter" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680303 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="cccbd9be-50fa-413b-bb47-1af68ecdda2d" containerName="openstack-network-exporter" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680316 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d7b5bf-eae3-4832-b13b-be5f0734e4bb" containerName="cinder-api" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680324 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d7b5bf-eae3-4832-b13b-be5f0734e4bb" containerName="cinder-api" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680336 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53540f68-ae58-4d76-870b-3cc4b77eb1e3" containerName="nova-scheduler-scheduler" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680343 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="53540f68-ae58-4d76-870b-3cc4b77eb1e3" containerName="nova-scheduler-scheduler" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680360 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba8bc40-d348-4f8f-aeb6-aa2e46d908d6" containerName="memcached" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680368 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba8bc40-d348-4f8f-aeb6-aa2e46d908d6" containerName="memcached" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680377 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46664967-bc44-4dd5-8fa7-419d1f7741fd" containerName="barbican-worker-log" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680384 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="46664967-bc44-4dd5-8fa7-419d1f7741fd" containerName="barbican-worker-log" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680392 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124" containerName="nova-api-api" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680402 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124" containerName="nova-api-api" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680421 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5401d7f-627c-410f-ae61-d7653749a7d3" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680431 4895 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d5401d7f-627c-410f-ae61-d7653749a7d3" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680440 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4abb614a-de81-4c59-8c5b-27e6761f93c9" containerName="mysql-bootstrap" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680450 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4abb614a-de81-4c59-8c5b-27e6761f93c9" containerName="mysql-bootstrap" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680459 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b854481-cd2a-4938-8b82-3288191b5bbe" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680467 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b854481-cd2a-4938-8b82-3288191b5bbe" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680503 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="rsync" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680529 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="rsync" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680543 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e963d73b-d3f2-4c70-8dbd-687b3fc1962d" containerName="setup-container" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680551 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e963d73b-d3f2-4c70-8dbd-687b3fc1962d" containerName="setup-container" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680565 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46664967-bc44-4dd5-8fa7-419d1f7741fd" containerName="barbican-worker" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680572 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="46664967-bc44-4dd5-8fa7-419d1f7741fd" containerName="barbican-worker" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680588 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="549e9969-79a0-45d9-a093-0b58ad1bc359" containerName="nova-cell1-novncproxy-novncproxy" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680596 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="549e9969-79a0-45d9-a093-0b58ad1bc359" containerName="nova-cell1-novncproxy-novncproxy" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680610 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovsdb-server" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680618 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovsdb-server" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680629 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa39160-bfb2-49ae-b2ca-12c0e5788996" containerName="rabbitmq" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680637 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa39160-bfb2-49ae-b2ca-12c0e5788996" containerName="rabbitmq" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680652 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0921ccd3-f346-46b9-88af-e165de8ff32b" containerName="barbican-keystone-listener" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680661 4895 
state_mem.go:107] "Deleted CPUSet assignment" podUID="0921ccd3-f346-46b9-88af-e165de8ff32b" containerName="barbican-keystone-listener" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680679 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" containerName="ceilometer-central-agent" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680688 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" containerName="ceilometer-central-agent" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680697 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e55858-e444-489b-b573-aae00aa71f9b" containerName="placement-api" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680705 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e55858-e444-489b-b573-aae00aa71f9b" containerName="placement-api" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680716 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" containerName="galera" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680723 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" containerName="galera" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680735 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b21f280-7879-43c2-b1b0-92906707b4cd" containerName="kube-state-metrics" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680745 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b21f280-7879-43c2-b1b0-92906707b4cd" containerName="kube-state-metrics" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680753 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" containerName="sg-core" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680760 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" containerName="sg-core" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680773 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa5d7561-2042-4dcc-8ddc-336475230720" containerName="glance-log" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680781 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa5d7561-2042-4dcc-8ddc-336475230720" containerName="glance-log" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680791 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275e5518-922b-455d-a5d5-7b072a12ab07" containerName="neutron-httpd" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680799 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="275e5518-922b-455d-a5d5-7b072a12ab07" containerName="neutron-httpd" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680807 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124" containerName="nova-api-log" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680815 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124" containerName="nova-api-log" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680824 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="container-auditor" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680835 4895 
state_mem.go:107] "Deleted CPUSet assignment" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="container-auditor" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.680851 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b2fb89-5631-493f-9afe-51e41f81bdd2" containerName="registry-server" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.680859 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b2fb89-5631-493f-9afe-51e41f81bdd2" containerName="registry-server" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681372 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d199b21-7519-4bbb-adac-07ad0b1e21d9" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681388 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d199b21-7519-4bbb-adac-07ad0b1e21d9" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681404 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b2fb89-5631-493f-9afe-51e41f81bdd2" containerName="extract-utilities" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681412 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b2fb89-5631-493f-9afe-51e41f81bdd2" containerName="extract-utilities" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681425 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cccbd9be-50fa-413b-bb47-1af68ecdda2d" containerName="ovn-northd" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681432 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="cccbd9be-50fa-413b-bb47-1af68ecdda2d" containerName="ovn-northd" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681443 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6fc6ccb-af32-472e-b0f5-11cb224b4885" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681452 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6fc6ccb-af32-472e-b0f5-11cb224b4885" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681466 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="account-reaper" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681497 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="account-reaper" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681507 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f761e3-7f6a-4c1b-b41d-32a14558a756" containerName="nova-cell1-conductor-conductor" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681516 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f761e3-7f6a-4c1b-b41d-32a14558a756" containerName="nova-cell1-conductor-conductor" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681525 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4abb614a-de81-4c59-8c5b-27e6761f93c9" containerName="galera" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681534 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4abb614a-de81-4c59-8c5b-27e6761f93c9" containerName="galera" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681545 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="container-replicator" Dec 06 
07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681554 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="container-replicator" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681570 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ac7118f-27ae-4b40-bf45-56fb3f3b60e5" containerName="keystone-api" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681578 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ac7118f-27ae-4b40-bf45-56fb3f3b60e5" containerName="keystone-api" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681589 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="object-replicator" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681599 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="object-replicator" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681610 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585dce37-3558-4a0c-8dfb-108c94c1047c" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681618 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="585dce37-3558-4a0c-8dfb-108c94c1047c" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681632 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="account-replicator" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681638 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="account-replicator" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681648 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04293385-9ad8-4686-a3d3-e39d586a7e6f" containerName="nova-metadata-log" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681656 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="04293385-9ad8-4686-a3d3-e39d586a7e6f" containerName="nova-metadata-log" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681665 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9a60a7-bd12-495d-b0c3-feebe0f65bf8" containerName="nova-cell0-conductor-conductor" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681673 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9a60a7-bd12-495d-b0c3-feebe0f65bf8" containerName="nova-cell0-conductor-conductor" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681704 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="account-auditor" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681712 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="account-auditor" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681725 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" containerName="barbican-api-log" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681732 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" containerName="barbican-api-log" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681743 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="04293385-9ad8-4686-a3d3-e39d586a7e6f" containerName="nova-metadata-metadata" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681751 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="04293385-9ad8-4686-a3d3-e39d586a7e6f" containerName="nova-metadata-metadata" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681762 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09e95af5-a2ad-42ee-83a9-25cef915d0dc" containerName="proxy-httpd" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681770 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="09e95af5-a2ad-42ee-83a9-25cef915d0dc" containerName="proxy-httpd" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681783 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" containerName="barbican-api" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681791 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" containerName="barbican-api" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681804 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e963d73b-d3f2-4c70-8dbd-687b3fc1962d" containerName="rabbitmq" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681812 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e963d73b-d3f2-4c70-8dbd-687b3fc1962d" containerName="rabbitmq" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681825 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81e2c836-79af-46e7-8be8-a9b0ffdab060" containerName="ovn-controller" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681833 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="81e2c836-79af-46e7-8be8-a9b0ffdab060" containerName="ovn-controller" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681847 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec28d57e-8ecf-4415-b18f-69bfa0514187" containerName="openstack-network-exporter" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681856 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec28d57e-8ecf-4415-b18f-69bfa0514187" containerName="openstack-network-exporter" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681867 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09e95af5-a2ad-42ee-83a9-25cef915d0dc" containerName="proxy-server" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681876 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="09e95af5-a2ad-42ee-83a9-25cef915d0dc" containerName="proxy-server" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681884 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="object-server" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681891 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="object-server" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681904 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="container-server" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681912 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="container-server" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681926 4895 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" containerName="mysql-bootstrap" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681935 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" containerName="mysql-bootstrap" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681945 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa5d7561-2042-4dcc-8ddc-336475230720" containerName="glance-httpd" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681954 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa5d7561-2042-4dcc-8ddc-336475230720" containerName="glance-httpd" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681966 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" containerName="ceilometer-notification-agent" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681975 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" containerName="ceilometer-notification-agent" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.681985 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf28c62-87dc-461a-bf5a-4ae13d62e489" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.681992 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf28c62-87dc-461a-bf5a-4ae13d62e489" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.682006 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf28c62-87dc-461a-bf5a-4ae13d62e489" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682015 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf28c62-87dc-461a-bf5a-4ae13d62e489" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.682029 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="object-updater" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682036 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="object-updater" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.682049 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0921ccd3-f346-46b9-88af-e165de8ff32b" containerName="barbican-keystone-listener-log" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682057 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0921ccd3-f346-46b9-88af-e165de8ff32b" containerName="barbican-keystone-listener-log" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.682070 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa39160-bfb2-49ae-b2ca-12c0e5788996" containerName="setup-container" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682077 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa39160-bfb2-49ae-b2ca-12c0e5788996" containerName="setup-container" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.682087 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5739e86-0fb8-4368-91ae-f2a09bb9848c" containerName="glance-httpd" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682095 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5739e86-0fb8-4368-91ae-f2a09bb9848c" containerName="glance-httpd" Dec 06 07:36:26 
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682113 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovsdb-server-init"
Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.682124 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec28d57e-8ecf-4415-b18f-69bfa0514187" containerName="ovsdbserver-nb"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682131 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec28d57e-8ecf-4415-b18f-69bfa0514187" containerName="ovsdbserver-nb"
Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.682146 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovs-vswitchd"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682153 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovs-vswitchd"
Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.682167 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" containerName="proxy-httpd"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682174 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" containerName="proxy-httpd"
Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.682185 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8969e2c-9cc0-40a6-8fee-65d93a9856b0" containerName="probe"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682193 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8969e2c-9cc0-40a6-8fee-65d93a9856b0" containerName="probe"
Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.682204 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5739e86-0fb8-4368-91ae-f2a09bb9848c" containerName="glance-log"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682212 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5739e86-0fb8-4368-91ae-f2a09bb9848c" containerName="glance-log"
Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.682225 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275e5518-922b-455d-a5d5-7b072a12ab07" containerName="neutron-api"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682233 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="275e5518-922b-455d-a5d5-7b072a12ab07" containerName="neutron-api"
Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.682248 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="container-updater"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682255 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="container-updater"
Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.682284 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="account-server"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682292 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="account-server"
Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.682304 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="swift-recon-cron"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682312 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="swift-recon-cron"
Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.682325 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8969e2c-9cc0-40a6-8fee-65d93a9856b0" containerName="cinder-scheduler"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682332 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8969e2c-9cc0-40a6-8fee-65d93a9856b0" containerName="cinder-scheduler"
Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.682345 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d7b5bf-eae3-4832-b13b-be5f0734e4bb" containerName="cinder-api-log"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682352 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d7b5bf-eae3-4832-b13b-be5f0734e4bb" containerName="cinder-api-log"
Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.682361 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e55858-e444-489b-b573-aae00aa71f9b" containerName="placement-log"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682369 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e55858-e444-489b-b573-aae00aa71f9b" containerName="placement-log"
Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.682380 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="object-auditor"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682388 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="object-auditor"
Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.682396 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="object-expirer"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682404 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="object-expirer"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682685 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="33f761e3-7f6a-4c1b-b41d-32a14558a756" containerName="nova-cell1-conductor-conductor"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682700 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="object-expirer"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682711 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="09e95af5-a2ad-42ee-83a9-25cef915d0dc" containerName="proxy-httpd"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682721 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5401d7f-627c-410f-ae61-d7653749a7d3" containerName="mariadb-account-delete"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682736 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="04293385-9ad8-4686-a3d3-e39d586a7e6f" containerName="nova-metadata-log"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682748 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="container-replicator"
containerName="container-replicator" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682762 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="object-auditor" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682775 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8969e2c-9cc0-40a6-8fee-65d93a9856b0" containerName="cinder-scheduler" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682788 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" containerName="barbican-api" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682795 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5401d7f-627c-410f-ae61-d7653749a7d3" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682807 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa5d7561-2042-4dcc-8ddc-336475230720" containerName="glance-log" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682816 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="0921ccd3-f346-46b9-88af-e165de8ff32b" containerName="barbican-keystone-listener-log" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682826 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="account-server" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682842 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6fc6ccb-af32-472e-b0f5-11cb224b4885" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682850 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="24216683-0ca8-44dd-8bfa-a7d0a84cf3cc" containerName="barbican-api-log" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682859 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="object-replicator" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682870 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8969e2c-9cc0-40a6-8fee-65d93a9856b0" containerName="probe" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682879 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" containerName="ceilometer-central-agent" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682889 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="31a6771e-d46b-42cc-bbca-9d2ddbf24bb5" containerName="galera" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682901 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="46664967-bc44-4dd5-8fa7-419d1f7741fd" containerName="barbican-worker-log" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682913 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5739e86-0fb8-4368-91ae-f2a09bb9848c" containerName="glance-httpd" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682924 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124" containerName="nova-api-api" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682934 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb9a60a7-bd12-495d-b0c3-feebe0f65bf8" containerName="nova-cell0-conductor-conductor" Dec 06 07:36:26 crc kubenswrapper[4895]: 
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682951 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="549e9969-79a0-45d9-a093-0b58ad1bc359" containerName="nova-cell1-novncproxy-novncproxy"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682963 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf28c62-87dc-461a-bf5a-4ae13d62e489" containerName="mariadb-account-delete"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682974 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="cccbd9be-50fa-413b-bb47-1af68ecdda2d" containerName="ovn-northd"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682983 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="object-server"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.682993 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d199b21-7519-4bbb-adac-07ad0b1e21d9" containerName="mariadb-account-delete"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683008 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="account-reaper"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683020 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2e55858-e444-489b-b573-aae00aa71f9b" containerName="placement-api"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683028 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="rsync"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683038 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2e55858-e444-489b-b573-aae00aa71f9b" containerName="placement-log"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683051 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4b2fb89-5631-493f-9afe-51e41f81bdd2" containerName="registry-server"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683061 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec28d57e-8ecf-4415-b18f-69bfa0514187" containerName="openstack-network-exporter"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683070 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" containerName="ceilometer-notification-agent"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683083 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="04293385-9ad8-4686-a3d3-e39d586a7e6f" containerName="nova-metadata-metadata"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683094 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef82fc5d-2e7c-4fa5-a7f8-b8cf6311d124" containerName="nova-api-log"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683107 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b854481-cd2a-4938-8b82-3288191b5bbe" containerName="mariadb-account-delete"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683116 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ac7118f-27ae-4b40-bf45-56fb3f3b60e5" containerName="keystone-api"
Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683130 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="09e95af5-a2ad-42ee-83a9-25cef915d0dc" containerName="proxy-server"
"RemoveStaleState removing state" podUID="09e95af5-a2ad-42ee-83a9-25cef915d0dc" containerName="proxy-server" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683144 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovsdb-server" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683153 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="swift-recon-cron" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683162 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="account-auditor" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683171 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="0921ccd3-f346-46b9-88af-e165de8ff32b" containerName="barbican-keystone-listener" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683180 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="46664967-bc44-4dd5-8fa7-419d1f7741fd" containerName="barbican-worker" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683190 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a103ad6f-b726-4ad6-9aec-a689a74a4304" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683202 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="81e2c836-79af-46e7-8be8-a9b0ffdab060" containerName="ovn-controller" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683216 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fa39160-bfb2-49ae-b2ca-12c0e5788996" containerName="rabbitmq" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683230 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="container-server" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683239 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" containerName="sg-core" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683248 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="585dce37-3558-4a0c-8dfb-108c94c1047c" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683257 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="53540f68-ae58-4d76-870b-3cc4b77eb1e3" containerName="nova-scheduler-scheduler" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683268 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="object-updater" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683280 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b21f280-7879-43c2-b1b0-92906707b4cd" containerName="kube-state-metrics" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683292 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4abb614a-de81-4c59-8c5b-27e6761f93c9" containerName="galera" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683303 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="container-updater" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683311 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa5d7561-2042-4dcc-8ddc-336475230720" 
containerName="glance-httpd" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683319 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="588e7e7b-f1fb-4e68-846a-04c6a23bec39" containerName="ovs-vswitchd" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683332 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e963d73b-d3f2-4c70-8dbd-687b3fc1962d" containerName="rabbitmq" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683342 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="cccbd9be-50fa-413b-bb47-1af68ecdda2d" containerName="openstack-network-exporter" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683363 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba8bc40-d348-4f8f-aeb6-aa2e46d908d6" containerName="memcached" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683376 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec28d57e-8ecf-4415-b18f-69bfa0514187" containerName="ovsdbserver-nb" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683383 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7e0fcb8-7c6d-423f-b90e-a0a184fe1970" containerName="proxy-httpd" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683393 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="container-auditor" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683405 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5739e86-0fb8-4368-91ae-f2a09bb9848c" containerName="glance-log" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683418 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a2bfd7-f0c6-4b55-b629-2e11d6b45a42" containerName="account-replicator" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683429 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="275e5518-922b-455d-a5d5-7b072a12ab07" containerName="neutron-api" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683438 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d7b5bf-eae3-4832-b13b-be5f0734e4bb" containerName="cinder-api-log" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683451 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d7b5bf-eae3-4832-b13b-be5f0734e4bb" containerName="cinder-api" Dec 06 07:36:26 crc kubenswrapper[4895]: E1206 07:36:26.683698 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5401d7f-627c-410f-ae61-d7653749a7d3" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683712 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5401d7f-627c-410f-ae61-d7653749a7d3" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.683916 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf28c62-87dc-461a-bf5a-4ae13d62e489" containerName="mariadb-account-delete" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.684929 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jrgkd" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.694187 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jrgkd"] Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.716955 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3807546a-8e12-47a9-a86f-9ec2a78a959d-catalog-content\") pod \"redhat-operators-jrgkd\" (UID: \"3807546a-8e12-47a9-a86f-9ec2a78a959d\") " pod="openshift-marketplace/redhat-operators-jrgkd" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.717098 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3807546a-8e12-47a9-a86f-9ec2a78a959d-utilities\") pod \"redhat-operators-jrgkd\" (UID: \"3807546a-8e12-47a9-a86f-9ec2a78a959d\") " pod="openshift-marketplace/redhat-operators-jrgkd" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.717178 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj8ch\" (UniqueName: \"kubernetes.io/projected/3807546a-8e12-47a9-a86f-9ec2a78a959d-kube-api-access-pj8ch\") pod \"redhat-operators-jrgkd\" (UID: \"3807546a-8e12-47a9-a86f-9ec2a78a959d\") " pod="openshift-marketplace/redhat-operators-jrgkd" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.817807 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3807546a-8e12-47a9-a86f-9ec2a78a959d-catalog-content\") pod \"redhat-operators-jrgkd\" (UID: \"3807546a-8e12-47a9-a86f-9ec2a78a959d\") " pod="openshift-marketplace/redhat-operators-jrgkd" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.817920 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3807546a-8e12-47a9-a86f-9ec2a78a959d-utilities\") pod \"redhat-operators-jrgkd\" (UID: \"3807546a-8e12-47a9-a86f-9ec2a78a959d\") " pod="openshift-marketplace/redhat-operators-jrgkd" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.817983 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj8ch\" (UniqueName: \"kubernetes.io/projected/3807546a-8e12-47a9-a86f-9ec2a78a959d-kube-api-access-pj8ch\") pod \"redhat-operators-jrgkd\" (UID: \"3807546a-8e12-47a9-a86f-9ec2a78a959d\") " pod="openshift-marketplace/redhat-operators-jrgkd" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.818916 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3807546a-8e12-47a9-a86f-9ec2a78a959d-catalog-content\") pod \"redhat-operators-jrgkd\" (UID: \"3807546a-8e12-47a9-a86f-9ec2a78a959d\") " pod="openshift-marketplace/redhat-operators-jrgkd" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.819021 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3807546a-8e12-47a9-a86f-9ec2a78a959d-utilities\") pod \"redhat-operators-jrgkd\" (UID: \"3807546a-8e12-47a9-a86f-9ec2a78a959d\") " pod="openshift-marketplace/redhat-operators-jrgkd" Dec 06 07:36:26 crc kubenswrapper[4895]: I1206 07:36:26.852685 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pj8ch\" (UniqueName: \"kubernetes.io/projected/3807546a-8e12-47a9-a86f-9ec2a78a959d-kube-api-access-pj8ch\") pod \"redhat-operators-jrgkd\" (UID: \"3807546a-8e12-47a9-a86f-9ec2a78a959d\") " pod="openshift-marketplace/redhat-operators-jrgkd" Dec 06 07:36:27 crc kubenswrapper[4895]: I1206 07:36:27.045117 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jrgkd" Dec 06 07:36:27 crc kubenswrapper[4895]: I1206 07:36:27.522144 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jrgkd"] Dec 06 07:36:27 crc kubenswrapper[4895]: I1206 07:36:27.840572 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrgkd" event={"ID":"3807546a-8e12-47a9-a86f-9ec2a78a959d","Type":"ContainerStarted","Data":"043b715b986fae8f57fa947ce74ec18c2ef85fdf76ecfe48b443481aec5594a8"} Dec 06 07:36:30 crc kubenswrapper[4895]: I1206 07:36:30.867575 4895 generic.go:334] "Generic (PLEG): container finished" podID="3807546a-8e12-47a9-a86f-9ec2a78a959d" containerID="d428d1b7a24470043ebd08aabe5468fe4148ea1850caa953c6de01ceb03efac5" exitCode=0 Dec 06 07:36:30 crc kubenswrapper[4895]: I1206 07:36:30.867831 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrgkd" event={"ID":"3807546a-8e12-47a9-a86f-9ec2a78a959d","Type":"ContainerDied","Data":"d428d1b7a24470043ebd08aabe5468fe4148ea1850caa953c6de01ceb03efac5"} Dec 06 07:36:36 crc kubenswrapper[4895]: I1206 07:36:36.918747 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrgkd" event={"ID":"3807546a-8e12-47a9-a86f-9ec2a78a959d","Type":"ContainerStarted","Data":"47fdc68bff24705ea25446a516739386d634d8d7cc27939a4f811f44a2551360"} Dec 06 07:36:37 crc kubenswrapper[4895]: I1206 07:36:37.928924 4895 generic.go:334] "Generic (PLEG): container finished" podID="3807546a-8e12-47a9-a86f-9ec2a78a959d" containerID="47fdc68bff24705ea25446a516739386d634d8d7cc27939a4f811f44a2551360" exitCode=0 Dec 06 07:36:37 crc kubenswrapper[4895]: I1206 07:36:37.928989 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrgkd" event={"ID":"3807546a-8e12-47a9-a86f-9ec2a78a959d","Type":"ContainerDied","Data":"47fdc68bff24705ea25446a516739386d634d8d7cc27939a4f811f44a2551360"} Dec 06 07:36:38 crc kubenswrapper[4895]: I1206 07:36:38.055434 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:36:38 crc kubenswrapper[4895]: E1206 07:36:38.055691 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:36:38 crc kubenswrapper[4895]: I1206 07:36:38.943547 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrgkd" event={"ID":"3807546a-8e12-47a9-a86f-9ec2a78a959d","Type":"ContainerStarted","Data":"f45e960222f0a7b577a81570b1292e671105a59c89f2e3665c41cf9ee34c9102"} Dec 06 07:36:38 crc kubenswrapper[4895]: I1206 07:36:38.976412 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-jrgkd" podStartSLOduration=6.30477353 podStartE2EDuration="12.976393877s" podCreationTimestamp="2025-12-06 07:36:26 +0000 UTC" firstStartedPulling="2025-12-06 07:36:31.87833984 +0000 UTC m=+2354.279728710" lastFinishedPulling="2025-12-06 07:36:38.549960187 +0000 UTC m=+2360.951349057" observedRunningTime="2025-12-06 07:36:38.97206686 +0000 UTC m=+2361.373455740" watchObservedRunningTime="2025-12-06 07:36:38.976393877 +0000 UTC m=+2361.377782737" Dec 06 07:36:47 crc kubenswrapper[4895]: I1206 07:36:47.046029 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jrgkd" Dec 06 07:36:47 crc kubenswrapper[4895]: I1206 07:36:47.048031 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jrgkd" Dec 06 07:36:47 crc kubenswrapper[4895]: I1206 07:36:47.093454 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jrgkd" Dec 06 07:36:48 crc kubenswrapper[4895]: I1206 07:36:48.072237 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jrgkd" Dec 06 07:36:51 crc kubenswrapper[4895]: I1206 07:36:48.119807 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jrgkd"] Dec 06 07:36:51 crc kubenswrapper[4895]: I1206 07:36:50.047974 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jrgkd" podUID="3807546a-8e12-47a9-a86f-9ec2a78a959d" containerName="registry-server" containerID="cri-o://f45e960222f0a7b577a81570b1292e671105a59c89f2e3665c41cf9ee34c9102" gracePeriod=2 Dec 06 07:36:51 crc kubenswrapper[4895]: I1206 07:36:50.050832 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:36:51 crc kubenswrapper[4895]: E1206 07:36:50.051139 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:36:52 crc kubenswrapper[4895]: I1206 07:36:52.066366 4895 generic.go:334] "Generic (PLEG): container finished" podID="3807546a-8e12-47a9-a86f-9ec2a78a959d" containerID="f45e960222f0a7b577a81570b1292e671105a59c89f2e3665c41cf9ee34c9102" exitCode=0 Dec 06 07:36:52 crc kubenswrapper[4895]: I1206 07:36:52.066539 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrgkd" event={"ID":"3807546a-8e12-47a9-a86f-9ec2a78a959d","Type":"ContainerDied","Data":"f45e960222f0a7b577a81570b1292e671105a59c89f2e3665c41cf9ee34c9102"} Dec 06 07:36:52 crc kubenswrapper[4895]: I1206 07:36:52.691366 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jrgkd" Dec 06 07:36:52 crc kubenswrapper[4895]: I1206 07:36:52.742534 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3807546a-8e12-47a9-a86f-9ec2a78a959d-utilities\") pod \"3807546a-8e12-47a9-a86f-9ec2a78a959d\" (UID: \"3807546a-8e12-47a9-a86f-9ec2a78a959d\") " Dec 06 07:36:52 crc kubenswrapper[4895]: I1206 07:36:52.742620 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3807546a-8e12-47a9-a86f-9ec2a78a959d-catalog-content\") pod \"3807546a-8e12-47a9-a86f-9ec2a78a959d\" (UID: \"3807546a-8e12-47a9-a86f-9ec2a78a959d\") " Dec 06 07:36:52 crc kubenswrapper[4895]: I1206 07:36:52.742809 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj8ch\" (UniqueName: \"kubernetes.io/projected/3807546a-8e12-47a9-a86f-9ec2a78a959d-kube-api-access-pj8ch\") pod \"3807546a-8e12-47a9-a86f-9ec2a78a959d\" (UID: \"3807546a-8e12-47a9-a86f-9ec2a78a959d\") " Dec 06 07:36:52 crc kubenswrapper[4895]: I1206 07:36:52.744042 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3807546a-8e12-47a9-a86f-9ec2a78a959d-utilities" (OuterVolumeSpecName: "utilities") pod "3807546a-8e12-47a9-a86f-9ec2a78a959d" (UID: "3807546a-8e12-47a9-a86f-9ec2a78a959d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:36:52 crc kubenswrapper[4895]: I1206 07:36:52.750746 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3807546a-8e12-47a9-a86f-9ec2a78a959d-kube-api-access-pj8ch" (OuterVolumeSpecName: "kube-api-access-pj8ch") pod "3807546a-8e12-47a9-a86f-9ec2a78a959d" (UID: "3807546a-8e12-47a9-a86f-9ec2a78a959d"). InnerVolumeSpecName "kube-api-access-pj8ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:36:52 crc kubenswrapper[4895]: I1206 07:36:52.844568 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3807546a-8e12-47a9-a86f-9ec2a78a959d-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:36:52 crc kubenswrapper[4895]: I1206 07:36:52.844615 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj8ch\" (UniqueName: \"kubernetes.io/projected/3807546a-8e12-47a9-a86f-9ec2a78a959d-kube-api-access-pj8ch\") on node \"crc\" DevicePath \"\"" Dec 06 07:36:52 crc kubenswrapper[4895]: I1206 07:36:52.875409 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3807546a-8e12-47a9-a86f-9ec2a78a959d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3807546a-8e12-47a9-a86f-9ec2a78a959d" (UID: "3807546a-8e12-47a9-a86f-9ec2a78a959d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:36:52 crc kubenswrapper[4895]: I1206 07:36:52.946399 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3807546a-8e12-47a9-a86f-9ec2a78a959d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:36:53 crc kubenswrapper[4895]: I1206 07:36:53.077998 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrgkd" event={"ID":"3807546a-8e12-47a9-a86f-9ec2a78a959d","Type":"ContainerDied","Data":"043b715b986fae8f57fa947ce74ec18c2ef85fdf76ecfe48b443481aec5594a8"} Dec 06 07:36:53 crc kubenswrapper[4895]: I1206 07:36:53.078178 4895 scope.go:117] "RemoveContainer" containerID="f45e960222f0a7b577a81570b1292e671105a59c89f2e3665c41cf9ee34c9102" Dec 06 07:36:53 crc kubenswrapper[4895]: I1206 07:36:53.078701 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jrgkd" Dec 06 07:36:53 crc kubenswrapper[4895]: I1206 07:36:53.102323 4895 scope.go:117] "RemoveContainer" containerID="47fdc68bff24705ea25446a516739386d634d8d7cc27939a4f811f44a2551360" Dec 06 07:36:53 crc kubenswrapper[4895]: I1206 07:36:53.121758 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jrgkd"] Dec 06 07:36:53 crc kubenswrapper[4895]: I1206 07:36:53.133455 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jrgkd"] Dec 06 07:36:53 crc kubenswrapper[4895]: I1206 07:36:53.136666 4895 scope.go:117] "RemoveContainer" containerID="d428d1b7a24470043ebd08aabe5468fe4148ea1850caa953c6de01ceb03efac5" Dec 06 07:36:54 crc kubenswrapper[4895]: I1206 07:36:54.064129 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3807546a-8e12-47a9-a86f-9ec2a78a959d" path="/var/lib/kubelet/pods/3807546a-8e12-47a9-a86f-9ec2a78a959d/volumes" Dec 06 07:37:01 crc kubenswrapper[4895]: I1206 07:37:01.051585 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:37:01 crc kubenswrapper[4895]: E1206 07:37:01.052315 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:37:11 crc kubenswrapper[4895]: I1206 07:37:11.938579 4895 scope.go:117] "RemoveContainer" containerID="6e1556b516a1334be67aa2e2b979bb696a89de71989f4de65e18076edd4df9f5" Dec 06 07:37:15 crc kubenswrapper[4895]: I1206 07:37:15.050895 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:37:15 crc kubenswrapper[4895]: E1206 07:37:15.051143 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:37:26 crc kubenswrapper[4895]: I1206 07:37:26.050606 
Dec 06 07:37:26 crc kubenswrapper[4895]: E1206 07:37:26.051208 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 07:37:39 crc kubenswrapper[4895]: I1206 07:37:39.051305 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02"
Dec 06 07:37:39 crc kubenswrapper[4895]: E1206 07:37:39.052196 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 07:37:51 crc kubenswrapper[4895]: I1206 07:37:51.075929 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s8xtq"]
Dec 06 07:37:51 crc kubenswrapper[4895]: E1206 07:37:51.076833 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3807546a-8e12-47a9-a86f-9ec2a78a959d" containerName="registry-server"
Dec 06 07:37:51 crc kubenswrapper[4895]: I1206 07:37:51.076850 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3807546a-8e12-47a9-a86f-9ec2a78a959d" containerName="registry-server"
Dec 06 07:37:51 crc kubenswrapper[4895]: E1206 07:37:51.076867 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3807546a-8e12-47a9-a86f-9ec2a78a959d" containerName="extract-content"
Dec 06 07:37:51 crc kubenswrapper[4895]: I1206 07:37:51.076876 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3807546a-8e12-47a9-a86f-9ec2a78a959d" containerName="extract-content"
Dec 06 07:37:51 crc kubenswrapper[4895]: E1206 07:37:51.076906 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3807546a-8e12-47a9-a86f-9ec2a78a959d" containerName="extract-utilities"
Dec 06 07:37:51 crc kubenswrapper[4895]: I1206 07:37:51.076916 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3807546a-8e12-47a9-a86f-9ec2a78a959d" containerName="extract-utilities"
Dec 06 07:37:51 crc kubenswrapper[4895]: I1206 07:37:51.077236 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="3807546a-8e12-47a9-a86f-9ec2a78a959d" containerName="registry-server"
Dec 06 07:37:51 crc kubenswrapper[4895]: I1206 07:37:51.078895 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s8xtq"
Need to start a new one" pod="openshift-marketplace/community-operators-s8xtq" Dec 06 07:37:51 crc kubenswrapper[4895]: I1206 07:37:51.093023 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s8xtq"] Dec 06 07:37:51 crc kubenswrapper[4895]: I1206 07:37:51.142885 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e3eb585-aee8-4711-bc51-89b9a358f003-utilities\") pod \"community-operators-s8xtq\" (UID: \"2e3eb585-aee8-4711-bc51-89b9a358f003\") " pod="openshift-marketplace/community-operators-s8xtq" Dec 06 07:37:51 crc kubenswrapper[4895]: I1206 07:37:51.143049 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e3eb585-aee8-4711-bc51-89b9a358f003-catalog-content\") pod \"community-operators-s8xtq\" (UID: \"2e3eb585-aee8-4711-bc51-89b9a358f003\") " pod="openshift-marketplace/community-operators-s8xtq" Dec 06 07:37:51 crc kubenswrapper[4895]: I1206 07:37:51.143181 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrsfj\" (UniqueName: \"kubernetes.io/projected/2e3eb585-aee8-4711-bc51-89b9a358f003-kube-api-access-zrsfj\") pod \"community-operators-s8xtq\" (UID: \"2e3eb585-aee8-4711-bc51-89b9a358f003\") " pod="openshift-marketplace/community-operators-s8xtq" Dec 06 07:37:51 crc kubenswrapper[4895]: I1206 07:37:51.244277 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrsfj\" (UniqueName: \"kubernetes.io/projected/2e3eb585-aee8-4711-bc51-89b9a358f003-kube-api-access-zrsfj\") pod \"community-operators-s8xtq\" (UID: \"2e3eb585-aee8-4711-bc51-89b9a358f003\") " pod="openshift-marketplace/community-operators-s8xtq" Dec 06 07:37:51 crc kubenswrapper[4895]: I1206 07:37:51.244328 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e3eb585-aee8-4711-bc51-89b9a358f003-utilities\") pod \"community-operators-s8xtq\" (UID: \"2e3eb585-aee8-4711-bc51-89b9a358f003\") " pod="openshift-marketplace/community-operators-s8xtq" Dec 06 07:37:51 crc kubenswrapper[4895]: I1206 07:37:51.244372 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e3eb585-aee8-4711-bc51-89b9a358f003-catalog-content\") pod \"community-operators-s8xtq\" (UID: \"2e3eb585-aee8-4711-bc51-89b9a358f003\") " pod="openshift-marketplace/community-operators-s8xtq" Dec 06 07:37:51 crc kubenswrapper[4895]: I1206 07:37:51.244803 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e3eb585-aee8-4711-bc51-89b9a358f003-catalog-content\") pod \"community-operators-s8xtq\" (UID: \"2e3eb585-aee8-4711-bc51-89b9a358f003\") " pod="openshift-marketplace/community-operators-s8xtq" Dec 06 07:37:51 crc kubenswrapper[4895]: I1206 07:37:51.244824 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e3eb585-aee8-4711-bc51-89b9a358f003-utilities\") pod \"community-operators-s8xtq\" (UID: \"2e3eb585-aee8-4711-bc51-89b9a358f003\") " pod="openshift-marketplace/community-operators-s8xtq" Dec 06 07:37:51 crc kubenswrapper[4895]: I1206 07:37:51.266251 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zrsfj\" (UniqueName: \"kubernetes.io/projected/2e3eb585-aee8-4711-bc51-89b9a358f003-kube-api-access-zrsfj\") pod \"community-operators-s8xtq\" (UID: \"2e3eb585-aee8-4711-bc51-89b9a358f003\") " pod="openshift-marketplace/community-operators-s8xtq" Dec 06 07:37:51 crc kubenswrapper[4895]: I1206 07:37:51.401517 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s8xtq" Dec 06 07:37:51 crc kubenswrapper[4895]: I1206 07:37:51.969708 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s8xtq"] Dec 06 07:37:52 crc kubenswrapper[4895]: I1206 07:37:52.539568 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8xtq" event={"ID":"2e3eb585-aee8-4711-bc51-89b9a358f003","Type":"ContainerStarted","Data":"2703b5332a53c6ea13d6cea7c81ba1040b875a10144e837583309c122000fe5e"} Dec 06 07:37:53 crc kubenswrapper[4895]: I1206 07:37:53.051093 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:37:53 crc kubenswrapper[4895]: E1206 07:37:53.051369 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:37:54 crc kubenswrapper[4895]: I1206 07:37:54.561375 4895 generic.go:334] "Generic (PLEG): container finished" podID="2e3eb585-aee8-4711-bc51-89b9a358f003" containerID="6a8494e7f3d7f08dd397a670175161dd54b44d36ccfbefec88deeec58f4f7599" exitCode=0 Dec 06 07:37:54 crc kubenswrapper[4895]: I1206 07:37:54.561433 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8xtq" event={"ID":"2e3eb585-aee8-4711-bc51-89b9a358f003","Type":"ContainerDied","Data":"6a8494e7f3d7f08dd397a670175161dd54b44d36ccfbefec88deeec58f4f7599"} Dec 06 07:37:54 crc kubenswrapper[4895]: I1206 07:37:54.564204 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 07:37:59 crc kubenswrapper[4895]: I1206 07:37:59.613430 4895 generic.go:334] "Generic (PLEG): container finished" podID="2e3eb585-aee8-4711-bc51-89b9a358f003" containerID="c7f1c53d323a1c64d8cca9ae1dbaa43f084f42c89408dc7839592a9cdd3bfa47" exitCode=0 Dec 06 07:37:59 crc kubenswrapper[4895]: I1206 07:37:59.613619 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8xtq" event={"ID":"2e3eb585-aee8-4711-bc51-89b9a358f003","Type":"ContainerDied","Data":"c7f1c53d323a1c64d8cca9ae1dbaa43f084f42c89408dc7839592a9cdd3bfa47"} Dec 06 07:38:01 crc kubenswrapper[4895]: I1206 07:38:01.632003 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8xtq" event={"ID":"2e3eb585-aee8-4711-bc51-89b9a358f003","Type":"ContainerStarted","Data":"4c9d636514a946bb1578ab1ac3cfb3a9ddad1337e782ca9057fffc1ac5a362d7"} Dec 06 07:38:01 crc kubenswrapper[4895]: I1206 07:38:01.649992 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s8xtq" podStartSLOduration=4.593091105 
podStartE2EDuration="10.649970669s" podCreationTimestamp="2025-12-06 07:37:51 +0000 UTC" firstStartedPulling="2025-12-06 07:37:54.563860913 +0000 UTC m=+2436.965249783" lastFinishedPulling="2025-12-06 07:38:00.620740477 +0000 UTC m=+2443.022129347" observedRunningTime="2025-12-06 07:38:01.647501453 +0000 UTC m=+2444.048890333" watchObservedRunningTime="2025-12-06 07:38:01.649970669 +0000 UTC m=+2444.051359539" Dec 06 07:38:06 crc kubenswrapper[4895]: I1206 07:38:06.051337 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:38:06 crc kubenswrapper[4895]: E1206 07:38:06.052128 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:38:11 crc kubenswrapper[4895]: I1206 07:38:11.402460 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s8xtq" Dec 06 07:38:11 crc kubenswrapper[4895]: I1206 07:38:11.403197 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s8xtq" Dec 06 07:38:11 crc kubenswrapper[4895]: I1206 07:38:11.454920 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s8xtq" Dec 06 07:38:11 crc kubenswrapper[4895]: I1206 07:38:11.761410 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s8xtq" Dec 06 07:38:16 crc kubenswrapper[4895]: I1206 07:38:16.929857 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s8xtq"] Dec 06 07:38:17 crc kubenswrapper[4895]: I1206 07:38:17.094992 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jpbcj"] Dec 06 07:38:17 crc kubenswrapper[4895]: I1206 07:38:17.095422 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jpbcj" podUID="0f772ae1-b7e4-4762-ad35-7d9ca598ef8c" containerName="registry-server" containerID="cri-o://bd0913d7b2c23e7a24fd476c552b70b498420af6268664562aa1c6ad2be4dc4d" gracePeriod=2 Dec 06 07:38:19 crc kubenswrapper[4895]: I1206 07:38:19.050361 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:38:19 crc kubenswrapper[4895]: E1206 07:38:19.050852 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:38:19 crc kubenswrapper[4895]: I1206 07:38:19.786837 4895 generic.go:334] "Generic (PLEG): container finished" podID="0f772ae1-b7e4-4762-ad35-7d9ca598ef8c" containerID="bd0913d7b2c23e7a24fd476c552b70b498420af6268664562aa1c6ad2be4dc4d" exitCode=0 Dec 06 07:38:19 crc kubenswrapper[4895]: I1206 07:38:19.786874 
4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpbcj" event={"ID":"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c","Type":"ContainerDied","Data":"bd0913d7b2c23e7a24fd476c552b70b498420af6268664562aa1c6ad2be4dc4d"} Dec 06 07:38:20 crc kubenswrapper[4895]: I1206 07:38:20.210315 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jpbcj" Dec 06 07:38:20 crc kubenswrapper[4895]: I1206 07:38:20.359874 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f772ae1-b7e4-4762-ad35-7d9ca598ef8c-utilities\") pod \"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c\" (UID: \"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c\") " Dec 06 07:38:20 crc kubenswrapper[4895]: I1206 07:38:20.359920 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lgk6\" (UniqueName: \"kubernetes.io/projected/0f772ae1-b7e4-4762-ad35-7d9ca598ef8c-kube-api-access-5lgk6\") pod \"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c\" (UID: \"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c\") " Dec 06 07:38:20 crc kubenswrapper[4895]: I1206 07:38:20.359951 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f772ae1-b7e4-4762-ad35-7d9ca598ef8c-catalog-content\") pod \"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c\" (UID: \"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c\") " Dec 06 07:38:20 crc kubenswrapper[4895]: I1206 07:38:20.360754 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f772ae1-b7e4-4762-ad35-7d9ca598ef8c-utilities" (OuterVolumeSpecName: "utilities") pod "0f772ae1-b7e4-4762-ad35-7d9ca598ef8c" (UID: "0f772ae1-b7e4-4762-ad35-7d9ca598ef8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:38:20 crc kubenswrapper[4895]: I1206 07:38:20.366649 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f772ae1-b7e4-4762-ad35-7d9ca598ef8c-kube-api-access-5lgk6" (OuterVolumeSpecName: "kube-api-access-5lgk6") pod "0f772ae1-b7e4-4762-ad35-7d9ca598ef8c" (UID: "0f772ae1-b7e4-4762-ad35-7d9ca598ef8c"). InnerVolumeSpecName "kube-api-access-5lgk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:38:20 crc kubenswrapper[4895]: I1206 07:38:20.408978 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f772ae1-b7e4-4762-ad35-7d9ca598ef8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f772ae1-b7e4-4762-ad35-7d9ca598ef8c" (UID: "0f772ae1-b7e4-4762-ad35-7d9ca598ef8c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:38:20 crc kubenswrapper[4895]: I1206 07:38:20.461393 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f772ae1-b7e4-4762-ad35-7d9ca598ef8c-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:38:20 crc kubenswrapper[4895]: I1206 07:38:20.461431 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lgk6\" (UniqueName: \"kubernetes.io/projected/0f772ae1-b7e4-4762-ad35-7d9ca598ef8c-kube-api-access-5lgk6\") on node \"crc\" DevicePath \"\"" Dec 06 07:38:20 crc kubenswrapper[4895]: I1206 07:38:20.461441 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f772ae1-b7e4-4762-ad35-7d9ca598ef8c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:38:20 crc kubenswrapper[4895]: I1206 07:38:20.797830 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpbcj" event={"ID":"0f772ae1-b7e4-4762-ad35-7d9ca598ef8c","Type":"ContainerDied","Data":"3c50b62053f3d4ce93650ba16df59f0fb1fbc4e1e5f60bedfa86f4850b77d5c7"} Dec 06 07:38:20 crc kubenswrapper[4895]: I1206 07:38:20.797908 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jpbcj" Dec 06 07:38:20 crc kubenswrapper[4895]: I1206 07:38:20.797917 4895 scope.go:117] "RemoveContainer" containerID="bd0913d7b2c23e7a24fd476c552b70b498420af6268664562aa1c6ad2be4dc4d" Dec 06 07:38:20 crc kubenswrapper[4895]: I1206 07:38:20.827144 4895 scope.go:117] "RemoveContainer" containerID="b59a43890d51bbb13a79625d85bffa65141a6da54a09d800809c06b591aede29" Dec 06 07:38:20 crc kubenswrapper[4895]: I1206 07:38:20.828780 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jpbcj"] Dec 06 07:38:20 crc kubenswrapper[4895]: I1206 07:38:20.836554 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jpbcj"] Dec 06 07:38:20 crc kubenswrapper[4895]: I1206 07:38:20.851414 4895 scope.go:117] "RemoveContainer" containerID="e98c307e62a906bb56dd7ecd42415ecd75d3359d629c89824503eca6edb2d087" Dec 06 07:38:22 crc kubenswrapper[4895]: I1206 07:38:22.059543 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f772ae1-b7e4-4762-ad35-7d9ca598ef8c" path="/var/lib/kubelet/pods/0f772ae1-b7e4-4762-ad35-7d9ca598ef8c/volumes" Dec 06 07:38:31 crc kubenswrapper[4895]: I1206 07:38:31.050628 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:38:31 crc kubenswrapper[4895]: E1206 07:38:31.051393 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:38:46 crc kubenswrapper[4895]: I1206 07:38:46.050134 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:38:46 crc kubenswrapper[4895]: E1206 07:38:46.050793 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:39:00 crc kubenswrapper[4895]: I1206 07:39:00.051834 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:39:00 crc kubenswrapper[4895]: E1206 07:39:00.052558 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:39:13 crc kubenswrapper[4895]: I1206 07:39:13.051285 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:39:13 crc kubenswrapper[4895]: E1206 07:39:13.051960 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:39:28 crc kubenswrapper[4895]: I1206 07:39:28.059287 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:39:28 crc kubenswrapper[4895]: E1206 07:39:28.060939 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:39:42 crc kubenswrapper[4895]: I1206 07:39:42.051033 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:39:42 crc kubenswrapper[4895]: E1206 07:39:42.051789 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:39:53 crc kubenswrapper[4895]: I1206 07:39:53.051043 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:39:53 crc kubenswrapper[4895]: E1206 07:39:53.051846 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:40:06 crc kubenswrapper[4895]: I1206 07:40:06.050179 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:40:06 crc kubenswrapper[4895]: E1206 07:40:06.051075 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:40:18 crc kubenswrapper[4895]: I1206 07:40:18.056948 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:40:18 crc kubenswrapper[4895]: E1206 07:40:18.057854 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:40:30 crc kubenswrapper[4895]: I1206 07:40:30.051097 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:40:30 crc kubenswrapper[4895]: E1206 07:40:30.052288 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:40:31 crc kubenswrapper[4895]: I1206 07:40:31.618955 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8plrq"] Dec 06 07:40:31 crc kubenswrapper[4895]: E1206 07:40:31.619747 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f772ae1-b7e4-4762-ad35-7d9ca598ef8c" containerName="extract-content" Dec 06 07:40:31 crc kubenswrapper[4895]: I1206 07:40:31.619769 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f772ae1-b7e4-4762-ad35-7d9ca598ef8c" containerName="extract-content" Dec 06 07:40:31 crc kubenswrapper[4895]: E1206 07:40:31.619793 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f772ae1-b7e4-4762-ad35-7d9ca598ef8c" containerName="registry-server" Dec 06 07:40:31 crc kubenswrapper[4895]: I1206 07:40:31.619804 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f772ae1-b7e4-4762-ad35-7d9ca598ef8c" containerName="registry-server" Dec 06 07:40:31 crc kubenswrapper[4895]: E1206 07:40:31.619834 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f772ae1-b7e4-4762-ad35-7d9ca598ef8c" containerName="extract-utilities" Dec 06 07:40:31 crc kubenswrapper[4895]: I1206 07:40:31.619846 4895 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0f772ae1-b7e4-4762-ad35-7d9ca598ef8c" containerName="extract-utilities" Dec 06 07:40:31 crc kubenswrapper[4895]: I1206 07:40:31.620116 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f772ae1-b7e4-4762-ad35-7d9ca598ef8c" containerName="registry-server" Dec 06 07:40:31 crc kubenswrapper[4895]: I1206 07:40:31.623668 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8plrq" Dec 06 07:40:31 crc kubenswrapper[4895]: I1206 07:40:31.644392 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8plrq"] Dec 06 07:40:31 crc kubenswrapper[4895]: I1206 07:40:31.731057 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrlxs\" (UniqueName: \"kubernetes.io/projected/2850f665-4645-4c75-a60f-1aa713872ec7-kube-api-access-lrlxs\") pod \"certified-operators-8plrq\" (UID: \"2850f665-4645-4c75-a60f-1aa713872ec7\") " pod="openshift-marketplace/certified-operators-8plrq" Dec 06 07:40:31 crc kubenswrapper[4895]: I1206 07:40:31.731109 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2850f665-4645-4c75-a60f-1aa713872ec7-catalog-content\") pod \"certified-operators-8plrq\" (UID: \"2850f665-4645-4c75-a60f-1aa713872ec7\") " pod="openshift-marketplace/certified-operators-8plrq" Dec 06 07:40:31 crc kubenswrapper[4895]: I1206 07:40:31.731204 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2850f665-4645-4c75-a60f-1aa713872ec7-utilities\") pod \"certified-operators-8plrq\" (UID: \"2850f665-4645-4c75-a60f-1aa713872ec7\") " pod="openshift-marketplace/certified-operators-8plrq" Dec 06 07:40:31 crc kubenswrapper[4895]: I1206 07:40:31.833122 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2850f665-4645-4c75-a60f-1aa713872ec7-utilities\") pod \"certified-operators-8plrq\" (UID: \"2850f665-4645-4c75-a60f-1aa713872ec7\") " pod="openshift-marketplace/certified-operators-8plrq" Dec 06 07:40:31 crc kubenswrapper[4895]: I1206 07:40:31.833221 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrlxs\" (UniqueName: \"kubernetes.io/projected/2850f665-4645-4c75-a60f-1aa713872ec7-kube-api-access-lrlxs\") pod \"certified-operators-8plrq\" (UID: \"2850f665-4645-4c75-a60f-1aa713872ec7\") " pod="openshift-marketplace/certified-operators-8plrq" Dec 06 07:40:31 crc kubenswrapper[4895]: I1206 07:40:31.833263 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2850f665-4645-4c75-a60f-1aa713872ec7-catalog-content\") pod \"certified-operators-8plrq\" (UID: \"2850f665-4645-4c75-a60f-1aa713872ec7\") " pod="openshift-marketplace/certified-operators-8plrq" Dec 06 07:40:31 crc kubenswrapper[4895]: I1206 07:40:31.833884 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2850f665-4645-4c75-a60f-1aa713872ec7-catalog-content\") pod \"certified-operators-8plrq\" (UID: \"2850f665-4645-4c75-a60f-1aa713872ec7\") " pod="openshift-marketplace/certified-operators-8plrq" Dec 06 07:40:31 crc kubenswrapper[4895]: I1206 07:40:31.834160 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2850f665-4645-4c75-a60f-1aa713872ec7-utilities\") pod \"certified-operators-8plrq\" (UID: \"2850f665-4645-4c75-a60f-1aa713872ec7\") " pod="openshift-marketplace/certified-operators-8plrq" Dec 06 07:40:31 crc kubenswrapper[4895]: I1206 07:40:31.872570 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrlxs\" (UniqueName: \"kubernetes.io/projected/2850f665-4645-4c75-a60f-1aa713872ec7-kube-api-access-lrlxs\") pod \"certified-operators-8plrq\" (UID: \"2850f665-4645-4c75-a60f-1aa713872ec7\") " pod="openshift-marketplace/certified-operators-8plrq" Dec 06 07:40:31 crc kubenswrapper[4895]: I1206 07:40:31.940999 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8plrq" Dec 06 07:40:32 crc kubenswrapper[4895]: I1206 07:40:32.442748 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8plrq"] Dec 06 07:40:32 crc kubenswrapper[4895]: I1206 07:40:32.891411 4895 generic.go:334] "Generic (PLEG): container finished" podID="2850f665-4645-4c75-a60f-1aa713872ec7" containerID="90edf943e31efd892245220bcc58f14d3219f2bfee803c5d3ac600fb77a4f733" exitCode=0 Dec 06 07:40:32 crc kubenswrapper[4895]: I1206 07:40:32.891621 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8plrq" event={"ID":"2850f665-4645-4c75-a60f-1aa713872ec7","Type":"ContainerDied","Data":"90edf943e31efd892245220bcc58f14d3219f2bfee803c5d3ac600fb77a4f733"} Dec 06 07:40:32 crc kubenswrapper[4895]: I1206 07:40:32.891698 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8plrq" event={"ID":"2850f665-4645-4c75-a60f-1aa713872ec7","Type":"ContainerStarted","Data":"3353c4fc35debdeb9e5e4bd69613a03aa06e16b6c78036e73b43b2458d0d244e"} Dec 06 07:40:33 crc kubenswrapper[4895]: I1206 07:40:33.902502 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8plrq" event={"ID":"2850f665-4645-4c75-a60f-1aa713872ec7","Type":"ContainerStarted","Data":"2fe831527e99dafc8b930cca0546cea620f470d404fd7d54a9b1a914dc25b0b2"} Dec 06 07:40:34 crc kubenswrapper[4895]: I1206 07:40:34.911041 4895 generic.go:334] "Generic (PLEG): container finished" podID="2850f665-4645-4c75-a60f-1aa713872ec7" containerID="2fe831527e99dafc8b930cca0546cea620f470d404fd7d54a9b1a914dc25b0b2" exitCode=0 Dec 06 07:40:34 crc kubenswrapper[4895]: I1206 07:40:34.911101 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8plrq" event={"ID":"2850f665-4645-4c75-a60f-1aa713872ec7","Type":"ContainerDied","Data":"2fe831527e99dafc8b930cca0546cea620f470d404fd7d54a9b1a914dc25b0b2"} Dec 06 07:40:35 crc kubenswrapper[4895]: I1206 07:40:35.920643 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8plrq" event={"ID":"2850f665-4645-4c75-a60f-1aa713872ec7","Type":"ContainerStarted","Data":"90d27b56e1eff151bbcac300dba76886f196d02e3b2dcd253f5df1a894efa249"} Dec 06 07:40:35 crc kubenswrapper[4895]: I1206 07:40:35.940857 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8plrq" podStartSLOduration=2.376925103 podStartE2EDuration="4.940838771s" podCreationTimestamp="2025-12-06 07:40:31 +0000 UTC" firstStartedPulling="2025-12-06 
07:40:32.893422868 +0000 UTC m=+2595.294811738" lastFinishedPulling="2025-12-06 07:40:35.457336546 +0000 UTC m=+2597.858725406" observedRunningTime="2025-12-06 07:40:35.938142908 +0000 UTC m=+2598.339531788" watchObservedRunningTime="2025-12-06 07:40:35.940838771 +0000 UTC m=+2598.342227641" Dec 06 07:40:41 crc kubenswrapper[4895]: I1206 07:40:41.941840 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8plrq" Dec 06 07:40:41 crc kubenswrapper[4895]: I1206 07:40:41.942551 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8plrq" Dec 06 07:40:41 crc kubenswrapper[4895]: I1206 07:40:41.999853 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8plrq" Dec 06 07:40:42 crc kubenswrapper[4895]: I1206 07:40:42.046895 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8plrq" Dec 06 07:40:42 crc kubenswrapper[4895]: I1206 07:40:42.244605 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8plrq"] Dec 06 07:40:44 crc kubenswrapper[4895]: I1206 07:40:43.999943 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8plrq" podUID="2850f665-4645-4c75-a60f-1aa713872ec7" containerName="registry-server" containerID="cri-o://90d27b56e1eff151bbcac300dba76886f196d02e3b2dcd253f5df1a894efa249" gracePeriod=2 Dec 06 07:40:45 crc kubenswrapper[4895]: I1206 07:40:45.050524 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:40:47 crc kubenswrapper[4895]: I1206 07:40:47.031129 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"c2b6e6c75d1d84ff08cb8413d94c82b0c80969613bd7308732f7efcd65804eee"} Dec 06 07:40:47 crc kubenswrapper[4895]: I1206 07:40:47.034653 4895 generic.go:334] "Generic (PLEG): container finished" podID="2850f665-4645-4c75-a60f-1aa713872ec7" containerID="90d27b56e1eff151bbcac300dba76886f196d02e3b2dcd253f5df1a894efa249" exitCode=0 Dec 06 07:40:47 crc kubenswrapper[4895]: I1206 07:40:47.034724 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8plrq" event={"ID":"2850f665-4645-4c75-a60f-1aa713872ec7","Type":"ContainerDied","Data":"90d27b56e1eff151bbcac300dba76886f196d02e3b2dcd253f5df1a894efa249"} Dec 06 07:40:47 crc kubenswrapper[4895]: I1206 07:40:47.188772 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8plrq" Dec 06 07:40:47 crc kubenswrapper[4895]: I1206 07:40:47.287970 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2850f665-4645-4c75-a60f-1aa713872ec7-catalog-content\") pod \"2850f665-4645-4c75-a60f-1aa713872ec7\" (UID: \"2850f665-4645-4c75-a60f-1aa713872ec7\") " Dec 06 07:40:47 crc kubenswrapper[4895]: I1206 07:40:47.288066 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2850f665-4645-4c75-a60f-1aa713872ec7-utilities\") pod \"2850f665-4645-4c75-a60f-1aa713872ec7\" (UID: \"2850f665-4645-4c75-a60f-1aa713872ec7\") " Dec 06 07:40:47 crc kubenswrapper[4895]: I1206 07:40:47.288229 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrlxs\" (UniqueName: \"kubernetes.io/projected/2850f665-4645-4c75-a60f-1aa713872ec7-kube-api-access-lrlxs\") pod \"2850f665-4645-4c75-a60f-1aa713872ec7\" (UID: \"2850f665-4645-4c75-a60f-1aa713872ec7\") " Dec 06 07:40:47 crc kubenswrapper[4895]: I1206 07:40:47.290289 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2850f665-4645-4c75-a60f-1aa713872ec7-utilities" (OuterVolumeSpecName: "utilities") pod "2850f665-4645-4c75-a60f-1aa713872ec7" (UID: "2850f665-4645-4c75-a60f-1aa713872ec7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:40:47 crc kubenswrapper[4895]: I1206 07:40:47.293607 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2850f665-4645-4c75-a60f-1aa713872ec7-kube-api-access-lrlxs" (OuterVolumeSpecName: "kube-api-access-lrlxs") pod "2850f665-4645-4c75-a60f-1aa713872ec7" (UID: "2850f665-4645-4c75-a60f-1aa713872ec7"). InnerVolumeSpecName "kube-api-access-lrlxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:40:47 crc kubenswrapper[4895]: I1206 07:40:47.353578 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2850f665-4645-4c75-a60f-1aa713872ec7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2850f665-4645-4c75-a60f-1aa713872ec7" (UID: "2850f665-4645-4c75-a60f-1aa713872ec7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:40:47 crc kubenswrapper[4895]: I1206 07:40:47.389947 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrlxs\" (UniqueName: \"kubernetes.io/projected/2850f665-4645-4c75-a60f-1aa713872ec7-kube-api-access-lrlxs\") on node \"crc\" DevicePath \"\"" Dec 06 07:40:47 crc kubenswrapper[4895]: I1206 07:40:47.390239 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2850f665-4645-4c75-a60f-1aa713872ec7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:40:47 crc kubenswrapper[4895]: I1206 07:40:47.390331 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2850f665-4645-4c75-a60f-1aa713872ec7-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:40:48 crc kubenswrapper[4895]: I1206 07:40:48.045044 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8plrq" event={"ID":"2850f665-4645-4c75-a60f-1aa713872ec7","Type":"ContainerDied","Data":"3353c4fc35debdeb9e5e4bd69613a03aa06e16b6c78036e73b43b2458d0d244e"} Dec 06 07:40:48 crc kubenswrapper[4895]: I1206 07:40:48.045134 4895 scope.go:117] "RemoveContainer" containerID="90d27b56e1eff151bbcac300dba76886f196d02e3b2dcd253f5df1a894efa249" Dec 06 07:40:48 crc kubenswrapper[4895]: I1206 07:40:48.045120 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8plrq" Dec 06 07:40:48 crc kubenswrapper[4895]: I1206 07:40:48.078857 4895 scope.go:117] "RemoveContainer" containerID="2fe831527e99dafc8b930cca0546cea620f470d404fd7d54a9b1a914dc25b0b2" Dec 06 07:40:48 crc kubenswrapper[4895]: I1206 07:40:48.088931 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8plrq"] Dec 06 07:40:48 crc kubenswrapper[4895]: I1206 07:40:48.094430 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8plrq"] Dec 06 07:40:48 crc kubenswrapper[4895]: I1206 07:40:48.117999 4895 scope.go:117] "RemoveContainer" containerID="90edf943e31efd892245220bcc58f14d3219f2bfee803c5d3ac600fb77a4f733" Dec 06 07:40:50 crc kubenswrapper[4895]: I1206 07:40:50.065607 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2850f665-4645-4c75-a60f-1aa713872ec7" path="/var/lib/kubelet/pods/2850f665-4645-4c75-a60f-1aa713872ec7/volumes" Dec 06 07:42:58 crc kubenswrapper[4895]: I1206 07:42:58.223718 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2g8pz"] Dec 06 07:42:58 crc kubenswrapper[4895]: E1206 07:42:58.225694 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2850f665-4645-4c75-a60f-1aa713872ec7" containerName="extract-content" Dec 06 07:42:58 crc kubenswrapper[4895]: I1206 07:42:58.225794 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2850f665-4645-4c75-a60f-1aa713872ec7" containerName="extract-content" Dec 06 07:42:58 crc kubenswrapper[4895]: E1206 07:42:58.225866 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2850f665-4645-4c75-a60f-1aa713872ec7" containerName="extract-utilities" Dec 06 07:42:58 crc kubenswrapper[4895]: I1206 07:42:58.225933 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2850f665-4645-4c75-a60f-1aa713872ec7" containerName="extract-utilities" Dec 06 07:42:58 crc kubenswrapper[4895]: E1206 07:42:58.226012 4895 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2850f665-4645-4c75-a60f-1aa713872ec7" containerName="registry-server" Dec 06 07:42:58 crc kubenswrapper[4895]: I1206 07:42:58.226068 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2850f665-4645-4c75-a60f-1aa713872ec7" containerName="registry-server" Dec 06 07:42:58 crc kubenswrapper[4895]: I1206 07:42:58.226270 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2850f665-4645-4c75-a60f-1aa713872ec7" containerName="registry-server" Dec 06 07:42:58 crc kubenswrapper[4895]: I1206 07:42:58.227351 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2g8pz" Dec 06 07:42:58 crc kubenswrapper[4895]: I1206 07:42:58.236945 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2g8pz"] Dec 06 07:42:58 crc kubenswrapper[4895]: I1206 07:42:58.254954 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/202eee7d-442a-40bc-b23f-c58e09d72438-catalog-content\") pod \"redhat-marketplace-2g8pz\" (UID: \"202eee7d-442a-40bc-b23f-c58e09d72438\") " pod="openshift-marketplace/redhat-marketplace-2g8pz" Dec 06 07:42:58 crc kubenswrapper[4895]: I1206 07:42:58.255022 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/202eee7d-442a-40bc-b23f-c58e09d72438-utilities\") pod \"redhat-marketplace-2g8pz\" (UID: \"202eee7d-442a-40bc-b23f-c58e09d72438\") " pod="openshift-marketplace/redhat-marketplace-2g8pz" Dec 06 07:42:58 crc kubenswrapper[4895]: I1206 07:42:58.255157 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btjr2\" (UniqueName: \"kubernetes.io/projected/202eee7d-442a-40bc-b23f-c58e09d72438-kube-api-access-btjr2\") pod \"redhat-marketplace-2g8pz\" (UID: \"202eee7d-442a-40bc-b23f-c58e09d72438\") " pod="openshift-marketplace/redhat-marketplace-2g8pz" Dec 06 07:42:58 crc kubenswrapper[4895]: I1206 07:42:58.356189 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/202eee7d-442a-40bc-b23f-c58e09d72438-utilities\") pod \"redhat-marketplace-2g8pz\" (UID: \"202eee7d-442a-40bc-b23f-c58e09d72438\") " pod="openshift-marketplace/redhat-marketplace-2g8pz" Dec 06 07:42:58 crc kubenswrapper[4895]: I1206 07:42:58.356494 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btjr2\" (UniqueName: \"kubernetes.io/projected/202eee7d-442a-40bc-b23f-c58e09d72438-kube-api-access-btjr2\") pod \"redhat-marketplace-2g8pz\" (UID: \"202eee7d-442a-40bc-b23f-c58e09d72438\") " pod="openshift-marketplace/redhat-marketplace-2g8pz" Dec 06 07:42:58 crc kubenswrapper[4895]: I1206 07:42:58.356609 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/202eee7d-442a-40bc-b23f-c58e09d72438-catalog-content\") pod \"redhat-marketplace-2g8pz\" (UID: \"202eee7d-442a-40bc-b23f-c58e09d72438\") " pod="openshift-marketplace/redhat-marketplace-2g8pz" Dec 06 07:42:58 crc kubenswrapper[4895]: I1206 07:42:58.357148 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/202eee7d-442a-40bc-b23f-c58e09d72438-catalog-content\") pod \"redhat-marketplace-2g8pz\" (UID: \"202eee7d-442a-40bc-b23f-c58e09d72438\") " pod="openshift-marketplace/redhat-marketplace-2g8pz" Dec 06 07:42:58 crc kubenswrapper[4895]: I1206 07:42:58.357356 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/202eee7d-442a-40bc-b23f-c58e09d72438-utilities\") pod \"redhat-marketplace-2g8pz\" (UID: \"202eee7d-442a-40bc-b23f-c58e09d72438\") " pod="openshift-marketplace/redhat-marketplace-2g8pz" Dec 06 07:42:58 crc kubenswrapper[4895]: I1206 07:42:58.385362 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btjr2\" (UniqueName: \"kubernetes.io/projected/202eee7d-442a-40bc-b23f-c58e09d72438-kube-api-access-btjr2\") pod \"redhat-marketplace-2g8pz\" (UID: \"202eee7d-442a-40bc-b23f-c58e09d72438\") " pod="openshift-marketplace/redhat-marketplace-2g8pz" Dec 06 07:42:58 crc kubenswrapper[4895]: I1206 07:42:58.546652 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2g8pz" Dec 06 07:42:59 crc kubenswrapper[4895]: I1206 07:42:59.041916 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2g8pz"] Dec 06 07:42:59 crc kubenswrapper[4895]: W1206 07:42:59.047702 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod202eee7d_442a_40bc_b23f_c58e09d72438.slice/crio-360cefd73f147cac887fb233f5dc97cda4d7716b66b4cf00ddcf7a06250015ba WatchSource:0}: Error finding container 360cefd73f147cac887fb233f5dc97cda4d7716b66b4cf00ddcf7a06250015ba: Status 404 returned error can't find the container with id 360cefd73f147cac887fb233f5dc97cda4d7716b66b4cf00ddcf7a06250015ba Dec 06 07:42:59 crc kubenswrapper[4895]: I1206 07:42:59.164337 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2g8pz" event={"ID":"202eee7d-442a-40bc-b23f-c58e09d72438","Type":"ContainerStarted","Data":"360cefd73f147cac887fb233f5dc97cda4d7716b66b4cf00ddcf7a06250015ba"} Dec 06 07:42:59 crc kubenswrapper[4895]: I1206 07:42:59.696130 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:42:59 crc kubenswrapper[4895]: I1206 07:42:59.696209 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:43:01 crc kubenswrapper[4895]: I1206 07:43:01.183111 4895 generic.go:334] "Generic (PLEG): container finished" podID="202eee7d-442a-40bc-b23f-c58e09d72438" containerID="1b020fc9c05cd8b6534a3cb5cd1ba351bceca327505734b1baa34fb0667c4c8c" exitCode=0 Dec 06 07:43:01 crc kubenswrapper[4895]: I1206 07:43:01.183270 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2g8pz" event={"ID":"202eee7d-442a-40bc-b23f-c58e09d72438","Type":"ContainerDied","Data":"1b020fc9c05cd8b6534a3cb5cd1ba351bceca327505734b1baa34fb0667c4c8c"} Dec 06 
07:43:01 crc kubenswrapper[4895]: I1206 07:43:01.185197 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 07:43:02 crc kubenswrapper[4895]: I1206 07:43:02.194124 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2g8pz" event={"ID":"202eee7d-442a-40bc-b23f-c58e09d72438","Type":"ContainerStarted","Data":"00be0792cadd18051427916f7fb4d1ec3f09b8fc792e823b8b7f9775986e544d"} Dec 06 07:43:03 crc kubenswrapper[4895]: I1206 07:43:03.201922 4895 generic.go:334] "Generic (PLEG): container finished" podID="202eee7d-442a-40bc-b23f-c58e09d72438" containerID="00be0792cadd18051427916f7fb4d1ec3f09b8fc792e823b8b7f9775986e544d" exitCode=0 Dec 06 07:43:03 crc kubenswrapper[4895]: I1206 07:43:03.201967 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2g8pz" event={"ID":"202eee7d-442a-40bc-b23f-c58e09d72438","Type":"ContainerDied","Data":"00be0792cadd18051427916f7fb4d1ec3f09b8fc792e823b8b7f9775986e544d"} Dec 06 07:43:06 crc kubenswrapper[4895]: I1206 07:43:06.222604 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2g8pz" event={"ID":"202eee7d-442a-40bc-b23f-c58e09d72438","Type":"ContainerStarted","Data":"74c7e4b28a555dc2cda85b95296201b488371b8bc30d6445b8360a97789dad18"} Dec 06 07:43:06 crc kubenswrapper[4895]: I1206 07:43:06.493234 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2g8pz" podStartSLOduration=4.441363114 podStartE2EDuration="8.493196782s" podCreationTimestamp="2025-12-06 07:42:58 +0000 UTC" firstStartedPulling="2025-12-06 07:43:01.18486632 +0000 UTC m=+2743.586255190" lastFinishedPulling="2025-12-06 07:43:05.236699988 +0000 UTC m=+2747.638088858" observedRunningTime="2025-12-06 07:43:06.487138259 +0000 UTC m=+2748.888527149" watchObservedRunningTime="2025-12-06 07:43:06.493196782 +0000 UTC m=+2748.894585662" Dec 06 07:43:08 crc kubenswrapper[4895]: I1206 07:43:08.546942 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2g8pz" Dec 06 07:43:08 crc kubenswrapper[4895]: I1206 07:43:08.547014 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2g8pz" Dec 06 07:43:08 crc kubenswrapper[4895]: I1206 07:43:08.647652 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2g8pz" Dec 06 07:43:18 crc kubenswrapper[4895]: I1206 07:43:18.596254 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2g8pz" Dec 06 07:43:18 crc kubenswrapper[4895]: I1206 07:43:18.657949 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2g8pz"] Dec 06 07:43:19 crc kubenswrapper[4895]: I1206 07:43:19.323027 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2g8pz" podUID="202eee7d-442a-40bc-b23f-c58e09d72438" containerName="registry-server" containerID="cri-o://74c7e4b28a555dc2cda85b95296201b488371b8bc30d6445b8360a97789dad18" gracePeriod=2 Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.252489 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2g8pz" Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.324009 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/202eee7d-442a-40bc-b23f-c58e09d72438-catalog-content\") pod \"202eee7d-442a-40bc-b23f-c58e09d72438\" (UID: \"202eee7d-442a-40bc-b23f-c58e09d72438\") " Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.324121 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btjr2\" (UniqueName: \"kubernetes.io/projected/202eee7d-442a-40bc-b23f-c58e09d72438-kube-api-access-btjr2\") pod \"202eee7d-442a-40bc-b23f-c58e09d72438\" (UID: \"202eee7d-442a-40bc-b23f-c58e09d72438\") " Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.324194 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/202eee7d-442a-40bc-b23f-c58e09d72438-utilities\") pod \"202eee7d-442a-40bc-b23f-c58e09d72438\" (UID: \"202eee7d-442a-40bc-b23f-c58e09d72438\") " Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.326155 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/202eee7d-442a-40bc-b23f-c58e09d72438-utilities" (OuterVolumeSpecName: "utilities") pod "202eee7d-442a-40bc-b23f-c58e09d72438" (UID: "202eee7d-442a-40bc-b23f-c58e09d72438"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.334547 4895 generic.go:334] "Generic (PLEG): container finished" podID="202eee7d-442a-40bc-b23f-c58e09d72438" containerID="74c7e4b28a555dc2cda85b95296201b488371b8bc30d6445b8360a97789dad18" exitCode=0 Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.334597 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2g8pz" event={"ID":"202eee7d-442a-40bc-b23f-c58e09d72438","Type":"ContainerDied","Data":"74c7e4b28a555dc2cda85b95296201b488371b8bc30d6445b8360a97789dad18"} Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.334624 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2g8pz" event={"ID":"202eee7d-442a-40bc-b23f-c58e09d72438","Type":"ContainerDied","Data":"360cefd73f147cac887fb233f5dc97cda4d7716b66b4cf00ddcf7a06250015ba"} Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.334644 4895 scope.go:117] "RemoveContainer" containerID="74c7e4b28a555dc2cda85b95296201b488371b8bc30d6445b8360a97789dad18" Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.335034 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2g8pz" Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.336782 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/202eee7d-442a-40bc-b23f-c58e09d72438-kube-api-access-btjr2" (OuterVolumeSpecName: "kube-api-access-btjr2") pod "202eee7d-442a-40bc-b23f-c58e09d72438" (UID: "202eee7d-442a-40bc-b23f-c58e09d72438"). InnerVolumeSpecName "kube-api-access-btjr2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.346316 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/202eee7d-442a-40bc-b23f-c58e09d72438-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "202eee7d-442a-40bc-b23f-c58e09d72438" (UID: "202eee7d-442a-40bc-b23f-c58e09d72438"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.386450 4895 scope.go:117] "RemoveContainer" containerID="00be0792cadd18051427916f7fb4d1ec3f09b8fc792e823b8b7f9775986e544d" Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.406281 4895 scope.go:117] "RemoveContainer" containerID="1b020fc9c05cd8b6534a3cb5cd1ba351bceca327505734b1baa34fb0667c4c8c" Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.425779 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/202eee7d-442a-40bc-b23f-c58e09d72438-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.425825 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btjr2\" (UniqueName: \"kubernetes.io/projected/202eee7d-442a-40bc-b23f-c58e09d72438-kube-api-access-btjr2\") on node \"crc\" DevicePath \"\"" Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.425841 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/202eee7d-442a-40bc-b23f-c58e09d72438-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.437559 4895 scope.go:117] "RemoveContainer" containerID="74c7e4b28a555dc2cda85b95296201b488371b8bc30d6445b8360a97789dad18" Dec 06 07:43:20 crc kubenswrapper[4895]: E1206 07:43:20.438181 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74c7e4b28a555dc2cda85b95296201b488371b8bc30d6445b8360a97789dad18\": container with ID starting with 74c7e4b28a555dc2cda85b95296201b488371b8bc30d6445b8360a97789dad18 not found: ID does not exist" containerID="74c7e4b28a555dc2cda85b95296201b488371b8bc30d6445b8360a97789dad18" Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.438262 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74c7e4b28a555dc2cda85b95296201b488371b8bc30d6445b8360a97789dad18"} err="failed to get container status \"74c7e4b28a555dc2cda85b95296201b488371b8bc30d6445b8360a97789dad18\": rpc error: code = NotFound desc = could not find container \"74c7e4b28a555dc2cda85b95296201b488371b8bc30d6445b8360a97789dad18\": container with ID starting with 74c7e4b28a555dc2cda85b95296201b488371b8bc30d6445b8360a97789dad18 not found: ID does not exist" Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.438302 4895 scope.go:117] "RemoveContainer" containerID="00be0792cadd18051427916f7fb4d1ec3f09b8fc792e823b8b7f9775986e544d" Dec 06 07:43:20 crc kubenswrapper[4895]: E1206 07:43:20.438751 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00be0792cadd18051427916f7fb4d1ec3f09b8fc792e823b8b7f9775986e544d\": container with ID starting with 00be0792cadd18051427916f7fb4d1ec3f09b8fc792e823b8b7f9775986e544d not found: ID does not exist" containerID="00be0792cadd18051427916f7fb4d1ec3f09b8fc792e823b8b7f9775986e544d" Dec 
06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.438787 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00be0792cadd18051427916f7fb4d1ec3f09b8fc792e823b8b7f9775986e544d"} err="failed to get container status \"00be0792cadd18051427916f7fb4d1ec3f09b8fc792e823b8b7f9775986e544d\": rpc error: code = NotFound desc = could not find container \"00be0792cadd18051427916f7fb4d1ec3f09b8fc792e823b8b7f9775986e544d\": container with ID starting with 00be0792cadd18051427916f7fb4d1ec3f09b8fc792e823b8b7f9775986e544d not found: ID does not exist" Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.438814 4895 scope.go:117] "RemoveContainer" containerID="1b020fc9c05cd8b6534a3cb5cd1ba351bceca327505734b1baa34fb0667c4c8c" Dec 06 07:43:20 crc kubenswrapper[4895]: E1206 07:43:20.439046 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b020fc9c05cd8b6534a3cb5cd1ba351bceca327505734b1baa34fb0667c4c8c\": container with ID starting with 1b020fc9c05cd8b6534a3cb5cd1ba351bceca327505734b1baa34fb0667c4c8c not found: ID does not exist" containerID="1b020fc9c05cd8b6534a3cb5cd1ba351bceca327505734b1baa34fb0667c4c8c" Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.439074 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b020fc9c05cd8b6534a3cb5cd1ba351bceca327505734b1baa34fb0667c4c8c"} err="failed to get container status \"1b020fc9c05cd8b6534a3cb5cd1ba351bceca327505734b1baa34fb0667c4c8c\": rpc error: code = NotFound desc = could not find container \"1b020fc9c05cd8b6534a3cb5cd1ba351bceca327505734b1baa34fb0667c4c8c\": container with ID starting with 1b020fc9c05cd8b6534a3cb5cd1ba351bceca327505734b1baa34fb0667c4c8c not found: ID does not exist" Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.682307 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2g8pz"] Dec 06 07:43:20 crc kubenswrapper[4895]: I1206 07:43:20.690800 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2g8pz"] Dec 06 07:43:22 crc kubenswrapper[4895]: I1206 07:43:22.079355 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="202eee7d-442a-40bc-b23f-c58e09d72438" path="/var/lib/kubelet/pods/202eee7d-442a-40bc-b23f-c58e09d72438/volumes" Dec 06 07:43:29 crc kubenswrapper[4895]: I1206 07:43:29.696169 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:43:29 crc kubenswrapper[4895]: I1206 07:43:29.696832 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:43:59 crc kubenswrapper[4895]: I1206 07:43:59.696154 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:43:59 crc 
kubenswrapper[4895]: I1206 07:43:59.696692 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:43:59 crc kubenswrapper[4895]: I1206 07:43:59.696744 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 07:43:59 crc kubenswrapper[4895]: I1206 07:43:59.697353 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c2b6e6c75d1d84ff08cb8413d94c82b0c80969613bd7308732f7efcd65804eee"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:43:59 crc kubenswrapper[4895]: I1206 07:43:59.697417 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://c2b6e6c75d1d84ff08cb8413d94c82b0c80969613bd7308732f7efcd65804eee" gracePeriod=600 Dec 06 07:43:59 crc kubenswrapper[4895]: E1206 07:43:59.879218 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9200f6d1_bc88_4065_9985_8c6a6387404f.slice/crio-c2b6e6c75d1d84ff08cb8413d94c82b0c80969613bd7308732f7efcd65804eee.scope\": RecentStats: unable to find data in memory cache]" Dec 06 07:44:01 crc kubenswrapper[4895]: I1206 07:44:01.671561 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="c2b6e6c75d1d84ff08cb8413d94c82b0c80969613bd7308732f7efcd65804eee" exitCode=0 Dec 06 07:44:01 crc kubenswrapper[4895]: I1206 07:44:01.671609 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"c2b6e6c75d1d84ff08cb8413d94c82b0c80969613bd7308732f7efcd65804eee"} Dec 06 07:44:01 crc kubenswrapper[4895]: I1206 07:44:01.673110 4895 scope.go:117] "RemoveContainer" containerID="aecbeb3e6f0a6f97a5bf4592d1bc16719ef07b5e9642ecb29c901d07621e9d02" Dec 06 07:44:02 crc kubenswrapper[4895]: I1206 07:44:02.681843 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212"} Dec 06 07:45:00 crc kubenswrapper[4895]: I1206 07:45:00.149911 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8"] Dec 06 07:45:00 crc kubenswrapper[4895]: E1206 07:45:00.150723 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202eee7d-442a-40bc-b23f-c58e09d72438" containerName="registry-server" Dec 06 07:45:00 crc kubenswrapper[4895]: I1206 07:45:00.150738 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="202eee7d-442a-40bc-b23f-c58e09d72438" containerName="registry-server" Dec 06 07:45:00 crc 
kubenswrapper[4895]: E1206 07:45:00.150751 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202eee7d-442a-40bc-b23f-c58e09d72438" containerName="extract-content" Dec 06 07:45:00 crc kubenswrapper[4895]: I1206 07:45:00.150758 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="202eee7d-442a-40bc-b23f-c58e09d72438" containerName="extract-content" Dec 06 07:45:00 crc kubenswrapper[4895]: E1206 07:45:00.150780 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202eee7d-442a-40bc-b23f-c58e09d72438" containerName="extract-utilities" Dec 06 07:45:00 crc kubenswrapper[4895]: I1206 07:45:00.150785 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="202eee7d-442a-40bc-b23f-c58e09d72438" containerName="extract-utilities" Dec 06 07:45:00 crc kubenswrapper[4895]: I1206 07:45:00.150950 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="202eee7d-442a-40bc-b23f-c58e09d72438" containerName="registry-server" Dec 06 07:45:00 crc kubenswrapper[4895]: I1206 07:45:00.151438 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8" Dec 06 07:45:00 crc kubenswrapper[4895]: I1206 07:45:00.156233 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 07:45:00 crc kubenswrapper[4895]: I1206 07:45:00.156439 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 07:45:00 crc kubenswrapper[4895]: I1206 07:45:00.161281 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8"] Dec 06 07:45:00 crc kubenswrapper[4895]: I1206 07:45:00.352406 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqgh9\" (UniqueName: \"kubernetes.io/projected/86538cf2-c14f-4b70-9e3f-da1ea7c31973-kube-api-access-lqgh9\") pod \"collect-profiles-29416785-z4mb8\" (UID: \"86538cf2-c14f-4b70-9e3f-da1ea7c31973\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8" Dec 06 07:45:00 crc kubenswrapper[4895]: I1206 07:45:00.352463 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86538cf2-c14f-4b70-9e3f-da1ea7c31973-secret-volume\") pod \"collect-profiles-29416785-z4mb8\" (UID: \"86538cf2-c14f-4b70-9e3f-da1ea7c31973\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8" Dec 06 07:45:00 crc kubenswrapper[4895]: I1206 07:45:00.352555 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86538cf2-c14f-4b70-9e3f-da1ea7c31973-config-volume\") pod \"collect-profiles-29416785-z4mb8\" (UID: \"86538cf2-c14f-4b70-9e3f-da1ea7c31973\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8" Dec 06 07:45:00 crc kubenswrapper[4895]: I1206 07:45:00.454083 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86538cf2-c14f-4b70-9e3f-da1ea7c31973-config-volume\") pod \"collect-profiles-29416785-z4mb8\" (UID: \"86538cf2-c14f-4b70-9e3f-da1ea7c31973\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8" Dec 06 
07:45:00 crc kubenswrapper[4895]: I1206 07:45:00.454501 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqgh9\" (UniqueName: \"kubernetes.io/projected/86538cf2-c14f-4b70-9e3f-da1ea7c31973-kube-api-access-lqgh9\") pod \"collect-profiles-29416785-z4mb8\" (UID: \"86538cf2-c14f-4b70-9e3f-da1ea7c31973\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8" Dec 06 07:45:00 crc kubenswrapper[4895]: I1206 07:45:00.454611 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86538cf2-c14f-4b70-9e3f-da1ea7c31973-secret-volume\") pod \"collect-profiles-29416785-z4mb8\" (UID: \"86538cf2-c14f-4b70-9e3f-da1ea7c31973\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8" Dec 06 07:45:00 crc kubenswrapper[4895]: I1206 07:45:00.455410 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86538cf2-c14f-4b70-9e3f-da1ea7c31973-config-volume\") pod \"collect-profiles-29416785-z4mb8\" (UID: \"86538cf2-c14f-4b70-9e3f-da1ea7c31973\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8" Dec 06 07:45:00 crc kubenswrapper[4895]: I1206 07:45:00.462734 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86538cf2-c14f-4b70-9e3f-da1ea7c31973-secret-volume\") pod \"collect-profiles-29416785-z4mb8\" (UID: \"86538cf2-c14f-4b70-9e3f-da1ea7c31973\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8" Dec 06 07:45:00 crc kubenswrapper[4895]: I1206 07:45:00.475350 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqgh9\" (UniqueName: \"kubernetes.io/projected/86538cf2-c14f-4b70-9e3f-da1ea7c31973-kube-api-access-lqgh9\") pod \"collect-profiles-29416785-z4mb8\" (UID: \"86538cf2-c14f-4b70-9e3f-da1ea7c31973\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8" Dec 06 07:45:00 crc kubenswrapper[4895]: I1206 07:45:00.773926 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8" Dec 06 07:45:01 crc kubenswrapper[4895]: I1206 07:45:01.184419 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8"] Dec 06 07:45:02 crc kubenswrapper[4895]: I1206 07:45:02.166305 4895 generic.go:334] "Generic (PLEG): container finished" podID="86538cf2-c14f-4b70-9e3f-da1ea7c31973" containerID="35101e5e13f77236bac4c32d0a4a241e3986abc588821873c071cde69d99c654" exitCode=0 Dec 06 07:45:02 crc kubenswrapper[4895]: I1206 07:45:02.166366 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8" event={"ID":"86538cf2-c14f-4b70-9e3f-da1ea7c31973","Type":"ContainerDied","Data":"35101e5e13f77236bac4c32d0a4a241e3986abc588821873c071cde69d99c654"} Dec 06 07:45:02 crc kubenswrapper[4895]: I1206 07:45:02.166709 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8" event={"ID":"86538cf2-c14f-4b70-9e3f-da1ea7c31973","Type":"ContainerStarted","Data":"9d69881dd817ca49f3c52ec3853aa79e1d85458c6dbcf7480448a258a405a7f7"} Dec 06 07:45:03 crc kubenswrapper[4895]: I1206 07:45:03.429964 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8" Dec 06 07:45:03 crc kubenswrapper[4895]: I1206 07:45:03.615363 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86538cf2-c14f-4b70-9e3f-da1ea7c31973-config-volume\") pod \"86538cf2-c14f-4b70-9e3f-da1ea7c31973\" (UID: \"86538cf2-c14f-4b70-9e3f-da1ea7c31973\") " Dec 06 07:45:03 crc kubenswrapper[4895]: I1206 07:45:03.615420 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86538cf2-c14f-4b70-9e3f-da1ea7c31973-secret-volume\") pod \"86538cf2-c14f-4b70-9e3f-da1ea7c31973\" (UID: \"86538cf2-c14f-4b70-9e3f-da1ea7c31973\") " Dec 06 07:45:03 crc kubenswrapper[4895]: I1206 07:45:03.615569 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqgh9\" (UniqueName: \"kubernetes.io/projected/86538cf2-c14f-4b70-9e3f-da1ea7c31973-kube-api-access-lqgh9\") pod \"86538cf2-c14f-4b70-9e3f-da1ea7c31973\" (UID: \"86538cf2-c14f-4b70-9e3f-da1ea7c31973\") " Dec 06 07:45:03 crc kubenswrapper[4895]: I1206 07:45:03.616271 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86538cf2-c14f-4b70-9e3f-da1ea7c31973-config-volume" (OuterVolumeSpecName: "config-volume") pod "86538cf2-c14f-4b70-9e3f-da1ea7c31973" (UID: "86538cf2-c14f-4b70-9e3f-da1ea7c31973"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:45:03 crc kubenswrapper[4895]: I1206 07:45:03.621437 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86538cf2-c14f-4b70-9e3f-da1ea7c31973-kube-api-access-lqgh9" (OuterVolumeSpecName: "kube-api-access-lqgh9") pod "86538cf2-c14f-4b70-9e3f-da1ea7c31973" (UID: "86538cf2-c14f-4b70-9e3f-da1ea7c31973"). InnerVolumeSpecName "kube-api-access-lqgh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:45:03 crc kubenswrapper[4895]: I1206 07:45:03.621506 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86538cf2-c14f-4b70-9e3f-da1ea7c31973-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "86538cf2-c14f-4b70-9e3f-da1ea7c31973" (UID: "86538cf2-c14f-4b70-9e3f-da1ea7c31973"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:45:03 crc kubenswrapper[4895]: I1206 07:45:03.717593 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86538cf2-c14f-4b70-9e3f-da1ea7c31973-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 07:45:03 crc kubenswrapper[4895]: I1206 07:45:03.717650 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86538cf2-c14f-4b70-9e3f-da1ea7c31973-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 07:45:03 crc kubenswrapper[4895]: I1206 07:45:03.717664 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqgh9\" (UniqueName: \"kubernetes.io/projected/86538cf2-c14f-4b70-9e3f-da1ea7c31973-kube-api-access-lqgh9\") on node \"crc\" DevicePath \"\"" Dec 06 07:45:04 crc kubenswrapper[4895]: I1206 07:45:04.184881 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8" event={"ID":"86538cf2-c14f-4b70-9e3f-da1ea7c31973","Type":"ContainerDied","Data":"9d69881dd817ca49f3c52ec3853aa79e1d85458c6dbcf7480448a258a405a7f7"} Dec 06 07:45:04 crc kubenswrapper[4895]: I1206 07:45:04.184920 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d69881dd817ca49f3c52ec3853aa79e1d85458c6dbcf7480448a258a405a7f7" Dec 06 07:45:04 crc kubenswrapper[4895]: I1206 07:45:04.184936 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8" Dec 06 07:45:04 crc kubenswrapper[4895]: I1206 07:45:04.507136 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q"] Dec 06 07:45:04 crc kubenswrapper[4895]: I1206 07:45:04.512153 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416740-nsn6q"] Dec 06 07:45:06 crc kubenswrapper[4895]: I1206 07:45:06.062905 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="651dca05-74b9-4632-8e1f-6ceb648d2b23" path="/var/lib/kubelet/pods/651dca05-74b9-4632-8e1f-6ceb648d2b23/volumes" Dec 06 07:45:12 crc kubenswrapper[4895]: I1206 07:45:12.167185 4895 scope.go:117] "RemoveContainer" containerID="d4ad5fe27a9a153b6e7e2252e9bfd84e15490a5eeb914fda265608901b678a5a" Dec 06 07:46:29 crc kubenswrapper[4895]: I1206 07:46:29.695835 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:46:29 crc kubenswrapper[4895]: I1206 07:46:29.696434 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:46:59 crc kubenswrapper[4895]: I1206 07:46:59.695461 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:46:59 crc kubenswrapper[4895]: I1206 07:46:59.696101 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:47:29 crc kubenswrapper[4895]: I1206 07:47:29.695574 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:47:29 crc kubenswrapper[4895]: I1206 07:47:29.696220 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:47:29 crc kubenswrapper[4895]: I1206 07:47:29.696277 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 07:47:29 crc kubenswrapper[4895]: I1206 07:47:29.696915 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
Dec 06 07:46:29 crc kubenswrapper[4895]: I1206 07:46:29.695835 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 07:46:29 crc kubenswrapper[4895]: I1206 07:46:29.696434 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 07:46:59 crc kubenswrapper[4895]: I1206 07:46:59.695461 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 07:46:59 crc kubenswrapper[4895]: I1206 07:46:59.696101 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 07:47:29 crc kubenswrapper[4895]: I1206 07:47:29.695574 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 07:47:29 crc kubenswrapper[4895]: I1206 07:47:29.696220 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 07:47:29 crc kubenswrapper[4895]: I1206 07:47:29.696277 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2"
Dec 06 07:47:29 crc kubenswrapper[4895]: I1206 07:47:29.696915 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 07:47:29 crc kubenswrapper[4895]: I1206 07:47:29.696983 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" gracePeriod=600
Dec 06 07:47:29 crc kubenswrapper[4895]: E1206 07:47:29.833436 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 07:47:30 crc kubenswrapper[4895]: I1206 07:47:30.555834 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" exitCode=0
Dec 06 07:47:30 crc kubenswrapper[4895]: I1206 07:47:30.555930 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212"}
Dec 06 07:47:30 crc kubenswrapper[4895]: I1206 07:47:30.556219 4895 scope.go:117] "RemoveContainer" containerID="c2b6e6c75d1d84ff08cb8413d94c82b0c80969613bd7308732f7efcd65804eee"
Dec 06 07:47:30 crc kubenswrapper[4895]: I1206 07:47:30.557062 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212"
Dec 06 07:47:30 crc kubenswrapper[4895]: E1206 07:47:30.557721 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
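This is the second liveness-triggered kill of machine-config-daemon in under four minutes, and this time the restart is refused: the kubelet's per-container restart backoff (start at 10s, double per failure, cap at 5m, the well-known kubelet defaults assumed here) has already reached its cap for this container, so every sync attempt logs the same back-off 5m0s CrashLoopBackOff error. The RemoveContainer/"Error syncing pod" pairs that recur every 10-15 seconds below are rejected sync attempts, not actual restarts; the backoff only resets after a sufficiently long healthy run, which this container never achieves. A sketch of the capped doubling:

    package main

    import (
        "fmt"
        "time"
    )

    // backoff models the capped exponential restart delay the kubelet applies
    // to crashing containers: start at 10s, double per restart, cap at 5m.
    func backoff(restarts int) time.Duration {
        d := 10 * time.Second
        for i := 0; i < restarts; i++ {
            d *= 2
            if d >= 5*time.Minute {
                return 5 * time.Minute
            }
        }
        return d
    }

    func main() {
        for r := 0; r <= 6; r++ {
            fmt.Printf("restart %d -> wait %s\n", r, backoff(r))
        }
        // restart 5 and beyond print 5m0s, matching "back-off 5m0s" above.
    }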
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2qck" Dec 06 07:47:35 crc kubenswrapper[4895]: I1206 07:47:35.136650 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q2qck"] Dec 06 07:47:35 crc kubenswrapper[4895]: I1206 07:47:35.264549 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9b83364-19bb-4925-9187-20b492bc81d3-utilities\") pod \"redhat-operators-q2qck\" (UID: \"c9b83364-19bb-4925-9187-20b492bc81d3\") " pod="openshift-marketplace/redhat-operators-q2qck" Dec 06 07:47:35 crc kubenswrapper[4895]: I1206 07:47:35.264632 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snjkr\" (UniqueName: \"kubernetes.io/projected/c9b83364-19bb-4925-9187-20b492bc81d3-kube-api-access-snjkr\") pod \"redhat-operators-q2qck\" (UID: \"c9b83364-19bb-4925-9187-20b492bc81d3\") " pod="openshift-marketplace/redhat-operators-q2qck" Dec 06 07:47:35 crc kubenswrapper[4895]: I1206 07:47:35.264709 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9b83364-19bb-4925-9187-20b492bc81d3-catalog-content\") pod \"redhat-operators-q2qck\" (UID: \"c9b83364-19bb-4925-9187-20b492bc81d3\") " pod="openshift-marketplace/redhat-operators-q2qck" Dec 06 07:47:35 crc kubenswrapper[4895]: I1206 07:47:35.366530 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9b83364-19bb-4925-9187-20b492bc81d3-utilities\") pod \"redhat-operators-q2qck\" (UID: \"c9b83364-19bb-4925-9187-20b492bc81d3\") " pod="openshift-marketplace/redhat-operators-q2qck" Dec 06 07:47:35 crc kubenswrapper[4895]: I1206 07:47:35.366588 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snjkr\" (UniqueName: \"kubernetes.io/projected/c9b83364-19bb-4925-9187-20b492bc81d3-kube-api-access-snjkr\") pod \"redhat-operators-q2qck\" (UID: \"c9b83364-19bb-4925-9187-20b492bc81d3\") " pod="openshift-marketplace/redhat-operators-q2qck" Dec 06 07:47:35 crc kubenswrapper[4895]: I1206 07:47:35.366677 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9b83364-19bb-4925-9187-20b492bc81d3-catalog-content\") pod \"redhat-operators-q2qck\" (UID: \"c9b83364-19bb-4925-9187-20b492bc81d3\") " pod="openshift-marketplace/redhat-operators-q2qck" Dec 06 07:47:35 crc kubenswrapper[4895]: I1206 07:47:35.367238 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9b83364-19bb-4925-9187-20b492bc81d3-utilities\") pod \"redhat-operators-q2qck\" (UID: \"c9b83364-19bb-4925-9187-20b492bc81d3\") " pod="openshift-marketplace/redhat-operators-q2qck" Dec 06 07:47:35 crc kubenswrapper[4895]: I1206 07:47:35.367277 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9b83364-19bb-4925-9187-20b492bc81d3-catalog-content\") pod \"redhat-operators-q2qck\" (UID: \"c9b83364-19bb-4925-9187-20b492bc81d3\") " pod="openshift-marketplace/redhat-operators-q2qck" Dec 06 07:47:35 crc kubenswrapper[4895]: I1206 07:47:35.385989 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-snjkr\" (UniqueName: \"kubernetes.io/projected/c9b83364-19bb-4925-9187-20b492bc81d3-kube-api-access-snjkr\") pod \"redhat-operators-q2qck\" (UID: \"c9b83364-19bb-4925-9187-20b492bc81d3\") " pod="openshift-marketplace/redhat-operators-q2qck" Dec 06 07:47:35 crc kubenswrapper[4895]: I1206 07:47:35.446747 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q2qck" Dec 06 07:47:35 crc kubenswrapper[4895]: I1206 07:47:35.872015 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q2qck"] Dec 06 07:47:36 crc kubenswrapper[4895]: I1206 07:47:36.605298 4895 generic.go:334] "Generic (PLEG): container finished" podID="c9b83364-19bb-4925-9187-20b492bc81d3" containerID="9d428d30b8eac09b2c7eeea3dfb7d5461005118a3f6a28ba66a1835305431186" exitCode=0 Dec 06 07:47:36 crc kubenswrapper[4895]: I1206 07:47:36.605354 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2qck" event={"ID":"c9b83364-19bb-4925-9187-20b492bc81d3","Type":"ContainerDied","Data":"9d428d30b8eac09b2c7eeea3dfb7d5461005118a3f6a28ba66a1835305431186"} Dec 06 07:47:36 crc kubenswrapper[4895]: I1206 07:47:36.605379 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2qck" event={"ID":"c9b83364-19bb-4925-9187-20b492bc81d3","Type":"ContainerStarted","Data":"b76b7c44a3a53171684e36307f123bde8e0fd57c56c004563fd755bf2de03cb8"} Dec 06 07:47:37 crc kubenswrapper[4895]: I1206 07:47:37.614002 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2qck" event={"ID":"c9b83364-19bb-4925-9187-20b492bc81d3","Type":"ContainerStarted","Data":"7c4c001a8fa49f16d60a084ca31723f84d5b287086890d19df1e3f3677dab547"} Dec 06 07:47:38 crc kubenswrapper[4895]: I1206 07:47:38.624500 4895 generic.go:334] "Generic (PLEG): container finished" podID="c9b83364-19bb-4925-9187-20b492bc81d3" containerID="7c4c001a8fa49f16d60a084ca31723f84d5b287086890d19df1e3f3677dab547" exitCode=0 Dec 06 07:47:38 crc kubenswrapper[4895]: I1206 07:47:38.624554 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2qck" event={"ID":"c9b83364-19bb-4925-9187-20b492bc81d3","Type":"ContainerDied","Data":"7c4c001a8fa49f16d60a084ca31723f84d5b287086890d19df1e3f3677dab547"} Dec 06 07:47:39 crc kubenswrapper[4895]: I1206 07:47:39.635363 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2qck" event={"ID":"c9b83364-19bb-4925-9187-20b492bc81d3","Type":"ContainerStarted","Data":"9246423e8efdbc2db5e043117b394c439c3dfec370693b8f7813a13ed64f55d2"} Dec 06 07:47:39 crc kubenswrapper[4895]: I1206 07:47:39.660202 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q2qck" podStartSLOduration=2.187443696 podStartE2EDuration="4.660184507s" podCreationTimestamp="2025-12-06 07:47:35 +0000 UTC" firstStartedPulling="2025-12-06 07:47:36.606876501 +0000 UTC m=+3019.008265371" lastFinishedPulling="2025-12-06 07:47:39.079617292 +0000 UTC m=+3021.481006182" observedRunningTime="2025-12-06 07:47:39.65538258 +0000 UTC m=+3022.056771450" watchObservedRunningTime="2025-12-06 07:47:39.660184507 +0000 UTC m=+3022.061573377" Dec 06 07:47:43 crc kubenswrapper[4895]: I1206 07:47:43.051122 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 
Dec 06 07:47:43 crc kubenswrapper[4895]: I1206 07:47:43.051122 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212"
Dec 06 07:47:43 crc kubenswrapper[4895]: E1206 07:47:43.051633 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 07:47:45 crc kubenswrapper[4895]: I1206 07:47:45.447118 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q2qck"
Dec 06 07:47:45 crc kubenswrapper[4895]: I1206 07:47:45.447581 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q2qck"
Dec 06 07:47:45 crc kubenswrapper[4895]: I1206 07:47:45.516683 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q2qck"
Dec 06 07:47:45 crc kubenswrapper[4895]: I1206 07:47:45.732986 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q2qck"
Dec 06 07:47:45 crc kubenswrapper[4895]: I1206 07:47:45.784266 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q2qck"]
Dec 06 07:47:47 crc kubenswrapper[4895]: I1206 07:47:47.695802 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q2qck" podUID="c9b83364-19bb-4925-9187-20b492bc81d3" containerName="registry-server" containerID="cri-o://9246423e8efdbc2db5e043117b394c439c3dfec370693b8f7813a13ed64f55d2" gracePeriod=2
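Unlike the machine-config-daemon kill earlier (gracePeriod=600), this registry-server is stopped with gracePeriod=2: the runtime sends SIGTERM, waits up to the grace period, then escalates to SIGKILL; the ContainerDied event only surfaces a few seconds later at 07:47:50. A minimal sketch of that term-then-kill pattern, as illustrative Go against a local Unix process rather than the CRI implementation:

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // killWithGrace sends SIGTERM, waits up to the grace period for the
    // process to exit, then escalates to SIGKILL, which is the contract
    // behind "Killing container with a grace period" (gracePeriod=2 above).
    func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        cmd.Process.Signal(syscall.SIGTERM)
        select {
        case <-done: // exited within the grace period
        case <-time.After(grace):
            cmd.Process.Kill() // hard stop after the deadline
            <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        start := time.Now()
        killWithGrace(cmd, 2*time.Second)
        fmt.Println("terminated after", time.Since(start).Round(time.Millisecond))
    }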
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2qck" Dec 06 07:47:50 crc kubenswrapper[4895]: I1206 07:47:50.887425 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9b83364-19bb-4925-9187-20b492bc81d3-utilities\") pod \"c9b83364-19bb-4925-9187-20b492bc81d3\" (UID: \"c9b83364-19bb-4925-9187-20b492bc81d3\") " Dec 06 07:47:50 crc kubenswrapper[4895]: I1206 07:47:50.887583 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9b83364-19bb-4925-9187-20b492bc81d3-catalog-content\") pod \"c9b83364-19bb-4925-9187-20b492bc81d3\" (UID: \"c9b83364-19bb-4925-9187-20b492bc81d3\") " Dec 06 07:47:50 crc kubenswrapper[4895]: I1206 07:47:50.887635 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snjkr\" (UniqueName: \"kubernetes.io/projected/c9b83364-19bb-4925-9187-20b492bc81d3-kube-api-access-snjkr\") pod \"c9b83364-19bb-4925-9187-20b492bc81d3\" (UID: \"c9b83364-19bb-4925-9187-20b492bc81d3\") " Dec 06 07:47:50 crc kubenswrapper[4895]: I1206 07:47:50.888271 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9b83364-19bb-4925-9187-20b492bc81d3-utilities" (OuterVolumeSpecName: "utilities") pod "c9b83364-19bb-4925-9187-20b492bc81d3" (UID: "c9b83364-19bb-4925-9187-20b492bc81d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:47:50 crc kubenswrapper[4895]: I1206 07:47:50.892128 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b83364-19bb-4925-9187-20b492bc81d3-kube-api-access-snjkr" (OuterVolumeSpecName: "kube-api-access-snjkr") pod "c9b83364-19bb-4925-9187-20b492bc81d3" (UID: "c9b83364-19bb-4925-9187-20b492bc81d3"). InnerVolumeSpecName "kube-api-access-snjkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:47:50 crc kubenswrapper[4895]: I1206 07:47:50.989156 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9b83364-19bb-4925-9187-20b492bc81d3-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:47:50 crc kubenswrapper[4895]: I1206 07:47:50.989194 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snjkr\" (UniqueName: \"kubernetes.io/projected/c9b83364-19bb-4925-9187-20b492bc81d3-kube-api-access-snjkr\") on node \"crc\" DevicePath \"\"" Dec 06 07:47:51 crc kubenswrapper[4895]: I1206 07:47:51.009979 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9b83364-19bb-4925-9187-20b492bc81d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9b83364-19bb-4925-9187-20b492bc81d3" (UID: "c9b83364-19bb-4925-9187-20b492bc81d3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:47:51 crc kubenswrapper[4895]: I1206 07:47:51.103871 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9b83364-19bb-4925-9187-20b492bc81d3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:47:51 crc kubenswrapper[4895]: I1206 07:47:51.749336 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2qck" event={"ID":"c9b83364-19bb-4925-9187-20b492bc81d3","Type":"ContainerDied","Data":"b76b7c44a3a53171684e36307f123bde8e0fd57c56c004563fd755bf2de03cb8"} Dec 06 07:47:51 crc kubenswrapper[4895]: I1206 07:47:51.749386 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q2qck" Dec 06 07:47:51 crc kubenswrapper[4895]: I1206 07:47:51.749426 4895 scope.go:117] "RemoveContainer" containerID="9246423e8efdbc2db5e043117b394c439c3dfec370693b8f7813a13ed64f55d2" Dec 06 07:47:51 crc kubenswrapper[4895]: I1206 07:47:51.789605 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q2qck"] Dec 06 07:47:51 crc kubenswrapper[4895]: I1206 07:47:51.789821 4895 scope.go:117] "RemoveContainer" containerID="7c4c001a8fa49f16d60a084ca31723f84d5b287086890d19df1e3f3677dab547" Dec 06 07:47:51 crc kubenswrapper[4895]: I1206 07:47:51.801261 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q2qck"] Dec 06 07:47:51 crc kubenswrapper[4895]: I1206 07:47:51.818903 4895 scope.go:117] "RemoveContainer" containerID="9d428d30b8eac09b2c7eeea3dfb7d5461005118a3f6a28ba66a1835305431186" Dec 06 07:47:52 crc kubenswrapper[4895]: I1206 07:47:52.060653 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9b83364-19bb-4925-9187-20b492bc81d3" path="/var/lib/kubelet/pods/c9b83364-19bb-4925-9187-20b492bc81d3/volumes" Dec 06 07:47:55 crc kubenswrapper[4895]: I1206 07:47:55.050102 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:47:55 crc kubenswrapper[4895]: E1206 07:47:55.050536 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:48:07 crc kubenswrapper[4895]: I1206 07:48:07.050808 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:48:07 crc kubenswrapper[4895]: E1206 07:48:07.051832 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:48:20 crc kubenswrapper[4895]: I1206 07:48:20.052834 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:48:20 crc kubenswrapper[4895]: E1206 07:48:20.054925 
4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:48:35 crc kubenswrapper[4895]: I1206 07:48:35.050310 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:48:35 crc kubenswrapper[4895]: E1206 07:48:35.051135 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:48:50 crc kubenswrapper[4895]: I1206 07:48:50.050996 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:48:50 crc kubenswrapper[4895]: E1206 07:48:50.051976 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:49:03 crc kubenswrapper[4895]: I1206 07:49:03.050939 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:49:03 crc kubenswrapper[4895]: E1206 07:49:03.051882 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:49:15 crc kubenswrapper[4895]: I1206 07:49:15.050507 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:49:15 crc kubenswrapper[4895]: E1206 07:49:15.051217 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:49:28 crc kubenswrapper[4895]: I1206 07:49:28.056143 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:49:28 crc kubenswrapper[4895]: E1206 07:49:28.056933 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:49:39 crc kubenswrapper[4895]: I1206 07:49:39.050805 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:49:39 crc kubenswrapper[4895]: E1206 07:49:39.052014 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:49:52 crc kubenswrapper[4895]: I1206 07:49:52.053643 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:49:52 crc kubenswrapper[4895]: E1206 07:49:52.054404 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:50:06 crc kubenswrapper[4895]: I1206 07:50:06.052201 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:50:06 crc kubenswrapper[4895]: E1206 07:50:06.053064 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:50:21 crc kubenswrapper[4895]: I1206 07:50:21.050623 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:50:21 crc kubenswrapper[4895]: E1206 07:50:21.051499 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:50:33 crc kubenswrapper[4895]: I1206 07:50:33.051880 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:50:33 crc kubenswrapper[4895]: E1206 07:50:33.053007 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:50:44 crc kubenswrapper[4895]: I1206 07:50:44.051655 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:50:44 crc kubenswrapper[4895]: E1206 07:50:44.053119 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:50:59 crc kubenswrapper[4895]: I1206 07:50:59.051294 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:50:59 crc kubenswrapper[4895]: E1206 07:50:59.052361 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:51:13 crc kubenswrapper[4895]: I1206 07:51:13.051526 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:51:13 crc kubenswrapper[4895]: E1206 07:51:13.052296 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:51:24 crc kubenswrapper[4895]: I1206 07:51:24.413214 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bqdt2"] Dec 06 07:51:24 crc kubenswrapper[4895]: E1206 07:51:24.414184 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b83364-19bb-4925-9187-20b492bc81d3" containerName="extract-utilities" Dec 06 07:51:24 crc kubenswrapper[4895]: I1206 07:51:24.414202 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b83364-19bb-4925-9187-20b492bc81d3" containerName="extract-utilities" Dec 06 07:51:24 crc kubenswrapper[4895]: E1206 07:51:24.414225 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b83364-19bb-4925-9187-20b492bc81d3" containerName="registry-server" Dec 06 07:51:24 crc kubenswrapper[4895]: I1206 07:51:24.414232 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b83364-19bb-4925-9187-20b492bc81d3" containerName="registry-server" Dec 06 07:51:24 crc kubenswrapper[4895]: E1206 07:51:24.414245 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b83364-19bb-4925-9187-20b492bc81d3" containerName="extract-content" Dec 06 07:51:24 crc kubenswrapper[4895]: I1206 07:51:24.414251 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b83364-19bb-4925-9187-20b492bc81d3" containerName="extract-content" Dec 06 07:51:24 crc kubenswrapper[4895]: I1206 
07:51:24.414453 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b83364-19bb-4925-9187-20b492bc81d3" containerName="registry-server" Dec 06 07:51:24 crc kubenswrapper[4895]: I1206 07:51:24.417626 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bqdt2" Dec 06 07:51:24 crc kubenswrapper[4895]: I1206 07:51:24.427521 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bqdt2"] Dec 06 07:51:24 crc kubenswrapper[4895]: I1206 07:51:24.569680 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d128803f-6796-44a4-a889-8fc23c60aa56-catalog-content\") pod \"certified-operators-bqdt2\" (UID: \"d128803f-6796-44a4-a889-8fc23c60aa56\") " pod="openshift-marketplace/certified-operators-bqdt2" Dec 06 07:51:24 crc kubenswrapper[4895]: I1206 07:51:24.570047 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d128803f-6796-44a4-a889-8fc23c60aa56-utilities\") pod \"certified-operators-bqdt2\" (UID: \"d128803f-6796-44a4-a889-8fc23c60aa56\") " pod="openshift-marketplace/certified-operators-bqdt2" Dec 06 07:51:24 crc kubenswrapper[4895]: I1206 07:51:24.570190 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w5w9\" (UniqueName: \"kubernetes.io/projected/d128803f-6796-44a4-a889-8fc23c60aa56-kube-api-access-6w5w9\") pod \"certified-operators-bqdt2\" (UID: \"d128803f-6796-44a4-a889-8fc23c60aa56\") " pod="openshift-marketplace/certified-operators-bqdt2" Dec 06 07:51:24 crc kubenswrapper[4895]: I1206 07:51:24.671787 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d128803f-6796-44a4-a889-8fc23c60aa56-catalog-content\") pod \"certified-operators-bqdt2\" (UID: \"d128803f-6796-44a4-a889-8fc23c60aa56\") " pod="openshift-marketplace/certified-operators-bqdt2" Dec 06 07:51:24 crc kubenswrapper[4895]: I1206 07:51:24.671844 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d128803f-6796-44a4-a889-8fc23c60aa56-utilities\") pod \"certified-operators-bqdt2\" (UID: \"d128803f-6796-44a4-a889-8fc23c60aa56\") " pod="openshift-marketplace/certified-operators-bqdt2" Dec 06 07:51:24 crc kubenswrapper[4895]: I1206 07:51:24.671878 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w5w9\" (UniqueName: \"kubernetes.io/projected/d128803f-6796-44a4-a889-8fc23c60aa56-kube-api-access-6w5w9\") pod \"certified-operators-bqdt2\" (UID: \"d128803f-6796-44a4-a889-8fc23c60aa56\") " pod="openshift-marketplace/certified-operators-bqdt2" Dec 06 07:51:24 crc kubenswrapper[4895]: I1206 07:51:24.672352 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d128803f-6796-44a4-a889-8fc23c60aa56-catalog-content\") pod \"certified-operators-bqdt2\" (UID: \"d128803f-6796-44a4-a889-8fc23c60aa56\") " pod="openshift-marketplace/certified-operators-bqdt2" Dec 06 07:51:24 crc kubenswrapper[4895]: I1206 07:51:24.672384 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d128803f-6796-44a4-a889-8fc23c60aa56-utilities\") pod \"certified-operators-bqdt2\" (UID: \"d128803f-6796-44a4-a889-8fc23c60aa56\") " pod="openshift-marketplace/certified-operators-bqdt2" Dec 06 07:51:24 crc kubenswrapper[4895]: I1206 07:51:24.691622 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w5w9\" (UniqueName: \"kubernetes.io/projected/d128803f-6796-44a4-a889-8fc23c60aa56-kube-api-access-6w5w9\") pod \"certified-operators-bqdt2\" (UID: \"d128803f-6796-44a4-a889-8fc23c60aa56\") " pod="openshift-marketplace/certified-operators-bqdt2" Dec 06 07:51:24 crc kubenswrapper[4895]: I1206 07:51:24.748147 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bqdt2" Dec 06 07:51:25 crc kubenswrapper[4895]: I1206 07:51:25.050498 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:51:25 crc kubenswrapper[4895]: E1206 07:51:25.051081 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:51:25 crc kubenswrapper[4895]: I1206 07:51:25.268993 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bqdt2"] Dec 06 07:51:26 crc kubenswrapper[4895]: I1206 07:51:26.206877 4895 generic.go:334] "Generic (PLEG): container finished" podID="d128803f-6796-44a4-a889-8fc23c60aa56" containerID="3ef8363e525145117135655cf7980c0599d5e19a4facea9345117819da1a03f5" exitCode=0 Dec 06 07:51:26 crc kubenswrapper[4895]: I1206 07:51:26.207011 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bqdt2" event={"ID":"d128803f-6796-44a4-a889-8fc23c60aa56","Type":"ContainerDied","Data":"3ef8363e525145117135655cf7980c0599d5e19a4facea9345117819da1a03f5"} Dec 06 07:51:26 crc kubenswrapper[4895]: I1206 07:51:26.207753 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bqdt2" event={"ID":"d128803f-6796-44a4-a889-8fc23c60aa56","Type":"ContainerStarted","Data":"68189c6ffdaf16bd541f7eb64bd4b2698597c842cb67e71378232a4f9bcaf9db"} Dec 06 07:51:26 crc kubenswrapper[4895]: I1206 07:51:26.214432 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 07:51:27 crc kubenswrapper[4895]: I1206 07:51:27.226108 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bqdt2" event={"ID":"d128803f-6796-44a4-a889-8fc23c60aa56","Type":"ContainerStarted","Data":"cfe9fbc8331a7219939f635ebc2a07a423ec8bc40dd9c8b12a483bf4da70b6d1"} Dec 06 07:51:28 crc kubenswrapper[4895]: I1206 07:51:28.236924 4895 generic.go:334] "Generic (PLEG): container finished" podID="d128803f-6796-44a4-a889-8fc23c60aa56" containerID="cfe9fbc8331a7219939f635ebc2a07a423ec8bc40dd9c8b12a483bf4da70b6d1" exitCode=0 Dec 06 07:51:28 crc kubenswrapper[4895]: I1206 07:51:28.237022 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bqdt2" 
event={"ID":"d128803f-6796-44a4-a889-8fc23c60aa56","Type":"ContainerDied","Data":"cfe9fbc8331a7219939f635ebc2a07a423ec8bc40dd9c8b12a483bf4da70b6d1"} Dec 06 07:51:29 crc kubenswrapper[4895]: I1206 07:51:29.250611 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bqdt2" event={"ID":"d128803f-6796-44a4-a889-8fc23c60aa56","Type":"ContainerStarted","Data":"f0a6356ff910fa8735e00540836b1aafa60233c9b1a400c7c40cb9958d922aad"} Dec 06 07:51:29 crc kubenswrapper[4895]: I1206 07:51:29.271902 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bqdt2" podStartSLOduration=2.864649081 podStartE2EDuration="5.271876257s" podCreationTimestamp="2025-12-06 07:51:24 +0000 UTC" firstStartedPulling="2025-12-06 07:51:26.212712064 +0000 UTC m=+3248.614100934" lastFinishedPulling="2025-12-06 07:51:28.61993924 +0000 UTC m=+3251.021328110" observedRunningTime="2025-12-06 07:51:29.267080058 +0000 UTC m=+3251.668468928" watchObservedRunningTime="2025-12-06 07:51:29.271876257 +0000 UTC m=+3251.673265127" Dec 06 07:51:34 crc kubenswrapper[4895]: I1206 07:51:34.748645 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bqdt2" Dec 06 07:51:34 crc kubenswrapper[4895]: I1206 07:51:34.749432 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bqdt2" Dec 06 07:51:34 crc kubenswrapper[4895]: I1206 07:51:34.821052 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bqdt2" Dec 06 07:51:35 crc kubenswrapper[4895]: I1206 07:51:35.375226 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bqdt2" Dec 06 07:51:35 crc kubenswrapper[4895]: I1206 07:51:35.437028 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bqdt2"] Dec 06 07:51:37 crc kubenswrapper[4895]: I1206 07:51:37.338275 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bqdt2" podUID="d128803f-6796-44a4-a889-8fc23c60aa56" containerName="registry-server" containerID="cri-o://f0a6356ff910fa8735e00540836b1aafa60233c9b1a400c7c40cb9958d922aad" gracePeriod=2 Dec 06 07:51:38 crc kubenswrapper[4895]: I1206 07:51:38.798318 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bqdt2" Dec 06 07:51:38 crc kubenswrapper[4895]: I1206 07:51:38.813093 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d128803f-6796-44a4-a889-8fc23c60aa56-utilities\") pod \"d128803f-6796-44a4-a889-8fc23c60aa56\" (UID: \"d128803f-6796-44a4-a889-8fc23c60aa56\") " Dec 06 07:51:38 crc kubenswrapper[4895]: I1206 07:51:38.813161 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w5w9\" (UniqueName: \"kubernetes.io/projected/d128803f-6796-44a4-a889-8fc23c60aa56-kube-api-access-6w5w9\") pod \"d128803f-6796-44a4-a889-8fc23c60aa56\" (UID: \"d128803f-6796-44a4-a889-8fc23c60aa56\") " Dec 06 07:51:38 crc kubenswrapper[4895]: I1206 07:51:38.813179 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d128803f-6796-44a4-a889-8fc23c60aa56-catalog-content\") pod \"d128803f-6796-44a4-a889-8fc23c60aa56\" (UID: \"d128803f-6796-44a4-a889-8fc23c60aa56\") " Dec 06 07:51:38 crc kubenswrapper[4895]: I1206 07:51:38.814912 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d128803f-6796-44a4-a889-8fc23c60aa56-utilities" (OuterVolumeSpecName: "utilities") pod "d128803f-6796-44a4-a889-8fc23c60aa56" (UID: "d128803f-6796-44a4-a889-8fc23c60aa56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:51:38 crc kubenswrapper[4895]: I1206 07:51:38.822735 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d128803f-6796-44a4-a889-8fc23c60aa56-kube-api-access-6w5w9" (OuterVolumeSpecName: "kube-api-access-6w5w9") pod "d128803f-6796-44a4-a889-8fc23c60aa56" (UID: "d128803f-6796-44a4-a889-8fc23c60aa56"). InnerVolumeSpecName "kube-api-access-6w5w9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:51:38 crc kubenswrapper[4895]: I1206 07:51:38.879368 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d128803f-6796-44a4-a889-8fc23c60aa56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d128803f-6796-44a4-a889-8fc23c60aa56" (UID: "d128803f-6796-44a4-a889-8fc23c60aa56"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:51:38 crc kubenswrapper[4895]: I1206 07:51:38.915181 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d128803f-6796-44a4-a889-8fc23c60aa56-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:51:38 crc kubenswrapper[4895]: I1206 07:51:38.915275 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6w5w9\" (UniqueName: \"kubernetes.io/projected/d128803f-6796-44a4-a889-8fc23c60aa56-kube-api-access-6w5w9\") on node \"crc\" DevicePath \"\"" Dec 06 07:51:38 crc kubenswrapper[4895]: I1206 07:51:38.915291 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d128803f-6796-44a4-a889-8fc23c60aa56-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:51:39 crc kubenswrapper[4895]: I1206 07:51:39.053183 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:51:39 crc kubenswrapper[4895]: E1206 07:51:39.053499 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:51:39 crc kubenswrapper[4895]: I1206 07:51:39.369013 4895 generic.go:334] "Generic (PLEG): container finished" podID="d128803f-6796-44a4-a889-8fc23c60aa56" containerID="f0a6356ff910fa8735e00540836b1aafa60233c9b1a400c7c40cb9958d922aad" exitCode=0 Dec 06 07:51:39 crc kubenswrapper[4895]: I1206 07:51:39.369187 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bqdt2" Dec 06 07:51:39 crc kubenswrapper[4895]: I1206 07:51:39.369199 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bqdt2" event={"ID":"d128803f-6796-44a4-a889-8fc23c60aa56","Type":"ContainerDied","Data":"f0a6356ff910fa8735e00540836b1aafa60233c9b1a400c7c40cb9958d922aad"} Dec 06 07:51:39 crc kubenswrapper[4895]: I1206 07:51:39.369300 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bqdt2" event={"ID":"d128803f-6796-44a4-a889-8fc23c60aa56","Type":"ContainerDied","Data":"68189c6ffdaf16bd541f7eb64bd4b2698597c842cb67e71378232a4f9bcaf9db"} Dec 06 07:51:39 crc kubenswrapper[4895]: I1206 07:51:39.369376 4895 scope.go:117] "RemoveContainer" containerID="f0a6356ff910fa8735e00540836b1aafa60233c9b1a400c7c40cb9958d922aad" Dec 06 07:51:39 crc kubenswrapper[4895]: I1206 07:51:39.395620 4895 scope.go:117] "RemoveContainer" containerID="cfe9fbc8331a7219939f635ebc2a07a423ec8bc40dd9c8b12a483bf4da70b6d1" Dec 06 07:51:39 crc kubenswrapper[4895]: I1206 07:51:39.418879 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bqdt2"] Dec 06 07:51:39 crc kubenswrapper[4895]: I1206 07:51:39.428323 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bqdt2"] Dec 06 07:51:39 crc kubenswrapper[4895]: I1206 07:51:39.440238 4895 scope.go:117] "RemoveContainer" containerID="3ef8363e525145117135655cf7980c0599d5e19a4facea9345117819da1a03f5" Dec 06 07:51:39 crc kubenswrapper[4895]: I1206 07:51:39.463275 4895 scope.go:117] "RemoveContainer" containerID="f0a6356ff910fa8735e00540836b1aafa60233c9b1a400c7c40cb9958d922aad" Dec 06 07:51:39 crc kubenswrapper[4895]: E1206 07:51:39.463926 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0a6356ff910fa8735e00540836b1aafa60233c9b1a400c7c40cb9958d922aad\": container with ID starting with f0a6356ff910fa8735e00540836b1aafa60233c9b1a400c7c40cb9958d922aad not found: ID does not exist" containerID="f0a6356ff910fa8735e00540836b1aafa60233c9b1a400c7c40cb9958d922aad" Dec 06 07:51:39 crc kubenswrapper[4895]: I1206 07:51:39.463971 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0a6356ff910fa8735e00540836b1aafa60233c9b1a400c7c40cb9958d922aad"} err="failed to get container status \"f0a6356ff910fa8735e00540836b1aafa60233c9b1a400c7c40cb9958d922aad\": rpc error: code = NotFound desc = could not find container \"f0a6356ff910fa8735e00540836b1aafa60233c9b1a400c7c40cb9958d922aad\": container with ID starting with f0a6356ff910fa8735e00540836b1aafa60233c9b1a400c7c40cb9958d922aad not found: ID does not exist" Dec 06 07:51:39 crc kubenswrapper[4895]: I1206 07:51:39.464000 4895 scope.go:117] "RemoveContainer" containerID="cfe9fbc8331a7219939f635ebc2a07a423ec8bc40dd9c8b12a483bf4da70b6d1" Dec 06 07:51:39 crc kubenswrapper[4895]: E1206 07:51:39.464385 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfe9fbc8331a7219939f635ebc2a07a423ec8bc40dd9c8b12a483bf4da70b6d1\": container with ID starting with cfe9fbc8331a7219939f635ebc2a07a423ec8bc40dd9c8b12a483bf4da70b6d1 not found: ID does not exist" containerID="cfe9fbc8331a7219939f635ebc2a07a423ec8bc40dd9c8b12a483bf4da70b6d1" Dec 06 07:51:39 crc kubenswrapper[4895]: I1206 07:51:39.464452 4895 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfe9fbc8331a7219939f635ebc2a07a423ec8bc40dd9c8b12a483bf4da70b6d1"} err="failed to get container status \"cfe9fbc8331a7219939f635ebc2a07a423ec8bc40dd9c8b12a483bf4da70b6d1\": rpc error: code = NotFound desc = could not find container \"cfe9fbc8331a7219939f635ebc2a07a423ec8bc40dd9c8b12a483bf4da70b6d1\": container with ID starting with cfe9fbc8331a7219939f635ebc2a07a423ec8bc40dd9c8b12a483bf4da70b6d1 not found: ID does not exist" Dec 06 07:51:39 crc kubenswrapper[4895]: I1206 07:51:39.464613 4895 scope.go:117] "RemoveContainer" containerID="3ef8363e525145117135655cf7980c0599d5e19a4facea9345117819da1a03f5" Dec 06 07:51:39 crc kubenswrapper[4895]: E1206 07:51:39.465087 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ef8363e525145117135655cf7980c0599d5e19a4facea9345117819da1a03f5\": container with ID starting with 3ef8363e525145117135655cf7980c0599d5e19a4facea9345117819da1a03f5 not found: ID does not exist" containerID="3ef8363e525145117135655cf7980c0599d5e19a4facea9345117819da1a03f5" Dec 06 07:51:39 crc kubenswrapper[4895]: I1206 07:51:39.465144 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ef8363e525145117135655cf7980c0599d5e19a4facea9345117819da1a03f5"} err="failed to get container status \"3ef8363e525145117135655cf7980c0599d5e19a4facea9345117819da1a03f5\": rpc error: code = NotFound desc = could not find container \"3ef8363e525145117135655cf7980c0599d5e19a4facea9345117819da1a03f5\": container with ID starting with 3ef8363e525145117135655cf7980c0599d5e19a4facea9345117819da1a03f5 not found: ID does not exist" Dec 06 07:51:40 crc kubenswrapper[4895]: I1206 07:51:40.061326 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d128803f-6796-44a4-a889-8fc23c60aa56" path="/var/lib/kubelet/pods/d128803f-6796-44a4-a889-8fc23c60aa56/volumes" Dec 06 07:51:53 crc kubenswrapper[4895]: I1206 07:51:53.050582 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:51:53 crc kubenswrapper[4895]: E1206 07:51:53.051307 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:52:07 crc kubenswrapper[4895]: I1206 07:52:07.050550 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:52:07 crc kubenswrapper[4895]: E1206 07:52:07.051335 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:52:22 crc kubenswrapper[4895]: I1206 07:52:22.051167 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:52:22 crc 
kubenswrapper[4895]: E1206 07:52:22.051892 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:52:37 crc kubenswrapper[4895]: I1206 07:52:37.050998 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:52:37 crc kubenswrapper[4895]: I1206 07:52:37.902900 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"2128eb14a3e6530a4aa7b78e4bb4a28ab8af0804357cc372a71bd6af4e366a7a"} Dec 06 07:54:59 crc kubenswrapper[4895]: I1206 07:54:59.695896 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:54:59 crc kubenswrapper[4895]: I1206 07:54:59.696628 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:55:28 crc kubenswrapper[4895]: I1206 07:55:28.353196 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5rfj4"] Dec 06 07:55:28 crc kubenswrapper[4895]: E1206 07:55:28.354010 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d128803f-6796-44a4-a889-8fc23c60aa56" containerName="extract-utilities" Dec 06 07:55:28 crc kubenswrapper[4895]: I1206 07:55:28.354025 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d128803f-6796-44a4-a889-8fc23c60aa56" containerName="extract-utilities" Dec 06 07:55:28 crc kubenswrapper[4895]: E1206 07:55:28.354045 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d128803f-6796-44a4-a889-8fc23c60aa56" containerName="registry-server" Dec 06 07:55:28 crc kubenswrapper[4895]: I1206 07:55:28.354053 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d128803f-6796-44a4-a889-8fc23c60aa56" containerName="registry-server" Dec 06 07:55:28 crc kubenswrapper[4895]: E1206 07:55:28.354063 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d128803f-6796-44a4-a889-8fc23c60aa56" containerName="extract-content" Dec 06 07:55:28 crc kubenswrapper[4895]: I1206 07:55:28.354072 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d128803f-6796-44a4-a889-8fc23c60aa56" containerName="extract-content" Dec 06 07:55:28 crc kubenswrapper[4895]: I1206 07:55:28.355013 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d128803f-6796-44a4-a889-8fc23c60aa56" containerName="registry-server" Dec 06 07:55:28 crc kubenswrapper[4895]: I1206 07:55:28.357292 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rfj4" Dec 06 07:55:28 crc kubenswrapper[4895]: I1206 07:55:28.360246 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rfj4"] Dec 06 07:55:28 crc kubenswrapper[4895]: I1206 07:55:28.444924 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7zs7\" (UniqueName: \"kubernetes.io/projected/e93d5a4b-5e85-4504-b31c-0f4b69dd6e40-kube-api-access-b7zs7\") pod \"redhat-marketplace-5rfj4\" (UID: \"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40\") " pod="openshift-marketplace/redhat-marketplace-5rfj4" Dec 06 07:55:28 crc kubenswrapper[4895]: I1206 07:55:28.445014 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e93d5a4b-5e85-4504-b31c-0f4b69dd6e40-utilities\") pod \"redhat-marketplace-5rfj4\" (UID: \"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40\") " pod="openshift-marketplace/redhat-marketplace-5rfj4" Dec 06 07:55:28 crc kubenswrapper[4895]: I1206 07:55:28.445125 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e93d5a4b-5e85-4504-b31c-0f4b69dd6e40-catalog-content\") pod \"redhat-marketplace-5rfj4\" (UID: \"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40\") " pod="openshift-marketplace/redhat-marketplace-5rfj4" Dec 06 07:55:28 crc kubenswrapper[4895]: I1206 07:55:28.546925 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e93d5a4b-5e85-4504-b31c-0f4b69dd6e40-catalog-content\") pod \"redhat-marketplace-5rfj4\" (UID: \"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40\") " pod="openshift-marketplace/redhat-marketplace-5rfj4" Dec 06 07:55:28 crc kubenswrapper[4895]: I1206 07:55:28.547028 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7zs7\" (UniqueName: \"kubernetes.io/projected/e93d5a4b-5e85-4504-b31c-0f4b69dd6e40-kube-api-access-b7zs7\") pod \"redhat-marketplace-5rfj4\" (UID: \"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40\") " pod="openshift-marketplace/redhat-marketplace-5rfj4" Dec 06 07:55:28 crc kubenswrapper[4895]: I1206 07:55:28.547070 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e93d5a4b-5e85-4504-b31c-0f4b69dd6e40-utilities\") pod \"redhat-marketplace-5rfj4\" (UID: \"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40\") " pod="openshift-marketplace/redhat-marketplace-5rfj4" Dec 06 07:55:28 crc kubenswrapper[4895]: I1206 07:55:28.547440 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e93d5a4b-5e85-4504-b31c-0f4b69dd6e40-catalog-content\") pod \"redhat-marketplace-5rfj4\" (UID: \"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40\") " pod="openshift-marketplace/redhat-marketplace-5rfj4" Dec 06 07:55:28 crc kubenswrapper[4895]: I1206 07:55:28.547573 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e93d5a4b-5e85-4504-b31c-0f4b69dd6e40-utilities\") pod \"redhat-marketplace-5rfj4\" (UID: \"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40\") " pod="openshift-marketplace/redhat-marketplace-5rfj4" Dec 06 07:55:28 crc kubenswrapper[4895]: I1206 07:55:28.577791 4895 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-b7zs7\" (UniqueName: \"kubernetes.io/projected/e93d5a4b-5e85-4504-b31c-0f4b69dd6e40-kube-api-access-b7zs7\") pod \"redhat-marketplace-5rfj4\" (UID: \"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40\") " pod="openshift-marketplace/redhat-marketplace-5rfj4" Dec 06 07:55:28 crc kubenswrapper[4895]: I1206 07:55:28.686966 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rfj4" Dec 06 07:55:29 crc kubenswrapper[4895]: I1206 07:55:29.165990 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rfj4"] Dec 06 07:55:29 crc kubenswrapper[4895]: I1206 07:55:29.401895 4895 generic.go:334] "Generic (PLEG): container finished" podID="e93d5a4b-5e85-4504-b31c-0f4b69dd6e40" containerID="d43dff593bfe0237012d2574759361d032cff34243863c8ae0e27900ad37a644" exitCode=0 Dec 06 07:55:29 crc kubenswrapper[4895]: I1206 07:55:29.401999 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rfj4" event={"ID":"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40","Type":"ContainerDied","Data":"d43dff593bfe0237012d2574759361d032cff34243863c8ae0e27900ad37a644"} Dec 06 07:55:29 crc kubenswrapper[4895]: I1206 07:55:29.403137 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rfj4" event={"ID":"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40","Type":"ContainerStarted","Data":"0ab9278e8690ed6c3e9deb8c4e77a17845876e794255648ba14796ab55169952"} Dec 06 07:55:29 crc kubenswrapper[4895]: I1206 07:55:29.695717 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:55:29 crc kubenswrapper[4895]: I1206 07:55:29.696196 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:55:30 crc kubenswrapper[4895]: I1206 07:55:30.414166 4895 generic.go:334] "Generic (PLEG): container finished" podID="e93d5a4b-5e85-4504-b31c-0f4b69dd6e40" containerID="bce9679d4e6ecaf8d87ae1fa051142c03cf60467d959eb92177b1421bb3256a7" exitCode=0 Dec 06 07:55:30 crc kubenswrapper[4895]: I1206 07:55:30.414232 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rfj4" event={"ID":"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40","Type":"ContainerDied","Data":"bce9679d4e6ecaf8d87ae1fa051142c03cf60467d959eb92177b1421bb3256a7"} Dec 06 07:55:31 crc kubenswrapper[4895]: I1206 07:55:31.427287 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rfj4" event={"ID":"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40","Type":"ContainerStarted","Data":"ce70a1edf5e656a50608305d135d1eed2ff9e925a72ed6121c1b280f7827c4a8"} Dec 06 07:55:31 crc kubenswrapper[4895]: I1206 07:55:31.469410 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5rfj4" podStartSLOduration=2.025727582 podStartE2EDuration="3.469384056s" podCreationTimestamp="2025-12-06 07:55:28 +0000 UTC" firstStartedPulling="2025-12-06 07:55:29.404898519 
+0000 UTC m=+3491.806287389" lastFinishedPulling="2025-12-06 07:55:30.848554993 +0000 UTC m=+3493.249943863" observedRunningTime="2025-12-06 07:55:31.460308562 +0000 UTC m=+3493.861697432" watchObservedRunningTime="2025-12-06 07:55:31.469384056 +0000 UTC m=+3493.870772946" Dec 06 07:55:31 crc kubenswrapper[4895]: I1206 07:55:31.533237 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d976v"] Dec 06 07:55:31 crc kubenswrapper[4895]: I1206 07:55:31.535030 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d976v" Dec 06 07:55:31 crc kubenswrapper[4895]: I1206 07:55:31.594648 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d976v"] Dec 06 07:55:31 crc kubenswrapper[4895]: I1206 07:55:31.694604 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff79381-4161-4082-a5e9-b5855a3319fd-utilities\") pod \"community-operators-d976v\" (UID: \"1ff79381-4161-4082-a5e9-b5855a3319fd\") " pod="openshift-marketplace/community-operators-d976v" Dec 06 07:55:31 crc kubenswrapper[4895]: I1206 07:55:31.694677 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvdkd\" (UniqueName: \"kubernetes.io/projected/1ff79381-4161-4082-a5e9-b5855a3319fd-kube-api-access-lvdkd\") pod \"community-operators-d976v\" (UID: \"1ff79381-4161-4082-a5e9-b5855a3319fd\") " pod="openshift-marketplace/community-operators-d976v" Dec 06 07:55:31 crc kubenswrapper[4895]: I1206 07:55:31.694792 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff79381-4161-4082-a5e9-b5855a3319fd-catalog-content\") pod \"community-operators-d976v\" (UID: \"1ff79381-4161-4082-a5e9-b5855a3319fd\") " pod="openshift-marketplace/community-operators-d976v" Dec 06 07:55:31 crc kubenswrapper[4895]: I1206 07:55:31.795677 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff79381-4161-4082-a5e9-b5855a3319fd-utilities\") pod \"community-operators-d976v\" (UID: \"1ff79381-4161-4082-a5e9-b5855a3319fd\") " pod="openshift-marketplace/community-operators-d976v" Dec 06 07:55:31 crc kubenswrapper[4895]: I1206 07:55:31.795962 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvdkd\" (UniqueName: \"kubernetes.io/projected/1ff79381-4161-4082-a5e9-b5855a3319fd-kube-api-access-lvdkd\") pod \"community-operators-d976v\" (UID: \"1ff79381-4161-4082-a5e9-b5855a3319fd\") " pod="openshift-marketplace/community-operators-d976v" Dec 06 07:55:31 crc kubenswrapper[4895]: I1206 07:55:31.796131 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff79381-4161-4082-a5e9-b5855a3319fd-catalog-content\") pod \"community-operators-d976v\" (UID: \"1ff79381-4161-4082-a5e9-b5855a3319fd\") " pod="openshift-marketplace/community-operators-d976v" Dec 06 07:55:31 crc kubenswrapper[4895]: I1206 07:55:31.796637 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff79381-4161-4082-a5e9-b5855a3319fd-catalog-content\") pod \"community-operators-d976v\" (UID: 
\"1ff79381-4161-4082-a5e9-b5855a3319fd\") " pod="openshift-marketplace/community-operators-d976v" Dec 06 07:55:31 crc kubenswrapper[4895]: I1206 07:55:31.796940 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff79381-4161-4082-a5e9-b5855a3319fd-utilities\") pod \"community-operators-d976v\" (UID: \"1ff79381-4161-4082-a5e9-b5855a3319fd\") " pod="openshift-marketplace/community-operators-d976v" Dec 06 07:55:31 crc kubenswrapper[4895]: I1206 07:55:31.827127 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvdkd\" (UniqueName: \"kubernetes.io/projected/1ff79381-4161-4082-a5e9-b5855a3319fd-kube-api-access-lvdkd\") pod \"community-operators-d976v\" (UID: \"1ff79381-4161-4082-a5e9-b5855a3319fd\") " pod="openshift-marketplace/community-operators-d976v" Dec 06 07:55:31 crc kubenswrapper[4895]: I1206 07:55:31.903239 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d976v" Dec 06 07:55:32 crc kubenswrapper[4895]: I1206 07:55:32.380369 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d976v"] Dec 06 07:55:32 crc kubenswrapper[4895]: I1206 07:55:32.436119 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d976v" event={"ID":"1ff79381-4161-4082-a5e9-b5855a3319fd","Type":"ContainerStarted","Data":"2f248560450a3d514617c9c59e6b20d792108587ae17d3339d998316aaad53e7"} Dec 06 07:55:33 crc kubenswrapper[4895]: I1206 07:55:33.447536 4895 generic.go:334] "Generic (PLEG): container finished" podID="1ff79381-4161-4082-a5e9-b5855a3319fd" containerID="c3d35ef05ceb35a0598b0ea0d7c88ac04f2f36f80d70a56da36eacbbc1c19337" exitCode=0 Dec 06 07:55:33 crc kubenswrapper[4895]: I1206 07:55:33.447610 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d976v" event={"ID":"1ff79381-4161-4082-a5e9-b5855a3319fd","Type":"ContainerDied","Data":"c3d35ef05ceb35a0598b0ea0d7c88ac04f2f36f80d70a56da36eacbbc1c19337"} Dec 06 07:55:34 crc kubenswrapper[4895]: I1206 07:55:34.459985 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d976v" event={"ID":"1ff79381-4161-4082-a5e9-b5855a3319fd","Type":"ContainerStarted","Data":"0e232390f5877a73c221d4dc563df3e31a07957fccde9de9e135b2f8b6a51b6e"} Dec 06 07:55:35 crc kubenswrapper[4895]: I1206 07:55:35.476090 4895 generic.go:334] "Generic (PLEG): container finished" podID="1ff79381-4161-4082-a5e9-b5855a3319fd" containerID="0e232390f5877a73c221d4dc563df3e31a07957fccde9de9e135b2f8b6a51b6e" exitCode=0 Dec 06 07:55:35 crc kubenswrapper[4895]: I1206 07:55:35.476161 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d976v" event={"ID":"1ff79381-4161-4082-a5e9-b5855a3319fd","Type":"ContainerDied","Data":"0e232390f5877a73c221d4dc563df3e31a07957fccde9de9e135b2f8b6a51b6e"} Dec 06 07:55:36 crc kubenswrapper[4895]: I1206 07:55:36.491819 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d976v" event={"ID":"1ff79381-4161-4082-a5e9-b5855a3319fd","Type":"ContainerStarted","Data":"e842720de23e1831c7619acc6b82937a9553502cb22ec8d66dd2aa4901bf2a53"} Dec 06 07:55:36 crc kubenswrapper[4895]: I1206 07:55:36.528017 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-d976v" podStartSLOduration=2.886653115 podStartE2EDuration="5.527995883s" podCreationTimestamp="2025-12-06 07:55:31 +0000 UTC" firstStartedPulling="2025-12-06 07:55:33.450323549 +0000 UTC m=+3495.851712419" lastFinishedPulling="2025-12-06 07:55:36.091666307 +0000 UTC m=+3498.493055187" observedRunningTime="2025-12-06 07:55:36.522126825 +0000 UTC m=+3498.923515725" watchObservedRunningTime="2025-12-06 07:55:36.527995883 +0000 UTC m=+3498.929384773" Dec 06 07:55:38 crc kubenswrapper[4895]: I1206 07:55:38.687528 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5rfj4" Dec 06 07:55:38 crc kubenswrapper[4895]: I1206 07:55:38.687581 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5rfj4" Dec 06 07:55:38 crc kubenswrapper[4895]: I1206 07:55:38.733688 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5rfj4" Dec 06 07:55:39 crc kubenswrapper[4895]: I1206 07:55:39.559312 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5rfj4" Dec 06 07:55:41 crc kubenswrapper[4895]: I1206 07:55:41.904462 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d976v" Dec 06 07:55:41 crc kubenswrapper[4895]: I1206 07:55:41.904588 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d976v" Dec 06 07:55:41 crc kubenswrapper[4895]: I1206 07:55:41.976047 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d976v" Dec 06 07:55:42 crc kubenswrapper[4895]: I1206 07:55:42.604528 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d976v" Dec 06 07:55:43 crc kubenswrapper[4895]: I1206 07:55:43.528858 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rfj4"] Dec 06 07:55:43 crc kubenswrapper[4895]: I1206 07:55:43.529193 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5rfj4" podUID="e93d5a4b-5e85-4504-b31c-0f4b69dd6e40" containerName="registry-server" containerID="cri-o://ce70a1edf5e656a50608305d135d1eed2ff9e925a72ed6121c1b280f7827c4a8" gracePeriod=2 Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.495323 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rfj4" Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.558645 4895 generic.go:334] "Generic (PLEG): container finished" podID="e93d5a4b-5e85-4504-b31c-0f4b69dd6e40" containerID="ce70a1edf5e656a50608305d135d1eed2ff9e925a72ed6121c1b280f7827c4a8" exitCode=0 Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.558701 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rfj4" event={"ID":"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40","Type":"ContainerDied","Data":"ce70a1edf5e656a50608305d135d1eed2ff9e925a72ed6121c1b280f7827c4a8"} Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.558733 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rfj4" event={"ID":"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40","Type":"ContainerDied","Data":"0ab9278e8690ed6c3e9deb8c4e77a17845876e794255648ba14796ab55169952"} Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.558756 4895 scope.go:117] "RemoveContainer" containerID="ce70a1edf5e656a50608305d135d1eed2ff9e925a72ed6121c1b280f7827c4a8" Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.558904 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rfj4" Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.578895 4895 scope.go:117] "RemoveContainer" containerID="bce9679d4e6ecaf8d87ae1fa051142c03cf60467d959eb92177b1421bb3256a7" Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.599921 4895 scope.go:117] "RemoveContainer" containerID="d43dff593bfe0237012d2574759361d032cff34243863c8ae0e27900ad37a644" Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.604088 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e93d5a4b-5e85-4504-b31c-0f4b69dd6e40-catalog-content\") pod \"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40\" (UID: \"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40\") " Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.604178 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e93d5a4b-5e85-4504-b31c-0f4b69dd6e40-utilities\") pod \"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40\" (UID: \"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40\") " Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.604268 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7zs7\" (UniqueName: \"kubernetes.io/projected/e93d5a4b-5e85-4504-b31c-0f4b69dd6e40-kube-api-access-b7zs7\") pod \"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40\" (UID: \"e93d5a4b-5e85-4504-b31c-0f4b69dd6e40\") " Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.606290 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e93d5a4b-5e85-4504-b31c-0f4b69dd6e40-utilities" (OuterVolumeSpecName: "utilities") pod "e93d5a4b-5e85-4504-b31c-0f4b69dd6e40" (UID: "e93d5a4b-5e85-4504-b31c-0f4b69dd6e40"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.610287 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e93d5a4b-5e85-4504-b31c-0f4b69dd6e40-kube-api-access-b7zs7" (OuterVolumeSpecName: "kube-api-access-b7zs7") pod "e93d5a4b-5e85-4504-b31c-0f4b69dd6e40" (UID: "e93d5a4b-5e85-4504-b31c-0f4b69dd6e40"). InnerVolumeSpecName "kube-api-access-b7zs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.628415 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e93d5a4b-5e85-4504-b31c-0f4b69dd6e40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e93d5a4b-5e85-4504-b31c-0f4b69dd6e40" (UID: "e93d5a4b-5e85-4504-b31c-0f4b69dd6e40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.651927 4895 scope.go:117] "RemoveContainer" containerID="ce70a1edf5e656a50608305d135d1eed2ff9e925a72ed6121c1b280f7827c4a8" Dec 06 07:55:44 crc kubenswrapper[4895]: E1206 07:55:44.652604 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce70a1edf5e656a50608305d135d1eed2ff9e925a72ed6121c1b280f7827c4a8\": container with ID starting with ce70a1edf5e656a50608305d135d1eed2ff9e925a72ed6121c1b280f7827c4a8 not found: ID does not exist" containerID="ce70a1edf5e656a50608305d135d1eed2ff9e925a72ed6121c1b280f7827c4a8" Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.652658 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce70a1edf5e656a50608305d135d1eed2ff9e925a72ed6121c1b280f7827c4a8"} err="failed to get container status \"ce70a1edf5e656a50608305d135d1eed2ff9e925a72ed6121c1b280f7827c4a8\": rpc error: code = NotFound desc = could not find container \"ce70a1edf5e656a50608305d135d1eed2ff9e925a72ed6121c1b280f7827c4a8\": container with ID starting with ce70a1edf5e656a50608305d135d1eed2ff9e925a72ed6121c1b280f7827c4a8 not found: ID does not exist" Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.652686 4895 scope.go:117] "RemoveContainer" containerID="bce9679d4e6ecaf8d87ae1fa051142c03cf60467d959eb92177b1421bb3256a7" Dec 06 07:55:44 crc kubenswrapper[4895]: E1206 07:55:44.653122 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bce9679d4e6ecaf8d87ae1fa051142c03cf60467d959eb92177b1421bb3256a7\": container with ID starting with bce9679d4e6ecaf8d87ae1fa051142c03cf60467d959eb92177b1421bb3256a7 not found: ID does not exist" containerID="bce9679d4e6ecaf8d87ae1fa051142c03cf60467d959eb92177b1421bb3256a7" Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.653149 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bce9679d4e6ecaf8d87ae1fa051142c03cf60467d959eb92177b1421bb3256a7"} err="failed to get container status \"bce9679d4e6ecaf8d87ae1fa051142c03cf60467d959eb92177b1421bb3256a7\": rpc error: code = NotFound desc = could not find container \"bce9679d4e6ecaf8d87ae1fa051142c03cf60467d959eb92177b1421bb3256a7\": container with ID starting with bce9679d4e6ecaf8d87ae1fa051142c03cf60467d959eb92177b1421bb3256a7 not found: ID does not exist" Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.653168 4895 scope.go:117] "RemoveContainer" 
containerID="d43dff593bfe0237012d2574759361d032cff34243863c8ae0e27900ad37a644" Dec 06 07:55:44 crc kubenswrapper[4895]: E1206 07:55:44.653742 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d43dff593bfe0237012d2574759361d032cff34243863c8ae0e27900ad37a644\": container with ID starting with d43dff593bfe0237012d2574759361d032cff34243863c8ae0e27900ad37a644 not found: ID does not exist" containerID="d43dff593bfe0237012d2574759361d032cff34243863c8ae0e27900ad37a644" Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.653808 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d43dff593bfe0237012d2574759361d032cff34243863c8ae0e27900ad37a644"} err="failed to get container status \"d43dff593bfe0237012d2574759361d032cff34243863c8ae0e27900ad37a644\": rpc error: code = NotFound desc = could not find container \"d43dff593bfe0237012d2574759361d032cff34243863c8ae0e27900ad37a644\": container with ID starting with d43dff593bfe0237012d2574759361d032cff34243863c8ae0e27900ad37a644 not found: ID does not exist" Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.705618 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e93d5a4b-5e85-4504-b31c-0f4b69dd6e40-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.705655 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e93d5a4b-5e85-4504-b31c-0f4b69dd6e40-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.705665 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7zs7\" (UniqueName: \"kubernetes.io/projected/e93d5a4b-5e85-4504-b31c-0f4b69dd6e40-kube-api-access-b7zs7\") on node \"crc\" DevicePath \"\"" Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.900086 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rfj4"] Dec 06 07:55:44 crc kubenswrapper[4895]: I1206 07:55:44.907403 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rfj4"] Dec 06 07:55:46 crc kubenswrapper[4895]: I1206 07:55:46.068184 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e93d5a4b-5e85-4504-b31c-0f4b69dd6e40" path="/var/lib/kubelet/pods/e93d5a4b-5e85-4504-b31c-0f4b69dd6e40/volumes" Dec 06 07:55:47 crc kubenswrapper[4895]: I1206 07:55:47.129322 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d976v"] Dec 06 07:55:47 crc kubenswrapper[4895]: I1206 07:55:47.129659 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d976v" podUID="1ff79381-4161-4082-a5e9-b5855a3319fd" containerName="registry-server" containerID="cri-o://e842720de23e1831c7619acc6b82937a9553502cb22ec8d66dd2aa4901bf2a53" gracePeriod=2 Dec 06 07:55:47 crc kubenswrapper[4895]: I1206 07:55:47.591725 4895 generic.go:334] "Generic (PLEG): container finished" podID="1ff79381-4161-4082-a5e9-b5855a3319fd" containerID="e842720de23e1831c7619acc6b82937a9553502cb22ec8d66dd2aa4901bf2a53" exitCode=0 Dec 06 07:55:47 crc kubenswrapper[4895]: I1206 07:55:47.591779 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d976v" 
event={"ID":"1ff79381-4161-4082-a5e9-b5855a3319fd","Type":"ContainerDied","Data":"e842720de23e1831c7619acc6b82937a9553502cb22ec8d66dd2aa4901bf2a53"} Dec 06 07:55:48 crc kubenswrapper[4895]: I1206 07:55:48.095811 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d976v" Dec 06 07:55:48 crc kubenswrapper[4895]: I1206 07:55:48.261244 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff79381-4161-4082-a5e9-b5855a3319fd-catalog-content\") pod \"1ff79381-4161-4082-a5e9-b5855a3319fd\" (UID: \"1ff79381-4161-4082-a5e9-b5855a3319fd\") " Dec 06 07:55:48 crc kubenswrapper[4895]: I1206 07:55:48.261402 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff79381-4161-4082-a5e9-b5855a3319fd-utilities\") pod \"1ff79381-4161-4082-a5e9-b5855a3319fd\" (UID: \"1ff79381-4161-4082-a5e9-b5855a3319fd\") " Dec 06 07:55:48 crc kubenswrapper[4895]: I1206 07:55:48.261574 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvdkd\" (UniqueName: \"kubernetes.io/projected/1ff79381-4161-4082-a5e9-b5855a3319fd-kube-api-access-lvdkd\") pod \"1ff79381-4161-4082-a5e9-b5855a3319fd\" (UID: \"1ff79381-4161-4082-a5e9-b5855a3319fd\") " Dec 06 07:55:48 crc kubenswrapper[4895]: I1206 07:55:48.263060 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ff79381-4161-4082-a5e9-b5855a3319fd-utilities" (OuterVolumeSpecName: "utilities") pod "1ff79381-4161-4082-a5e9-b5855a3319fd" (UID: "1ff79381-4161-4082-a5e9-b5855a3319fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:55:48 crc kubenswrapper[4895]: I1206 07:55:48.271944 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ff79381-4161-4082-a5e9-b5855a3319fd-kube-api-access-lvdkd" (OuterVolumeSpecName: "kube-api-access-lvdkd") pod "1ff79381-4161-4082-a5e9-b5855a3319fd" (UID: "1ff79381-4161-4082-a5e9-b5855a3319fd"). InnerVolumeSpecName "kube-api-access-lvdkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:55:48 crc kubenswrapper[4895]: I1206 07:55:48.311065 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ff79381-4161-4082-a5e9-b5855a3319fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ff79381-4161-4082-a5e9-b5855a3319fd" (UID: "1ff79381-4161-4082-a5e9-b5855a3319fd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:55:48 crc kubenswrapper[4895]: I1206 07:55:48.363640 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvdkd\" (UniqueName: \"kubernetes.io/projected/1ff79381-4161-4082-a5e9-b5855a3319fd-kube-api-access-lvdkd\") on node \"crc\" DevicePath \"\"" Dec 06 07:55:48 crc kubenswrapper[4895]: I1206 07:55:48.363680 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff79381-4161-4082-a5e9-b5855a3319fd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:55:48 crc kubenswrapper[4895]: I1206 07:55:48.363691 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff79381-4161-4082-a5e9-b5855a3319fd-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:55:48 crc kubenswrapper[4895]: I1206 07:55:48.602807 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d976v" event={"ID":"1ff79381-4161-4082-a5e9-b5855a3319fd","Type":"ContainerDied","Data":"2f248560450a3d514617c9c59e6b20d792108587ae17d3339d998316aaad53e7"} Dec 06 07:55:48 crc kubenswrapper[4895]: I1206 07:55:48.603388 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d976v" Dec 06 07:55:48 crc kubenswrapper[4895]: I1206 07:55:48.603467 4895 scope.go:117] "RemoveContainer" containerID="e842720de23e1831c7619acc6b82937a9553502cb22ec8d66dd2aa4901bf2a53" Dec 06 07:55:48 crc kubenswrapper[4895]: I1206 07:55:48.624980 4895 scope.go:117] "RemoveContainer" containerID="0e232390f5877a73c221d4dc563df3e31a07957fccde9de9e135b2f8b6a51b6e" Dec 06 07:55:48 crc kubenswrapper[4895]: I1206 07:55:48.652565 4895 scope.go:117] "RemoveContainer" containerID="c3d35ef05ceb35a0598b0ea0d7c88ac04f2f36f80d70a56da36eacbbc1c19337" Dec 06 07:55:48 crc kubenswrapper[4895]: I1206 07:55:48.690615 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d976v"] Dec 06 07:55:48 crc kubenswrapper[4895]: I1206 07:55:48.697872 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d976v"] Dec 06 07:55:50 crc kubenswrapper[4895]: I1206 07:55:50.066458 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ff79381-4161-4082-a5e9-b5855a3319fd" path="/var/lib/kubelet/pods/1ff79381-4161-4082-a5e9-b5855a3319fd/volumes" Dec 06 07:55:59 crc kubenswrapper[4895]: I1206 07:55:59.696447 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:55:59 crc kubenswrapper[4895]: I1206 07:55:59.697099 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:55:59 crc kubenswrapper[4895]: I1206 07:55:59.697149 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 07:55:59 crc kubenswrapper[4895]: I1206 07:55:59.697803 4895 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2128eb14a3e6530a4aa7b78e4bb4a28ab8af0804357cc372a71bd6af4e366a7a"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:55:59 crc kubenswrapper[4895]: I1206 07:55:59.697863 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://2128eb14a3e6530a4aa7b78e4bb4a28ab8af0804357cc372a71bd6af4e366a7a" gracePeriod=600 Dec 06 07:56:00 crc kubenswrapper[4895]: I1206 07:56:00.713630 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="2128eb14a3e6530a4aa7b78e4bb4a28ab8af0804357cc372a71bd6af4e366a7a" exitCode=0 Dec 06 07:56:00 crc kubenswrapper[4895]: I1206 07:56:00.713737 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"2128eb14a3e6530a4aa7b78e4bb4a28ab8af0804357cc372a71bd6af4e366a7a"} Dec 06 07:56:00 crc kubenswrapper[4895]: I1206 07:56:00.713982 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d"} Dec 06 07:56:00 crc kubenswrapper[4895]: I1206 07:56:00.714003 4895 scope.go:117] "RemoveContainer" containerID="647b4895a1d3c283430565bbc2240b0588a37af32aeb461a403cd18cb84e4212" Dec 06 07:57:59 crc kubenswrapper[4895]: I1206 07:57:59.696431 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:57:59 crc kubenswrapper[4895]: I1206 07:57:59.697148 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.363650 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b8qb5"] Dec 06 07:58:20 crc kubenswrapper[4895]: E1206 07:58:20.364933 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e93d5a4b-5e85-4504-b31c-0f4b69dd6e40" containerName="extract-utilities" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.364955 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e93d5a4b-5e85-4504-b31c-0f4b69dd6e40" containerName="extract-utilities" Dec 06 07:58:20 crc kubenswrapper[4895]: E1206 07:58:20.364983 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff79381-4161-4082-a5e9-b5855a3319fd" containerName="extract-utilities" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.364998 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff79381-4161-4082-a5e9-b5855a3319fd" 
containerName="extract-utilities" Dec 06 07:58:20 crc kubenswrapper[4895]: E1206 07:58:20.365028 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff79381-4161-4082-a5e9-b5855a3319fd" containerName="registry-server" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.365043 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff79381-4161-4082-a5e9-b5855a3319fd" containerName="registry-server" Dec 06 07:58:20 crc kubenswrapper[4895]: E1206 07:58:20.365061 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e93d5a4b-5e85-4504-b31c-0f4b69dd6e40" containerName="registry-server" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.365071 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e93d5a4b-5e85-4504-b31c-0f4b69dd6e40" containerName="registry-server" Dec 06 07:58:20 crc kubenswrapper[4895]: E1206 07:58:20.365100 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e93d5a4b-5e85-4504-b31c-0f4b69dd6e40" containerName="extract-content" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.365110 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e93d5a4b-5e85-4504-b31c-0f4b69dd6e40" containerName="extract-content" Dec 06 07:58:20 crc kubenswrapper[4895]: E1206 07:58:20.365130 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff79381-4161-4082-a5e9-b5855a3319fd" containerName="extract-content" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.365140 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff79381-4161-4082-a5e9-b5855a3319fd" containerName="extract-content" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.365363 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e93d5a4b-5e85-4504-b31c-0f4b69dd6e40" containerName="registry-server" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.365389 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff79381-4161-4082-a5e9-b5855a3319fd" containerName="registry-server" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.367460 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b8qb5" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.395138 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b8qb5"] Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.457261 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6216dc6c-1fc7-4b92-8355-e0e28d446136-catalog-content\") pod \"redhat-operators-b8qb5\" (UID: \"6216dc6c-1fc7-4b92-8355-e0e28d446136\") " pod="openshift-marketplace/redhat-operators-b8qb5" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.457340 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6216dc6c-1fc7-4b92-8355-e0e28d446136-utilities\") pod \"redhat-operators-b8qb5\" (UID: \"6216dc6c-1fc7-4b92-8355-e0e28d446136\") " pod="openshift-marketplace/redhat-operators-b8qb5" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.457379 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9j7q\" (UniqueName: \"kubernetes.io/projected/6216dc6c-1fc7-4b92-8355-e0e28d446136-kube-api-access-v9j7q\") pod \"redhat-operators-b8qb5\" (UID: \"6216dc6c-1fc7-4b92-8355-e0e28d446136\") " pod="openshift-marketplace/redhat-operators-b8qb5" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.558829 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6216dc6c-1fc7-4b92-8355-e0e28d446136-catalog-content\") pod \"redhat-operators-b8qb5\" (UID: \"6216dc6c-1fc7-4b92-8355-e0e28d446136\") " pod="openshift-marketplace/redhat-operators-b8qb5" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.558907 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6216dc6c-1fc7-4b92-8355-e0e28d446136-utilities\") pod \"redhat-operators-b8qb5\" (UID: \"6216dc6c-1fc7-4b92-8355-e0e28d446136\") " pod="openshift-marketplace/redhat-operators-b8qb5" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.558938 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9j7q\" (UniqueName: \"kubernetes.io/projected/6216dc6c-1fc7-4b92-8355-e0e28d446136-kube-api-access-v9j7q\") pod \"redhat-operators-b8qb5\" (UID: \"6216dc6c-1fc7-4b92-8355-e0e28d446136\") " pod="openshift-marketplace/redhat-operators-b8qb5" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.559544 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6216dc6c-1fc7-4b92-8355-e0e28d446136-catalog-content\") pod \"redhat-operators-b8qb5\" (UID: \"6216dc6c-1fc7-4b92-8355-e0e28d446136\") " pod="openshift-marketplace/redhat-operators-b8qb5" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.559633 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6216dc6c-1fc7-4b92-8355-e0e28d446136-utilities\") pod \"redhat-operators-b8qb5\" (UID: \"6216dc6c-1fc7-4b92-8355-e0e28d446136\") " pod="openshift-marketplace/redhat-operators-b8qb5" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.583152 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v9j7q\" (UniqueName: \"kubernetes.io/projected/6216dc6c-1fc7-4b92-8355-e0e28d446136-kube-api-access-v9j7q\") pod \"redhat-operators-b8qb5\" (UID: \"6216dc6c-1fc7-4b92-8355-e0e28d446136\") " pod="openshift-marketplace/redhat-operators-b8qb5" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.700288 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b8qb5" Dec 06 07:58:20 crc kubenswrapper[4895]: I1206 07:58:20.948598 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b8qb5"] Dec 06 07:58:21 crc kubenswrapper[4895]: I1206 07:58:21.937284 4895 generic.go:334] "Generic (PLEG): container finished" podID="6216dc6c-1fc7-4b92-8355-e0e28d446136" containerID="3246d5ba7fd9ec3727c3e3355cfc3a8a93d7267df534d0e115340a9ac57654e6" exitCode=0 Dec 06 07:58:21 crc kubenswrapper[4895]: I1206 07:58:21.937310 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8qb5" event={"ID":"6216dc6c-1fc7-4b92-8355-e0e28d446136","Type":"ContainerDied","Data":"3246d5ba7fd9ec3727c3e3355cfc3a8a93d7267df534d0e115340a9ac57654e6"} Dec 06 07:58:21 crc kubenswrapper[4895]: I1206 07:58:21.938191 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8qb5" event={"ID":"6216dc6c-1fc7-4b92-8355-e0e28d446136","Type":"ContainerStarted","Data":"d8ef49daad076d2b5679678b7e83fb5595afc7228c8ee50436201e753ac95224"} Dec 06 07:58:21 crc kubenswrapper[4895]: I1206 07:58:21.941014 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 07:58:23 crc kubenswrapper[4895]: I1206 07:58:23.966506 4895 generic.go:334] "Generic (PLEG): container finished" podID="6216dc6c-1fc7-4b92-8355-e0e28d446136" containerID="77f3b68eac75a2452a9a44fba7d9a3eef7c2c0fd22498c3431ca0f7fc57d3c3a" exitCode=0 Dec 06 07:58:23 crc kubenswrapper[4895]: I1206 07:58:23.966581 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8qb5" event={"ID":"6216dc6c-1fc7-4b92-8355-e0e28d446136","Type":"ContainerDied","Data":"77f3b68eac75a2452a9a44fba7d9a3eef7c2c0fd22498c3431ca0f7fc57d3c3a"} Dec 06 07:58:24 crc kubenswrapper[4895]: I1206 07:58:24.980116 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8qb5" event={"ID":"6216dc6c-1fc7-4b92-8355-e0e28d446136","Type":"ContainerStarted","Data":"87478eb876dadccb611a65f222d7347a55b321057a95a4473d6200a2fc9628cd"} Dec 06 07:58:25 crc kubenswrapper[4895]: I1206 07:58:25.002433 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b8qb5" podStartSLOduration=2.5271880810000003 podStartE2EDuration="5.002408327s" podCreationTimestamp="2025-12-06 07:58:20 +0000 UTC" firstStartedPulling="2025-12-06 07:58:21.940697202 +0000 UTC m=+3664.342086072" lastFinishedPulling="2025-12-06 07:58:24.415917438 +0000 UTC m=+3666.817306318" observedRunningTime="2025-12-06 07:58:24.996924651 +0000 UTC m=+3667.398313561" watchObservedRunningTime="2025-12-06 07:58:25.002408327 +0000 UTC m=+3667.403797207" Dec 06 07:58:29 crc kubenswrapper[4895]: I1206 07:58:29.695625 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Dec 06 07:58:29 crc kubenswrapper[4895]: I1206 07:58:29.696016 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:58:30 crc kubenswrapper[4895]: I1206 07:58:30.701181 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b8qb5" Dec 06 07:58:30 crc kubenswrapper[4895]: I1206 07:58:30.701275 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b8qb5" Dec 06 07:58:30 crc kubenswrapper[4895]: I1206 07:58:30.778259 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b8qb5" Dec 06 07:58:31 crc kubenswrapper[4895]: I1206 07:58:31.084999 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b8qb5" Dec 06 07:58:31 crc kubenswrapper[4895]: I1206 07:58:31.135395 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b8qb5"] Dec 06 07:58:33 crc kubenswrapper[4895]: I1206 07:58:33.042314 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b8qb5" podUID="6216dc6c-1fc7-4b92-8355-e0e28d446136" containerName="registry-server" containerID="cri-o://87478eb876dadccb611a65f222d7347a55b321057a95a4473d6200a2fc9628cd" gracePeriod=2 Dec 06 07:58:34 crc kubenswrapper[4895]: I1206 07:58:34.054624 4895 generic.go:334] "Generic (PLEG): container finished" podID="6216dc6c-1fc7-4b92-8355-e0e28d446136" containerID="87478eb876dadccb611a65f222d7347a55b321057a95a4473d6200a2fc9628cd" exitCode=0 Dec 06 07:58:34 crc kubenswrapper[4895]: I1206 07:58:34.064028 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8qb5" event={"ID":"6216dc6c-1fc7-4b92-8355-e0e28d446136","Type":"ContainerDied","Data":"87478eb876dadccb611a65f222d7347a55b321057a95a4473d6200a2fc9628cd"} Dec 06 07:58:34 crc kubenswrapper[4895]: I1206 07:58:34.938125 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b8qb5" Dec 06 07:58:35 crc kubenswrapper[4895]: I1206 07:58:35.069157 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8qb5" event={"ID":"6216dc6c-1fc7-4b92-8355-e0e28d446136","Type":"ContainerDied","Data":"d8ef49daad076d2b5679678b7e83fb5595afc7228c8ee50436201e753ac95224"} Dec 06 07:58:35 crc kubenswrapper[4895]: I1206 07:58:35.069221 4895 scope.go:117] "RemoveContainer" containerID="87478eb876dadccb611a65f222d7347a55b321057a95a4473d6200a2fc9628cd" Dec 06 07:58:35 crc kubenswrapper[4895]: I1206 07:58:35.069370 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b8qb5" Dec 06 07:58:35 crc kubenswrapper[4895]: I1206 07:58:35.083376 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6216dc6c-1fc7-4b92-8355-e0e28d446136-utilities\") pod \"6216dc6c-1fc7-4b92-8355-e0e28d446136\" (UID: \"6216dc6c-1fc7-4b92-8355-e0e28d446136\") " Dec 06 07:58:35 crc kubenswrapper[4895]: I1206 07:58:35.083540 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6216dc6c-1fc7-4b92-8355-e0e28d446136-catalog-content\") pod \"6216dc6c-1fc7-4b92-8355-e0e28d446136\" (UID: \"6216dc6c-1fc7-4b92-8355-e0e28d446136\") " Dec 06 07:58:35 crc kubenswrapper[4895]: I1206 07:58:35.083598 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9j7q\" (UniqueName: \"kubernetes.io/projected/6216dc6c-1fc7-4b92-8355-e0e28d446136-kube-api-access-v9j7q\") pod \"6216dc6c-1fc7-4b92-8355-e0e28d446136\" (UID: \"6216dc6c-1fc7-4b92-8355-e0e28d446136\") " Dec 06 07:58:35 crc kubenswrapper[4895]: I1206 07:58:35.086014 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6216dc6c-1fc7-4b92-8355-e0e28d446136-utilities" (OuterVolumeSpecName: "utilities") pod "6216dc6c-1fc7-4b92-8355-e0e28d446136" (UID: "6216dc6c-1fc7-4b92-8355-e0e28d446136"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:58:35 crc kubenswrapper[4895]: I1206 07:58:35.089113 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6216dc6c-1fc7-4b92-8355-e0e28d446136-kube-api-access-v9j7q" (OuterVolumeSpecName: "kube-api-access-v9j7q") pod "6216dc6c-1fc7-4b92-8355-e0e28d446136" (UID: "6216dc6c-1fc7-4b92-8355-e0e28d446136"). InnerVolumeSpecName "kube-api-access-v9j7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:58:35 crc kubenswrapper[4895]: I1206 07:58:35.098740 4895 scope.go:117] "RemoveContainer" containerID="77f3b68eac75a2452a9a44fba7d9a3eef7c2c0fd22498c3431ca0f7fc57d3c3a" Dec 06 07:58:35 crc kubenswrapper[4895]: I1206 07:58:35.134826 4895 scope.go:117] "RemoveContainer" containerID="3246d5ba7fd9ec3727c3e3355cfc3a8a93d7267df534d0e115340a9ac57654e6" Dec 06 07:58:35 crc kubenswrapper[4895]: I1206 07:58:35.185444 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9j7q\" (UniqueName: \"kubernetes.io/projected/6216dc6c-1fc7-4b92-8355-e0e28d446136-kube-api-access-v9j7q\") on node \"crc\" DevicePath \"\"" Dec 06 07:58:35 crc kubenswrapper[4895]: I1206 07:58:35.185503 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6216dc6c-1fc7-4b92-8355-e0e28d446136-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:58:35 crc kubenswrapper[4895]: I1206 07:58:35.209369 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6216dc6c-1fc7-4b92-8355-e0e28d446136-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6216dc6c-1fc7-4b92-8355-e0e28d446136" (UID: "6216dc6c-1fc7-4b92-8355-e0e28d446136"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:58:35 crc kubenswrapper[4895]: I1206 07:58:35.287249 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6216dc6c-1fc7-4b92-8355-e0e28d446136-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:58:35 crc kubenswrapper[4895]: I1206 07:58:35.419186 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b8qb5"] Dec 06 07:58:35 crc kubenswrapper[4895]: I1206 07:58:35.427296 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b8qb5"] Dec 06 07:58:36 crc kubenswrapper[4895]: I1206 07:58:36.061429 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6216dc6c-1fc7-4b92-8355-e0e28d446136" path="/var/lib/kubelet/pods/6216dc6c-1fc7-4b92-8355-e0e28d446136/volumes" Dec 06 07:58:59 crc kubenswrapper[4895]: I1206 07:58:59.696214 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:58:59 crc kubenswrapper[4895]: I1206 07:58:59.696851 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:58:59 crc kubenswrapper[4895]: I1206 07:58:59.696918 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 07:58:59 crc kubenswrapper[4895]: I1206 07:58:59.697893 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:58:59 crc kubenswrapper[4895]: I1206 07:58:59.698000 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" gracePeriod=600 Dec 06 07:58:59 crc kubenswrapper[4895]: E1206 07:58:59.835974 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:59:00 crc kubenswrapper[4895]: I1206 07:59:00.292228 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" exitCode=0 Dec 06 07:59:00 crc kubenswrapper[4895]: I1206 07:59:00.292276 4895 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d"} Dec 06 07:59:00 crc kubenswrapper[4895]: I1206 07:59:00.292319 4895 scope.go:117] "RemoveContainer" containerID="2128eb14a3e6530a4aa7b78e4bb4a28ab8af0804357cc372a71bd6af4e366a7a" Dec 06 07:59:00 crc kubenswrapper[4895]: I1206 07:59:00.292753 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 07:59:00 crc kubenswrapper[4895]: E1206 07:59:00.292982 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:59:12 crc kubenswrapper[4895]: I1206 07:59:12.050813 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 07:59:12 crc kubenswrapper[4895]: E1206 07:59:12.051706 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:59:26 crc kubenswrapper[4895]: I1206 07:59:26.050657 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 07:59:26 crc kubenswrapper[4895]: E1206 07:59:26.053309 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:59:41 crc kubenswrapper[4895]: I1206 07:59:41.050945 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 07:59:41 crc kubenswrapper[4895]: E1206 07:59:41.052011 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 07:59:53 crc kubenswrapper[4895]: I1206 07:59:53.050667 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 07:59:53 crc kubenswrapper[4895]: E1206 07:59:53.051307 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Dec 06 08:00:00 crc kubenswrapper[4895]: I1206 08:00:00.155011 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2"]
Dec 06 08:00:00 crc kubenswrapper[4895]: E1206 08:00:00.155916 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6216dc6c-1fc7-4b92-8355-e0e28d446136" containerName="registry-server"
Dec 06 08:00:00 crc kubenswrapper[4895]: I1206 08:00:00.155934 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6216dc6c-1fc7-4b92-8355-e0e28d446136" containerName="registry-server"
Dec 06 08:00:00 crc kubenswrapper[4895]: E1206 08:00:00.155949 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6216dc6c-1fc7-4b92-8355-e0e28d446136" containerName="extract-utilities"
Dec 06 08:00:00 crc kubenswrapper[4895]: I1206 08:00:00.155957 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6216dc6c-1fc7-4b92-8355-e0e28d446136" containerName="extract-utilities"
Dec 06 08:00:00 crc kubenswrapper[4895]: E1206 08:00:00.155990 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6216dc6c-1fc7-4b92-8355-e0e28d446136" containerName="extract-content"
Dec 06 08:00:00 crc kubenswrapper[4895]: I1206 08:00:00.156001 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6216dc6c-1fc7-4b92-8355-e0e28d446136" containerName="extract-content"
Dec 06 08:00:00 crc kubenswrapper[4895]: I1206 08:00:00.156182 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6216dc6c-1fc7-4b92-8355-e0e28d446136" containerName="registry-server"
Dec 06 08:00:00 crc kubenswrapper[4895]: I1206 08:00:00.156704 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2"
Dec 06 08:00:00 crc kubenswrapper[4895]: I1206 08:00:00.158376 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 06 08:00:00 crc kubenswrapper[4895]: I1206 08:00:00.158732 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 06 08:00:00 crc kubenswrapper[4895]: I1206 08:00:00.174235 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2"]
Dec 06 08:00:00 crc kubenswrapper[4895]: I1206 08:00:00.276707 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9lzq\" (UniqueName: \"kubernetes.io/projected/435028a4-5fa1-4981-a7fb-37615dfd3865-kube-api-access-q9lzq\") pod \"collect-profiles-29416800-9sww2\" (UID: \"435028a4-5fa1-4981-a7fb-37615dfd3865\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2"
Dec 06 08:00:00 crc kubenswrapper[4895]: I1206 08:00:00.277036 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/435028a4-5fa1-4981-a7fb-37615dfd3865-secret-volume\") pod \"collect-profiles-29416800-9sww2\" (UID: \"435028a4-5fa1-4981-a7fb-37615dfd3865\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2"
Dec 06 08:00:00 crc kubenswrapper[4895]: I1206 08:00:00.277085 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/435028a4-5fa1-4981-a7fb-37615dfd3865-config-volume\") pod \"collect-profiles-29416800-9sww2\" (UID: \"435028a4-5fa1-4981-a7fb-37615dfd3865\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2"
Dec 06 08:00:00 crc kubenswrapper[4895]: I1206 08:00:00.378108 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9lzq\" (UniqueName: \"kubernetes.io/projected/435028a4-5fa1-4981-a7fb-37615dfd3865-kube-api-access-q9lzq\") pod \"collect-profiles-29416800-9sww2\" (UID: \"435028a4-5fa1-4981-a7fb-37615dfd3865\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2"
Dec 06 08:00:00 crc kubenswrapper[4895]: I1206 08:00:00.378244 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/435028a4-5fa1-4981-a7fb-37615dfd3865-secret-volume\") pod \"collect-profiles-29416800-9sww2\" (UID: \"435028a4-5fa1-4981-a7fb-37615dfd3865\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2"
Dec 06 08:00:00 crc kubenswrapper[4895]: I1206 08:00:00.378319 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/435028a4-5fa1-4981-a7fb-37615dfd3865-config-volume\") pod \"collect-profiles-29416800-9sww2\" (UID: \"435028a4-5fa1-4981-a7fb-37615dfd3865\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2"
Dec 06 08:00:00 crc kubenswrapper[4895]: I1206 08:00:00.379633 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/435028a4-5fa1-4981-a7fb-37615dfd3865-config-volume\") pod \"collect-profiles-29416800-9sww2\" (UID: \"435028a4-5fa1-4981-a7fb-37615dfd3865\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2"
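The Job name collect-profiles-29416800-9sww2 is decodable: a CronJob suffixes each Job it creates with the scheduled time expressed in minutes since the Unix epoch, and converting 29416800 lands exactly on the 08:00:00 "SyncLoop ADD" above:

    // Decoding the numeric suffix of a CronJob-created Job name.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const suffix = 29416800 // from collect-profiles-29416800-9sww2
        t := time.Unix(suffix*60, 0).UTC()
        fmt.Println(t) // 2025-12-06 08:00:00 +0000 UTC
    }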
\"collect-profiles-29416800-9sww2\" (UID: \"435028a4-5fa1-4981-a7fb-37615dfd3865\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2" Dec 06 08:00:00 crc kubenswrapper[4895]: I1206 08:00:00.385790 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/435028a4-5fa1-4981-a7fb-37615dfd3865-secret-volume\") pod \"collect-profiles-29416800-9sww2\" (UID: \"435028a4-5fa1-4981-a7fb-37615dfd3865\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2" Dec 06 08:00:00 crc kubenswrapper[4895]: I1206 08:00:00.396305 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9lzq\" (UniqueName: \"kubernetes.io/projected/435028a4-5fa1-4981-a7fb-37615dfd3865-kube-api-access-q9lzq\") pod \"collect-profiles-29416800-9sww2\" (UID: \"435028a4-5fa1-4981-a7fb-37615dfd3865\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2" Dec 06 08:00:00 crc kubenswrapper[4895]: I1206 08:00:00.482423 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2" Dec 06 08:00:00 crc kubenswrapper[4895]: I1206 08:00:00.996236 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2"] Dec 06 08:00:01 crc kubenswrapper[4895]: I1206 08:00:01.851327 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2" event={"ID":"435028a4-5fa1-4981-a7fb-37615dfd3865","Type":"ContainerDied","Data":"d7781bcd9c0bb3a8617f0ae2bf8ab16251137289f05ddf431aef4ca0094adfa2"} Dec 06 08:00:01 crc kubenswrapper[4895]: I1206 08:00:01.851257 4895 generic.go:334] "Generic (PLEG): container finished" podID="435028a4-5fa1-4981-a7fb-37615dfd3865" containerID="d7781bcd9c0bb3a8617f0ae2bf8ab16251137289f05ddf431aef4ca0094adfa2" exitCode=0 Dec 06 08:00:01 crc kubenswrapper[4895]: I1206 08:00:01.852004 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2" event={"ID":"435028a4-5fa1-4981-a7fb-37615dfd3865","Type":"ContainerStarted","Data":"ccca8d496ce239433aecc4bbbad8b10de6cd05e20030eecd3a1b2ab6b5beaa8c"} Dec 06 08:00:03 crc kubenswrapper[4895]: I1206 08:00:03.176843 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2" Dec 06 08:00:03 crc kubenswrapper[4895]: I1206 08:00:03.327147 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/435028a4-5fa1-4981-a7fb-37615dfd3865-config-volume\") pod \"435028a4-5fa1-4981-a7fb-37615dfd3865\" (UID: \"435028a4-5fa1-4981-a7fb-37615dfd3865\") " Dec 06 08:00:03 crc kubenswrapper[4895]: I1206 08:00:03.327209 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9lzq\" (UniqueName: \"kubernetes.io/projected/435028a4-5fa1-4981-a7fb-37615dfd3865-kube-api-access-q9lzq\") pod \"435028a4-5fa1-4981-a7fb-37615dfd3865\" (UID: \"435028a4-5fa1-4981-a7fb-37615dfd3865\") " Dec 06 08:00:03 crc kubenswrapper[4895]: I1206 08:00:03.327315 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/435028a4-5fa1-4981-a7fb-37615dfd3865-secret-volume\") pod \"435028a4-5fa1-4981-a7fb-37615dfd3865\" (UID: \"435028a4-5fa1-4981-a7fb-37615dfd3865\") " Dec 06 08:00:03 crc kubenswrapper[4895]: I1206 08:00:03.327953 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/435028a4-5fa1-4981-a7fb-37615dfd3865-config-volume" (OuterVolumeSpecName: "config-volume") pod "435028a4-5fa1-4981-a7fb-37615dfd3865" (UID: "435028a4-5fa1-4981-a7fb-37615dfd3865"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 08:00:03 crc kubenswrapper[4895]: I1206 08:00:03.334602 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/435028a4-5fa1-4981-a7fb-37615dfd3865-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "435028a4-5fa1-4981-a7fb-37615dfd3865" (UID: "435028a4-5fa1-4981-a7fb-37615dfd3865"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 08:00:03 crc kubenswrapper[4895]: I1206 08:00:03.336563 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435028a4-5fa1-4981-a7fb-37615dfd3865-kube-api-access-q9lzq" (OuterVolumeSpecName: "kube-api-access-q9lzq") pod "435028a4-5fa1-4981-a7fb-37615dfd3865" (UID: "435028a4-5fa1-4981-a7fb-37615dfd3865"). InnerVolumeSpecName "kube-api-access-q9lzq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:00:03 crc kubenswrapper[4895]: I1206 08:00:03.429154 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/435028a4-5fa1-4981-a7fb-37615dfd3865-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 08:00:03 crc kubenswrapper[4895]: I1206 08:00:03.429205 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9lzq\" (UniqueName: \"kubernetes.io/projected/435028a4-5fa1-4981-a7fb-37615dfd3865-kube-api-access-q9lzq\") on node \"crc\" DevicePath \"\"" Dec 06 08:00:03 crc kubenswrapper[4895]: I1206 08:00:03.429227 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/435028a4-5fa1-4981-a7fb-37615dfd3865-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 08:00:03 crc kubenswrapper[4895]: I1206 08:00:03.875239 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2" event={"ID":"435028a4-5fa1-4981-a7fb-37615dfd3865","Type":"ContainerDied","Data":"ccca8d496ce239433aecc4bbbad8b10de6cd05e20030eecd3a1b2ab6b5beaa8c"} Dec 06 08:00:03 crc kubenswrapper[4895]: I1206 08:00:03.875297 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccca8d496ce239433aecc4bbbad8b10de6cd05e20030eecd3a1b2ab6b5beaa8c" Dec 06 08:00:03 crc kubenswrapper[4895]: I1206 08:00:03.875322 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2" Dec 06 08:00:04 crc kubenswrapper[4895]: I1206 08:00:04.309130 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b"] Dec 06 08:00:04 crc kubenswrapper[4895]: I1206 08:00:04.316106 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416755-f298b"] Dec 06 08:00:06 crc kubenswrapper[4895]: I1206 08:00:06.050830 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 08:00:06 crc kubenswrapper[4895]: E1206 08:00:06.051130 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:00:06 crc kubenswrapper[4895]: I1206 08:00:06.069838 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42c8dd3d-28ae-4572-b7a0-60f26e47a4a3" path="/var/lib/kubelet/pods/42c8dd3d-28ae-4572-b7a0-60f26e47a4a3/volumes" Dec 06 08:00:12 crc kubenswrapper[4895]: I1206 08:00:12.506297 4895 scope.go:117] "RemoveContainer" containerID="a2027e2aff7e7b2ff5e79a18fc4f638269db35b081fbaf4eae568ec715ca475b" Dec 06 08:00:21 crc kubenswrapper[4895]: I1206 08:00:21.051768 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 08:00:21 crc kubenswrapper[4895]: E1206 08:00:21.053404 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:00:32 crc kubenswrapper[4895]: I1206 08:00:32.050861 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 08:00:32 crc kubenswrapper[4895]: E1206 08:00:32.051790 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:00:44 crc kubenswrapper[4895]: I1206 08:00:44.051281 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 08:00:44 crc kubenswrapper[4895]: E1206 08:00:44.052115 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:00:56 crc kubenswrapper[4895]: I1206 08:00:56.050541 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 08:00:56 crc kubenswrapper[4895]: E1206 08:00:56.051442 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:01:09 crc kubenswrapper[4895]: I1206 08:01:09.051130 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 08:01:09 crc kubenswrapper[4895]: E1206 08:01:09.051945 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:01:21 crc kubenswrapper[4895]: I1206 08:01:21.050925 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 08:01:21 crc kubenswrapper[4895]: E1206 08:01:21.051808 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:01:34 crc kubenswrapper[4895]: I1206 08:01:34.051563 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 08:01:34 crc kubenswrapper[4895]: E1206 08:01:34.052804 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:01:48 crc kubenswrapper[4895]: I1206 08:01:48.060929 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 08:01:48 crc kubenswrapper[4895]: E1206 08:01:48.062318 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:02:01 crc kubenswrapper[4895]: I1206 08:02:01.050709 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 08:02:01 crc kubenswrapper[4895]: E1206 08:02:01.051328 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:02:14 crc kubenswrapper[4895]: I1206 08:02:14.050556 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 08:02:14 crc kubenswrapper[4895]: E1206 08:02:14.051467 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:02:28 crc kubenswrapper[4895]: I1206 08:02:28.056431 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 08:02:28 crc kubenswrapper[4895]: E1206 08:02:28.060113 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:02:33 crc kubenswrapper[4895]: I1206 08:02:33.205349 4895 
Dec 06 08:02:33 crc kubenswrapper[4895]: I1206 08:02:33.205349 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8kdg7"]
Dec 06 08:02:33 crc kubenswrapper[4895]: E1206 08:02:33.205827 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435028a4-5fa1-4981-a7fb-37615dfd3865" containerName="collect-profiles"
Dec 06 08:02:33 crc kubenswrapper[4895]: I1206 08:02:33.205843 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="435028a4-5fa1-4981-a7fb-37615dfd3865" containerName="collect-profiles"
Dec 06 08:02:33 crc kubenswrapper[4895]: I1206 08:02:33.206107 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="435028a4-5fa1-4981-a7fb-37615dfd3865" containerName="collect-profiles"
Dec 06 08:02:33 crc kubenswrapper[4895]: I1206 08:02:33.208294 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kdg7"
Dec 06 08:02:33 crc kubenswrapper[4895]: I1206 08:02:33.216524 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kdg7"]
Dec 06 08:02:33 crc kubenswrapper[4895]: I1206 08:02:33.264912 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5p7b\" (UniqueName: \"kubernetes.io/projected/7bbda9df-6f78-4eb9-a32e-97f06b8389d8-kube-api-access-j5p7b\") pod \"certified-operators-8kdg7\" (UID: \"7bbda9df-6f78-4eb9-a32e-97f06b8389d8\") " pod="openshift-marketplace/certified-operators-8kdg7"
Dec 06 08:02:33 crc kubenswrapper[4895]: I1206 08:02:33.265062 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bbda9df-6f78-4eb9-a32e-97f06b8389d8-utilities\") pod \"certified-operators-8kdg7\" (UID: \"7bbda9df-6f78-4eb9-a32e-97f06b8389d8\") " pod="openshift-marketplace/certified-operators-8kdg7"
Dec 06 08:02:33 crc kubenswrapper[4895]: I1206 08:02:33.265126 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bbda9df-6f78-4eb9-a32e-97f06b8389d8-catalog-content\") pod \"certified-operators-8kdg7\" (UID: \"7bbda9df-6f78-4eb9-a32e-97f06b8389d8\") " pod="openshift-marketplace/certified-operators-8kdg7"
Dec 06 08:02:33 crc kubenswrapper[4895]: I1206 08:02:33.366304 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5p7b\" (UniqueName: \"kubernetes.io/projected/7bbda9df-6f78-4eb9-a32e-97f06b8389d8-kube-api-access-j5p7b\") pod \"certified-operators-8kdg7\" (UID: \"7bbda9df-6f78-4eb9-a32e-97f06b8389d8\") " pod="openshift-marketplace/certified-operators-8kdg7"
Dec 06 08:02:33 crc kubenswrapper[4895]: I1206 08:02:33.366377 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bbda9df-6f78-4eb9-a32e-97f06b8389d8-utilities\") pod \"certified-operators-8kdg7\" (UID: \"7bbda9df-6f78-4eb9-a32e-97f06b8389d8\") " pod="openshift-marketplace/certified-operators-8kdg7"
Dec 06 08:02:33 crc kubenswrapper[4895]: I1206 08:02:33.366397 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bbda9df-6f78-4eb9-a32e-97f06b8389d8-catalog-content\") pod \"certified-operators-8kdg7\" (UID: \"7bbda9df-6f78-4eb9-a32e-97f06b8389d8\") " pod="openshift-marketplace/certified-operators-8kdg7"
Dec 06 08:02:33 crc kubenswrapper[4895]: I1206 08:02:33.366893 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bbda9df-6f78-4eb9-a32e-97f06b8389d8-catalog-content\") pod \"certified-operators-8kdg7\" (UID: \"7bbda9df-6f78-4eb9-a32e-97f06b8389d8\") " pod="openshift-marketplace/certified-operators-8kdg7"
Dec 06 08:02:33 crc kubenswrapper[4895]: I1206 08:02:33.366964 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bbda9df-6f78-4eb9-a32e-97f06b8389d8-utilities\") pod \"certified-operators-8kdg7\" (UID: \"7bbda9df-6f78-4eb9-a32e-97f06b8389d8\") " pod="openshift-marketplace/certified-operators-8kdg7"
Dec 06 08:02:33 crc kubenswrapper[4895]: I1206 08:02:33.394719 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5p7b\" (UniqueName: \"kubernetes.io/projected/7bbda9df-6f78-4eb9-a32e-97f06b8389d8-kube-api-access-j5p7b\") pod \"certified-operators-8kdg7\" (UID: \"7bbda9df-6f78-4eb9-a32e-97f06b8389d8\") " pod="openshift-marketplace/certified-operators-8kdg7"
Dec 06 08:02:33 crc kubenswrapper[4895]: I1206 08:02:33.583058 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kdg7"
Dec 06 08:02:34 crc kubenswrapper[4895]: I1206 08:02:34.059055 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kdg7"]
Dec 06 08:02:34 crc kubenswrapper[4895]: I1206 08:02:34.296905 4895 generic.go:334] "Generic (PLEG): container finished" podID="7bbda9df-6f78-4eb9-a32e-97f06b8389d8" containerID="96304dcca278d059698cfc776a855cb0212e7e2094cf71abb5bbb1d0dddf65d6" exitCode=0
Dec 06 08:02:34 crc kubenswrapper[4895]: I1206 08:02:34.296962 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdg7" event={"ID":"7bbda9df-6f78-4eb9-a32e-97f06b8389d8","Type":"ContainerDied","Data":"96304dcca278d059698cfc776a855cb0212e7e2094cf71abb5bbb1d0dddf65d6"}
Dec 06 08:02:34 crc kubenswrapper[4895]: I1206 08:02:34.297002 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdg7" event={"ID":"7bbda9df-6f78-4eb9-a32e-97f06b8389d8","Type":"ContainerStarted","Data":"026cd834c625bfcea8ea77b423ceb92cea88dc1463efc4dd7a86f5529cdc2d6f"}
Dec 06 08:02:35 crc kubenswrapper[4895]: I1206 08:02:35.306170 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdg7" event={"ID":"7bbda9df-6f78-4eb9-a32e-97f06b8389d8","Type":"ContainerStarted","Data":"ceca4287b620a2df0d41ceb923aa22279257e1203ec34f10033a4dae0e280aa6"}
Dec 06 08:02:36 crc kubenswrapper[4895]: I1206 08:02:36.321966 4895 generic.go:334] "Generic (PLEG): container finished" podID="7bbda9df-6f78-4eb9-a32e-97f06b8389d8" containerID="ceca4287b620a2df0d41ceb923aa22279257e1203ec34f10033a4dae0e280aa6" exitCode=0
Dec 06 08:02:36 crc kubenswrapper[4895]: I1206 08:02:36.322048 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdg7" event={"ID":"7bbda9df-6f78-4eb9-a32e-97f06b8389d8","Type":"ContainerDied","Data":"ceca4287b620a2df0d41ceb923aa22279257e1203ec34f10033a4dae0e280aa6"}
Dec 06 08:02:37 crc kubenswrapper[4895]: I1206 08:02:37.334633 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdg7" event={"ID":"7bbda9df-6f78-4eb9-a32e-97f06b8389d8","Type":"ContainerStarted","Data":"d4b39758b5fb9ef970225c289a8fc21149daeaee6358f6780c2448001d650bf6"}
Dec 06 08:02:37 crc kubenswrapper[4895]: I1206 08:02:37.368148 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8kdg7" podStartSLOduration=1.868179115 podStartE2EDuration="4.368124632s" podCreationTimestamp="2025-12-06 08:02:33 +0000 UTC" firstStartedPulling="2025-12-06 08:02:34.299592359 +0000 UTC m=+3916.700981229" lastFinishedPulling="2025-12-06 08:02:36.799537836 +0000 UTC m=+3919.200926746" observedRunningTime="2025-12-06 08:02:37.358344739 +0000 UTC m=+3919.759733669" watchObservedRunningTime="2025-12-06 08:02:37.368124632 +0000 UTC m=+3919.769513502"
Dec 06 08:02:39 crc kubenswrapper[4895]: I1206 08:02:39.050913 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d"
Dec 06 08:02:39 crc kubenswrapper[4895]: E1206 08:02:39.051353 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 08:02:43 crc kubenswrapper[4895]: I1206 08:02:43.583586 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8kdg7"
Dec 06 08:02:43 crc kubenswrapper[4895]: I1206 08:02:43.584120 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8kdg7"
Dec 06 08:02:43 crc kubenswrapper[4895]: I1206 08:02:43.830558 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8kdg7"
Dec 06 08:02:44 crc kubenswrapper[4895]: I1206 08:02:44.434128 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8kdg7"
Dec 06 08:02:44 crc kubenswrapper[4895]: I1206 08:02:44.507003 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8kdg7"]
Dec 06 08:02:46 crc kubenswrapper[4895]: I1206 08:02:46.408894 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8kdg7" podUID="7bbda9df-6f78-4eb9-a32e-97f06b8389d8" containerName="registry-server" containerID="cri-o://d4b39758b5fb9ef970225c289a8fc21149daeaee6358f6780c2448001d650bf6" gracePeriod=2
Dec 06 08:02:47 crc kubenswrapper[4895]: I1206 08:02:47.994278 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kdg7"
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.122257 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bbda9df-6f78-4eb9-a32e-97f06b8389d8-utilities\") pod \"7bbda9df-6f78-4eb9-a32e-97f06b8389d8\" (UID: \"7bbda9df-6f78-4eb9-a32e-97f06b8389d8\") "
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.122425 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5p7b\" (UniqueName: \"kubernetes.io/projected/7bbda9df-6f78-4eb9-a32e-97f06b8389d8-kube-api-access-j5p7b\") pod \"7bbda9df-6f78-4eb9-a32e-97f06b8389d8\" (UID: \"7bbda9df-6f78-4eb9-a32e-97f06b8389d8\") "
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.122710 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bbda9df-6f78-4eb9-a32e-97f06b8389d8-catalog-content\") pod \"7bbda9df-6f78-4eb9-a32e-97f06b8389d8\" (UID: \"7bbda9df-6f78-4eb9-a32e-97f06b8389d8\") "
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.124125 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bbda9df-6f78-4eb9-a32e-97f06b8389d8-utilities" (OuterVolumeSpecName: "utilities") pod "7bbda9df-6f78-4eb9-a32e-97f06b8389d8" (UID: "7bbda9df-6f78-4eb9-a32e-97f06b8389d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.133179 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bbda9df-6f78-4eb9-a32e-97f06b8389d8-kube-api-access-j5p7b" (OuterVolumeSpecName: "kube-api-access-j5p7b") pod "7bbda9df-6f78-4eb9-a32e-97f06b8389d8" (UID: "7bbda9df-6f78-4eb9-a32e-97f06b8389d8"). InnerVolumeSpecName "kube-api-access-j5p7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.201467 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bbda9df-6f78-4eb9-a32e-97f06b8389d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7bbda9df-6f78-4eb9-a32e-97f06b8389d8" (UID: "7bbda9df-6f78-4eb9-a32e-97f06b8389d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.226089 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bbda9df-6f78-4eb9-a32e-97f06b8389d8-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.226187 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bbda9df-6f78-4eb9-a32e-97f06b8389d8-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.226210 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5p7b\" (UniqueName: \"kubernetes.io/projected/7bbda9df-6f78-4eb9-a32e-97f06b8389d8-kube-api-access-j5p7b\") on node \"crc\" DevicePath \"\""
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.431299 4895 generic.go:334] "Generic (PLEG): container finished" podID="7bbda9df-6f78-4eb9-a32e-97f06b8389d8" containerID="d4b39758b5fb9ef970225c289a8fc21149daeaee6358f6780c2448001d650bf6" exitCode=0
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.431370 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdg7" event={"ID":"7bbda9df-6f78-4eb9-a32e-97f06b8389d8","Type":"ContainerDied","Data":"d4b39758b5fb9ef970225c289a8fc21149daeaee6358f6780c2448001d650bf6"}
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.431422 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdg7" event={"ID":"7bbda9df-6f78-4eb9-a32e-97f06b8389d8","Type":"ContainerDied","Data":"026cd834c625bfcea8ea77b423ceb92cea88dc1463efc4dd7a86f5529cdc2d6f"}
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.431444 4895 scope.go:117] "RemoveContainer" containerID="d4b39758b5fb9ef970225c289a8fc21149daeaee6358f6780c2448001d650bf6"
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.431631 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kdg7"
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.468128 4895 scope.go:117] "RemoveContainer" containerID="ceca4287b620a2df0d41ceb923aa22279257e1203ec34f10033a4dae0e280aa6"
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.495627 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8kdg7"]
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.509718 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8kdg7"]
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.515581 4895 scope.go:117] "RemoveContainer" containerID="96304dcca278d059698cfc776a855cb0212e7e2094cf71abb5bbb1d0dddf65d6"
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.539858 4895 scope.go:117] "RemoveContainer" containerID="d4b39758b5fb9ef970225c289a8fc21149daeaee6358f6780c2448001d650bf6"
Dec 06 08:02:48 crc kubenswrapper[4895]: E1206 08:02:48.540374 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4b39758b5fb9ef970225c289a8fc21149daeaee6358f6780c2448001d650bf6\": container with ID starting with d4b39758b5fb9ef970225c289a8fc21149daeaee6358f6780c2448001d650bf6 not found: ID does not exist" containerID="d4b39758b5fb9ef970225c289a8fc21149daeaee6358f6780c2448001d650bf6"
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.540428 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4b39758b5fb9ef970225c289a8fc21149daeaee6358f6780c2448001d650bf6"} err="failed to get container status \"d4b39758b5fb9ef970225c289a8fc21149daeaee6358f6780c2448001d650bf6\": rpc error: code = NotFound desc = could not find container \"d4b39758b5fb9ef970225c289a8fc21149daeaee6358f6780c2448001d650bf6\": container with ID starting with d4b39758b5fb9ef970225c289a8fc21149daeaee6358f6780c2448001d650bf6 not found: ID does not exist"
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.540466 4895 scope.go:117] "RemoveContainer" containerID="ceca4287b620a2df0d41ceb923aa22279257e1203ec34f10033a4dae0e280aa6"
Dec 06 08:02:48 crc kubenswrapper[4895]: E1206 08:02:48.541039 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceca4287b620a2df0d41ceb923aa22279257e1203ec34f10033a4dae0e280aa6\": container with ID starting with ceca4287b620a2df0d41ceb923aa22279257e1203ec34f10033a4dae0e280aa6 not found: ID does not exist" containerID="ceca4287b620a2df0d41ceb923aa22279257e1203ec34f10033a4dae0e280aa6"
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.541085 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceca4287b620a2df0d41ceb923aa22279257e1203ec34f10033a4dae0e280aa6"} err="failed to get container status \"ceca4287b620a2df0d41ceb923aa22279257e1203ec34f10033a4dae0e280aa6\": rpc error: code = NotFound desc = could not find container \"ceca4287b620a2df0d41ceb923aa22279257e1203ec34f10033a4dae0e280aa6\": container with ID starting with ceca4287b620a2df0d41ceb923aa22279257e1203ec34f10033a4dae0e280aa6 not found: ID does not exist"
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.541112 4895 scope.go:117] "RemoveContainer" containerID="96304dcca278d059698cfc776a855cb0212e7e2094cf71abb5bbb1d0dddf65d6"
Dec 06 08:02:48 crc kubenswrapper[4895]: E1206 08:02:48.541678 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96304dcca278d059698cfc776a855cb0212e7e2094cf71abb5bbb1d0dddf65d6\": container with ID starting with 96304dcca278d059698cfc776a855cb0212e7e2094cf71abb5bbb1d0dddf65d6 not found: ID does not exist" containerID="96304dcca278d059698cfc776a855cb0212e7e2094cf71abb5bbb1d0dddf65d6"
Dec 06 08:02:48 crc kubenswrapper[4895]: I1206 08:02:48.541717 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96304dcca278d059698cfc776a855cb0212e7e2094cf71abb5bbb1d0dddf65d6"} err="failed to get container status \"96304dcca278d059698cfc776a855cb0212e7e2094cf71abb5bbb1d0dddf65d6\": rpc error: code = NotFound desc = could not find container \"96304dcca278d059698cfc776a855cb0212e7e2094cf71abb5bbb1d0dddf65d6\": container with ID starting with 96304dcca278d059698cfc776a855cb0212e7e2094cf71abb5bbb1d0dddf65d6 not found: ID does not exist"
Dec 06 08:02:50 crc kubenswrapper[4895]: I1206 08:02:50.050939 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d"
Dec 06 08:02:50 crc kubenswrapper[4895]: E1206 08:02:50.051425 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 08:02:50 crc kubenswrapper[4895]: I1206 08:02:50.064869 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bbda9df-6f78-4eb9-a32e-97f06b8389d8" path="/var/lib/kubelet/pods/7bbda9df-6f78-4eb9-a32e-97f06b8389d8/volumes"
Dec 06 08:03:05 crc kubenswrapper[4895]: I1206 08:03:05.051017 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d"
Dec 06 08:03:05 crc kubenswrapper[4895]: E1206 08:03:05.052149 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 08:03:17 crc kubenswrapper[4895]: I1206 08:03:17.052064 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d"
Dec 06 08:03:17 crc kubenswrapper[4895]: E1206 08:03:17.053586 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 08:03:31 crc kubenswrapper[4895]: I1206 08:03:31.051827 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d"
Dec 06 08:03:31 crc kubenswrapper[4895]: E1206 08:03:31.052526 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:03:46 crc kubenswrapper[4895]: I1206 08:03:46.052543 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 08:03:46 crc kubenswrapper[4895]: E1206 08:03:46.053708 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:03:59 crc kubenswrapper[4895]: I1206 08:03:59.050737 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 08:03:59 crc kubenswrapper[4895]: E1206 08:03:59.051696 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:04:11 crc kubenswrapper[4895]: I1206 08:04:11.050769 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 08:04:12 crc kubenswrapper[4895]: I1206 08:04:12.237715 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"f14b3cb57afdbf4c27c610c9a22f39fe6c0cf5629db0edb6a3f0f0b88bcc13bf"} Dec 06 08:05:33 crc kubenswrapper[4895]: I1206 08:05:33.593970 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wxj6w"] Dec 06 08:05:33 crc kubenswrapper[4895]: E1206 08:05:33.595209 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bbda9df-6f78-4eb9-a32e-97f06b8389d8" containerName="registry-server" Dec 06 08:05:33 crc kubenswrapper[4895]: I1206 08:05:33.595227 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bbda9df-6f78-4eb9-a32e-97f06b8389d8" containerName="registry-server" Dec 06 08:05:33 crc kubenswrapper[4895]: E1206 08:05:33.595263 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bbda9df-6f78-4eb9-a32e-97f06b8389d8" containerName="extract-utilities" Dec 06 08:05:33 crc kubenswrapper[4895]: I1206 08:05:33.595272 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bbda9df-6f78-4eb9-a32e-97f06b8389d8" containerName="extract-utilities" Dec 06 08:05:33 crc kubenswrapper[4895]: E1206 08:05:33.595289 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bbda9df-6f78-4eb9-a32e-97f06b8389d8" containerName="extract-content" Dec 06 08:05:33 crc kubenswrapper[4895]: I1206 08:05:33.595298 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bbda9df-6f78-4eb9-a32e-97f06b8389d8" 
containerName="extract-content" Dec 06 08:05:33 crc kubenswrapper[4895]: I1206 08:05:33.599893 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bbda9df-6f78-4eb9-a32e-97f06b8389d8" containerName="registry-server" Dec 06 08:05:33 crc kubenswrapper[4895]: I1206 08:05:33.601777 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wxj6w" Dec 06 08:05:33 crc kubenswrapper[4895]: I1206 08:05:33.604289 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wxj6w"] Dec 06 08:05:33 crc kubenswrapper[4895]: I1206 08:05:33.767660 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsdpp\" (UniqueName: \"kubernetes.io/projected/b29c8815-5e96-4ffd-a96e-5b390c8190ec-kube-api-access-xsdpp\") pod \"redhat-marketplace-wxj6w\" (UID: \"b29c8815-5e96-4ffd-a96e-5b390c8190ec\") " pod="openshift-marketplace/redhat-marketplace-wxj6w" Dec 06 08:05:33 crc kubenswrapper[4895]: I1206 08:05:33.768012 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b29c8815-5e96-4ffd-a96e-5b390c8190ec-utilities\") pod \"redhat-marketplace-wxj6w\" (UID: \"b29c8815-5e96-4ffd-a96e-5b390c8190ec\") " pod="openshift-marketplace/redhat-marketplace-wxj6w" Dec 06 08:05:33 crc kubenswrapper[4895]: I1206 08:05:33.768068 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b29c8815-5e96-4ffd-a96e-5b390c8190ec-catalog-content\") pod \"redhat-marketplace-wxj6w\" (UID: \"b29c8815-5e96-4ffd-a96e-5b390c8190ec\") " pod="openshift-marketplace/redhat-marketplace-wxj6w" Dec 06 08:05:33 crc kubenswrapper[4895]: I1206 08:05:33.869159 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsdpp\" (UniqueName: \"kubernetes.io/projected/b29c8815-5e96-4ffd-a96e-5b390c8190ec-kube-api-access-xsdpp\") pod \"redhat-marketplace-wxj6w\" (UID: \"b29c8815-5e96-4ffd-a96e-5b390c8190ec\") " pod="openshift-marketplace/redhat-marketplace-wxj6w" Dec 06 08:05:33 crc kubenswrapper[4895]: I1206 08:05:33.869209 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b29c8815-5e96-4ffd-a96e-5b390c8190ec-utilities\") pod \"redhat-marketplace-wxj6w\" (UID: \"b29c8815-5e96-4ffd-a96e-5b390c8190ec\") " pod="openshift-marketplace/redhat-marketplace-wxj6w" Dec 06 08:05:33 crc kubenswrapper[4895]: I1206 08:05:33.869247 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b29c8815-5e96-4ffd-a96e-5b390c8190ec-catalog-content\") pod \"redhat-marketplace-wxj6w\" (UID: \"b29c8815-5e96-4ffd-a96e-5b390c8190ec\") " pod="openshift-marketplace/redhat-marketplace-wxj6w" Dec 06 08:05:33 crc kubenswrapper[4895]: I1206 08:05:33.869719 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b29c8815-5e96-4ffd-a96e-5b390c8190ec-catalog-content\") pod \"redhat-marketplace-wxj6w\" (UID: \"b29c8815-5e96-4ffd-a96e-5b390c8190ec\") " pod="openshift-marketplace/redhat-marketplace-wxj6w" Dec 06 08:05:33 crc kubenswrapper[4895]: I1206 08:05:33.869906 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b29c8815-5e96-4ffd-a96e-5b390c8190ec-utilities\") pod \"redhat-marketplace-wxj6w\" (UID: \"b29c8815-5e96-4ffd-a96e-5b390c8190ec\") " pod="openshift-marketplace/redhat-marketplace-wxj6w" Dec 06 08:05:33 crc kubenswrapper[4895]: I1206 08:05:33.890079 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsdpp\" (UniqueName: \"kubernetes.io/projected/b29c8815-5e96-4ffd-a96e-5b390c8190ec-kube-api-access-xsdpp\") pod \"redhat-marketplace-wxj6w\" (UID: \"b29c8815-5e96-4ffd-a96e-5b390c8190ec\") " pod="openshift-marketplace/redhat-marketplace-wxj6w" Dec 06 08:05:33 crc kubenswrapper[4895]: I1206 08:05:33.921356 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wxj6w" Dec 06 08:05:34 crc kubenswrapper[4895]: I1206 08:05:34.418762 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wxj6w"] Dec 06 08:05:34 crc kubenswrapper[4895]: I1206 08:05:34.978512 4895 generic.go:334] "Generic (PLEG): container finished" podID="b29c8815-5e96-4ffd-a96e-5b390c8190ec" containerID="a3c64fd6d1f6ae73dcc571757fbfe37a011938dcd1e3cb8fdeeda428dd6e0ae8" exitCode=0 Dec 06 08:05:34 crc kubenswrapper[4895]: I1206 08:05:34.978601 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wxj6w" event={"ID":"b29c8815-5e96-4ffd-a96e-5b390c8190ec","Type":"ContainerDied","Data":"a3c64fd6d1f6ae73dcc571757fbfe37a011938dcd1e3cb8fdeeda428dd6e0ae8"} Dec 06 08:05:34 crc kubenswrapper[4895]: I1206 08:05:34.978887 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wxj6w" event={"ID":"b29c8815-5e96-4ffd-a96e-5b390c8190ec","Type":"ContainerStarted","Data":"7d7885c29b09774d6fa57a0d935b577082d5711ffe164607619cfe59e87a7e2c"} Dec 06 08:05:34 crc kubenswrapper[4895]: I1206 08:05:34.982385 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 08:05:35 crc kubenswrapper[4895]: I1206 08:05:35.990369 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wxj6w" event={"ID":"b29c8815-5e96-4ffd-a96e-5b390c8190ec","Type":"ContainerStarted","Data":"da0074098d73d421ec6589144653e3a2024daeab074d991aeda1696d65c176b9"} Dec 06 08:05:37 crc kubenswrapper[4895]: I1206 08:05:37.002898 4895 generic.go:334] "Generic (PLEG): container finished" podID="b29c8815-5e96-4ffd-a96e-5b390c8190ec" containerID="da0074098d73d421ec6589144653e3a2024daeab074d991aeda1696d65c176b9" exitCode=0 Dec 06 08:05:37 crc kubenswrapper[4895]: I1206 08:05:37.003132 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wxj6w" event={"ID":"b29c8815-5e96-4ffd-a96e-5b390c8190ec","Type":"ContainerDied","Data":"da0074098d73d421ec6589144653e3a2024daeab074d991aeda1696d65c176b9"} Dec 06 08:05:38 crc kubenswrapper[4895]: I1206 08:05:38.015555 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wxj6w" event={"ID":"b29c8815-5e96-4ffd-a96e-5b390c8190ec","Type":"ContainerStarted","Data":"b4dcc28473cd80f0ac4a41833c8e2d7be300980ad690c3bb999ed9246285a4b4"} Dec 06 08:05:38 crc kubenswrapper[4895]: I1206 08:05:38.055137 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wxj6w" podStartSLOduration=2.579779899 podStartE2EDuration="5.055119465s" 
podCreationTimestamp="2025-12-06 08:05:33 +0000 UTC" firstStartedPulling="2025-12-06 08:05:34.982033249 +0000 UTC m=+4097.383422129" lastFinishedPulling="2025-12-06 08:05:37.457372785 +0000 UTC m=+4099.858761695" observedRunningTime="2025-12-06 08:05:38.044338355 +0000 UTC m=+4100.445727245" watchObservedRunningTime="2025-12-06 08:05:38.055119465 +0000 UTC m=+4100.456508335" Dec 06 08:05:41 crc kubenswrapper[4895]: I1206 08:05:41.075511 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bh2j9"] Dec 06 08:05:41 crc kubenswrapper[4895]: I1206 08:05:41.078760 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bh2j9" Dec 06 08:05:41 crc kubenswrapper[4895]: I1206 08:05:41.117421 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bh2j9"] Dec 06 08:05:41 crc kubenswrapper[4895]: I1206 08:05:41.183942 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj4rp\" (UniqueName: \"kubernetes.io/projected/3b654ca9-dd7f-4f7e-ad49-f0557d949f4a-kube-api-access-lj4rp\") pod \"community-operators-bh2j9\" (UID: \"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a\") " pod="openshift-marketplace/community-operators-bh2j9" Dec 06 08:05:41 crc kubenswrapper[4895]: I1206 08:05:41.184049 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b654ca9-dd7f-4f7e-ad49-f0557d949f4a-utilities\") pod \"community-operators-bh2j9\" (UID: \"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a\") " pod="openshift-marketplace/community-operators-bh2j9" Dec 06 08:05:41 crc kubenswrapper[4895]: I1206 08:05:41.184084 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b654ca9-dd7f-4f7e-ad49-f0557d949f4a-catalog-content\") pod \"community-operators-bh2j9\" (UID: \"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a\") " pod="openshift-marketplace/community-operators-bh2j9" Dec 06 08:05:41 crc kubenswrapper[4895]: I1206 08:05:41.285203 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj4rp\" (UniqueName: \"kubernetes.io/projected/3b654ca9-dd7f-4f7e-ad49-f0557d949f4a-kube-api-access-lj4rp\") pod \"community-operators-bh2j9\" (UID: \"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a\") " pod="openshift-marketplace/community-operators-bh2j9" Dec 06 08:05:41 crc kubenswrapper[4895]: I1206 08:05:41.285541 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b654ca9-dd7f-4f7e-ad49-f0557d949f4a-utilities\") pod \"community-operators-bh2j9\" (UID: \"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a\") " pod="openshift-marketplace/community-operators-bh2j9" Dec 06 08:05:41 crc kubenswrapper[4895]: I1206 08:05:41.285715 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b654ca9-dd7f-4f7e-ad49-f0557d949f4a-catalog-content\") pod \"community-operators-bh2j9\" (UID: \"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a\") " pod="openshift-marketplace/community-operators-bh2j9" Dec 06 08:05:41 crc kubenswrapper[4895]: I1206 08:05:41.286177 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3b654ca9-dd7f-4f7e-ad49-f0557d949f4a-utilities\") pod \"community-operators-bh2j9\" (UID: \"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a\") " pod="openshift-marketplace/community-operators-bh2j9" Dec 06 08:05:41 crc kubenswrapper[4895]: I1206 08:05:41.286226 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b654ca9-dd7f-4f7e-ad49-f0557d949f4a-catalog-content\") pod \"community-operators-bh2j9\" (UID: \"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a\") " pod="openshift-marketplace/community-operators-bh2j9" Dec 06 08:05:41 crc kubenswrapper[4895]: I1206 08:05:41.304960 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj4rp\" (UniqueName: \"kubernetes.io/projected/3b654ca9-dd7f-4f7e-ad49-f0557d949f4a-kube-api-access-lj4rp\") pod \"community-operators-bh2j9\" (UID: \"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a\") " pod="openshift-marketplace/community-operators-bh2j9" Dec 06 08:05:41 crc kubenswrapper[4895]: I1206 08:05:41.425415 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bh2j9" Dec 06 08:05:41 crc kubenswrapper[4895]: I1206 08:05:41.960550 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bh2j9"] Dec 06 08:05:41 crc kubenswrapper[4895]: W1206 08:05:41.977170 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b654ca9_dd7f_4f7e_ad49_f0557d949f4a.slice/crio-0f65098be301450800b94a4c17782d3942245aaff13fe289edc531b0cc353099 WatchSource:0}: Error finding container 0f65098be301450800b94a4c17782d3942245aaff13fe289edc531b0cc353099: Status 404 returned error can't find the container with id 0f65098be301450800b94a4c17782d3942245aaff13fe289edc531b0cc353099 Dec 06 08:05:42 crc kubenswrapper[4895]: I1206 08:05:42.048163 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh2j9" event={"ID":"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a","Type":"ContainerStarted","Data":"0f65098be301450800b94a4c17782d3942245aaff13fe289edc531b0cc353099"} Dec 06 08:05:43 crc kubenswrapper[4895]: I1206 08:05:43.060086 4895 generic.go:334] "Generic (PLEG): container finished" podID="3b654ca9-dd7f-4f7e-ad49-f0557d949f4a" containerID="4a1772ed567964bed501a2823a031daa4370522fb796cc7e0c706b6f9e9d4b6f" exitCode=0 Dec 06 08:05:43 crc kubenswrapper[4895]: I1206 08:05:43.060145 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh2j9" event={"ID":"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a","Type":"ContainerDied","Data":"4a1772ed567964bed501a2823a031daa4370522fb796cc7e0c706b6f9e9d4b6f"} Dec 06 08:05:43 crc kubenswrapper[4895]: I1206 08:05:43.922062 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wxj6w" Dec 06 08:05:43 crc kubenswrapper[4895]: I1206 08:05:43.922557 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wxj6w" Dec 06 08:05:44 crc kubenswrapper[4895]: I1206 08:05:44.008885 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wxj6w" Dec 06 08:05:44 crc kubenswrapper[4895]: I1206 08:05:44.134163 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-wxj6w" Dec 06 08:05:45 crc kubenswrapper[4895]: I1206 08:05:45.076935 4895 generic.go:334] "Generic (PLEG): container finished" podID="3b654ca9-dd7f-4f7e-ad49-f0557d949f4a" containerID="1959ce478c5229b836951d467e0f3177302e18cd2bc0939bcf576b850b653b29" exitCode=0 Dec 06 08:05:45 crc kubenswrapper[4895]: I1206 08:05:45.077038 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh2j9" event={"ID":"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a","Type":"ContainerDied","Data":"1959ce478c5229b836951d467e0f3177302e18cd2bc0939bcf576b850b653b29"} Dec 06 08:05:45 crc kubenswrapper[4895]: I1206 08:05:45.172564 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wxj6w"] Dec 06 08:05:46 crc kubenswrapper[4895]: I1206 08:05:46.084345 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wxj6w" podUID="b29c8815-5e96-4ffd-a96e-5b390c8190ec" containerName="registry-server" containerID="cri-o://b4dcc28473cd80f0ac4a41833c8e2d7be300980ad690c3bb999ed9246285a4b4" gracePeriod=2 Dec 06 08:05:46 crc kubenswrapper[4895]: I1206 08:05:46.646420 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wxj6w" Dec 06 08:05:46 crc kubenswrapper[4895]: I1206 08:05:46.774564 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b29c8815-5e96-4ffd-a96e-5b390c8190ec-catalog-content\") pod \"b29c8815-5e96-4ffd-a96e-5b390c8190ec\" (UID: \"b29c8815-5e96-4ffd-a96e-5b390c8190ec\") " Dec 06 08:05:46 crc kubenswrapper[4895]: I1206 08:05:46.774612 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b29c8815-5e96-4ffd-a96e-5b390c8190ec-utilities\") pod \"b29c8815-5e96-4ffd-a96e-5b390c8190ec\" (UID: \"b29c8815-5e96-4ffd-a96e-5b390c8190ec\") " Dec 06 08:05:46 crc kubenswrapper[4895]: I1206 08:05:46.774666 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsdpp\" (UniqueName: \"kubernetes.io/projected/b29c8815-5e96-4ffd-a96e-5b390c8190ec-kube-api-access-xsdpp\") pod \"b29c8815-5e96-4ffd-a96e-5b390c8190ec\" (UID: \"b29c8815-5e96-4ffd-a96e-5b390c8190ec\") " Dec 06 08:05:46 crc kubenswrapper[4895]: I1206 08:05:46.775970 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b29c8815-5e96-4ffd-a96e-5b390c8190ec-utilities" (OuterVolumeSpecName: "utilities") pod "b29c8815-5e96-4ffd-a96e-5b390c8190ec" (UID: "b29c8815-5e96-4ffd-a96e-5b390c8190ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:05:46 crc kubenswrapper[4895]: I1206 08:05:46.789107 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b29c8815-5e96-4ffd-a96e-5b390c8190ec-kube-api-access-xsdpp" (OuterVolumeSpecName: "kube-api-access-xsdpp") pod "b29c8815-5e96-4ffd-a96e-5b390c8190ec" (UID: "b29c8815-5e96-4ffd-a96e-5b390c8190ec"). InnerVolumeSpecName "kube-api-access-xsdpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:05:46 crc kubenswrapper[4895]: I1206 08:05:46.810356 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b29c8815-5e96-4ffd-a96e-5b390c8190ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b29c8815-5e96-4ffd-a96e-5b390c8190ec" (UID: "b29c8815-5e96-4ffd-a96e-5b390c8190ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:05:46 crc kubenswrapper[4895]: I1206 08:05:46.876820 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b29c8815-5e96-4ffd-a96e-5b390c8190ec-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:05:46 crc kubenswrapper[4895]: I1206 08:05:46.876864 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b29c8815-5e96-4ffd-a96e-5b390c8190ec-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:05:46 crc kubenswrapper[4895]: I1206 08:05:46.876880 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsdpp\" (UniqueName: \"kubernetes.io/projected/b29c8815-5e96-4ffd-a96e-5b390c8190ec-kube-api-access-xsdpp\") on node \"crc\" DevicePath \"\"" Dec 06 08:05:47 crc kubenswrapper[4895]: I1206 08:05:47.091396 4895 generic.go:334] "Generic (PLEG): container finished" podID="b29c8815-5e96-4ffd-a96e-5b390c8190ec" containerID="b4dcc28473cd80f0ac4a41833c8e2d7be300980ad690c3bb999ed9246285a4b4" exitCode=0 Dec 06 08:05:47 crc kubenswrapper[4895]: I1206 08:05:47.091541 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wxj6w" Dec 06 08:05:47 crc kubenswrapper[4895]: I1206 08:05:47.091595 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wxj6w" event={"ID":"b29c8815-5e96-4ffd-a96e-5b390c8190ec","Type":"ContainerDied","Data":"b4dcc28473cd80f0ac4a41833c8e2d7be300980ad690c3bb999ed9246285a4b4"} Dec 06 08:05:47 crc kubenswrapper[4895]: I1206 08:05:47.091640 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wxj6w" event={"ID":"b29c8815-5e96-4ffd-a96e-5b390c8190ec","Type":"ContainerDied","Data":"7d7885c29b09774d6fa57a0d935b577082d5711ffe164607619cfe59e87a7e2c"} Dec 06 08:05:47 crc kubenswrapper[4895]: I1206 08:05:47.091662 4895 scope.go:117] "RemoveContainer" containerID="b4dcc28473cd80f0ac4a41833c8e2d7be300980ad690c3bb999ed9246285a4b4" Dec 06 08:05:47 crc kubenswrapper[4895]: I1206 08:05:47.093295 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh2j9" event={"ID":"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a","Type":"ContainerStarted","Data":"6cdf7de1ac84ef26655d89fe8fd609e4b333dfec06b0f9640e1cc3b62632c6bd"} Dec 06 08:05:47 crc kubenswrapper[4895]: I1206 08:05:47.110438 4895 scope.go:117] "RemoveContainer" containerID="da0074098d73d421ec6589144653e3a2024daeab074d991aeda1696d65c176b9" Dec 06 08:05:47 crc kubenswrapper[4895]: I1206 08:05:47.126463 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bh2j9" podStartSLOduration=2.712820293 podStartE2EDuration="6.126447939s" podCreationTimestamp="2025-12-06 08:05:41 +0000 UTC" firstStartedPulling="2025-12-06 08:05:43.0617297 +0000 UTC m=+4105.463118610" lastFinishedPulling="2025-12-06 08:05:46.475357376 +0000 UTC m=+4108.876746256" 
observedRunningTime="2025-12-06 08:05:47.123301515 +0000 UTC m=+4109.524690385" watchObservedRunningTime="2025-12-06 08:05:47.126447939 +0000 UTC m=+4109.527836799" Dec 06 08:05:47 crc kubenswrapper[4895]: I1206 08:05:47.134054 4895 scope.go:117] "RemoveContainer" containerID="a3c64fd6d1f6ae73dcc571757fbfe37a011938dcd1e3cb8fdeeda428dd6e0ae8" Dec 06 08:05:47 crc kubenswrapper[4895]: I1206 08:05:47.145105 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wxj6w"] Dec 06 08:05:47 crc kubenswrapper[4895]: I1206 08:05:47.150286 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wxj6w"] Dec 06 08:05:47 crc kubenswrapper[4895]: I1206 08:05:47.167455 4895 scope.go:117] "RemoveContainer" containerID="b4dcc28473cd80f0ac4a41833c8e2d7be300980ad690c3bb999ed9246285a4b4" Dec 06 08:05:47 crc kubenswrapper[4895]: E1206 08:05:47.167893 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4dcc28473cd80f0ac4a41833c8e2d7be300980ad690c3bb999ed9246285a4b4\": container with ID starting with b4dcc28473cd80f0ac4a41833c8e2d7be300980ad690c3bb999ed9246285a4b4 not found: ID does not exist" containerID="b4dcc28473cd80f0ac4a41833c8e2d7be300980ad690c3bb999ed9246285a4b4" Dec 06 08:05:47 crc kubenswrapper[4895]: I1206 08:05:47.167926 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4dcc28473cd80f0ac4a41833c8e2d7be300980ad690c3bb999ed9246285a4b4"} err="failed to get container status \"b4dcc28473cd80f0ac4a41833c8e2d7be300980ad690c3bb999ed9246285a4b4\": rpc error: code = NotFound desc = could not find container \"b4dcc28473cd80f0ac4a41833c8e2d7be300980ad690c3bb999ed9246285a4b4\": container with ID starting with b4dcc28473cd80f0ac4a41833c8e2d7be300980ad690c3bb999ed9246285a4b4 not found: ID does not exist" Dec 06 08:05:47 crc kubenswrapper[4895]: I1206 08:05:47.167945 4895 scope.go:117] "RemoveContainer" containerID="da0074098d73d421ec6589144653e3a2024daeab074d991aeda1696d65c176b9" Dec 06 08:05:47 crc kubenswrapper[4895]: E1206 08:05:47.168345 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da0074098d73d421ec6589144653e3a2024daeab074d991aeda1696d65c176b9\": container with ID starting with da0074098d73d421ec6589144653e3a2024daeab074d991aeda1696d65c176b9 not found: ID does not exist" containerID="da0074098d73d421ec6589144653e3a2024daeab074d991aeda1696d65c176b9" Dec 06 08:05:47 crc kubenswrapper[4895]: I1206 08:05:47.168366 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da0074098d73d421ec6589144653e3a2024daeab074d991aeda1696d65c176b9"} err="failed to get container status \"da0074098d73d421ec6589144653e3a2024daeab074d991aeda1696d65c176b9\": rpc error: code = NotFound desc = could not find container \"da0074098d73d421ec6589144653e3a2024daeab074d991aeda1696d65c176b9\": container with ID starting with da0074098d73d421ec6589144653e3a2024daeab074d991aeda1696d65c176b9 not found: ID does not exist" Dec 06 08:05:47 crc kubenswrapper[4895]: I1206 08:05:47.168378 4895 scope.go:117] "RemoveContainer" containerID="a3c64fd6d1f6ae73dcc571757fbfe37a011938dcd1e3cb8fdeeda428dd6e0ae8" Dec 06 08:05:47 crc kubenswrapper[4895]: E1206 08:05:47.168595 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a3c64fd6d1f6ae73dcc571757fbfe37a011938dcd1e3cb8fdeeda428dd6e0ae8\": container with ID starting with a3c64fd6d1f6ae73dcc571757fbfe37a011938dcd1e3cb8fdeeda428dd6e0ae8 not found: ID does not exist" containerID="a3c64fd6d1f6ae73dcc571757fbfe37a011938dcd1e3cb8fdeeda428dd6e0ae8" Dec 06 08:05:47 crc kubenswrapper[4895]: I1206 08:05:47.168618 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c64fd6d1f6ae73dcc571757fbfe37a011938dcd1e3cb8fdeeda428dd6e0ae8"} err="failed to get container status \"a3c64fd6d1f6ae73dcc571757fbfe37a011938dcd1e3cb8fdeeda428dd6e0ae8\": rpc error: code = NotFound desc = could not find container \"a3c64fd6d1f6ae73dcc571757fbfe37a011938dcd1e3cb8fdeeda428dd6e0ae8\": container with ID starting with a3c64fd6d1f6ae73dcc571757fbfe37a011938dcd1e3cb8fdeeda428dd6e0ae8 not found: ID does not exist" Dec 06 08:05:48 crc kubenswrapper[4895]: I1206 08:05:48.061806 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b29c8815-5e96-4ffd-a96e-5b390c8190ec" path="/var/lib/kubelet/pods/b29c8815-5e96-4ffd-a96e-5b390c8190ec/volumes" Dec 06 08:05:51 crc kubenswrapper[4895]: I1206 08:05:51.425945 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bh2j9" Dec 06 08:05:51 crc kubenswrapper[4895]: I1206 08:05:51.426407 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bh2j9" Dec 06 08:05:51 crc kubenswrapper[4895]: I1206 08:05:51.474363 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bh2j9" Dec 06 08:05:52 crc kubenswrapper[4895]: I1206 08:05:52.201332 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bh2j9" Dec 06 08:05:52 crc kubenswrapper[4895]: I1206 08:05:52.264976 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bh2j9"] Dec 06 08:05:54 crc kubenswrapper[4895]: I1206 08:05:54.158046 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bh2j9" podUID="3b654ca9-dd7f-4f7e-ad49-f0557d949f4a" containerName="registry-server" containerID="cri-o://6cdf7de1ac84ef26655d89fe8fd609e4b333dfec06b0f9640e1cc3b62632c6bd" gracePeriod=2 Dec 06 08:05:55 crc kubenswrapper[4895]: I1206 08:05:55.658077 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bh2j9" Dec 06 08:05:55 crc kubenswrapper[4895]: I1206 08:05:55.809643 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b654ca9-dd7f-4f7e-ad49-f0557d949f4a-utilities\") pod \"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a\" (UID: \"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a\") " Dec 06 08:05:55 crc kubenswrapper[4895]: I1206 08:05:55.809714 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b654ca9-dd7f-4f7e-ad49-f0557d949f4a-catalog-content\") pod \"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a\" (UID: \"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a\") " Dec 06 08:05:55 crc kubenswrapper[4895]: I1206 08:05:55.809755 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj4rp\" (UniqueName: \"kubernetes.io/projected/3b654ca9-dd7f-4f7e-ad49-f0557d949f4a-kube-api-access-lj4rp\") pod \"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a\" (UID: \"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a\") " Dec 06 08:05:55 crc kubenswrapper[4895]: I1206 08:05:55.812803 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b654ca9-dd7f-4f7e-ad49-f0557d949f4a-utilities" (OuterVolumeSpecName: "utilities") pod "3b654ca9-dd7f-4f7e-ad49-f0557d949f4a" (UID: "3b654ca9-dd7f-4f7e-ad49-f0557d949f4a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:05:55 crc kubenswrapper[4895]: I1206 08:05:55.819974 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b654ca9-dd7f-4f7e-ad49-f0557d949f4a-kube-api-access-lj4rp" (OuterVolumeSpecName: "kube-api-access-lj4rp") pod "3b654ca9-dd7f-4f7e-ad49-f0557d949f4a" (UID: "3b654ca9-dd7f-4f7e-ad49-f0557d949f4a"). InnerVolumeSpecName "kube-api-access-lj4rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:05:55 crc kubenswrapper[4895]: I1206 08:05:55.899510 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b654ca9-dd7f-4f7e-ad49-f0557d949f4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b654ca9-dd7f-4f7e-ad49-f0557d949f4a" (UID: "3b654ca9-dd7f-4f7e-ad49-f0557d949f4a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:05:55 crc kubenswrapper[4895]: I1206 08:05:55.911106 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b654ca9-dd7f-4f7e-ad49-f0557d949f4a-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:05:55 crc kubenswrapper[4895]: I1206 08:05:55.911149 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b654ca9-dd7f-4f7e-ad49-f0557d949f4a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:05:55 crc kubenswrapper[4895]: I1206 08:05:55.911172 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj4rp\" (UniqueName: \"kubernetes.io/projected/3b654ca9-dd7f-4f7e-ad49-f0557d949f4a-kube-api-access-lj4rp\") on node \"crc\" DevicePath \"\"" Dec 06 08:05:56 crc kubenswrapper[4895]: I1206 08:05:56.190507 4895 generic.go:334] "Generic (PLEG): container finished" podID="3b654ca9-dd7f-4f7e-ad49-f0557d949f4a" containerID="6cdf7de1ac84ef26655d89fe8fd609e4b333dfec06b0f9640e1cc3b62632c6bd" exitCode=0 Dec 06 08:05:56 crc kubenswrapper[4895]: I1206 08:05:56.190578 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh2j9" event={"ID":"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a","Type":"ContainerDied","Data":"6cdf7de1ac84ef26655d89fe8fd609e4b333dfec06b0f9640e1cc3b62632c6bd"} Dec 06 08:05:56 crc kubenswrapper[4895]: I1206 08:05:56.190585 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bh2j9" Dec 06 08:05:56 crc kubenswrapper[4895]: I1206 08:05:56.190633 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh2j9" event={"ID":"3b654ca9-dd7f-4f7e-ad49-f0557d949f4a","Type":"ContainerDied","Data":"0f65098be301450800b94a4c17782d3942245aaff13fe289edc531b0cc353099"} Dec 06 08:05:56 crc kubenswrapper[4895]: I1206 08:05:56.190671 4895 scope.go:117] "RemoveContainer" containerID="6cdf7de1ac84ef26655d89fe8fd609e4b333dfec06b0f9640e1cc3b62632c6bd" Dec 06 08:05:56 crc kubenswrapper[4895]: I1206 08:05:56.229883 4895 scope.go:117] "RemoveContainer" containerID="1959ce478c5229b836951d467e0f3177302e18cd2bc0939bcf576b850b653b29" Dec 06 08:05:56 crc kubenswrapper[4895]: I1206 08:05:56.231327 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bh2j9"] Dec 06 08:05:56 crc kubenswrapper[4895]: I1206 08:05:56.259130 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bh2j9"] Dec 06 08:05:56 crc kubenswrapper[4895]: I1206 08:05:56.405115 4895 scope.go:117] "RemoveContainer" containerID="4a1772ed567964bed501a2823a031daa4370522fb796cc7e0c706b6f9e9d4b6f" Dec 06 08:05:56 crc kubenswrapper[4895]: I1206 08:05:56.554652 4895 scope.go:117] "RemoveContainer" containerID="6cdf7de1ac84ef26655d89fe8fd609e4b333dfec06b0f9640e1cc3b62632c6bd" Dec 06 08:05:56 crc kubenswrapper[4895]: E1206 08:05:56.555210 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cdf7de1ac84ef26655d89fe8fd609e4b333dfec06b0f9640e1cc3b62632c6bd\": container with ID starting with 6cdf7de1ac84ef26655d89fe8fd609e4b333dfec06b0f9640e1cc3b62632c6bd not found: ID does not exist" containerID="6cdf7de1ac84ef26655d89fe8fd609e4b333dfec06b0f9640e1cc3b62632c6bd" Dec 06 08:05:56 crc kubenswrapper[4895]: I1206 08:05:56.555269 
4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cdf7de1ac84ef26655d89fe8fd609e4b333dfec06b0f9640e1cc3b62632c6bd"} err="failed to get container status \"6cdf7de1ac84ef26655d89fe8fd609e4b333dfec06b0f9640e1cc3b62632c6bd\": rpc error: code = NotFound desc = could not find container \"6cdf7de1ac84ef26655d89fe8fd609e4b333dfec06b0f9640e1cc3b62632c6bd\": container with ID starting with 6cdf7de1ac84ef26655d89fe8fd609e4b333dfec06b0f9640e1cc3b62632c6bd not found: ID does not exist" Dec 06 08:05:56 crc kubenswrapper[4895]: I1206 08:05:56.555295 4895 scope.go:117] "RemoveContainer" containerID="1959ce478c5229b836951d467e0f3177302e18cd2bc0939bcf576b850b653b29" Dec 06 08:05:56 crc kubenswrapper[4895]: E1206 08:05:56.555764 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1959ce478c5229b836951d467e0f3177302e18cd2bc0939bcf576b850b653b29\": container with ID starting with 1959ce478c5229b836951d467e0f3177302e18cd2bc0939bcf576b850b653b29 not found: ID does not exist" containerID="1959ce478c5229b836951d467e0f3177302e18cd2bc0939bcf576b850b653b29" Dec 06 08:05:56 crc kubenswrapper[4895]: I1206 08:05:56.555806 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1959ce478c5229b836951d467e0f3177302e18cd2bc0939bcf576b850b653b29"} err="failed to get container status \"1959ce478c5229b836951d467e0f3177302e18cd2bc0939bcf576b850b653b29\": rpc error: code = NotFound desc = could not find container \"1959ce478c5229b836951d467e0f3177302e18cd2bc0939bcf576b850b653b29\": container with ID starting with 1959ce478c5229b836951d467e0f3177302e18cd2bc0939bcf576b850b653b29 not found: ID does not exist" Dec 06 08:05:56 crc kubenswrapper[4895]: I1206 08:05:56.555826 4895 scope.go:117] "RemoveContainer" containerID="4a1772ed567964bed501a2823a031daa4370522fb796cc7e0c706b6f9e9d4b6f" Dec 06 08:05:56 crc kubenswrapper[4895]: E1206 08:05:56.556197 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a1772ed567964bed501a2823a031daa4370522fb796cc7e0c706b6f9e9d4b6f\": container with ID starting with 4a1772ed567964bed501a2823a031daa4370522fb796cc7e0c706b6f9e9d4b6f not found: ID does not exist" containerID="4a1772ed567964bed501a2823a031daa4370522fb796cc7e0c706b6f9e9d4b6f" Dec 06 08:05:56 crc kubenswrapper[4895]: I1206 08:05:56.556240 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a1772ed567964bed501a2823a031daa4370522fb796cc7e0c706b6f9e9d4b6f"} err="failed to get container status \"4a1772ed567964bed501a2823a031daa4370522fb796cc7e0c706b6f9e9d4b6f\": rpc error: code = NotFound desc = could not find container \"4a1772ed567964bed501a2823a031daa4370522fb796cc7e0c706b6f9e9d4b6f\": container with ID starting with 4a1772ed567964bed501a2823a031daa4370522fb796cc7e0c706b6f9e9d4b6f not found: ID does not exist" Dec 06 08:05:58 crc kubenswrapper[4895]: I1206 08:05:58.061438 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b654ca9-dd7f-4f7e-ad49-f0557d949f4a" path="/var/lib/kubelet/pods/3b654ca9-dd7f-4f7e-ad49-f0557d949f4a/volumes" Dec 06 08:06:29 crc kubenswrapper[4895]: I1206 08:06:29.696086 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:06:29 crc kubenswrapper[4895]: I1206 08:06:29.696825 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:06:59 crc kubenswrapper[4895]: I1206 08:06:59.696173 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:06:59 crc kubenswrapper[4895]: I1206 08:06:59.696718 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:07:29 crc kubenswrapper[4895]: I1206 08:07:29.695635 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:07:29 crc kubenswrapper[4895]: I1206 08:07:29.696143 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:07:29 crc kubenswrapper[4895]: I1206 08:07:29.696204 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 08:07:29 crc kubenswrapper[4895]: I1206 08:07:29.696942 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f14b3cb57afdbf4c27c610c9a22f39fe6c0cf5629db0edb6a3f0f0b88bcc13bf"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 08:07:29 crc kubenswrapper[4895]: I1206 08:07:29.697008 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://f14b3cb57afdbf4c27c610c9a22f39fe6c0cf5629db0edb6a3f0f0b88bcc13bf" gracePeriod=600 Dec 06 08:07:30 crc kubenswrapper[4895]: I1206 08:07:30.070458 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="f14b3cb57afdbf4c27c610c9a22f39fe6c0cf5629db0edb6a3f0f0b88bcc13bf" exitCode=0 Dec 06 08:07:30 crc kubenswrapper[4895]: I1206 08:07:30.070507 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" 
event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"f14b3cb57afdbf4c27c610c9a22f39fe6c0cf5629db0edb6a3f0f0b88bcc13bf"} Dec 06 08:07:30 crc kubenswrapper[4895]: I1206 08:07:30.070580 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de"} Dec 06 08:07:30 crc kubenswrapper[4895]: I1206 08:07:30.070617 4895 scope.go:117] "RemoveContainer" containerID="9a889b3065100114198517003b8a7093e50585441014aa000ba6a120f4da370d" Dec 06 08:08:48 crc kubenswrapper[4895]: I1206 08:08:48.932063 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vmd7j"] Dec 06 08:08:48 crc kubenswrapper[4895]: E1206 08:08:48.933255 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b654ca9-dd7f-4f7e-ad49-f0557d949f4a" containerName="extract-utilities" Dec 06 08:08:48 crc kubenswrapper[4895]: I1206 08:08:48.933278 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b654ca9-dd7f-4f7e-ad49-f0557d949f4a" containerName="extract-utilities" Dec 06 08:08:48 crc kubenswrapper[4895]: E1206 08:08:48.933300 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b654ca9-dd7f-4f7e-ad49-f0557d949f4a" containerName="extract-content" Dec 06 08:08:48 crc kubenswrapper[4895]: I1206 08:08:48.933312 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b654ca9-dd7f-4f7e-ad49-f0557d949f4a" containerName="extract-content" Dec 06 08:08:48 crc kubenswrapper[4895]: E1206 08:08:48.933337 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29c8815-5e96-4ffd-a96e-5b390c8190ec" containerName="extract-content" Dec 06 08:08:48 crc kubenswrapper[4895]: I1206 08:08:48.933349 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29c8815-5e96-4ffd-a96e-5b390c8190ec" containerName="extract-content" Dec 06 08:08:48 crc kubenswrapper[4895]: E1206 08:08:48.933374 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29c8815-5e96-4ffd-a96e-5b390c8190ec" containerName="extract-utilities" Dec 06 08:08:48 crc kubenswrapper[4895]: I1206 08:08:48.933386 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29c8815-5e96-4ffd-a96e-5b390c8190ec" containerName="extract-utilities" Dec 06 08:08:48 crc kubenswrapper[4895]: E1206 08:08:48.933418 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b654ca9-dd7f-4f7e-ad49-f0557d949f4a" containerName="registry-server" Dec 06 08:08:48 crc kubenswrapper[4895]: I1206 08:08:48.933430 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b654ca9-dd7f-4f7e-ad49-f0557d949f4a" containerName="registry-server" Dec 06 08:08:48 crc kubenswrapper[4895]: E1206 08:08:48.933460 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29c8815-5e96-4ffd-a96e-5b390c8190ec" containerName="registry-server" Dec 06 08:08:48 crc kubenswrapper[4895]: I1206 08:08:48.933499 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29c8815-5e96-4ffd-a96e-5b390c8190ec" containerName="registry-server" Dec 06 08:08:48 crc kubenswrapper[4895]: I1206 08:08:48.933773 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b29c8815-5e96-4ffd-a96e-5b390c8190ec" containerName="registry-server" Dec 06 08:08:48 crc kubenswrapper[4895]: I1206 08:08:48.933802 4895 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3b654ca9-dd7f-4f7e-ad49-f0557d949f4a" containerName="registry-server" Dec 06 08:08:48 crc kubenswrapper[4895]: I1206 08:08:48.935800 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmd7j" Dec 06 08:08:48 crc kubenswrapper[4895]: I1206 08:08:48.951432 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vmd7j"] Dec 06 08:08:49 crc kubenswrapper[4895]: I1206 08:08:49.022190 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f3b2cc2-5360-41eb-ad8f-40789bb53d23-catalog-content\") pod \"redhat-operators-vmd7j\" (UID: \"3f3b2cc2-5360-41eb-ad8f-40789bb53d23\") " pod="openshift-marketplace/redhat-operators-vmd7j" Dec 06 08:08:49 crc kubenswrapper[4895]: I1206 08:08:49.022567 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f3b2cc2-5360-41eb-ad8f-40789bb53d23-utilities\") pod \"redhat-operators-vmd7j\" (UID: \"3f3b2cc2-5360-41eb-ad8f-40789bb53d23\") " pod="openshift-marketplace/redhat-operators-vmd7j" Dec 06 08:08:49 crc kubenswrapper[4895]: I1206 08:08:49.022794 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n949\" (UniqueName: \"kubernetes.io/projected/3f3b2cc2-5360-41eb-ad8f-40789bb53d23-kube-api-access-2n949\") pod \"redhat-operators-vmd7j\" (UID: \"3f3b2cc2-5360-41eb-ad8f-40789bb53d23\") " pod="openshift-marketplace/redhat-operators-vmd7j" Dec 06 08:08:49 crc kubenswrapper[4895]: I1206 08:08:49.124665 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f3b2cc2-5360-41eb-ad8f-40789bb53d23-catalog-content\") pod \"redhat-operators-vmd7j\" (UID: \"3f3b2cc2-5360-41eb-ad8f-40789bb53d23\") " pod="openshift-marketplace/redhat-operators-vmd7j" Dec 06 08:08:49 crc kubenswrapper[4895]: I1206 08:08:49.125003 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f3b2cc2-5360-41eb-ad8f-40789bb53d23-utilities\") pod \"redhat-operators-vmd7j\" (UID: \"3f3b2cc2-5360-41eb-ad8f-40789bb53d23\") " pod="openshift-marketplace/redhat-operators-vmd7j" Dec 06 08:08:49 crc kubenswrapper[4895]: I1206 08:08:49.125254 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n949\" (UniqueName: \"kubernetes.io/projected/3f3b2cc2-5360-41eb-ad8f-40789bb53d23-kube-api-access-2n949\") pod \"redhat-operators-vmd7j\" (UID: \"3f3b2cc2-5360-41eb-ad8f-40789bb53d23\") " pod="openshift-marketplace/redhat-operators-vmd7j" Dec 06 08:08:49 crc kubenswrapper[4895]: I1206 08:08:49.125570 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f3b2cc2-5360-41eb-ad8f-40789bb53d23-catalog-content\") pod \"redhat-operators-vmd7j\" (UID: \"3f3b2cc2-5360-41eb-ad8f-40789bb53d23\") " pod="openshift-marketplace/redhat-operators-vmd7j" Dec 06 08:08:49 crc kubenswrapper[4895]: I1206 08:08:49.126516 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f3b2cc2-5360-41eb-ad8f-40789bb53d23-utilities\") pod \"redhat-operators-vmd7j\" (UID: \"3f3b2cc2-5360-41eb-ad8f-40789bb53d23\") " 
pod="openshift-marketplace/redhat-operators-vmd7j" Dec 06 08:08:49 crc kubenswrapper[4895]: I1206 08:08:49.153205 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n949\" (UniqueName: \"kubernetes.io/projected/3f3b2cc2-5360-41eb-ad8f-40789bb53d23-kube-api-access-2n949\") pod \"redhat-operators-vmd7j\" (UID: \"3f3b2cc2-5360-41eb-ad8f-40789bb53d23\") " pod="openshift-marketplace/redhat-operators-vmd7j" Dec 06 08:08:49 crc kubenswrapper[4895]: I1206 08:08:49.266161 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmd7j" Dec 06 08:08:49 crc kubenswrapper[4895]: I1206 08:08:49.690113 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vmd7j"] Dec 06 08:08:49 crc kubenswrapper[4895]: I1206 08:08:49.873807 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmd7j" event={"ID":"3f3b2cc2-5360-41eb-ad8f-40789bb53d23","Type":"ContainerStarted","Data":"b085d424db937593b597fad49b3d72e8c7b749628707b8ab41e95ec9dd7960f8"} Dec 06 08:08:49 crc kubenswrapper[4895]: I1206 08:08:49.874099 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmd7j" event={"ID":"3f3b2cc2-5360-41eb-ad8f-40789bb53d23","Type":"ContainerStarted","Data":"754da99c31997d704609dc69f54c64e77ef62302011066ea7ed9b9b479e10faf"} Dec 06 08:08:50 crc kubenswrapper[4895]: I1206 08:08:50.885618 4895 generic.go:334] "Generic (PLEG): container finished" podID="3f3b2cc2-5360-41eb-ad8f-40789bb53d23" containerID="b085d424db937593b597fad49b3d72e8c7b749628707b8ab41e95ec9dd7960f8" exitCode=0 Dec 06 08:08:50 crc kubenswrapper[4895]: I1206 08:08:50.885716 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmd7j" event={"ID":"3f3b2cc2-5360-41eb-ad8f-40789bb53d23","Type":"ContainerDied","Data":"b085d424db937593b597fad49b3d72e8c7b749628707b8ab41e95ec9dd7960f8"} Dec 06 08:08:50 crc kubenswrapper[4895]: I1206 08:08:50.886027 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmd7j" event={"ID":"3f3b2cc2-5360-41eb-ad8f-40789bb53d23","Type":"ContainerStarted","Data":"cc894e4d47bac97d42c237f2f7bb25035cf2f57ff99029a9da05a490ceef2557"} Dec 06 08:08:51 crc kubenswrapper[4895]: I1206 08:08:51.897409 4895 generic.go:334] "Generic (PLEG): container finished" podID="3f3b2cc2-5360-41eb-ad8f-40789bb53d23" containerID="cc894e4d47bac97d42c237f2f7bb25035cf2f57ff99029a9da05a490ceef2557" exitCode=0 Dec 06 08:08:51 crc kubenswrapper[4895]: I1206 08:08:51.897459 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmd7j" event={"ID":"3f3b2cc2-5360-41eb-ad8f-40789bb53d23","Type":"ContainerDied","Data":"cc894e4d47bac97d42c237f2f7bb25035cf2f57ff99029a9da05a490ceef2557"} Dec 06 08:08:52 crc kubenswrapper[4895]: I1206 08:08:52.908445 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmd7j" event={"ID":"3f3b2cc2-5360-41eb-ad8f-40789bb53d23","Type":"ContainerStarted","Data":"a136dd0ec24bd4d0dafb533732d14a3328739128edd4ff1c30a21cb7ee7f7fa1"} Dec 06 08:08:52 crc kubenswrapper[4895]: I1206 08:08:52.944506 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vmd7j" podStartSLOduration=2.50392978 podStartE2EDuration="4.94442229s" podCreationTimestamp="2025-12-06 08:08:48 +0000 UTC" 
firstStartedPulling="2025-12-06 08:08:49.875372622 +0000 UTC m=+4292.276761482" lastFinishedPulling="2025-12-06 08:08:52.315865122 +0000 UTC m=+4294.717253992" observedRunningTime="2025-12-06 08:08:52.935022818 +0000 UTC m=+4295.336411748" watchObservedRunningTime="2025-12-06 08:08:52.94442229 +0000 UTC m=+4295.345811190" Dec 06 08:08:59 crc kubenswrapper[4895]: I1206 08:08:59.266530 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vmd7j" Dec 06 08:08:59 crc kubenswrapper[4895]: I1206 08:08:59.267176 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vmd7j" Dec 06 08:08:59 crc kubenswrapper[4895]: I1206 08:08:59.325182 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vmd7j" Dec 06 08:09:00 crc kubenswrapper[4895]: I1206 08:09:00.041669 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vmd7j" Dec 06 08:09:00 crc kubenswrapper[4895]: I1206 08:09:00.104977 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vmd7j"] Dec 06 08:09:01 crc kubenswrapper[4895]: I1206 08:09:01.994068 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vmd7j" podUID="3f3b2cc2-5360-41eb-ad8f-40789bb53d23" containerName="registry-server" containerID="cri-o://a136dd0ec24bd4d0dafb533732d14a3328739128edd4ff1c30a21cb7ee7f7fa1" gracePeriod=2 Dec 06 08:09:05 crc kubenswrapper[4895]: I1206 08:09:05.030812 4895 generic.go:334] "Generic (PLEG): container finished" podID="3f3b2cc2-5360-41eb-ad8f-40789bb53d23" containerID="a136dd0ec24bd4d0dafb533732d14a3328739128edd4ff1c30a21cb7ee7f7fa1" exitCode=0 Dec 06 08:09:05 crc kubenswrapper[4895]: I1206 08:09:05.031024 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmd7j" event={"ID":"3f3b2cc2-5360-41eb-ad8f-40789bb53d23","Type":"ContainerDied","Data":"a136dd0ec24bd4d0dafb533732d14a3328739128edd4ff1c30a21cb7ee7f7fa1"} Dec 06 08:09:05 crc kubenswrapper[4895]: I1206 08:09:05.251364 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vmd7j" Dec 06 08:09:05 crc kubenswrapper[4895]: I1206 08:09:05.415114 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f3b2cc2-5360-41eb-ad8f-40789bb53d23-utilities\") pod \"3f3b2cc2-5360-41eb-ad8f-40789bb53d23\" (UID: \"3f3b2cc2-5360-41eb-ad8f-40789bb53d23\") " Dec 06 08:09:05 crc kubenswrapper[4895]: I1206 08:09:05.415321 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n949\" (UniqueName: \"kubernetes.io/projected/3f3b2cc2-5360-41eb-ad8f-40789bb53d23-kube-api-access-2n949\") pod \"3f3b2cc2-5360-41eb-ad8f-40789bb53d23\" (UID: \"3f3b2cc2-5360-41eb-ad8f-40789bb53d23\") " Dec 06 08:09:05 crc kubenswrapper[4895]: I1206 08:09:05.415389 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f3b2cc2-5360-41eb-ad8f-40789bb53d23-catalog-content\") pod \"3f3b2cc2-5360-41eb-ad8f-40789bb53d23\" (UID: \"3f3b2cc2-5360-41eb-ad8f-40789bb53d23\") " Dec 06 08:09:05 crc kubenswrapper[4895]: I1206 08:09:05.416100 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f3b2cc2-5360-41eb-ad8f-40789bb53d23-utilities" (OuterVolumeSpecName: "utilities") pod "3f3b2cc2-5360-41eb-ad8f-40789bb53d23" (UID: "3f3b2cc2-5360-41eb-ad8f-40789bb53d23"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:09:05 crc kubenswrapper[4895]: I1206 08:09:05.421633 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f3b2cc2-5360-41eb-ad8f-40789bb53d23-kube-api-access-2n949" (OuterVolumeSpecName: "kube-api-access-2n949") pod "3f3b2cc2-5360-41eb-ad8f-40789bb53d23" (UID: "3f3b2cc2-5360-41eb-ad8f-40789bb53d23"). InnerVolumeSpecName "kube-api-access-2n949". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:09:05 crc kubenswrapper[4895]: I1206 08:09:05.517099 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f3b2cc2-5360-41eb-ad8f-40789bb53d23-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:09:05 crc kubenswrapper[4895]: I1206 08:09:05.517706 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n949\" (UniqueName: \"kubernetes.io/projected/3f3b2cc2-5360-41eb-ad8f-40789bb53d23-kube-api-access-2n949\") on node \"crc\" DevicePath \"\"" Dec 06 08:09:05 crc kubenswrapper[4895]: I1206 08:09:05.524238 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f3b2cc2-5360-41eb-ad8f-40789bb53d23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f3b2cc2-5360-41eb-ad8f-40789bb53d23" (UID: "3f3b2cc2-5360-41eb-ad8f-40789bb53d23"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:09:05 crc kubenswrapper[4895]: I1206 08:09:05.618907 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f3b2cc2-5360-41eb-ad8f-40789bb53d23-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:09:06 crc kubenswrapper[4895]: I1206 08:09:06.039498 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmd7j" event={"ID":"3f3b2cc2-5360-41eb-ad8f-40789bb53d23","Type":"ContainerDied","Data":"754da99c31997d704609dc69f54c64e77ef62302011066ea7ed9b9b479e10faf"} Dec 06 08:09:06 crc kubenswrapper[4895]: I1206 08:09:06.039542 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmd7j" Dec 06 08:09:06 crc kubenswrapper[4895]: I1206 08:09:06.039559 4895 scope.go:117] "RemoveContainer" containerID="a136dd0ec24bd4d0dafb533732d14a3328739128edd4ff1c30a21cb7ee7f7fa1" Dec 06 08:09:06 crc kubenswrapper[4895]: I1206 08:09:06.061943 4895 scope.go:117] "RemoveContainer" containerID="cc894e4d47bac97d42c237f2f7bb25035cf2f57ff99029a9da05a490ceef2557" Dec 06 08:09:06 crc kubenswrapper[4895]: I1206 08:09:06.074362 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vmd7j"] Dec 06 08:09:06 crc kubenswrapper[4895]: I1206 08:09:06.103989 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vmd7j"] Dec 06 08:09:06 crc kubenswrapper[4895]: I1206 08:09:06.120943 4895 scope.go:117] "RemoveContainer" containerID="b085d424db937593b597fad49b3d72e8c7b749628707b8ab41e95ec9dd7960f8" Dec 06 08:09:08 crc kubenswrapper[4895]: I1206 08:09:08.059683 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f3b2cc2-5360-41eb-ad8f-40789bb53d23" path="/var/lib/kubelet/pods/3f3b2cc2-5360-41eb-ad8f-40789bb53d23/volumes" Dec 06 08:09:59 crc kubenswrapper[4895]: I1206 08:09:59.696070 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:09:59 crc kubenswrapper[4895]: I1206 08:09:59.696644 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:10:29 crc kubenswrapper[4895]: I1206 08:10:29.695754 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:10:29 crc kubenswrapper[4895]: I1206 08:10:29.696548 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:10:59 crc kubenswrapper[4895]: I1206 08:10:59.695908 4895 patch_prober.go:28] 
interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:10:59 crc kubenswrapper[4895]: I1206 08:10:59.696664 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:10:59 crc kubenswrapper[4895]: I1206 08:10:59.696732 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 08:10:59 crc kubenswrapper[4895]: I1206 08:10:59.697729 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 08:10:59 crc kubenswrapper[4895]: I1206 08:10:59.697822 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" gracePeriod=600 Dec 06 08:10:59 crc kubenswrapper[4895]: E1206 08:10:59.823468 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:11:00 crc kubenswrapper[4895]: I1206 08:11:00.080025 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" exitCode=0 Dec 06 08:11:00 crc kubenswrapper[4895]: I1206 08:11:00.080097 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de"} Dec 06 08:11:00 crc kubenswrapper[4895]: I1206 08:11:00.080158 4895 scope.go:117] "RemoveContainer" containerID="f14b3cb57afdbf4c27c610c9a22f39fe6c0cf5629db0edb6a3f0f0b88bcc13bf" Dec 06 08:11:00 crc kubenswrapper[4895]: I1206 08:11:00.080819 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:11:00 crc kubenswrapper[4895]: E1206 08:11:00.081149 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:11:15 crc kubenswrapper[4895]: I1206 08:11:15.050961 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:11:15 crc kubenswrapper[4895]: E1206 08:11:15.051933 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:11:26 crc kubenswrapper[4895]: I1206 08:11:26.050719 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:11:26 crc kubenswrapper[4895]: E1206 08:11:26.051404 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:11:41 crc kubenswrapper[4895]: I1206 08:11:41.051269 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:11:41 crc kubenswrapper[4895]: E1206 08:11:41.051987 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:11:55 crc kubenswrapper[4895]: I1206 08:11:55.051438 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:11:55 crc kubenswrapper[4895]: E1206 08:11:55.054098 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:12:09 crc kubenswrapper[4895]: I1206 08:12:09.051327 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:12:09 crc kubenswrapper[4895]: E1206 08:12:09.052532 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:12:21 crc kubenswrapper[4895]: I1206 08:12:21.050656 4895 
scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:12:21 crc kubenswrapper[4895]: E1206 08:12:21.051688 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:12:32 crc kubenswrapper[4895]: I1206 08:12:32.051075 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:12:32 crc kubenswrapper[4895]: E1206 08:12:32.051909 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:12:43 crc kubenswrapper[4895]: I1206 08:12:43.050015 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:12:43 crc kubenswrapper[4895]: E1206 08:12:43.050708 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:12:43 crc kubenswrapper[4895]: I1206 08:12:43.756107 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l9dmk"] Dec 06 08:12:43 crc kubenswrapper[4895]: E1206 08:12:43.756454 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f3b2cc2-5360-41eb-ad8f-40789bb53d23" containerName="registry-server" Dec 06 08:12:43 crc kubenswrapper[4895]: I1206 08:12:43.756486 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f3b2cc2-5360-41eb-ad8f-40789bb53d23" containerName="registry-server" Dec 06 08:12:43 crc kubenswrapper[4895]: E1206 08:12:43.756504 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f3b2cc2-5360-41eb-ad8f-40789bb53d23" containerName="extract-utilities" Dec 06 08:12:43 crc kubenswrapper[4895]: I1206 08:12:43.756510 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f3b2cc2-5360-41eb-ad8f-40789bb53d23" containerName="extract-utilities" Dec 06 08:12:43 crc kubenswrapper[4895]: E1206 08:12:43.756526 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f3b2cc2-5360-41eb-ad8f-40789bb53d23" containerName="extract-content" Dec 06 08:12:43 crc kubenswrapper[4895]: I1206 08:12:43.756532 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f3b2cc2-5360-41eb-ad8f-40789bb53d23" containerName="extract-content" Dec 06 08:12:43 crc kubenswrapper[4895]: I1206 08:12:43.756662 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f3b2cc2-5360-41eb-ad8f-40789bb53d23" containerName="registry-server" Dec 06 08:12:43 crc 
kubenswrapper[4895]: I1206 08:12:43.757686 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9dmk" Dec 06 08:12:43 crc kubenswrapper[4895]: I1206 08:12:43.769254 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l9dmk"] Dec 06 08:12:43 crc kubenswrapper[4895]: I1206 08:12:43.816122 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgzmz\" (UniqueName: \"kubernetes.io/projected/441907ae-e969-4f34-9956-481090900d69-kube-api-access-sgzmz\") pod \"certified-operators-l9dmk\" (UID: \"441907ae-e969-4f34-9956-481090900d69\") " pod="openshift-marketplace/certified-operators-l9dmk" Dec 06 08:12:43 crc kubenswrapper[4895]: I1206 08:12:43.816334 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/441907ae-e969-4f34-9956-481090900d69-utilities\") pod \"certified-operators-l9dmk\" (UID: \"441907ae-e969-4f34-9956-481090900d69\") " pod="openshift-marketplace/certified-operators-l9dmk" Dec 06 08:12:43 crc kubenswrapper[4895]: I1206 08:12:43.816649 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/441907ae-e969-4f34-9956-481090900d69-catalog-content\") pod \"certified-operators-l9dmk\" (UID: \"441907ae-e969-4f34-9956-481090900d69\") " pod="openshift-marketplace/certified-operators-l9dmk" Dec 06 08:12:43 crc kubenswrapper[4895]: I1206 08:12:43.918415 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgzmz\" (UniqueName: \"kubernetes.io/projected/441907ae-e969-4f34-9956-481090900d69-kube-api-access-sgzmz\") pod \"certified-operators-l9dmk\" (UID: \"441907ae-e969-4f34-9956-481090900d69\") " pod="openshift-marketplace/certified-operators-l9dmk" Dec 06 08:12:43 crc kubenswrapper[4895]: I1206 08:12:43.918548 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/441907ae-e969-4f34-9956-481090900d69-utilities\") pod \"certified-operators-l9dmk\" (UID: \"441907ae-e969-4f34-9956-481090900d69\") " pod="openshift-marketplace/certified-operators-l9dmk" Dec 06 08:12:43 crc kubenswrapper[4895]: I1206 08:12:43.918601 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/441907ae-e969-4f34-9956-481090900d69-catalog-content\") pod \"certified-operators-l9dmk\" (UID: \"441907ae-e969-4f34-9956-481090900d69\") " pod="openshift-marketplace/certified-operators-l9dmk" Dec 06 08:12:43 crc kubenswrapper[4895]: I1206 08:12:43.919231 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/441907ae-e969-4f34-9956-481090900d69-utilities\") pod \"certified-operators-l9dmk\" (UID: \"441907ae-e969-4f34-9956-481090900d69\") " pod="openshift-marketplace/certified-operators-l9dmk" Dec 06 08:12:43 crc kubenswrapper[4895]: I1206 08:12:43.919348 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/441907ae-e969-4f34-9956-481090900d69-catalog-content\") pod \"certified-operators-l9dmk\" (UID: \"441907ae-e969-4f34-9956-481090900d69\") " pod="openshift-marketplace/certified-operators-l9dmk" Dec 
06 08:12:43 crc kubenswrapper[4895]: I1206 08:12:43.938895 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgzmz\" (UniqueName: \"kubernetes.io/projected/441907ae-e969-4f34-9956-481090900d69-kube-api-access-sgzmz\") pod \"certified-operators-l9dmk\" (UID: \"441907ae-e969-4f34-9956-481090900d69\") " pod="openshift-marketplace/certified-operators-l9dmk" Dec 06 08:12:44 crc kubenswrapper[4895]: I1206 08:12:44.087697 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9dmk" Dec 06 08:12:44 crc kubenswrapper[4895]: I1206 08:12:44.406414 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l9dmk"] Dec 06 08:12:45 crc kubenswrapper[4895]: I1206 08:12:45.381195 4895 generic.go:334] "Generic (PLEG): container finished" podID="441907ae-e969-4f34-9956-481090900d69" containerID="d7b28d20b29cb80ec1c634dc83c236a30da47887d207654e2065f196e0352ddc" exitCode=0 Dec 06 08:12:45 crc kubenswrapper[4895]: I1206 08:12:45.381467 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9dmk" event={"ID":"441907ae-e969-4f34-9956-481090900d69","Type":"ContainerDied","Data":"d7b28d20b29cb80ec1c634dc83c236a30da47887d207654e2065f196e0352ddc"} Dec 06 08:12:45 crc kubenswrapper[4895]: I1206 08:12:45.381744 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9dmk" event={"ID":"441907ae-e969-4f34-9956-481090900d69","Type":"ContainerStarted","Data":"93743c6a2fdfc0acdf3fea4353902a44717dc61a08dcd389ed157f3b17a14d63"} Dec 06 08:12:45 crc kubenswrapper[4895]: I1206 08:12:45.383734 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 08:12:49 crc kubenswrapper[4895]: I1206 08:12:49.421393 4895 generic.go:334] "Generic (PLEG): container finished" podID="441907ae-e969-4f34-9956-481090900d69" containerID="05af2d8b2355a264a5c8a62f7769245b141989925a4cd1538aaf26438a023b17" exitCode=0 Dec 06 08:12:49 crc kubenswrapper[4895]: I1206 08:12:49.421488 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9dmk" event={"ID":"441907ae-e969-4f34-9956-481090900d69","Type":"ContainerDied","Data":"05af2d8b2355a264a5c8a62f7769245b141989925a4cd1538aaf26438a023b17"} Dec 06 08:12:52 crc kubenswrapper[4895]: I1206 08:12:52.453167 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9dmk" event={"ID":"441907ae-e969-4f34-9956-481090900d69","Type":"ContainerStarted","Data":"c73de4f90cb6739083878cc963c9d7492ea919cb552ac13abf999333b36fec33"} Dec 06 08:12:52 crc kubenswrapper[4895]: I1206 08:12:52.471553 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l9dmk" podStartSLOduration=2.6755331509999998 podStartE2EDuration="9.471533142s" podCreationTimestamp="2025-12-06 08:12:43 +0000 UTC" firstStartedPulling="2025-12-06 08:12:45.383353687 +0000 UTC m=+4527.784742557" lastFinishedPulling="2025-12-06 08:12:52.179353688 +0000 UTC m=+4534.580742548" observedRunningTime="2025-12-06 08:12:52.468349037 +0000 UTC m=+4534.869737917" watchObservedRunningTime="2025-12-06 08:12:52.471533142 +0000 UTC m=+4534.872922012" Dec 06 08:12:54 crc kubenswrapper[4895]: I1206 08:12:54.089525 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-l9dmk" Dec 06 08:12:54 crc kubenswrapper[4895]: I1206 08:12:54.089567 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l9dmk" Dec 06 08:12:54 crc kubenswrapper[4895]: I1206 08:12:54.424026 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l9dmk" Dec 06 08:12:56 crc kubenswrapper[4895]: I1206 08:12:56.050849 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:12:56 crc kubenswrapper[4895]: E1206 08:12:56.051410 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:13:04 crc kubenswrapper[4895]: I1206 08:13:04.131585 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l9dmk" Dec 06 08:13:08 crc kubenswrapper[4895]: I1206 08:13:08.055543 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:13:08 crc kubenswrapper[4895]: E1206 08:13:08.057286 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:13:08 crc kubenswrapper[4895]: I1206 08:13:08.159822 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l9dmk"] Dec 06 08:13:08 crc kubenswrapper[4895]: I1206 08:13:08.160131 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l9dmk" podUID="441907ae-e969-4f34-9956-481090900d69" containerName="registry-server" containerID="cri-o://c73de4f90cb6739083878cc963c9d7492ea919cb552ac13abf999333b36fec33" gracePeriod=2 Dec 06 08:13:08 crc kubenswrapper[4895]: I1206 08:13:08.592145 4895 generic.go:334] "Generic (PLEG): container finished" podID="441907ae-e969-4f34-9956-481090900d69" containerID="c73de4f90cb6739083878cc963c9d7492ea919cb552ac13abf999333b36fec33" exitCode=0 Dec 06 08:13:08 crc kubenswrapper[4895]: I1206 08:13:08.592201 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9dmk" event={"ID":"441907ae-e969-4f34-9956-481090900d69","Type":"ContainerDied","Data":"c73de4f90cb6739083878cc963c9d7492ea919cb552ac13abf999333b36fec33"} Dec 06 08:13:09 crc kubenswrapper[4895]: I1206 08:13:09.124393 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l9dmk" Dec 06 08:13:09 crc kubenswrapper[4895]: I1206 08:13:09.300043 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/441907ae-e969-4f34-9956-481090900d69-utilities\") pod \"441907ae-e969-4f34-9956-481090900d69\" (UID: \"441907ae-e969-4f34-9956-481090900d69\") " Dec 06 08:13:09 crc kubenswrapper[4895]: I1206 08:13:09.300170 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgzmz\" (UniqueName: \"kubernetes.io/projected/441907ae-e969-4f34-9956-481090900d69-kube-api-access-sgzmz\") pod \"441907ae-e969-4f34-9956-481090900d69\" (UID: \"441907ae-e969-4f34-9956-481090900d69\") " Dec 06 08:13:09 crc kubenswrapper[4895]: I1206 08:13:09.300202 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/441907ae-e969-4f34-9956-481090900d69-catalog-content\") pod \"441907ae-e969-4f34-9956-481090900d69\" (UID: \"441907ae-e969-4f34-9956-481090900d69\") " Dec 06 08:13:09 crc kubenswrapper[4895]: I1206 08:13:09.301583 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/441907ae-e969-4f34-9956-481090900d69-utilities" (OuterVolumeSpecName: "utilities") pod "441907ae-e969-4f34-9956-481090900d69" (UID: "441907ae-e969-4f34-9956-481090900d69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:13:09 crc kubenswrapper[4895]: I1206 08:13:09.301870 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/441907ae-e969-4f34-9956-481090900d69-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:13:09 crc kubenswrapper[4895]: I1206 08:13:09.305916 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/441907ae-e969-4f34-9956-481090900d69-kube-api-access-sgzmz" (OuterVolumeSpecName: "kube-api-access-sgzmz") pod "441907ae-e969-4f34-9956-481090900d69" (UID: "441907ae-e969-4f34-9956-481090900d69"). InnerVolumeSpecName "kube-api-access-sgzmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:13:09 crc kubenswrapper[4895]: I1206 08:13:09.370551 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/441907ae-e969-4f34-9956-481090900d69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "441907ae-e969-4f34-9956-481090900d69" (UID: "441907ae-e969-4f34-9956-481090900d69"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:13:09 crc kubenswrapper[4895]: I1206 08:13:09.402443 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgzmz\" (UniqueName: \"kubernetes.io/projected/441907ae-e969-4f34-9956-481090900d69-kube-api-access-sgzmz\") on node \"crc\" DevicePath \"\"" Dec 06 08:13:09 crc kubenswrapper[4895]: I1206 08:13:09.402503 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/441907ae-e969-4f34-9956-481090900d69-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:13:09 crc kubenswrapper[4895]: I1206 08:13:09.604280 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9dmk" event={"ID":"441907ae-e969-4f34-9956-481090900d69","Type":"ContainerDied","Data":"93743c6a2fdfc0acdf3fea4353902a44717dc61a08dcd389ed157f3b17a14d63"} Dec 06 08:13:09 crc kubenswrapper[4895]: I1206 08:13:09.604342 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9dmk" Dec 06 08:13:09 crc kubenswrapper[4895]: I1206 08:13:09.604351 4895 scope.go:117] "RemoveContainer" containerID="c73de4f90cb6739083878cc963c9d7492ea919cb552ac13abf999333b36fec33" Dec 06 08:13:09 crc kubenswrapper[4895]: I1206 08:13:09.633348 4895 scope.go:117] "RemoveContainer" containerID="05af2d8b2355a264a5c8a62f7769245b141989925a4cd1538aaf26438a023b17" Dec 06 08:13:09 crc kubenswrapper[4895]: I1206 08:13:09.638019 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l9dmk"] Dec 06 08:13:09 crc kubenswrapper[4895]: I1206 08:13:09.647283 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l9dmk"] Dec 06 08:13:09 crc kubenswrapper[4895]: I1206 08:13:09.674138 4895 scope.go:117] "RemoveContainer" containerID="d7b28d20b29cb80ec1c634dc83c236a30da47887d207654e2065f196e0352ddc" Dec 06 08:13:10 crc kubenswrapper[4895]: I1206 08:13:10.059236 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="441907ae-e969-4f34-9956-481090900d69" path="/var/lib/kubelet/pods/441907ae-e969-4f34-9956-481090900d69/volumes" Dec 06 08:13:22 crc kubenswrapper[4895]: I1206 08:13:22.050300 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:13:22 crc kubenswrapper[4895]: E1206 08:13:22.051262 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:13:37 crc kubenswrapper[4895]: I1206 08:13:37.051046 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:13:37 crc kubenswrapper[4895]: E1206 08:13:37.051730 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:13:49 crc kubenswrapper[4895]: I1206 08:13:49.050698 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:13:49 crc kubenswrapper[4895]: E1206 08:13:49.051285 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:14:04 crc kubenswrapper[4895]: I1206 08:14:04.051212 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:14:04 crc kubenswrapper[4895]: E1206 08:14:04.052176 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:14:18 crc kubenswrapper[4895]: I1206 08:14:18.054268 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:14:18 crc kubenswrapper[4895]: E1206 08:14:18.055097 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:14:30 crc kubenswrapper[4895]: I1206 08:14:30.051225 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:14:30 crc kubenswrapper[4895]: E1206 08:14:30.051998 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:14:45 crc kubenswrapper[4895]: I1206 08:14:45.050853 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:14:45 crc kubenswrapper[4895]: E1206 08:14:45.051764 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:14:59 crc kubenswrapper[4895]: I1206 08:14:59.051242 4895 
scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:14:59 crc kubenswrapper[4895]: E1206 08:14:59.052227 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:15:00 crc kubenswrapper[4895]: I1206 08:15:00.178804 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8"] Dec 06 08:15:00 crc kubenswrapper[4895]: E1206 08:15:00.179935 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="441907ae-e969-4f34-9956-481090900d69" containerName="extract-utilities" Dec 06 08:15:00 crc kubenswrapper[4895]: I1206 08:15:00.180047 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="441907ae-e969-4f34-9956-481090900d69" containerName="extract-utilities" Dec 06 08:15:00 crc kubenswrapper[4895]: E1206 08:15:00.180137 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="441907ae-e969-4f34-9956-481090900d69" containerName="extract-content" Dec 06 08:15:00 crc kubenswrapper[4895]: I1206 08:15:00.180219 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="441907ae-e969-4f34-9956-481090900d69" containerName="extract-content" Dec 06 08:15:00 crc kubenswrapper[4895]: E1206 08:15:00.180331 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="441907ae-e969-4f34-9956-481090900d69" containerName="registry-server" Dec 06 08:15:00 crc kubenswrapper[4895]: I1206 08:15:00.180414 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="441907ae-e969-4f34-9956-481090900d69" containerName="registry-server" Dec 06 08:15:00 crc kubenswrapper[4895]: I1206 08:15:00.180794 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="441907ae-e969-4f34-9956-481090900d69" containerName="registry-server" Dec 06 08:15:00 crc kubenswrapper[4895]: I1206 08:15:00.181623 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8" Dec 06 08:15:00 crc kubenswrapper[4895]: I1206 08:15:00.184149 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 08:15:00 crc kubenswrapper[4895]: I1206 08:15:00.184170 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 08:15:00 crc kubenswrapper[4895]: I1206 08:15:00.197186 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8"] Dec 06 08:15:00 crc kubenswrapper[4895]: I1206 08:15:00.299343 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71a9e240-d36a-40a1-8e3e-4995653a3015-config-volume\") pod \"collect-profiles-29416815-z46r8\" (UID: \"71a9e240-d36a-40a1-8e3e-4995653a3015\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8" Dec 06 08:15:00 crc kubenswrapper[4895]: I1206 08:15:00.299435 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71a9e240-d36a-40a1-8e3e-4995653a3015-secret-volume\") pod \"collect-profiles-29416815-z46r8\" (UID: \"71a9e240-d36a-40a1-8e3e-4995653a3015\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8" Dec 06 08:15:00 crc kubenswrapper[4895]: I1206 08:15:00.299655 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6twt\" (UniqueName: \"kubernetes.io/projected/71a9e240-d36a-40a1-8e3e-4995653a3015-kube-api-access-z6twt\") pod \"collect-profiles-29416815-z46r8\" (UID: \"71a9e240-d36a-40a1-8e3e-4995653a3015\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8" Dec 06 08:15:00 crc kubenswrapper[4895]: I1206 08:15:00.401390 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71a9e240-d36a-40a1-8e3e-4995653a3015-secret-volume\") pod \"collect-profiles-29416815-z46r8\" (UID: \"71a9e240-d36a-40a1-8e3e-4995653a3015\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8" Dec 06 08:15:00 crc kubenswrapper[4895]: I1206 08:15:00.401537 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6twt\" (UniqueName: \"kubernetes.io/projected/71a9e240-d36a-40a1-8e3e-4995653a3015-kube-api-access-z6twt\") pod \"collect-profiles-29416815-z46r8\" (UID: \"71a9e240-d36a-40a1-8e3e-4995653a3015\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8" Dec 06 08:15:00 crc kubenswrapper[4895]: I1206 08:15:00.401640 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71a9e240-d36a-40a1-8e3e-4995653a3015-config-volume\") pod \"collect-profiles-29416815-z46r8\" (UID: \"71a9e240-d36a-40a1-8e3e-4995653a3015\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8" Dec 06 08:15:00 crc kubenswrapper[4895]: I1206 08:15:00.402937 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71a9e240-d36a-40a1-8e3e-4995653a3015-config-volume\") pod 
\"collect-profiles-29416815-z46r8\" (UID: \"71a9e240-d36a-40a1-8e3e-4995653a3015\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8" Dec 06 08:15:00 crc kubenswrapper[4895]: I1206 08:15:00.410175 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71a9e240-d36a-40a1-8e3e-4995653a3015-secret-volume\") pod \"collect-profiles-29416815-z46r8\" (UID: \"71a9e240-d36a-40a1-8e3e-4995653a3015\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8" Dec 06 08:15:00 crc kubenswrapper[4895]: I1206 08:15:00.432546 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6twt\" (UniqueName: \"kubernetes.io/projected/71a9e240-d36a-40a1-8e3e-4995653a3015-kube-api-access-z6twt\") pod \"collect-profiles-29416815-z46r8\" (UID: \"71a9e240-d36a-40a1-8e3e-4995653a3015\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8" Dec 06 08:15:00 crc kubenswrapper[4895]: I1206 08:15:00.505189 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8" Dec 06 08:15:00 crc kubenswrapper[4895]: I1206 08:15:00.972843 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8"] Dec 06 08:15:01 crc kubenswrapper[4895]: I1206 08:15:01.521668 4895 generic.go:334] "Generic (PLEG): container finished" podID="71a9e240-d36a-40a1-8e3e-4995653a3015" containerID="0faaa7ac2cde97f4ef154f67f4b0120e572cffa407a4a2b061f9cbe70166db76" exitCode=0 Dec 06 08:15:01 crc kubenswrapper[4895]: I1206 08:15:01.521715 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8" event={"ID":"71a9e240-d36a-40a1-8e3e-4995653a3015","Type":"ContainerDied","Data":"0faaa7ac2cde97f4ef154f67f4b0120e572cffa407a4a2b061f9cbe70166db76"} Dec 06 08:15:01 crc kubenswrapper[4895]: I1206 08:15:01.522010 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8" event={"ID":"71a9e240-d36a-40a1-8e3e-4995653a3015","Type":"ContainerStarted","Data":"c956c04dc1876ce4d9e4fa736e06d032555b002fd67352420e01cc3a2cf237a6"} Dec 06 08:15:02 crc kubenswrapper[4895]: I1206 08:15:02.843055 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8" Dec 06 08:15:02 crc kubenswrapper[4895]: I1206 08:15:02.937825 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71a9e240-d36a-40a1-8e3e-4995653a3015-config-volume\") pod \"71a9e240-d36a-40a1-8e3e-4995653a3015\" (UID: \"71a9e240-d36a-40a1-8e3e-4995653a3015\") " Dec 06 08:15:02 crc kubenswrapper[4895]: I1206 08:15:02.938164 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6twt\" (UniqueName: \"kubernetes.io/projected/71a9e240-d36a-40a1-8e3e-4995653a3015-kube-api-access-z6twt\") pod \"71a9e240-d36a-40a1-8e3e-4995653a3015\" (UID: \"71a9e240-d36a-40a1-8e3e-4995653a3015\") " Dec 06 08:15:02 crc kubenswrapper[4895]: I1206 08:15:02.938284 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71a9e240-d36a-40a1-8e3e-4995653a3015-secret-volume\") pod \"71a9e240-d36a-40a1-8e3e-4995653a3015\" (UID: \"71a9e240-d36a-40a1-8e3e-4995653a3015\") " Dec 06 08:15:02 crc kubenswrapper[4895]: I1206 08:15:02.938813 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71a9e240-d36a-40a1-8e3e-4995653a3015-config-volume" (OuterVolumeSpecName: "config-volume") pod "71a9e240-d36a-40a1-8e3e-4995653a3015" (UID: "71a9e240-d36a-40a1-8e3e-4995653a3015"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 08:15:02 crc kubenswrapper[4895]: I1206 08:15:02.943920 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a9e240-d36a-40a1-8e3e-4995653a3015-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "71a9e240-d36a-40a1-8e3e-4995653a3015" (UID: "71a9e240-d36a-40a1-8e3e-4995653a3015"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 08:15:02 crc kubenswrapper[4895]: I1206 08:15:02.944196 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71a9e240-d36a-40a1-8e3e-4995653a3015-kube-api-access-z6twt" (OuterVolumeSpecName: "kube-api-access-z6twt") pod "71a9e240-d36a-40a1-8e3e-4995653a3015" (UID: "71a9e240-d36a-40a1-8e3e-4995653a3015"). InnerVolumeSpecName "kube-api-access-z6twt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:15:03 crc kubenswrapper[4895]: I1206 08:15:03.041012 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71a9e240-d36a-40a1-8e3e-4995653a3015-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 08:15:03 crc kubenswrapper[4895]: I1206 08:15:03.041079 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71a9e240-d36a-40a1-8e3e-4995653a3015-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 08:15:03 crc kubenswrapper[4895]: I1206 08:15:03.041098 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6twt\" (UniqueName: \"kubernetes.io/projected/71a9e240-d36a-40a1-8e3e-4995653a3015-kube-api-access-z6twt\") on node \"crc\" DevicePath \"\"" Dec 06 08:15:03 crc kubenswrapper[4895]: I1206 08:15:03.539777 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8" event={"ID":"71a9e240-d36a-40a1-8e3e-4995653a3015","Type":"ContainerDied","Data":"c956c04dc1876ce4d9e4fa736e06d032555b002fd67352420e01cc3a2cf237a6"} Dec 06 08:15:03 crc kubenswrapper[4895]: I1206 08:15:03.539844 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c956c04dc1876ce4d9e4fa736e06d032555b002fd67352420e01cc3a2cf237a6" Dec 06 08:15:03 crc kubenswrapper[4895]: I1206 08:15:03.539932 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8" Dec 06 08:15:03 crc kubenswrapper[4895]: I1206 08:15:03.915726 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt"] Dec 06 08:15:03 crc kubenswrapper[4895]: I1206 08:15:03.923613 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416770-nfczt"] Dec 06 08:15:04 crc kubenswrapper[4895]: I1206 08:15:04.058329 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7434d222-3dd8-455f-aff3-69f452f63fee" path="/var/lib/kubelet/pods/7434d222-3dd8-455f-aff3-69f452f63fee/volumes" Dec 06 08:15:13 crc kubenswrapper[4895]: I1206 08:15:13.051115 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:15:13 crc kubenswrapper[4895]: E1206 08:15:13.052246 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:15:13 crc kubenswrapper[4895]: I1206 08:15:13.118819 4895 scope.go:117] "RemoveContainer" containerID="3bec6b28dcf046d898dfded872eb984e8ca3ccfa7f5bd9211cc395c6d3497aaf" Dec 06 08:15:24 crc kubenswrapper[4895]: I1206 08:15:24.051701 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:15:24 crc kubenswrapper[4895]: E1206 08:15:24.052638 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:15:35 crc kubenswrapper[4895]: I1206 08:15:35.051670 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:15:35 crc kubenswrapper[4895]: E1206 08:15:35.052824 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:15:43 crc kubenswrapper[4895]: I1206 08:15:43.824338 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2bvpm"] Dec 06 08:15:43 crc kubenswrapper[4895]: E1206 08:15:43.825735 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a9e240-d36a-40a1-8e3e-4995653a3015" containerName="collect-profiles" Dec 06 08:15:43 crc kubenswrapper[4895]: I1206 08:15:43.825765 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a9e240-d36a-40a1-8e3e-4995653a3015" containerName="collect-profiles" Dec 06 08:15:43 crc kubenswrapper[4895]: I1206 08:15:43.826112 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a9e240-d36a-40a1-8e3e-4995653a3015" containerName="collect-profiles" Dec 06 08:15:43 crc kubenswrapper[4895]: I1206 08:15:43.833542 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2bvpm" Dec 06 08:15:43 crc kubenswrapper[4895]: I1206 08:15:43.838884 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bvpm"] Dec 06 08:15:43 crc kubenswrapper[4895]: I1206 08:15:43.911702 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/763fac29-2e09-48c9-ab2a-89826cef9da0-catalog-content\") pod \"redhat-marketplace-2bvpm\" (UID: \"763fac29-2e09-48c9-ab2a-89826cef9da0\") " pod="openshift-marketplace/redhat-marketplace-2bvpm" Dec 06 08:15:43 crc kubenswrapper[4895]: I1206 08:15:43.911775 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/763fac29-2e09-48c9-ab2a-89826cef9da0-utilities\") pod \"redhat-marketplace-2bvpm\" (UID: \"763fac29-2e09-48c9-ab2a-89826cef9da0\") " pod="openshift-marketplace/redhat-marketplace-2bvpm" Dec 06 08:15:43 crc kubenswrapper[4895]: I1206 08:15:43.911888 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h6cg\" (UniqueName: \"kubernetes.io/projected/763fac29-2e09-48c9-ab2a-89826cef9da0-kube-api-access-6h6cg\") pod \"redhat-marketplace-2bvpm\" (UID: \"763fac29-2e09-48c9-ab2a-89826cef9da0\") " pod="openshift-marketplace/redhat-marketplace-2bvpm" Dec 06 08:15:44 crc kubenswrapper[4895]: I1206 08:15:44.012882 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/763fac29-2e09-48c9-ab2a-89826cef9da0-catalog-content\") pod \"redhat-marketplace-2bvpm\" (UID: \"763fac29-2e09-48c9-ab2a-89826cef9da0\") " pod="openshift-marketplace/redhat-marketplace-2bvpm" Dec 06 08:15:44 crc kubenswrapper[4895]: I1206 08:15:44.012950 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/763fac29-2e09-48c9-ab2a-89826cef9da0-utilities\") pod \"redhat-marketplace-2bvpm\" (UID: \"763fac29-2e09-48c9-ab2a-89826cef9da0\") " pod="openshift-marketplace/redhat-marketplace-2bvpm" Dec 06 08:15:44 crc kubenswrapper[4895]: I1206 08:15:44.013043 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h6cg\" (UniqueName: \"kubernetes.io/projected/763fac29-2e09-48c9-ab2a-89826cef9da0-kube-api-access-6h6cg\") pod \"redhat-marketplace-2bvpm\" (UID: \"763fac29-2e09-48c9-ab2a-89826cef9da0\") " pod="openshift-marketplace/redhat-marketplace-2bvpm" Dec 06 08:15:44 crc kubenswrapper[4895]: I1206 08:15:44.013523 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/763fac29-2e09-48c9-ab2a-89826cef9da0-catalog-content\") pod \"redhat-marketplace-2bvpm\" (UID: \"763fac29-2e09-48c9-ab2a-89826cef9da0\") " pod="openshift-marketplace/redhat-marketplace-2bvpm" Dec 06 08:15:44 crc kubenswrapper[4895]: I1206 08:15:44.013903 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/763fac29-2e09-48c9-ab2a-89826cef9da0-utilities\") pod \"redhat-marketplace-2bvpm\" (UID: \"763fac29-2e09-48c9-ab2a-89826cef9da0\") " pod="openshift-marketplace/redhat-marketplace-2bvpm" Dec 06 08:15:44 crc kubenswrapper[4895]: I1206 08:15:44.044745 4895 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6h6cg\" (UniqueName: \"kubernetes.io/projected/763fac29-2e09-48c9-ab2a-89826cef9da0-kube-api-access-6h6cg\") pod \"redhat-marketplace-2bvpm\" (UID: \"763fac29-2e09-48c9-ab2a-89826cef9da0\") " pod="openshift-marketplace/redhat-marketplace-2bvpm" Dec 06 08:15:44 crc kubenswrapper[4895]: I1206 08:15:44.160033 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2bvpm" Dec 06 08:15:44 crc kubenswrapper[4895]: I1206 08:15:44.412573 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bvpm"] Dec 06 08:15:44 crc kubenswrapper[4895]: I1206 08:15:44.929280 4895 generic.go:334] "Generic (PLEG): container finished" podID="763fac29-2e09-48c9-ab2a-89826cef9da0" containerID="435de7cfe339ad8950688a94f45a024ade34b1d3ab48434db6ed47037aa0c4d1" exitCode=0 Dec 06 08:15:44 crc kubenswrapper[4895]: I1206 08:15:44.929351 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bvpm" event={"ID":"763fac29-2e09-48c9-ab2a-89826cef9da0","Type":"ContainerDied","Data":"435de7cfe339ad8950688a94f45a024ade34b1d3ab48434db6ed47037aa0c4d1"} Dec 06 08:15:44 crc kubenswrapper[4895]: I1206 08:15:44.929390 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bvpm" event={"ID":"763fac29-2e09-48c9-ab2a-89826cef9da0","Type":"ContainerStarted","Data":"318b30cfb1df9186cee66576aebe7cf57e9ab65d270555e2327bc0c572360db4"} Dec 06 08:15:45 crc kubenswrapper[4895]: I1206 08:15:45.939790 4895 generic.go:334] "Generic (PLEG): container finished" podID="763fac29-2e09-48c9-ab2a-89826cef9da0" containerID="36b70a3d80573827ed529f3970ee7dbd2d3d415ef1c2fb7a2371306fa10b2a4e" exitCode=0 Dec 06 08:15:45 crc kubenswrapper[4895]: I1206 08:15:45.939928 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bvpm" event={"ID":"763fac29-2e09-48c9-ab2a-89826cef9da0","Type":"ContainerDied","Data":"36b70a3d80573827ed529f3970ee7dbd2d3d415ef1c2fb7a2371306fa10b2a4e"} Dec 06 08:15:46 crc kubenswrapper[4895]: I1206 08:15:46.950843 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bvpm" event={"ID":"763fac29-2e09-48c9-ab2a-89826cef9da0","Type":"ContainerStarted","Data":"96e505cc893c3e10ff6051cda9c8855df81031570d892a9ad4570f913add0bd1"} Dec 06 08:15:46 crc kubenswrapper[4895]: I1206 08:15:46.982713 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2bvpm" podStartSLOduration=2.553846486 podStartE2EDuration="3.982690679s" podCreationTimestamp="2025-12-06 08:15:43 +0000 UTC" firstStartedPulling="2025-12-06 08:15:44.930822772 +0000 UTC m=+4707.332211692" lastFinishedPulling="2025-12-06 08:15:46.359667015 +0000 UTC m=+4708.761055885" observedRunningTime="2025-12-06 08:15:46.976793341 +0000 UTC m=+4709.378182221" watchObservedRunningTime="2025-12-06 08:15:46.982690679 +0000 UTC m=+4709.384079559" Dec 06 08:15:49 crc kubenswrapper[4895]: I1206 08:15:49.908806 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s4clx"] Dec 06 08:15:49 crc kubenswrapper[4895]: I1206 08:15:49.911823 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s4clx" Dec 06 08:15:49 crc kubenswrapper[4895]: I1206 08:15:49.928854 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s4clx"] Dec 06 08:15:50 crc kubenswrapper[4895]: I1206 08:15:50.004921 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1d8e88c-b726-4a0f-9ddd-3c36dbeec394-utilities\") pod \"community-operators-s4clx\" (UID: \"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394\") " pod="openshift-marketplace/community-operators-s4clx" Dec 06 08:15:50 crc kubenswrapper[4895]: I1206 08:15:50.005105 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1d8e88c-b726-4a0f-9ddd-3c36dbeec394-catalog-content\") pod \"community-operators-s4clx\" (UID: \"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394\") " pod="openshift-marketplace/community-operators-s4clx" Dec 06 08:15:50 crc kubenswrapper[4895]: I1206 08:15:50.005200 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2h6f\" (UniqueName: \"kubernetes.io/projected/a1d8e88c-b726-4a0f-9ddd-3c36dbeec394-kube-api-access-c2h6f\") pod \"community-operators-s4clx\" (UID: \"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394\") " pod="openshift-marketplace/community-operators-s4clx" Dec 06 08:15:50 crc kubenswrapper[4895]: I1206 08:15:50.050938 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:15:50 crc kubenswrapper[4895]: E1206 08:15:50.051305 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:15:50 crc kubenswrapper[4895]: I1206 08:15:50.106500 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1d8e88c-b726-4a0f-9ddd-3c36dbeec394-catalog-content\") pod \"community-operators-s4clx\" (UID: \"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394\") " pod="openshift-marketplace/community-operators-s4clx" Dec 06 08:15:50 crc kubenswrapper[4895]: I1206 08:15:50.106598 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2h6f\" (UniqueName: \"kubernetes.io/projected/a1d8e88c-b726-4a0f-9ddd-3c36dbeec394-kube-api-access-c2h6f\") pod \"community-operators-s4clx\" (UID: \"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394\") " pod="openshift-marketplace/community-operators-s4clx" Dec 06 08:15:50 crc kubenswrapper[4895]: I1206 08:15:50.106645 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1d8e88c-b726-4a0f-9ddd-3c36dbeec394-utilities\") pod \"community-operators-s4clx\" (UID: \"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394\") " pod="openshift-marketplace/community-operators-s4clx" Dec 06 08:15:50 crc kubenswrapper[4895]: I1206 08:15:50.107265 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a1d8e88c-b726-4a0f-9ddd-3c36dbeec394-catalog-content\") pod \"community-operators-s4clx\" (UID: \"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394\") " pod="openshift-marketplace/community-operators-s4clx" Dec 06 08:15:50 crc kubenswrapper[4895]: I1206 08:15:50.108824 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1d8e88c-b726-4a0f-9ddd-3c36dbeec394-utilities\") pod \"community-operators-s4clx\" (UID: \"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394\") " pod="openshift-marketplace/community-operators-s4clx" Dec 06 08:15:50 crc kubenswrapper[4895]: I1206 08:15:50.293217 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2h6f\" (UniqueName: \"kubernetes.io/projected/a1d8e88c-b726-4a0f-9ddd-3c36dbeec394-kube-api-access-c2h6f\") pod \"community-operators-s4clx\" (UID: \"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394\") " pod="openshift-marketplace/community-operators-s4clx" Dec 06 08:15:50 crc kubenswrapper[4895]: I1206 08:15:50.540954 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s4clx" Dec 06 08:15:50 crc kubenswrapper[4895]: I1206 08:15:50.977799 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s4clx"] Dec 06 08:15:51 crc kubenswrapper[4895]: I1206 08:15:51.997040 4895 generic.go:334] "Generic (PLEG): container finished" podID="a1d8e88c-b726-4a0f-9ddd-3c36dbeec394" containerID="d3cb4072ba7cb6c377e5d6e94fe8da30f739584fb7346f5df5a4fdede8d9336e" exitCode=0 Dec 06 08:15:51 crc kubenswrapper[4895]: I1206 08:15:51.997110 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4clx" event={"ID":"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394","Type":"ContainerDied","Data":"d3cb4072ba7cb6c377e5d6e94fe8da30f739584fb7346f5df5a4fdede8d9336e"} Dec 06 08:15:51 crc kubenswrapper[4895]: I1206 08:15:51.997427 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4clx" event={"ID":"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394","Type":"ContainerStarted","Data":"9b2b9e9430dd5bb27d000b563c419c29915d07e1bfbf29e2cb76762966dec67c"} Dec 06 08:15:53 crc kubenswrapper[4895]: I1206 08:15:53.007193 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4clx" event={"ID":"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394","Type":"ContainerStarted","Data":"72cd086b5aab022c3469a27515a4822546f08f9cbdef23e17da416eb20bebef2"} Dec 06 08:15:54 crc kubenswrapper[4895]: I1206 08:15:54.031858 4895 generic.go:334] "Generic (PLEG): container finished" podID="a1d8e88c-b726-4a0f-9ddd-3c36dbeec394" containerID="72cd086b5aab022c3469a27515a4822546f08f9cbdef23e17da416eb20bebef2" exitCode=0 Dec 06 08:15:54 crc kubenswrapper[4895]: I1206 08:15:54.032290 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4clx" event={"ID":"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394","Type":"ContainerDied","Data":"72cd086b5aab022c3469a27515a4822546f08f9cbdef23e17da416eb20bebef2"} Dec 06 08:15:54 crc kubenswrapper[4895]: I1206 08:15:54.160946 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2bvpm" Dec 06 08:15:54 crc kubenswrapper[4895]: I1206 08:15:54.161022 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2bvpm" Dec 06 
08:15:54 crc kubenswrapper[4895]: I1206 08:15:54.226137 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2bvpm" Dec 06 08:15:55 crc kubenswrapper[4895]: I1206 08:15:55.043329 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4clx" event={"ID":"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394","Type":"ContainerStarted","Data":"15f446cb3838eab45be9f58cb7ec17a518d5228ff4d75eaa53072ad658c713a6"} Dec 06 08:15:55 crc kubenswrapper[4895]: I1206 08:15:55.064307 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s4clx" podStartSLOduration=3.5177261619999998 podStartE2EDuration="6.064283437s" podCreationTimestamp="2025-12-06 08:15:49 +0000 UTC" firstStartedPulling="2025-12-06 08:15:51.999223338 +0000 UTC m=+4714.400612248" lastFinishedPulling="2025-12-06 08:15:54.545780633 +0000 UTC m=+4716.947169523" observedRunningTime="2025-12-06 08:15:55.062702555 +0000 UTC m=+4717.464091415" watchObservedRunningTime="2025-12-06 08:15:55.064283437 +0000 UTC m=+4717.465672307" Dec 06 08:15:55 crc kubenswrapper[4895]: I1206 08:15:55.115513 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2bvpm" Dec 06 08:15:56 crc kubenswrapper[4895]: I1206 08:15:56.566579 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bvpm"] Dec 06 08:15:57 crc kubenswrapper[4895]: I1206 08:15:57.056705 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2bvpm" podUID="763fac29-2e09-48c9-ab2a-89826cef9da0" containerName="registry-server" containerID="cri-o://96e505cc893c3e10ff6051cda9c8855df81031570d892a9ad4570f913add0bd1" gracePeriod=2 Dec 06 08:15:57 crc kubenswrapper[4895]: I1206 08:15:57.997799 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2bvpm" Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.069976 4895 generic.go:334] "Generic (PLEG): container finished" podID="763fac29-2e09-48c9-ab2a-89826cef9da0" containerID="96e505cc893c3e10ff6051cda9c8855df81031570d892a9ad4570f913add0bd1" exitCode=0 Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.070050 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2bvpm" Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.070061 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bvpm" event={"ID":"763fac29-2e09-48c9-ab2a-89826cef9da0","Type":"ContainerDied","Data":"96e505cc893c3e10ff6051cda9c8855df81031570d892a9ad4570f913add0bd1"} Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.070101 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bvpm" event={"ID":"763fac29-2e09-48c9-ab2a-89826cef9da0","Type":"ContainerDied","Data":"318b30cfb1df9186cee66576aebe7cf57e9ab65d270555e2327bc0c572360db4"} Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.070125 4895 scope.go:117] "RemoveContainer" containerID="96e505cc893c3e10ff6051cda9c8855df81031570d892a9ad4570f913add0bd1" Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.090654 4895 scope.go:117] "RemoveContainer" containerID="36b70a3d80573827ed529f3970ee7dbd2d3d415ef1c2fb7a2371306fa10b2a4e" Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.109814 4895 scope.go:117] "RemoveContainer" containerID="435de7cfe339ad8950688a94f45a024ade34b1d3ab48434db6ed47037aa0c4d1" Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.128742 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/763fac29-2e09-48c9-ab2a-89826cef9da0-utilities\") pod \"763fac29-2e09-48c9-ab2a-89826cef9da0\" (UID: \"763fac29-2e09-48c9-ab2a-89826cef9da0\") " Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.129127 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/763fac29-2e09-48c9-ab2a-89826cef9da0-catalog-content\") pod \"763fac29-2e09-48c9-ab2a-89826cef9da0\" (UID: \"763fac29-2e09-48c9-ab2a-89826cef9da0\") " Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.129301 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h6cg\" (UniqueName: \"kubernetes.io/projected/763fac29-2e09-48c9-ab2a-89826cef9da0-kube-api-access-6h6cg\") pod \"763fac29-2e09-48c9-ab2a-89826cef9da0\" (UID: \"763fac29-2e09-48c9-ab2a-89826cef9da0\") " Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.129676 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/763fac29-2e09-48c9-ab2a-89826cef9da0-utilities" (OuterVolumeSpecName: "utilities") pod "763fac29-2e09-48c9-ab2a-89826cef9da0" (UID: "763fac29-2e09-48c9-ab2a-89826cef9da0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.130223 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/763fac29-2e09-48c9-ab2a-89826cef9da0-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.137524 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/763fac29-2e09-48c9-ab2a-89826cef9da0-kube-api-access-6h6cg" (OuterVolumeSpecName: "kube-api-access-6h6cg") pod "763fac29-2e09-48c9-ab2a-89826cef9da0" (UID: "763fac29-2e09-48c9-ab2a-89826cef9da0"). InnerVolumeSpecName "kube-api-access-6h6cg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.141014 4895 scope.go:117] "RemoveContainer" containerID="96e505cc893c3e10ff6051cda9c8855df81031570d892a9ad4570f913add0bd1" Dec 06 08:15:58 crc kubenswrapper[4895]: E1206 08:15:58.141615 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96e505cc893c3e10ff6051cda9c8855df81031570d892a9ad4570f913add0bd1\": container with ID starting with 96e505cc893c3e10ff6051cda9c8855df81031570d892a9ad4570f913add0bd1 not found: ID does not exist" containerID="96e505cc893c3e10ff6051cda9c8855df81031570d892a9ad4570f913add0bd1" Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.141682 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96e505cc893c3e10ff6051cda9c8855df81031570d892a9ad4570f913add0bd1"} err="failed to get container status \"96e505cc893c3e10ff6051cda9c8855df81031570d892a9ad4570f913add0bd1\": rpc error: code = NotFound desc = could not find container \"96e505cc893c3e10ff6051cda9c8855df81031570d892a9ad4570f913add0bd1\": container with ID starting with 96e505cc893c3e10ff6051cda9c8855df81031570d892a9ad4570f913add0bd1 not found: ID does not exist" Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.141732 4895 scope.go:117] "RemoveContainer" containerID="36b70a3d80573827ed529f3970ee7dbd2d3d415ef1c2fb7a2371306fa10b2a4e" Dec 06 08:15:58 crc kubenswrapper[4895]: E1206 08:15:58.142304 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36b70a3d80573827ed529f3970ee7dbd2d3d415ef1c2fb7a2371306fa10b2a4e\": container with ID starting with 36b70a3d80573827ed529f3970ee7dbd2d3d415ef1c2fb7a2371306fa10b2a4e not found: ID does not exist" containerID="36b70a3d80573827ed529f3970ee7dbd2d3d415ef1c2fb7a2371306fa10b2a4e" Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.142367 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36b70a3d80573827ed529f3970ee7dbd2d3d415ef1c2fb7a2371306fa10b2a4e"} err="failed to get container status \"36b70a3d80573827ed529f3970ee7dbd2d3d415ef1c2fb7a2371306fa10b2a4e\": rpc error: code = NotFound desc = could not find container \"36b70a3d80573827ed529f3970ee7dbd2d3d415ef1c2fb7a2371306fa10b2a4e\": container with ID starting with 36b70a3d80573827ed529f3970ee7dbd2d3d415ef1c2fb7a2371306fa10b2a4e not found: ID does not exist" Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.142395 4895 scope.go:117] "RemoveContainer" containerID="435de7cfe339ad8950688a94f45a024ade34b1d3ab48434db6ed47037aa0c4d1" Dec 06 08:15:58 crc kubenswrapper[4895]: E1206 08:15:58.142802 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"435de7cfe339ad8950688a94f45a024ade34b1d3ab48434db6ed47037aa0c4d1\": container with ID starting with 435de7cfe339ad8950688a94f45a024ade34b1d3ab48434db6ed47037aa0c4d1 not found: ID does not exist" containerID="435de7cfe339ad8950688a94f45a024ade34b1d3ab48434db6ed47037aa0c4d1" Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.142844 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"435de7cfe339ad8950688a94f45a024ade34b1d3ab48434db6ed47037aa0c4d1"} err="failed to get container status \"435de7cfe339ad8950688a94f45a024ade34b1d3ab48434db6ed47037aa0c4d1\": rpc error: code = NotFound desc = could not 
Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.157774 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/763fac29-2e09-48c9-ab2a-89826cef9da0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "763fac29-2e09-48c9-ab2a-89826cef9da0" (UID: "763fac29-2e09-48c9-ab2a-89826cef9da0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.235555 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/763fac29-2e09-48c9-ab2a-89826cef9da0-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.235602 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h6cg\" (UniqueName: \"kubernetes.io/projected/763fac29-2e09-48c9-ab2a-89826cef9da0-kube-api-access-6h6cg\") on node \"crc\" DevicePath \"\""
Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.403821 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bvpm"]
Dec 06 08:15:58 crc kubenswrapper[4895]: I1206 08:15:58.414640 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bvpm"]
Dec 06 08:16:00 crc kubenswrapper[4895]: I1206 08:16:00.062730 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="763fac29-2e09-48c9-ab2a-89826cef9da0" path="/var/lib/kubelet/pods/763fac29-2e09-48c9-ab2a-89826cef9da0/volumes"
Dec 06 08:16:00 crc kubenswrapper[4895]: I1206 08:16:00.541452 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s4clx"
Dec 06 08:16:00 crc kubenswrapper[4895]: I1206 08:16:00.541506 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s4clx"
Dec 06 08:16:00 crc kubenswrapper[4895]: I1206 08:16:00.597867 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s4clx"
Dec 06 08:16:01 crc kubenswrapper[4895]: I1206 08:16:01.224857 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s4clx"
Dec 06 08:16:01 crc kubenswrapper[4895]: I1206 08:16:01.566578 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s4clx"]
Dec 06 08:16:03 crc kubenswrapper[4895]: I1206 08:16:03.050171 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de"
Dec 06 08:16:03 crc kubenswrapper[4895]: I1206 08:16:03.112007 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s4clx" podUID="a1d8e88c-b726-4a0f-9ddd-3c36dbeec394" containerName="registry-server" containerID="cri-o://15f446cb3838eab45be9f58cb7ec17a518d5228ff4d75eaa53072ad658c713a6" gracePeriod=2
Dec 06 08:16:04 crc kubenswrapper[4895]: I1206 08:16:04.121756 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"e7e23d62a91fdd8275ac488be585c323dab52909b70e52954df201c867abf3e2"}
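"Cleaned up orphaned pod volumes dir" is the final step of pod removal: once the API object is gone (SyncLoop REMOVE) and every volume reports detached, the kubelet deletes the leftover directory under /var/lib/kubelet/pods/<podUID>. A sketch of such a sweep, assuming an activePods set (the real kubelet performs extra safety checks on still-mounted volumes first):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // cleanupOrphanedPodDirs removes /var/lib/kubelet/pods/<uid>/volumes for
    // pod UIDs the kubelet no longer tracks. Illustrative only.
    func cleanupOrphanedPodDirs(podsRoot string, active map[string]bool) error {
    	entries, err := os.ReadDir(podsRoot)
    	if err != nil {
    		return err
    	}
    	for _, e := range entries {
    		if !e.IsDir() || active[e.Name()] {
    			continue
    		}
    		dir := filepath.Join(podsRoot, e.Name(), "volumes")
    		if err := os.RemoveAll(dir); err != nil {
    			return err
    		}
    		fmt.Printf("Cleaned up orphaned pod volumes dir path=%q\n", dir)
    	}
    	return nil
    }

    func main() {
    	_ = cleanupOrphanedPodDirs("/var/lib/kubelet/pods", map[string]bool{
    		"9200f6d1-bc88-4065-9985-8c6a6387404f": true, // still running
    	})
    }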
event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"e7e23d62a91fdd8275ac488be585c323dab52909b70e52954df201c867abf3e2"} Dec 06 08:16:04 crc kubenswrapper[4895]: I1206 08:16:04.125247 4895 generic.go:334] "Generic (PLEG): container finished" podID="a1d8e88c-b726-4a0f-9ddd-3c36dbeec394" containerID="15f446cb3838eab45be9f58cb7ec17a518d5228ff4d75eaa53072ad658c713a6" exitCode=0 Dec 06 08:16:04 crc kubenswrapper[4895]: I1206 08:16:04.125288 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4clx" event={"ID":"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394","Type":"ContainerDied","Data":"15f446cb3838eab45be9f58cb7ec17a518d5228ff4d75eaa53072ad658c713a6"} Dec 06 08:16:04 crc kubenswrapper[4895]: I1206 08:16:04.692848 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s4clx" Dec 06 08:16:04 crc kubenswrapper[4895]: I1206 08:16:04.747804 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1d8e88c-b726-4a0f-9ddd-3c36dbeec394-catalog-content\") pod \"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394\" (UID: \"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394\") " Dec 06 08:16:04 crc kubenswrapper[4895]: I1206 08:16:04.747978 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2h6f\" (UniqueName: \"kubernetes.io/projected/a1d8e88c-b726-4a0f-9ddd-3c36dbeec394-kube-api-access-c2h6f\") pod \"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394\" (UID: \"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394\") " Dec 06 08:16:04 crc kubenswrapper[4895]: I1206 08:16:04.748016 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1d8e88c-b726-4a0f-9ddd-3c36dbeec394-utilities\") pod \"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394\" (UID: \"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394\") " Dec 06 08:16:04 crc kubenswrapper[4895]: I1206 08:16:04.749302 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1d8e88c-b726-4a0f-9ddd-3c36dbeec394-utilities" (OuterVolumeSpecName: "utilities") pod "a1d8e88c-b726-4a0f-9ddd-3c36dbeec394" (UID: "a1d8e88c-b726-4a0f-9ddd-3c36dbeec394"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:16:04 crc kubenswrapper[4895]: I1206 08:16:04.754641 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1d8e88c-b726-4a0f-9ddd-3c36dbeec394-kube-api-access-c2h6f" (OuterVolumeSpecName: "kube-api-access-c2h6f") pod "a1d8e88c-b726-4a0f-9ddd-3c36dbeec394" (UID: "a1d8e88c-b726-4a0f-9ddd-3c36dbeec394"). InnerVolumeSpecName "kube-api-access-c2h6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:16:04 crc kubenswrapper[4895]: I1206 08:16:04.809021 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1d8e88c-b726-4a0f-9ddd-3c36dbeec394-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1d8e88c-b726-4a0f-9ddd-3c36dbeec394" (UID: "a1d8e88c-b726-4a0f-9ddd-3c36dbeec394"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:16:04 crc kubenswrapper[4895]: I1206 08:16:04.849350 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2h6f\" (UniqueName: \"kubernetes.io/projected/a1d8e88c-b726-4a0f-9ddd-3c36dbeec394-kube-api-access-c2h6f\") on node \"crc\" DevicePath \"\"" Dec 06 08:16:04 crc kubenswrapper[4895]: I1206 08:16:04.849395 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1d8e88c-b726-4a0f-9ddd-3c36dbeec394-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:16:04 crc kubenswrapper[4895]: I1206 08:16:04.849409 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1d8e88c-b726-4a0f-9ddd-3c36dbeec394-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:16:05 crc kubenswrapper[4895]: I1206 08:16:05.135339 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4clx" event={"ID":"a1d8e88c-b726-4a0f-9ddd-3c36dbeec394","Type":"ContainerDied","Data":"9b2b9e9430dd5bb27d000b563c419c29915d07e1bfbf29e2cb76762966dec67c"} Dec 06 08:16:05 crc kubenswrapper[4895]: I1206 08:16:05.135398 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s4clx" Dec 06 08:16:05 crc kubenswrapper[4895]: I1206 08:16:05.135400 4895 scope.go:117] "RemoveContainer" containerID="15f446cb3838eab45be9f58cb7ec17a518d5228ff4d75eaa53072ad658c713a6" Dec 06 08:16:05 crc kubenswrapper[4895]: I1206 08:16:05.167800 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s4clx"] Dec 06 08:16:05 crc kubenswrapper[4895]: I1206 08:16:05.169597 4895 scope.go:117] "RemoveContainer" containerID="72cd086b5aab022c3469a27515a4822546f08f9cbdef23e17da416eb20bebef2" Dec 06 08:16:05 crc kubenswrapper[4895]: I1206 08:16:05.191856 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s4clx"] Dec 06 08:16:05 crc kubenswrapper[4895]: I1206 08:16:05.195367 4895 scope.go:117] "RemoveContainer" containerID="d3cb4072ba7cb6c377e5d6e94fe8da30f739584fb7346f5df5a4fdede8d9336e" Dec 06 08:16:06 crc kubenswrapper[4895]: I1206 08:16:06.065531 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1d8e88c-b726-4a0f-9ddd-3c36dbeec394" path="/var/lib/kubelet/pods/a1d8e88c-b726-4a0f-9ddd-3c36dbeec394/volumes" Dec 06 08:18:08 crc kubenswrapper[4895]: I1206 08:18:08.843562 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-55976579dc-68gpl" podUID="20e10bde-64c1-402d-914e-2bfeef28267e" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.51:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 08:18:08 crc kubenswrapper[4895]: I1206 08:18:08.843589 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-55976579dc-68gpl" podUID="20e10bde-64c1-402d-914e-2bfeef28267e" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.51:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 08:18:29 crc kubenswrapper[4895]: I1206 08:18:29.696519 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Dec 06 08:18:29 crc kubenswrapper[4895]: I1206 08:18:29.697454 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 08:18:59 crc kubenswrapper[4895]: I1206 08:18:59.696049 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 08:18:59 crc kubenswrapper[4895]: I1206 08:18:59.696749 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 08:19:29 crc kubenswrapper[4895]: I1206 08:19:29.696430 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 08:19:29 crc kubenswrapper[4895]: I1206 08:19:29.697300 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 08:19:29 crc kubenswrapper[4895]: I1206 08:19:29.697418 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2"
Dec 06 08:19:29 crc kubenswrapper[4895]: I1206 08:19:29.698671 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e7e23d62a91fdd8275ac488be585c323dab52909b70e52954df201c867abf3e2"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 08:19:29 crc kubenswrapper[4895]: I1206 08:19:29.698804 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://e7e23d62a91fdd8275ac488be585c323dab52909b70e52954df201c867abf3e2" gracePeriod=600
Dec 06 08:19:30 crc kubenswrapper[4895]: I1206 08:19:30.944432 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="e7e23d62a91fdd8275ac488be585c323dab52909b70e52954df201c867abf3e2" exitCode=0
Dec 06 08:19:30 crc kubenswrapper[4895]: I1206 08:19:30.944537 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"e7e23d62a91fdd8275ac488be585c323dab52909b70e52954df201c867abf3e2"}
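After repeated liveness failures the kubelet kills the container with a grace period: 600 seconds for the daemon here, versus gracePeriod=2 for the short-lived catalog pods above. The runtime sends SIGTERM first and escalates to SIGKILL only if the container is still alive when the grace period lapses; the exitCode=0 that follows shows the daemon exited cleanly within it. A sketch of that TERM-then-KILL escalation (illustrative; wired to the current process and an already-closed channel so it can run standalone):

    package main

    import (
    	"fmt"
    	"os"
    	"os/signal"
    	"syscall"
    	"time"
    )

    // killWithGracePeriod asks nicely with SIGTERM, waits up to gracePeriod
    // for an exit notification, then forces SIGKILL. done would be wired to
    // a real exit watcher in a runtime; here it is a plain channel.
    func killWithGracePeriod(p *os.Process, gracePeriod time.Duration, done <-chan struct{}) {
    	_ = p.Signal(syscall.SIGTERM)
    	select {
    	case <-done:
    		fmt.Println("exited within grace period") // the log's exitCode=0 case
    	case <-time.After(gracePeriod):
    		_ = p.Kill() // SIGKILL after the grace period lapses
    		fmt.Println("force-killed after grace period")
    	}
    }

    func main() {
    	signal.Ignore(syscall.SIGTERM)   // so self-signaling doesn't end the demo
    	done := make(chan struct{})
    	close(done)                      // pretend the process exited immediately
    	p, _ := os.FindProcess(os.Getpid())
    	killWithGracePeriod(p, 2*time.Second, done)
    }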
event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"e7e23d62a91fdd8275ac488be585c323dab52909b70e52954df201c867abf3e2"} Dec 06 08:19:30 crc kubenswrapper[4895]: I1206 08:19:30.945753 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c"} Dec 06 08:19:30 crc kubenswrapper[4895]: I1206 08:19:30.945831 4895 scope.go:117] "RemoveContainer" containerID="9cd0e3ec0fc739b5951c056d6b1d52542a114e8cf9bff266c2029e823e3d12de" Dec 06 08:20:00 crc kubenswrapper[4895]: I1206 08:20:00.955859 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rpgdt"] Dec 06 08:20:00 crc kubenswrapper[4895]: E1206 08:20:00.956916 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d8e88c-b726-4a0f-9ddd-3c36dbeec394" containerName="extract-utilities" Dec 06 08:20:00 crc kubenswrapper[4895]: I1206 08:20:00.956933 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d8e88c-b726-4a0f-9ddd-3c36dbeec394" containerName="extract-utilities" Dec 06 08:20:00 crc kubenswrapper[4895]: E1206 08:20:00.956951 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d8e88c-b726-4a0f-9ddd-3c36dbeec394" containerName="registry-server" Dec 06 08:20:00 crc kubenswrapper[4895]: I1206 08:20:00.956960 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d8e88c-b726-4a0f-9ddd-3c36dbeec394" containerName="registry-server" Dec 06 08:20:00 crc kubenswrapper[4895]: E1206 08:20:00.956974 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="763fac29-2e09-48c9-ab2a-89826cef9da0" containerName="extract-content" Dec 06 08:20:00 crc kubenswrapper[4895]: I1206 08:20:00.956981 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="763fac29-2e09-48c9-ab2a-89826cef9da0" containerName="extract-content" Dec 06 08:20:00 crc kubenswrapper[4895]: E1206 08:20:00.956993 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="763fac29-2e09-48c9-ab2a-89826cef9da0" containerName="extract-utilities" Dec 06 08:20:00 crc kubenswrapper[4895]: I1206 08:20:00.957000 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="763fac29-2e09-48c9-ab2a-89826cef9da0" containerName="extract-utilities" Dec 06 08:20:00 crc kubenswrapper[4895]: E1206 08:20:00.957020 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="763fac29-2e09-48c9-ab2a-89826cef9da0" containerName="registry-server" Dec 06 08:20:00 crc kubenswrapper[4895]: I1206 08:20:00.957028 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="763fac29-2e09-48c9-ab2a-89826cef9da0" containerName="registry-server" Dec 06 08:20:00 crc kubenswrapper[4895]: E1206 08:20:00.957036 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d8e88c-b726-4a0f-9ddd-3c36dbeec394" containerName="extract-content" Dec 06 08:20:00 crc kubenswrapper[4895]: I1206 08:20:00.957064 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d8e88c-b726-4a0f-9ddd-3c36dbeec394" containerName="extract-content" Dec 06 08:20:00 crc kubenswrapper[4895]: I1206 08:20:00.957283 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1d8e88c-b726-4a0f-9ddd-3c36dbeec394" containerName="registry-server" Dec 06 08:20:00 crc kubenswrapper[4895]: I1206 08:20:00.957300 4895 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="763fac29-2e09-48c9-ab2a-89826cef9da0" containerName="registry-server" Dec 06 08:20:00 crc kubenswrapper[4895]: I1206 08:20:00.958576 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rpgdt" Dec 06 08:20:00 crc kubenswrapper[4895]: I1206 08:20:00.977991 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rpgdt"] Dec 06 08:20:01 crc kubenswrapper[4895]: I1206 08:20:01.035162 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476a4f29-17d7-4fc3-8009-68c90f9f364a-utilities\") pod \"redhat-operators-rpgdt\" (UID: \"476a4f29-17d7-4fc3-8009-68c90f9f364a\") " pod="openshift-marketplace/redhat-operators-rpgdt" Dec 06 08:20:01 crc kubenswrapper[4895]: I1206 08:20:01.035309 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-487hf\" (UniqueName: \"kubernetes.io/projected/476a4f29-17d7-4fc3-8009-68c90f9f364a-kube-api-access-487hf\") pod \"redhat-operators-rpgdt\" (UID: \"476a4f29-17d7-4fc3-8009-68c90f9f364a\") " pod="openshift-marketplace/redhat-operators-rpgdt" Dec 06 08:20:01 crc kubenswrapper[4895]: I1206 08:20:01.035334 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476a4f29-17d7-4fc3-8009-68c90f9f364a-catalog-content\") pod \"redhat-operators-rpgdt\" (UID: \"476a4f29-17d7-4fc3-8009-68c90f9f364a\") " pod="openshift-marketplace/redhat-operators-rpgdt" Dec 06 08:20:01 crc kubenswrapper[4895]: I1206 08:20:01.138316 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-487hf\" (UniqueName: \"kubernetes.io/projected/476a4f29-17d7-4fc3-8009-68c90f9f364a-kube-api-access-487hf\") pod \"redhat-operators-rpgdt\" (UID: \"476a4f29-17d7-4fc3-8009-68c90f9f364a\") " pod="openshift-marketplace/redhat-operators-rpgdt" Dec 06 08:20:01 crc kubenswrapper[4895]: I1206 08:20:01.138370 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476a4f29-17d7-4fc3-8009-68c90f9f364a-catalog-content\") pod \"redhat-operators-rpgdt\" (UID: \"476a4f29-17d7-4fc3-8009-68c90f9f364a\") " pod="openshift-marketplace/redhat-operators-rpgdt" Dec 06 08:20:01 crc kubenswrapper[4895]: I1206 08:20:01.138821 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476a4f29-17d7-4fc3-8009-68c90f9f364a-catalog-content\") pod \"redhat-operators-rpgdt\" (UID: \"476a4f29-17d7-4fc3-8009-68c90f9f364a\") " pod="openshift-marketplace/redhat-operators-rpgdt" Dec 06 08:20:01 crc kubenswrapper[4895]: I1206 08:20:01.138893 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476a4f29-17d7-4fc3-8009-68c90f9f364a-utilities\") pod \"redhat-operators-rpgdt\" (UID: \"476a4f29-17d7-4fc3-8009-68c90f9f364a\") " pod="openshift-marketplace/redhat-operators-rpgdt" Dec 06 08:20:01 crc kubenswrapper[4895]: I1206 08:20:01.139241 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476a4f29-17d7-4fc3-8009-68c90f9f364a-utilities\") pod \"redhat-operators-rpgdt\" (UID: \"476a4f29-17d7-4fc3-8009-68c90f9f364a\") " 
pod="openshift-marketplace/redhat-operators-rpgdt" Dec 06 08:20:01 crc kubenswrapper[4895]: I1206 08:20:01.170668 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-487hf\" (UniqueName: \"kubernetes.io/projected/476a4f29-17d7-4fc3-8009-68c90f9f364a-kube-api-access-487hf\") pod \"redhat-operators-rpgdt\" (UID: \"476a4f29-17d7-4fc3-8009-68c90f9f364a\") " pod="openshift-marketplace/redhat-operators-rpgdt" Dec 06 08:20:01 crc kubenswrapper[4895]: I1206 08:20:01.293624 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rpgdt" Dec 06 08:20:01 crc kubenswrapper[4895]: I1206 08:20:01.761026 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rpgdt"] Dec 06 08:20:02 crc kubenswrapper[4895]: I1206 08:20:02.226791 4895 generic.go:334] "Generic (PLEG): container finished" podID="476a4f29-17d7-4fc3-8009-68c90f9f364a" containerID="736984a0e595733b18d099a07f377e8e215f7670d8ddbb2604cbf6e248a405e3" exitCode=0 Dec 06 08:20:02 crc kubenswrapper[4895]: I1206 08:20:02.226857 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rpgdt" event={"ID":"476a4f29-17d7-4fc3-8009-68c90f9f364a","Type":"ContainerDied","Data":"736984a0e595733b18d099a07f377e8e215f7670d8ddbb2604cbf6e248a405e3"} Dec 06 08:20:02 crc kubenswrapper[4895]: I1206 08:20:02.227030 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rpgdt" event={"ID":"476a4f29-17d7-4fc3-8009-68c90f9f364a","Type":"ContainerStarted","Data":"0da1dfa136055c61317e58daf7dd763e2f2d70414ebd524782b49ade869aeccd"} Dec 06 08:20:02 crc kubenswrapper[4895]: I1206 08:20:02.228759 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 08:20:03 crc kubenswrapper[4895]: I1206 08:20:03.239070 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rpgdt" event={"ID":"476a4f29-17d7-4fc3-8009-68c90f9f364a","Type":"ContainerStarted","Data":"39c7cc48e20c3a781a418e416cd36f9be70a8502164a89320db90879cfe5833f"} Dec 06 08:20:04 crc kubenswrapper[4895]: I1206 08:20:04.249149 4895 generic.go:334] "Generic (PLEG): container finished" podID="476a4f29-17d7-4fc3-8009-68c90f9f364a" containerID="39c7cc48e20c3a781a418e416cd36f9be70a8502164a89320db90879cfe5833f" exitCode=0 Dec 06 08:20:04 crc kubenswrapper[4895]: I1206 08:20:04.249262 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rpgdt" event={"ID":"476a4f29-17d7-4fc3-8009-68c90f9f364a","Type":"ContainerDied","Data":"39c7cc48e20c3a781a418e416cd36f9be70a8502164a89320db90879cfe5833f"} Dec 06 08:20:05 crc kubenswrapper[4895]: I1206 08:20:05.262149 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rpgdt" event={"ID":"476a4f29-17d7-4fc3-8009-68c90f9f364a","Type":"ContainerStarted","Data":"443f581cbfb84c2658f5c50e819d4a607a755c41de11df83aee09c2b2b0e279a"} Dec 06 08:20:05 crc kubenswrapper[4895]: I1206 08:20:05.287384 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rpgdt" podStartSLOduration=2.8771465259999998 podStartE2EDuration="5.287350729s" podCreationTimestamp="2025-12-06 08:20:00 +0000 UTC" firstStartedPulling="2025-12-06 08:20:02.228430083 +0000 UTC m=+4964.629818953" lastFinishedPulling="2025-12-06 08:20:04.638634296 +0000 UTC m=+4967.040023156" 
observedRunningTime="2025-12-06 08:20:05.279222631 +0000 UTC m=+4967.680611511" watchObservedRunningTime="2025-12-06 08:20:05.287350729 +0000 UTC m=+4967.688739639" Dec 06 08:20:11 crc kubenswrapper[4895]: I1206 08:20:11.294197 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rpgdt" Dec 06 08:20:11 crc kubenswrapper[4895]: I1206 08:20:11.294764 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rpgdt" Dec 06 08:20:11 crc kubenswrapper[4895]: I1206 08:20:11.366219 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rpgdt" Dec 06 08:20:11 crc kubenswrapper[4895]: I1206 08:20:11.430430 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rpgdt" Dec 06 08:20:11 crc kubenswrapper[4895]: I1206 08:20:11.608443 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rpgdt"] Dec 06 08:20:13 crc kubenswrapper[4895]: I1206 08:20:13.340894 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rpgdt" podUID="476a4f29-17d7-4fc3-8009-68c90f9f364a" containerName="registry-server" containerID="cri-o://443f581cbfb84c2658f5c50e819d4a607a755c41de11df83aee09c2b2b0e279a" gracePeriod=2 Dec 06 08:20:14 crc kubenswrapper[4895]: I1206 08:20:14.352700 4895 generic.go:334] "Generic (PLEG): container finished" podID="476a4f29-17d7-4fc3-8009-68c90f9f364a" containerID="443f581cbfb84c2658f5c50e819d4a607a755c41de11df83aee09c2b2b0e279a" exitCode=0 Dec 06 08:20:14 crc kubenswrapper[4895]: I1206 08:20:14.353067 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rpgdt" event={"ID":"476a4f29-17d7-4fc3-8009-68c90f9f364a","Type":"ContainerDied","Data":"443f581cbfb84c2658f5c50e819d4a607a755c41de11df83aee09c2b2b0e279a"} Dec 06 08:20:15 crc kubenswrapper[4895]: I1206 08:20:15.780166 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rpgdt" Dec 06 08:20:15 crc kubenswrapper[4895]: I1206 08:20:15.868500 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-487hf\" (UniqueName: \"kubernetes.io/projected/476a4f29-17d7-4fc3-8009-68c90f9f364a-kube-api-access-487hf\") pod \"476a4f29-17d7-4fc3-8009-68c90f9f364a\" (UID: \"476a4f29-17d7-4fc3-8009-68c90f9f364a\") " Dec 06 08:20:15 crc kubenswrapper[4895]: I1206 08:20:15.868571 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476a4f29-17d7-4fc3-8009-68c90f9f364a-utilities\") pod \"476a4f29-17d7-4fc3-8009-68c90f9f364a\" (UID: \"476a4f29-17d7-4fc3-8009-68c90f9f364a\") " Dec 06 08:20:15 crc kubenswrapper[4895]: I1206 08:20:15.868629 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476a4f29-17d7-4fc3-8009-68c90f9f364a-catalog-content\") pod \"476a4f29-17d7-4fc3-8009-68c90f9f364a\" (UID: \"476a4f29-17d7-4fc3-8009-68c90f9f364a\") " Dec 06 08:20:15 crc kubenswrapper[4895]: I1206 08:20:15.869660 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/476a4f29-17d7-4fc3-8009-68c90f9f364a-utilities" (OuterVolumeSpecName: "utilities") pod "476a4f29-17d7-4fc3-8009-68c90f9f364a" (UID: "476a4f29-17d7-4fc3-8009-68c90f9f364a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:20:15 crc kubenswrapper[4895]: I1206 08:20:15.870028 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476a4f29-17d7-4fc3-8009-68c90f9f364a-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:20:15 crc kubenswrapper[4895]: I1206 08:20:15.876692 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/476a4f29-17d7-4fc3-8009-68c90f9f364a-kube-api-access-487hf" (OuterVolumeSpecName: "kube-api-access-487hf") pod "476a4f29-17d7-4fc3-8009-68c90f9f364a" (UID: "476a4f29-17d7-4fc3-8009-68c90f9f364a"). InnerVolumeSpecName "kube-api-access-487hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:20:15 crc kubenswrapper[4895]: I1206 08:20:15.971187 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-487hf\" (UniqueName: \"kubernetes.io/projected/476a4f29-17d7-4fc3-8009-68c90f9f364a-kube-api-access-487hf\") on node \"crc\" DevicePath \"\"" Dec 06 08:20:15 crc kubenswrapper[4895]: I1206 08:20:15.983267 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/476a4f29-17d7-4fc3-8009-68c90f9f364a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "476a4f29-17d7-4fc3-8009-68c90f9f364a" (UID: "476a4f29-17d7-4fc3-8009-68c90f9f364a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:20:16 crc kubenswrapper[4895]: I1206 08:20:16.073969 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476a4f29-17d7-4fc3-8009-68c90f9f364a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:20:16 crc kubenswrapper[4895]: I1206 08:20:16.384505 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rpgdt" event={"ID":"476a4f29-17d7-4fc3-8009-68c90f9f364a","Type":"ContainerDied","Data":"0da1dfa136055c61317e58daf7dd763e2f2d70414ebd524782b49ade869aeccd"} Dec 06 08:20:16 crc kubenswrapper[4895]: I1206 08:20:16.384571 4895 scope.go:117] "RemoveContainer" containerID="443f581cbfb84c2658f5c50e819d4a607a755c41de11df83aee09c2b2b0e279a" Dec 06 08:20:16 crc kubenswrapper[4895]: I1206 08:20:16.384689 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rpgdt" Dec 06 08:20:16 crc kubenswrapper[4895]: I1206 08:20:16.428093 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rpgdt"] Dec 06 08:20:16 crc kubenswrapper[4895]: I1206 08:20:16.431100 4895 scope.go:117] "RemoveContainer" containerID="39c7cc48e20c3a781a418e416cd36f9be70a8502164a89320db90879cfe5833f" Dec 06 08:20:16 crc kubenswrapper[4895]: I1206 08:20:16.440232 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rpgdt"] Dec 06 08:20:16 crc kubenswrapper[4895]: I1206 08:20:16.470755 4895 scope.go:117] "RemoveContainer" containerID="736984a0e595733b18d099a07f377e8e215f7670d8ddbb2604cbf6e248a405e3" Dec 06 08:20:18 crc kubenswrapper[4895]: I1206 08:20:18.088615 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="476a4f29-17d7-4fc3-8009-68c90f9f364a" path="/var/lib/kubelet/pods/476a4f29-17d7-4fc3-8009-68c90f9f364a/volumes" Dec 06 08:21:59 crc kubenswrapper[4895]: I1206 08:21:59.696222 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:21:59 crc kubenswrapper[4895]: I1206 08:21:59.696797 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:22:29 crc kubenswrapper[4895]: I1206 08:22:29.696505 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:22:29 crc kubenswrapper[4895]: I1206 08:22:29.697738 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:22:58 crc kubenswrapper[4895]: I1206 08:22:58.585466 4895 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v9gpl"] Dec 06 08:22:58 crc kubenswrapper[4895]: E1206 08:22:58.586603 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476a4f29-17d7-4fc3-8009-68c90f9f364a" containerName="extract-content" Dec 06 08:22:58 crc kubenswrapper[4895]: I1206 08:22:58.586627 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="476a4f29-17d7-4fc3-8009-68c90f9f364a" containerName="extract-content" Dec 06 08:22:58 crc kubenswrapper[4895]: E1206 08:22:58.586651 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476a4f29-17d7-4fc3-8009-68c90f9f364a" containerName="extract-utilities" Dec 06 08:22:58 crc kubenswrapper[4895]: I1206 08:22:58.586662 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="476a4f29-17d7-4fc3-8009-68c90f9f364a" containerName="extract-utilities" Dec 06 08:22:58 crc kubenswrapper[4895]: E1206 08:22:58.586682 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476a4f29-17d7-4fc3-8009-68c90f9f364a" containerName="registry-server" Dec 06 08:22:58 crc kubenswrapper[4895]: I1206 08:22:58.586694 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="476a4f29-17d7-4fc3-8009-68c90f9f364a" containerName="registry-server" Dec 06 08:22:58 crc kubenswrapper[4895]: I1206 08:22:58.586937 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="476a4f29-17d7-4fc3-8009-68c90f9f364a" containerName="registry-server" Dec 06 08:22:58 crc kubenswrapper[4895]: I1206 08:22:58.588615 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9gpl" Dec 06 08:22:58 crc kubenswrapper[4895]: I1206 08:22:58.603363 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v9gpl"] Dec 06 08:22:58 crc kubenswrapper[4895]: I1206 08:22:58.640447 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ee914f0-9d68-494a-91c2-9f6e9c5afe74-catalog-content\") pod \"certified-operators-v9gpl\" (UID: \"9ee914f0-9d68-494a-91c2-9f6e9c5afe74\") " pod="openshift-marketplace/certified-operators-v9gpl" Dec 06 08:22:58 crc kubenswrapper[4895]: I1206 08:22:58.640557 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twtjt\" (UniqueName: \"kubernetes.io/projected/9ee914f0-9d68-494a-91c2-9f6e9c5afe74-kube-api-access-twtjt\") pod \"certified-operators-v9gpl\" (UID: \"9ee914f0-9d68-494a-91c2-9f6e9c5afe74\") " pod="openshift-marketplace/certified-operators-v9gpl" Dec 06 08:22:58 crc kubenswrapper[4895]: I1206 08:22:58.640616 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ee914f0-9d68-494a-91c2-9f6e9c5afe74-utilities\") pod \"certified-operators-v9gpl\" (UID: \"9ee914f0-9d68-494a-91c2-9f6e9c5afe74\") " pod="openshift-marketplace/certified-operators-v9gpl" Dec 06 08:22:58 crc kubenswrapper[4895]: I1206 08:22:58.742212 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twtjt\" (UniqueName: \"kubernetes.io/projected/9ee914f0-9d68-494a-91c2-9f6e9c5afe74-kube-api-access-twtjt\") pod \"certified-operators-v9gpl\" (UID: \"9ee914f0-9d68-494a-91c2-9f6e9c5afe74\") " pod="openshift-marketplace/certified-operators-v9gpl" Dec 06 08:22:58 crc kubenswrapper[4895]: I1206 
08:22:58.742284 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ee914f0-9d68-494a-91c2-9f6e9c5afe74-utilities\") pod \"certified-operators-v9gpl\" (UID: \"9ee914f0-9d68-494a-91c2-9f6e9c5afe74\") " pod="openshift-marketplace/certified-operators-v9gpl" Dec 06 08:22:58 crc kubenswrapper[4895]: I1206 08:22:58.742348 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ee914f0-9d68-494a-91c2-9f6e9c5afe74-catalog-content\") pod \"certified-operators-v9gpl\" (UID: \"9ee914f0-9d68-494a-91c2-9f6e9c5afe74\") " pod="openshift-marketplace/certified-operators-v9gpl" Dec 06 08:22:58 crc kubenswrapper[4895]: I1206 08:22:58.742760 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ee914f0-9d68-494a-91c2-9f6e9c5afe74-utilities\") pod \"certified-operators-v9gpl\" (UID: \"9ee914f0-9d68-494a-91c2-9f6e9c5afe74\") " pod="openshift-marketplace/certified-operators-v9gpl" Dec 06 08:22:58 crc kubenswrapper[4895]: I1206 08:22:58.742810 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ee914f0-9d68-494a-91c2-9f6e9c5afe74-catalog-content\") pod \"certified-operators-v9gpl\" (UID: \"9ee914f0-9d68-494a-91c2-9f6e9c5afe74\") " pod="openshift-marketplace/certified-operators-v9gpl" Dec 06 08:22:58 crc kubenswrapper[4895]: I1206 08:22:58.762780 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twtjt\" (UniqueName: \"kubernetes.io/projected/9ee914f0-9d68-494a-91c2-9f6e9c5afe74-kube-api-access-twtjt\") pod \"certified-operators-v9gpl\" (UID: \"9ee914f0-9d68-494a-91c2-9f6e9c5afe74\") " pod="openshift-marketplace/certified-operators-v9gpl" Dec 06 08:22:58 crc kubenswrapper[4895]: I1206 08:22:58.912403 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v9gpl" Dec 06 08:22:59 crc kubenswrapper[4895]: I1206 08:22:59.451679 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v9gpl"] Dec 06 08:22:59 crc kubenswrapper[4895]: I1206 08:22:59.695806 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:22:59 crc kubenswrapper[4895]: I1206 08:22:59.696124 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:22:59 crc kubenswrapper[4895]: I1206 08:22:59.696170 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 08:22:59 crc kubenswrapper[4895]: I1206 08:22:59.696829 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 08:22:59 crc kubenswrapper[4895]: I1206 08:22:59.696887 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" gracePeriod=600 Dec 06 08:22:59 crc kubenswrapper[4895]: I1206 08:22:59.774792 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9gpl" event={"ID":"9ee914f0-9d68-494a-91c2-9f6e9c5afe74","Type":"ContainerStarted","Data":"55a6d56f373463d410b86b1dc0d99020b449c82fff04e433b01668028e379da9"} Dec 06 08:23:00 crc kubenswrapper[4895]: E1206 08:23:00.331716 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:23:00 crc kubenswrapper[4895]: I1206 08:23:00.784272 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" exitCode=0 Dec 06 08:23:00 crc kubenswrapper[4895]: I1206 08:23:00.784386 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c"} Dec 06 08:23:00 crc kubenswrapper[4895]: I1206 08:23:00.785214 4895 scope.go:117] "RemoveContainer" 
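machine-config-daemon is now in CrashLoopBackOff: the kubelet refuses to restart it immediately, and the "back-off 5m0s" in the error shows the delay has already reached its ceiling. The delay doubles on each consecutive failure up to a five-minute cap, which is why the same "Error syncing pod, skipping" line recurs below every time a sync retries inside the window. A sketch of the doubling schedule (the 10s base is the kubelet's documented default and assumed here; the 5m cap matches the log):

    package main

    import (
    	"fmt"
    	"time"
    )

    // crashLoopDelay computes the exponential restart backoff: the delay
    // doubles per consecutive failure up to a ceiling.
    func crashLoopDelay(failures int, base, ceiling time.Duration) time.Duration {
    	d := base
    	for i := 1; i < failures; i++ {
    		d *= 2
    		if d >= ceiling {
    			return ceiling
    		}
    	}
    	return d
    }

    func main() {
    	for n := 1; n <= 7; n++ {
    		fmt.Printf("failure %d -> wait %v\n", n, crashLoopDelay(n, 10*time.Second, 5*time.Minute))
    	}
    	// failure 6 and beyond hit the 5m0s cap seen in the log.
    }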
containerID="e7e23d62a91fdd8275ac488be585c323dab52909b70e52954df201c867abf3e2" Dec 06 08:23:00 crc kubenswrapper[4895]: I1206 08:23:00.785969 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:23:00 crc kubenswrapper[4895]: E1206 08:23:00.786269 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:23:00 crc kubenswrapper[4895]: I1206 08:23:00.790182 4895 generic.go:334] "Generic (PLEG): container finished" podID="9ee914f0-9d68-494a-91c2-9f6e9c5afe74" containerID="8d46cb08f13aeab5910282912b0099a1a8de540cd9d219bca0e770831c9828e0" exitCode=0 Dec 06 08:23:00 crc kubenswrapper[4895]: I1206 08:23:00.790231 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9gpl" event={"ID":"9ee914f0-9d68-494a-91c2-9f6e9c5afe74","Type":"ContainerDied","Data":"8d46cb08f13aeab5910282912b0099a1a8de540cd9d219bca0e770831c9828e0"} Dec 06 08:23:01 crc kubenswrapper[4895]: I1206 08:23:01.800489 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9gpl" event={"ID":"9ee914f0-9d68-494a-91c2-9f6e9c5afe74","Type":"ContainerStarted","Data":"f93e8ae07cde17624e49eda8f7983dd1e6173e51414431632caa42a958b60408"} Dec 06 08:23:02 crc kubenswrapper[4895]: I1206 08:23:02.811055 4895 generic.go:334] "Generic (PLEG): container finished" podID="9ee914f0-9d68-494a-91c2-9f6e9c5afe74" containerID="f93e8ae07cde17624e49eda8f7983dd1e6173e51414431632caa42a958b60408" exitCode=0 Dec 06 08:23:02 crc kubenswrapper[4895]: I1206 08:23:02.811148 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9gpl" event={"ID":"9ee914f0-9d68-494a-91c2-9f6e9c5afe74","Type":"ContainerDied","Data":"f93e8ae07cde17624e49eda8f7983dd1e6173e51414431632caa42a958b60408"} Dec 06 08:23:03 crc kubenswrapper[4895]: I1206 08:23:03.825149 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9gpl" event={"ID":"9ee914f0-9d68-494a-91c2-9f6e9c5afe74","Type":"ContainerStarted","Data":"5bab3cbe06d083ada58176ced7bcf36a4c985470e0beee71aa1e22a6ee477351"} Dec 06 08:23:03 crc kubenswrapper[4895]: I1206 08:23:03.843894 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v9gpl" podStartSLOduration=3.381594629 podStartE2EDuration="5.843870597s" podCreationTimestamp="2025-12-06 08:22:58 +0000 UTC" firstStartedPulling="2025-12-06 08:23:00.791498327 +0000 UTC m=+5143.192887197" lastFinishedPulling="2025-12-06 08:23:03.253774295 +0000 UTC m=+5145.655163165" observedRunningTime="2025-12-06 08:23:03.841304108 +0000 UTC m=+5146.242692988" watchObservedRunningTime="2025-12-06 08:23:03.843870597 +0000 UTC m=+5146.245259477" Dec 06 08:23:08 crc kubenswrapper[4895]: I1206 08:23:08.913571 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v9gpl" Dec 06 08:23:08 crc kubenswrapper[4895]: I1206 08:23:08.914258 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-v9gpl" Dec 06 08:23:08 crc kubenswrapper[4895]: I1206 08:23:08.964128 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v9gpl" Dec 06 08:23:09 crc kubenswrapper[4895]: I1206 08:23:09.920158 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v9gpl" Dec 06 08:23:09 crc kubenswrapper[4895]: I1206 08:23:09.967166 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v9gpl"] Dec 06 08:23:11 crc kubenswrapper[4895]: I1206 08:23:11.879872 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v9gpl" podUID="9ee914f0-9d68-494a-91c2-9f6e9c5afe74" containerName="registry-server" containerID="cri-o://5bab3cbe06d083ada58176ced7bcf36a4c985470e0beee71aa1e22a6ee477351" gracePeriod=2 Dec 06 08:23:13 crc kubenswrapper[4895]: I1206 08:23:13.418330 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9gpl" Dec 06 08:23:13 crc kubenswrapper[4895]: I1206 08:23:13.589997 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twtjt\" (UniqueName: \"kubernetes.io/projected/9ee914f0-9d68-494a-91c2-9f6e9c5afe74-kube-api-access-twtjt\") pod \"9ee914f0-9d68-494a-91c2-9f6e9c5afe74\" (UID: \"9ee914f0-9d68-494a-91c2-9f6e9c5afe74\") " Dec 06 08:23:13 crc kubenswrapper[4895]: I1206 08:23:13.590097 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ee914f0-9d68-494a-91c2-9f6e9c5afe74-catalog-content\") pod \"9ee914f0-9d68-494a-91c2-9f6e9c5afe74\" (UID: \"9ee914f0-9d68-494a-91c2-9f6e9c5afe74\") " Dec 06 08:23:13 crc kubenswrapper[4895]: I1206 08:23:13.590273 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ee914f0-9d68-494a-91c2-9f6e9c5afe74-utilities\") pod \"9ee914f0-9d68-494a-91c2-9f6e9c5afe74\" (UID: \"9ee914f0-9d68-494a-91c2-9f6e9c5afe74\") " Dec 06 08:23:13 crc kubenswrapper[4895]: I1206 08:23:13.591687 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ee914f0-9d68-494a-91c2-9f6e9c5afe74-utilities" (OuterVolumeSpecName: "utilities") pod "9ee914f0-9d68-494a-91c2-9f6e9c5afe74" (UID: "9ee914f0-9d68-494a-91c2-9f6e9c5afe74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:23:13 crc kubenswrapper[4895]: I1206 08:23:13.598764 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ee914f0-9d68-494a-91c2-9f6e9c5afe74-kube-api-access-twtjt" (OuterVolumeSpecName: "kube-api-access-twtjt") pod "9ee914f0-9d68-494a-91c2-9f6e9c5afe74" (UID: "9ee914f0-9d68-494a-91c2-9f6e9c5afe74"). InnerVolumeSpecName "kube-api-access-twtjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:23:13 crc kubenswrapper[4895]: I1206 08:23:13.672221 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ee914f0-9d68-494a-91c2-9f6e9c5afe74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ee914f0-9d68-494a-91c2-9f6e9c5afe74" (UID: "9ee914f0-9d68-494a-91c2-9f6e9c5afe74"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:23:13 crc kubenswrapper[4895]: I1206 08:23:13.692266 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twtjt\" (UniqueName: \"kubernetes.io/projected/9ee914f0-9d68-494a-91c2-9f6e9c5afe74-kube-api-access-twtjt\") on node \"crc\" DevicePath \"\"" Dec 06 08:23:13 crc kubenswrapper[4895]: I1206 08:23:13.692304 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ee914f0-9d68-494a-91c2-9f6e9c5afe74-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:23:13 crc kubenswrapper[4895]: I1206 08:23:13.692313 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ee914f0-9d68-494a-91c2-9f6e9c5afe74-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:23:13 crc kubenswrapper[4895]: I1206 08:23:13.908663 4895 generic.go:334] "Generic (PLEG): container finished" podID="9ee914f0-9d68-494a-91c2-9f6e9c5afe74" containerID="5bab3cbe06d083ada58176ced7bcf36a4c985470e0beee71aa1e22a6ee477351" exitCode=0 Dec 06 08:23:13 crc kubenswrapper[4895]: I1206 08:23:13.908708 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9gpl" event={"ID":"9ee914f0-9d68-494a-91c2-9f6e9c5afe74","Type":"ContainerDied","Data":"5bab3cbe06d083ada58176ced7bcf36a4c985470e0beee71aa1e22a6ee477351"} Dec 06 08:23:13 crc kubenswrapper[4895]: I1206 08:23:13.908741 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9gpl" event={"ID":"9ee914f0-9d68-494a-91c2-9f6e9c5afe74","Type":"ContainerDied","Data":"55a6d56f373463d410b86b1dc0d99020b449c82fff04e433b01668028e379da9"} Dec 06 08:23:13 crc kubenswrapper[4895]: I1206 08:23:13.908758 4895 scope.go:117] "RemoveContainer" containerID="5bab3cbe06d083ada58176ced7bcf36a4c985470e0beee71aa1e22a6ee477351" Dec 06 08:23:13 crc kubenswrapper[4895]: I1206 08:23:13.908876 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v9gpl" Dec 06 08:23:13 crc kubenswrapper[4895]: I1206 08:23:13.944858 4895 scope.go:117] "RemoveContainer" containerID="f93e8ae07cde17624e49eda8f7983dd1e6173e51414431632caa42a958b60408" Dec 06 08:23:13 crc kubenswrapper[4895]: I1206 08:23:13.948405 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v9gpl"] Dec 06 08:23:13 crc kubenswrapper[4895]: I1206 08:23:13.956186 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v9gpl"] Dec 06 08:23:13 crc kubenswrapper[4895]: I1206 08:23:13.980884 4895 scope.go:117] "RemoveContainer" containerID="8d46cb08f13aeab5910282912b0099a1a8de540cd9d219bca0e770831c9828e0" Dec 06 08:23:14 crc kubenswrapper[4895]: I1206 08:23:14.002658 4895 scope.go:117] "RemoveContainer" containerID="5bab3cbe06d083ada58176ced7bcf36a4c985470e0beee71aa1e22a6ee477351" Dec 06 08:23:14 crc kubenswrapper[4895]: E1206 08:23:14.003248 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bab3cbe06d083ada58176ced7bcf36a4c985470e0beee71aa1e22a6ee477351\": container with ID starting with 5bab3cbe06d083ada58176ced7bcf36a4c985470e0beee71aa1e22a6ee477351 not found: ID does not exist" containerID="5bab3cbe06d083ada58176ced7bcf36a4c985470e0beee71aa1e22a6ee477351" Dec 06 08:23:14 crc kubenswrapper[4895]: I1206 08:23:14.003304 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bab3cbe06d083ada58176ced7bcf36a4c985470e0beee71aa1e22a6ee477351"} err="failed to get container status \"5bab3cbe06d083ada58176ced7bcf36a4c985470e0beee71aa1e22a6ee477351\": rpc error: code = NotFound desc = could not find container \"5bab3cbe06d083ada58176ced7bcf36a4c985470e0beee71aa1e22a6ee477351\": container with ID starting with 5bab3cbe06d083ada58176ced7bcf36a4c985470e0beee71aa1e22a6ee477351 not found: ID does not exist" Dec 06 08:23:14 crc kubenswrapper[4895]: I1206 08:23:14.003337 4895 scope.go:117] "RemoveContainer" containerID="f93e8ae07cde17624e49eda8f7983dd1e6173e51414431632caa42a958b60408" Dec 06 08:23:14 crc kubenswrapper[4895]: E1206 08:23:14.003817 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f93e8ae07cde17624e49eda8f7983dd1e6173e51414431632caa42a958b60408\": container with ID starting with f93e8ae07cde17624e49eda8f7983dd1e6173e51414431632caa42a958b60408 not found: ID does not exist" containerID="f93e8ae07cde17624e49eda8f7983dd1e6173e51414431632caa42a958b60408" Dec 06 08:23:14 crc kubenswrapper[4895]: I1206 08:23:14.003874 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f93e8ae07cde17624e49eda8f7983dd1e6173e51414431632caa42a958b60408"} err="failed to get container status \"f93e8ae07cde17624e49eda8f7983dd1e6173e51414431632caa42a958b60408\": rpc error: code = NotFound desc = could not find container \"f93e8ae07cde17624e49eda8f7983dd1e6173e51414431632caa42a958b60408\": container with ID starting with f93e8ae07cde17624e49eda8f7983dd1e6173e51414431632caa42a958b60408 not found: ID does not exist" Dec 06 08:23:14 crc kubenswrapper[4895]: I1206 08:23:14.003912 4895 scope.go:117] "RemoveContainer" containerID="8d46cb08f13aeab5910282912b0099a1a8de540cd9d219bca0e770831c9828e0" Dec 06 08:23:14 crc kubenswrapper[4895]: E1206 08:23:14.004219 4895 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8d46cb08f13aeab5910282912b0099a1a8de540cd9d219bca0e770831c9828e0\": container with ID starting with 8d46cb08f13aeab5910282912b0099a1a8de540cd9d219bca0e770831c9828e0 not found: ID does not exist" containerID="8d46cb08f13aeab5910282912b0099a1a8de540cd9d219bca0e770831c9828e0" Dec 06 08:23:14 crc kubenswrapper[4895]: I1206 08:23:14.004246 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d46cb08f13aeab5910282912b0099a1a8de540cd9d219bca0e770831c9828e0"} err="failed to get container status \"8d46cb08f13aeab5910282912b0099a1a8de540cd9d219bca0e770831c9828e0\": rpc error: code = NotFound desc = could not find container \"8d46cb08f13aeab5910282912b0099a1a8de540cd9d219bca0e770831c9828e0\": container with ID starting with 8d46cb08f13aeab5910282912b0099a1a8de540cd9d219bca0e770831c9828e0 not found: ID does not exist" Dec 06 08:23:14 crc kubenswrapper[4895]: I1206 08:23:14.061101 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ee914f0-9d68-494a-91c2-9f6e9c5afe74" path="/var/lib/kubelet/pods/9ee914f0-9d68-494a-91c2-9f6e9c5afe74/volumes" Dec 06 08:23:16 crc kubenswrapper[4895]: I1206 08:23:16.050403 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:23:16 crc kubenswrapper[4895]: E1206 08:23:16.050832 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:23:28 crc kubenswrapper[4895]: I1206 08:23:28.055408 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:23:28 crc kubenswrapper[4895]: E1206 08:23:28.056376 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:23:40 crc kubenswrapper[4895]: I1206 08:23:40.051048 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:23:40 crc kubenswrapper[4895]: E1206 08:23:40.051787 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:23:55 crc kubenswrapper[4895]: I1206 08:23:55.051450 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:23:55 crc kubenswrapper[4895]: E1206 08:23:55.052409 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:24:07 crc kubenswrapper[4895]: I1206 08:24:07.050678 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:24:07 crc kubenswrapper[4895]: E1206 08:24:07.051657 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:24:22 crc kubenswrapper[4895]: I1206 08:24:22.050592 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:24:22 crc kubenswrapper[4895]: E1206 08:24:22.051625 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:24:33 crc kubenswrapper[4895]: I1206 08:24:33.051028 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:24:33 crc kubenswrapper[4895]: E1206 08:24:33.051830 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:24:47 crc kubenswrapper[4895]: I1206 08:24:47.051217 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:24:47 crc kubenswrapper[4895]: E1206 08:24:47.052361 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:24:58 crc kubenswrapper[4895]: I1206 08:24:58.055319 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:24:58 crc kubenswrapper[4895]: E1206 08:24:58.056228 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:25:11 crc kubenswrapper[4895]: I1206 08:25:11.051501 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:25:11 crc kubenswrapper[4895]: E1206 08:25:11.052422 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:25:25 crc kubenswrapper[4895]: I1206 08:25:25.050712 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:25:25 crc kubenswrapper[4895]: E1206 08:25:25.052075 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:25:38 crc kubenswrapper[4895]: I1206 08:25:38.054777 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:25:38 crc kubenswrapper[4895]: E1206 08:25:38.056382 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:25:50 crc kubenswrapper[4895]: I1206 08:25:50.052408 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:25:50 crc kubenswrapper[4895]: E1206 08:25:50.053399 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:26:04 crc kubenswrapper[4895]: I1206 08:26:04.832998 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dqdfh"] Dec 06 08:26:04 crc kubenswrapper[4895]: E1206 08:26:04.834028 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ee914f0-9d68-494a-91c2-9f6e9c5afe74" containerName="extract-utilities" Dec 06 08:26:04 crc kubenswrapper[4895]: I1206 08:26:04.834048 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ee914f0-9d68-494a-91c2-9f6e9c5afe74" containerName="extract-utilities" Dec 06 08:26:04 crc kubenswrapper[4895]: E1206 08:26:04.834075 4895 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ee914f0-9d68-494a-91c2-9f6e9c5afe74" containerName="extract-content" Dec 06 08:26:04 crc kubenswrapper[4895]: I1206 08:26:04.834085 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ee914f0-9d68-494a-91c2-9f6e9c5afe74" containerName="extract-content" Dec 06 08:26:04 crc kubenswrapper[4895]: E1206 08:26:04.834097 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ee914f0-9d68-494a-91c2-9f6e9c5afe74" containerName="registry-server" Dec 06 08:26:04 crc kubenswrapper[4895]: I1206 08:26:04.834107 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ee914f0-9d68-494a-91c2-9f6e9c5afe74" containerName="registry-server" Dec 06 08:26:04 crc kubenswrapper[4895]: I1206 08:26:04.834390 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ee914f0-9d68-494a-91c2-9f6e9c5afe74" containerName="registry-server" Dec 06 08:26:04 crc kubenswrapper[4895]: I1206 08:26:04.836242 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dqdfh" Dec 06 08:26:04 crc kubenswrapper[4895]: I1206 08:26:04.851050 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dqdfh"] Dec 06 08:26:04 crc kubenswrapper[4895]: I1206 08:26:04.931031 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fplx\" (UniqueName: \"kubernetes.io/projected/54ffc678-4fcd-47b6-b766-0a069479c98b-kube-api-access-6fplx\") pod \"community-operators-dqdfh\" (UID: \"54ffc678-4fcd-47b6-b766-0a069479c98b\") " pod="openshift-marketplace/community-operators-dqdfh" Dec 06 08:26:04 crc kubenswrapper[4895]: I1206 08:26:04.931106 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54ffc678-4fcd-47b6-b766-0a069479c98b-utilities\") pod \"community-operators-dqdfh\" (UID: \"54ffc678-4fcd-47b6-b766-0a069479c98b\") " pod="openshift-marketplace/community-operators-dqdfh" Dec 06 08:26:04 crc kubenswrapper[4895]: I1206 08:26:04.931154 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54ffc678-4fcd-47b6-b766-0a069479c98b-catalog-content\") pod \"community-operators-dqdfh\" (UID: \"54ffc678-4fcd-47b6-b766-0a069479c98b\") " pod="openshift-marketplace/community-operators-dqdfh" Dec 06 08:26:05 crc kubenswrapper[4895]: I1206 08:26:05.032696 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54ffc678-4fcd-47b6-b766-0a069479c98b-utilities\") pod \"community-operators-dqdfh\" (UID: \"54ffc678-4fcd-47b6-b766-0a069479c98b\") " pod="openshift-marketplace/community-operators-dqdfh" Dec 06 08:26:05 crc kubenswrapper[4895]: I1206 08:26:05.032768 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54ffc678-4fcd-47b6-b766-0a069479c98b-catalog-content\") pod \"community-operators-dqdfh\" (UID: \"54ffc678-4fcd-47b6-b766-0a069479c98b\") " pod="openshift-marketplace/community-operators-dqdfh" Dec 06 08:26:05 crc kubenswrapper[4895]: I1206 08:26:05.032868 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fplx\" (UniqueName: 
\"kubernetes.io/projected/54ffc678-4fcd-47b6-b766-0a069479c98b-kube-api-access-6fplx\") pod \"community-operators-dqdfh\" (UID: \"54ffc678-4fcd-47b6-b766-0a069479c98b\") " pod="openshift-marketplace/community-operators-dqdfh" Dec 06 08:26:05 crc kubenswrapper[4895]: I1206 08:26:05.033201 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54ffc678-4fcd-47b6-b766-0a069479c98b-utilities\") pod \"community-operators-dqdfh\" (UID: \"54ffc678-4fcd-47b6-b766-0a069479c98b\") " pod="openshift-marketplace/community-operators-dqdfh" Dec 06 08:26:05 crc kubenswrapper[4895]: I1206 08:26:05.033571 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54ffc678-4fcd-47b6-b766-0a069479c98b-catalog-content\") pod \"community-operators-dqdfh\" (UID: \"54ffc678-4fcd-47b6-b766-0a069479c98b\") " pod="openshift-marketplace/community-operators-dqdfh" Dec 06 08:26:05 crc kubenswrapper[4895]: I1206 08:26:05.051124 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:26:05 crc kubenswrapper[4895]: E1206 08:26:05.051542 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:26:05 crc kubenswrapper[4895]: I1206 08:26:05.078787 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fplx\" (UniqueName: \"kubernetes.io/projected/54ffc678-4fcd-47b6-b766-0a069479c98b-kube-api-access-6fplx\") pod \"community-operators-dqdfh\" (UID: \"54ffc678-4fcd-47b6-b766-0a069479c98b\") " pod="openshift-marketplace/community-operators-dqdfh" Dec 06 08:26:05 crc kubenswrapper[4895]: I1206 08:26:05.177151 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dqdfh" Dec 06 08:26:05 crc kubenswrapper[4895]: I1206 08:26:05.679569 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dqdfh"] Dec 06 08:26:06 crc kubenswrapper[4895]: I1206 08:26:06.669614 4895 generic.go:334] "Generic (PLEG): container finished" podID="54ffc678-4fcd-47b6-b766-0a069479c98b" containerID="831f89605c2e619c392aa5f005c16a43e28df05082a390201495f692d1dad659" exitCode=0 Dec 06 08:26:06 crc kubenswrapper[4895]: I1206 08:26:06.669681 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqdfh" event={"ID":"54ffc678-4fcd-47b6-b766-0a069479c98b","Type":"ContainerDied","Data":"831f89605c2e619c392aa5f005c16a43e28df05082a390201495f692d1dad659"} Dec 06 08:26:06 crc kubenswrapper[4895]: I1206 08:26:06.669921 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqdfh" event={"ID":"54ffc678-4fcd-47b6-b766-0a069479c98b","Type":"ContainerStarted","Data":"d5c61bb91875b398bfd3b13a23745a19b8f8a3e4510d52e5270eb746dd0da248"} Dec 06 08:26:06 crc kubenswrapper[4895]: I1206 08:26:06.672420 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 08:26:07 crc kubenswrapper[4895]: I1206 08:26:07.682504 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqdfh" event={"ID":"54ffc678-4fcd-47b6-b766-0a069479c98b","Type":"ContainerStarted","Data":"50632e766e2b0b75b4b9656dcf303ef09cedad8265e4730c73d8aff97e81301e"} Dec 06 08:26:08 crc kubenswrapper[4895]: I1206 08:26:08.694728 4895 generic.go:334] "Generic (PLEG): container finished" podID="54ffc678-4fcd-47b6-b766-0a069479c98b" containerID="50632e766e2b0b75b4b9656dcf303ef09cedad8265e4730c73d8aff97e81301e" exitCode=0 Dec 06 08:26:08 crc kubenswrapper[4895]: I1206 08:26:08.694815 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqdfh" event={"ID":"54ffc678-4fcd-47b6-b766-0a069479c98b","Type":"ContainerDied","Data":"50632e766e2b0b75b4b9656dcf303ef09cedad8265e4730c73d8aff97e81301e"} Dec 06 08:26:09 crc kubenswrapper[4895]: I1206 08:26:09.705521 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqdfh" event={"ID":"54ffc678-4fcd-47b6-b766-0a069479c98b","Type":"ContainerStarted","Data":"b96c357f465781f2720d9c2a5e9646ef34591872525c1cca57926b28bde47496"} Dec 06 08:26:09 crc kubenswrapper[4895]: I1206 08:26:09.728298 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dqdfh" podStartSLOduration=3.233797707 podStartE2EDuration="5.728280587s" podCreationTimestamp="2025-12-06 08:26:04 +0000 UTC" firstStartedPulling="2025-12-06 08:26:06.672106785 +0000 UTC m=+5329.073495655" lastFinishedPulling="2025-12-06 08:26:09.166589625 +0000 UTC m=+5331.567978535" observedRunningTime="2025-12-06 08:26:09.720436826 +0000 UTC m=+5332.121825716" watchObservedRunningTime="2025-12-06 08:26:09.728280587 +0000 UTC m=+5332.129669457" Dec 06 08:26:15 crc kubenswrapper[4895]: I1206 08:26:15.177743 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dqdfh" Dec 06 08:26:15 crc kubenswrapper[4895]: I1206 08:26:15.178344 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-dqdfh" Dec 06 08:26:15 crc kubenswrapper[4895]: I1206 08:26:15.226734 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dqdfh" Dec 06 08:26:15 crc kubenswrapper[4895]: I1206 08:26:15.788729 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dqdfh" Dec 06 08:26:15 crc kubenswrapper[4895]: I1206 08:26:15.859911 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dqdfh"] Dec 06 08:26:17 crc kubenswrapper[4895]: I1206 08:26:17.760974 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dqdfh" podUID="54ffc678-4fcd-47b6-b766-0a069479c98b" containerName="registry-server" containerID="cri-o://b96c357f465781f2720d9c2a5e9646ef34591872525c1cca57926b28bde47496" gracePeriod=2 Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.149747 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dqdfh" Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.328910 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54ffc678-4fcd-47b6-b766-0a069479c98b-utilities\") pod \"54ffc678-4fcd-47b6-b766-0a069479c98b\" (UID: \"54ffc678-4fcd-47b6-b766-0a069479c98b\") " Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.328989 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fplx\" (UniqueName: \"kubernetes.io/projected/54ffc678-4fcd-47b6-b766-0a069479c98b-kube-api-access-6fplx\") pod \"54ffc678-4fcd-47b6-b766-0a069479c98b\" (UID: \"54ffc678-4fcd-47b6-b766-0a069479c98b\") " Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.329236 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54ffc678-4fcd-47b6-b766-0a069479c98b-catalog-content\") pod \"54ffc678-4fcd-47b6-b766-0a069479c98b\" (UID: \"54ffc678-4fcd-47b6-b766-0a069479c98b\") " Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.330590 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54ffc678-4fcd-47b6-b766-0a069479c98b-utilities" (OuterVolumeSpecName: "utilities") pod "54ffc678-4fcd-47b6-b766-0a069479c98b" (UID: "54ffc678-4fcd-47b6-b766-0a069479c98b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.334224 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54ffc678-4fcd-47b6-b766-0a069479c98b-kube-api-access-6fplx" (OuterVolumeSpecName: "kube-api-access-6fplx") pod "54ffc678-4fcd-47b6-b766-0a069479c98b" (UID: "54ffc678-4fcd-47b6-b766-0a069479c98b"). InnerVolumeSpecName "kube-api-access-6fplx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.391073 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54ffc678-4fcd-47b6-b766-0a069479c98b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54ffc678-4fcd-47b6-b766-0a069479c98b" (UID: "54ffc678-4fcd-47b6-b766-0a069479c98b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.431746 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54ffc678-4fcd-47b6-b766-0a069479c98b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.431790 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54ffc678-4fcd-47b6-b766-0a069479c98b-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.431831 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fplx\" (UniqueName: \"kubernetes.io/projected/54ffc678-4fcd-47b6-b766-0a069479c98b-kube-api-access-6fplx\") on node \"crc\" DevicePath \"\"" Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.769640 4895 generic.go:334] "Generic (PLEG): container finished" podID="54ffc678-4fcd-47b6-b766-0a069479c98b" containerID="b96c357f465781f2720d9c2a5e9646ef34591872525c1cca57926b28bde47496" exitCode=0 Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.769785 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dqdfh" Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.769745 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqdfh" event={"ID":"54ffc678-4fcd-47b6-b766-0a069479c98b","Type":"ContainerDied","Data":"b96c357f465781f2720d9c2a5e9646ef34591872525c1cca57926b28bde47496"} Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.770141 4895 scope.go:117] "RemoveContainer" containerID="b96c357f465781f2720d9c2a5e9646ef34591872525c1cca57926b28bde47496" Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.770388 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqdfh" event={"ID":"54ffc678-4fcd-47b6-b766-0a069479c98b","Type":"ContainerDied","Data":"d5c61bb91875b398bfd3b13a23745a19b8f8a3e4510d52e5270eb746dd0da248"} Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.787778 4895 scope.go:117] "RemoveContainer" containerID="50632e766e2b0b75b4b9656dcf303ef09cedad8265e4730c73d8aff97e81301e" Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.814766 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dqdfh"] Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.819069 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dqdfh"] Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.834696 4895 scope.go:117] "RemoveContainer" containerID="831f89605c2e619c392aa5f005c16a43e28df05082a390201495f692d1dad659" Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.856815 4895 scope.go:117] "RemoveContainer" containerID="b96c357f465781f2720d9c2a5e9646ef34591872525c1cca57926b28bde47496" Dec 06 08:26:18 crc kubenswrapper[4895]: E1206 08:26:18.857221 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b96c357f465781f2720d9c2a5e9646ef34591872525c1cca57926b28bde47496\": container with ID starting with b96c357f465781f2720d9c2a5e9646ef34591872525c1cca57926b28bde47496 not found: ID does not exist" containerID="b96c357f465781f2720d9c2a5e9646ef34591872525c1cca57926b28bde47496" Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.857261 
4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b96c357f465781f2720d9c2a5e9646ef34591872525c1cca57926b28bde47496"} err="failed to get container status \"b96c357f465781f2720d9c2a5e9646ef34591872525c1cca57926b28bde47496\": rpc error: code = NotFound desc = could not find container \"b96c357f465781f2720d9c2a5e9646ef34591872525c1cca57926b28bde47496\": container with ID starting with b96c357f465781f2720d9c2a5e9646ef34591872525c1cca57926b28bde47496 not found: ID does not exist" Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.857286 4895 scope.go:117] "RemoveContainer" containerID="50632e766e2b0b75b4b9656dcf303ef09cedad8265e4730c73d8aff97e81301e" Dec 06 08:26:18 crc kubenswrapper[4895]: E1206 08:26:18.857802 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50632e766e2b0b75b4b9656dcf303ef09cedad8265e4730c73d8aff97e81301e\": container with ID starting with 50632e766e2b0b75b4b9656dcf303ef09cedad8265e4730c73d8aff97e81301e not found: ID does not exist" containerID="50632e766e2b0b75b4b9656dcf303ef09cedad8265e4730c73d8aff97e81301e" Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.857844 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50632e766e2b0b75b4b9656dcf303ef09cedad8265e4730c73d8aff97e81301e"} err="failed to get container status \"50632e766e2b0b75b4b9656dcf303ef09cedad8265e4730c73d8aff97e81301e\": rpc error: code = NotFound desc = could not find container \"50632e766e2b0b75b4b9656dcf303ef09cedad8265e4730c73d8aff97e81301e\": container with ID starting with 50632e766e2b0b75b4b9656dcf303ef09cedad8265e4730c73d8aff97e81301e not found: ID does not exist" Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.857875 4895 scope.go:117] "RemoveContainer" containerID="831f89605c2e619c392aa5f005c16a43e28df05082a390201495f692d1dad659" Dec 06 08:26:18 crc kubenswrapper[4895]: E1206 08:26:18.858327 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"831f89605c2e619c392aa5f005c16a43e28df05082a390201495f692d1dad659\": container with ID starting with 831f89605c2e619c392aa5f005c16a43e28df05082a390201495f692d1dad659 not found: ID does not exist" containerID="831f89605c2e619c392aa5f005c16a43e28df05082a390201495f692d1dad659" Dec 06 08:26:18 crc kubenswrapper[4895]: I1206 08:26:18.858368 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"831f89605c2e619c392aa5f005c16a43e28df05082a390201495f692d1dad659"} err="failed to get container status \"831f89605c2e619c392aa5f005c16a43e28df05082a390201495f692d1dad659\": rpc error: code = NotFound desc = could not find container \"831f89605c2e619c392aa5f005c16a43e28df05082a390201495f692d1dad659\": container with ID starting with 831f89605c2e619c392aa5f005c16a43e28df05082a390201495f692d1dad659 not found: ID does not exist" Dec 06 08:26:20 crc kubenswrapper[4895]: I1206 08:26:20.053230 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:26:20 crc kubenswrapper[4895]: E1206 08:26:20.054576 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:26:20 crc kubenswrapper[4895]: I1206 08:26:20.063968 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54ffc678-4fcd-47b6-b766-0a069479c98b" path="/var/lib/kubelet/pods/54ffc678-4fcd-47b6-b766-0a069479c98b/volumes" Dec 06 08:26:34 crc kubenswrapper[4895]: I1206 08:26:34.051449 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:26:34 crc kubenswrapper[4895]: E1206 08:26:34.052816 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:26:46 crc kubenswrapper[4895]: I1206 08:26:46.051302 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:26:46 crc kubenswrapper[4895]: E1206 08:26:46.052314 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:26:56 crc kubenswrapper[4895]: I1206 08:26:56.749225 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rzkq8"] Dec 06 08:26:56 crc kubenswrapper[4895]: E1206 08:26:56.750191 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54ffc678-4fcd-47b6-b766-0a069479c98b" containerName="extract-utilities" Dec 06 08:26:56 crc kubenswrapper[4895]: I1206 08:26:56.750208 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="54ffc678-4fcd-47b6-b766-0a069479c98b" containerName="extract-utilities" Dec 06 08:26:56 crc kubenswrapper[4895]: E1206 08:26:56.750228 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54ffc678-4fcd-47b6-b766-0a069479c98b" containerName="registry-server" Dec 06 08:26:56 crc kubenswrapper[4895]: I1206 08:26:56.750237 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="54ffc678-4fcd-47b6-b766-0a069479c98b" containerName="registry-server" Dec 06 08:26:56 crc kubenswrapper[4895]: E1206 08:26:56.750268 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54ffc678-4fcd-47b6-b766-0a069479c98b" containerName="extract-content" Dec 06 08:26:56 crc kubenswrapper[4895]: I1206 08:26:56.750278 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="54ffc678-4fcd-47b6-b766-0a069479c98b" containerName="extract-content" Dec 06 08:26:56 crc kubenswrapper[4895]: I1206 08:26:56.750462 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="54ffc678-4fcd-47b6-b766-0a069479c98b" containerName="registry-server" Dec 06 08:26:56 crc kubenswrapper[4895]: I1206 08:26:56.751769 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rzkq8" Dec 06 08:26:56 crc kubenswrapper[4895]: I1206 08:26:56.761151 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzkq8"] Dec 06 08:26:56 crc kubenswrapper[4895]: I1206 08:26:56.908307 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhck7\" (UniqueName: \"kubernetes.io/projected/3774b045-fabb-4f71-95c9-e1ab80718d5b-kube-api-access-xhck7\") pod \"redhat-marketplace-rzkq8\" (UID: \"3774b045-fabb-4f71-95c9-e1ab80718d5b\") " pod="openshift-marketplace/redhat-marketplace-rzkq8" Dec 06 08:26:56 crc kubenswrapper[4895]: I1206 08:26:56.908441 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3774b045-fabb-4f71-95c9-e1ab80718d5b-utilities\") pod \"redhat-marketplace-rzkq8\" (UID: \"3774b045-fabb-4f71-95c9-e1ab80718d5b\") " pod="openshift-marketplace/redhat-marketplace-rzkq8" Dec 06 08:26:56 crc kubenswrapper[4895]: I1206 08:26:56.908499 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3774b045-fabb-4f71-95c9-e1ab80718d5b-catalog-content\") pod \"redhat-marketplace-rzkq8\" (UID: \"3774b045-fabb-4f71-95c9-e1ab80718d5b\") " pod="openshift-marketplace/redhat-marketplace-rzkq8" Dec 06 08:26:57 crc kubenswrapper[4895]: I1206 08:26:57.010286 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhck7\" (UniqueName: \"kubernetes.io/projected/3774b045-fabb-4f71-95c9-e1ab80718d5b-kube-api-access-xhck7\") pod \"redhat-marketplace-rzkq8\" (UID: \"3774b045-fabb-4f71-95c9-e1ab80718d5b\") " pod="openshift-marketplace/redhat-marketplace-rzkq8" Dec 06 08:26:57 crc kubenswrapper[4895]: I1206 08:26:57.010388 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3774b045-fabb-4f71-95c9-e1ab80718d5b-utilities\") pod \"redhat-marketplace-rzkq8\" (UID: \"3774b045-fabb-4f71-95c9-e1ab80718d5b\") " pod="openshift-marketplace/redhat-marketplace-rzkq8" Dec 06 08:26:57 crc kubenswrapper[4895]: I1206 08:26:57.010415 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3774b045-fabb-4f71-95c9-e1ab80718d5b-catalog-content\") pod \"redhat-marketplace-rzkq8\" (UID: \"3774b045-fabb-4f71-95c9-e1ab80718d5b\") " pod="openshift-marketplace/redhat-marketplace-rzkq8" Dec 06 08:26:57 crc kubenswrapper[4895]: I1206 08:26:57.010857 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3774b045-fabb-4f71-95c9-e1ab80718d5b-catalog-content\") pod \"redhat-marketplace-rzkq8\" (UID: \"3774b045-fabb-4f71-95c9-e1ab80718d5b\") " pod="openshift-marketplace/redhat-marketplace-rzkq8" Dec 06 08:26:57 crc kubenswrapper[4895]: I1206 08:26:57.011074 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3774b045-fabb-4f71-95c9-e1ab80718d5b-utilities\") pod \"redhat-marketplace-rzkq8\" (UID: \"3774b045-fabb-4f71-95c9-e1ab80718d5b\") " pod="openshift-marketplace/redhat-marketplace-rzkq8" Dec 06 08:26:57 crc kubenswrapper[4895]: I1206 08:26:57.027779 4895 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xhck7\" (UniqueName: \"kubernetes.io/projected/3774b045-fabb-4f71-95c9-e1ab80718d5b-kube-api-access-xhck7\") pod \"redhat-marketplace-rzkq8\" (UID: \"3774b045-fabb-4f71-95c9-e1ab80718d5b\") " pod="openshift-marketplace/redhat-marketplace-rzkq8" Dec 06 08:26:57 crc kubenswrapper[4895]: I1206 08:26:57.083666 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rzkq8" Dec 06 08:26:57 crc kubenswrapper[4895]: I1206 08:26:57.535017 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzkq8"] Dec 06 08:26:58 crc kubenswrapper[4895]: I1206 08:26:58.055837 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:26:58 crc kubenswrapper[4895]: E1206 08:26:58.056053 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:26:58 crc kubenswrapper[4895]: I1206 08:26:58.092231 4895 generic.go:334] "Generic (PLEG): container finished" podID="3774b045-fabb-4f71-95c9-e1ab80718d5b" containerID="d4764678bfb8ccaff652bd6393feba7755c646f4cbc9e0f42bd81f05d4d42a8a" exitCode=0 Dec 06 08:26:58 crc kubenswrapper[4895]: I1206 08:26:58.092347 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzkq8" event={"ID":"3774b045-fabb-4f71-95c9-e1ab80718d5b","Type":"ContainerDied","Data":"d4764678bfb8ccaff652bd6393feba7755c646f4cbc9e0f42bd81f05d4d42a8a"} Dec 06 08:26:58 crc kubenswrapper[4895]: I1206 08:26:58.092701 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzkq8" event={"ID":"3774b045-fabb-4f71-95c9-e1ab80718d5b","Type":"ContainerStarted","Data":"d226b664c714be56a15486a5f69068eb7e8165cb94ce4a79b168128d406be8b1"} Dec 06 08:26:59 crc kubenswrapper[4895]: I1206 08:26:59.102794 4895 generic.go:334] "Generic (PLEG): container finished" podID="3774b045-fabb-4f71-95c9-e1ab80718d5b" containerID="e62435a616287e294c6f15a4e191fa1cc4893f4433715cc93fa92a0db10d1e45" exitCode=0 Dec 06 08:26:59 crc kubenswrapper[4895]: I1206 08:26:59.102839 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzkq8" event={"ID":"3774b045-fabb-4f71-95c9-e1ab80718d5b","Type":"ContainerDied","Data":"e62435a616287e294c6f15a4e191fa1cc4893f4433715cc93fa92a0db10d1e45"} Dec 06 08:27:00 crc kubenswrapper[4895]: I1206 08:27:00.111276 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzkq8" event={"ID":"3774b045-fabb-4f71-95c9-e1ab80718d5b","Type":"ContainerStarted","Data":"ced59b1473c63bea4b62609d455809e281797ac2a8202f5ec26c87a615fad032"} Dec 06 08:27:00 crc kubenswrapper[4895]: I1206 08:27:00.133686 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rzkq8" podStartSLOduration=2.745111163 podStartE2EDuration="4.133655627s" podCreationTimestamp="2025-12-06 08:26:56 +0000 UTC" firstStartedPulling="2025-12-06 08:26:58.094266544 +0000 UTC m=+5380.495655414" lastFinishedPulling="2025-12-06 
08:26:59.482811008 +0000 UTC m=+5381.884199878" observedRunningTime="2025-12-06 08:27:00.130528704 +0000 UTC m=+5382.531917574" watchObservedRunningTime="2025-12-06 08:27:00.133655627 +0000 UTC m=+5382.535044497" Dec 06 08:27:07 crc kubenswrapper[4895]: I1206 08:27:07.084366 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rzkq8" Dec 06 08:27:07 crc kubenswrapper[4895]: I1206 08:27:07.085649 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rzkq8" Dec 06 08:27:07 crc kubenswrapper[4895]: I1206 08:27:07.139304 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rzkq8" Dec 06 08:27:07 crc kubenswrapper[4895]: I1206 08:27:07.201518 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rzkq8" Dec 06 08:27:08 crc kubenswrapper[4895]: I1206 08:27:08.461342 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzkq8"] Dec 06 08:27:09 crc kubenswrapper[4895]: I1206 08:27:09.180900 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rzkq8" podUID="3774b045-fabb-4f71-95c9-e1ab80718d5b" containerName="registry-server" containerID="cri-o://ced59b1473c63bea4b62609d455809e281797ac2a8202f5ec26c87a615fad032" gracePeriod=2 Dec 06 08:27:10 crc kubenswrapper[4895]: I1206 08:27:10.196767 4895 generic.go:334] "Generic (PLEG): container finished" podID="3774b045-fabb-4f71-95c9-e1ab80718d5b" containerID="ced59b1473c63bea4b62609d455809e281797ac2a8202f5ec26c87a615fad032" exitCode=0 Dec 06 08:27:10 crc kubenswrapper[4895]: I1206 08:27:10.196835 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzkq8" event={"ID":"3774b045-fabb-4f71-95c9-e1ab80718d5b","Type":"ContainerDied","Data":"ced59b1473c63bea4b62609d455809e281797ac2a8202f5ec26c87a615fad032"} Dec 06 08:27:10 crc kubenswrapper[4895]: I1206 08:27:10.723326 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rzkq8" Dec 06 08:27:10 crc kubenswrapper[4895]: I1206 08:27:10.772724 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhck7\" (UniqueName: \"kubernetes.io/projected/3774b045-fabb-4f71-95c9-e1ab80718d5b-kube-api-access-xhck7\") pod \"3774b045-fabb-4f71-95c9-e1ab80718d5b\" (UID: \"3774b045-fabb-4f71-95c9-e1ab80718d5b\") " Dec 06 08:27:10 crc kubenswrapper[4895]: I1206 08:27:10.772826 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3774b045-fabb-4f71-95c9-e1ab80718d5b-utilities\") pod \"3774b045-fabb-4f71-95c9-e1ab80718d5b\" (UID: \"3774b045-fabb-4f71-95c9-e1ab80718d5b\") " Dec 06 08:27:10 crc kubenswrapper[4895]: I1206 08:27:10.772901 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3774b045-fabb-4f71-95c9-e1ab80718d5b-catalog-content\") pod \"3774b045-fabb-4f71-95c9-e1ab80718d5b\" (UID: \"3774b045-fabb-4f71-95c9-e1ab80718d5b\") " Dec 06 08:27:10 crc kubenswrapper[4895]: I1206 08:27:10.774279 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3774b045-fabb-4f71-95c9-e1ab80718d5b-utilities" (OuterVolumeSpecName: "utilities") pod "3774b045-fabb-4f71-95c9-e1ab80718d5b" (UID: "3774b045-fabb-4f71-95c9-e1ab80718d5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:27:10 crc kubenswrapper[4895]: I1206 08:27:10.784997 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3774b045-fabb-4f71-95c9-e1ab80718d5b-kube-api-access-xhck7" (OuterVolumeSpecName: "kube-api-access-xhck7") pod "3774b045-fabb-4f71-95c9-e1ab80718d5b" (UID: "3774b045-fabb-4f71-95c9-e1ab80718d5b"). InnerVolumeSpecName "kube-api-access-xhck7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:27:10 crc kubenswrapper[4895]: I1206 08:27:10.794540 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3774b045-fabb-4f71-95c9-e1ab80718d5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3774b045-fabb-4f71-95c9-e1ab80718d5b" (UID: "3774b045-fabb-4f71-95c9-e1ab80718d5b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:27:10 crc kubenswrapper[4895]: I1206 08:27:10.874264 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhck7\" (UniqueName: \"kubernetes.io/projected/3774b045-fabb-4f71-95c9-e1ab80718d5b-kube-api-access-xhck7\") on node \"crc\" DevicePath \"\"" Dec 06 08:27:10 crc kubenswrapper[4895]: I1206 08:27:10.874299 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3774b045-fabb-4f71-95c9-e1ab80718d5b-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:27:10 crc kubenswrapper[4895]: I1206 08:27:10.874313 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3774b045-fabb-4f71-95c9-e1ab80718d5b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:27:11 crc kubenswrapper[4895]: I1206 08:27:11.209374 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzkq8" event={"ID":"3774b045-fabb-4f71-95c9-e1ab80718d5b","Type":"ContainerDied","Data":"d226b664c714be56a15486a5f69068eb7e8165cb94ce4a79b168128d406be8b1"} Dec 06 08:27:11 crc kubenswrapper[4895]: I1206 08:27:11.209452 4895 scope.go:117] "RemoveContainer" containerID="ced59b1473c63bea4b62609d455809e281797ac2a8202f5ec26c87a615fad032" Dec 06 08:27:11 crc kubenswrapper[4895]: I1206 08:27:11.209524 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rzkq8" Dec 06 08:27:11 crc kubenswrapper[4895]: I1206 08:27:11.234939 4895 scope.go:117] "RemoveContainer" containerID="e62435a616287e294c6f15a4e191fa1cc4893f4433715cc93fa92a0db10d1e45" Dec 06 08:27:11 crc kubenswrapper[4895]: I1206 08:27:11.247805 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzkq8"] Dec 06 08:27:11 crc kubenswrapper[4895]: I1206 08:27:11.254714 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzkq8"] Dec 06 08:27:11 crc kubenswrapper[4895]: I1206 08:27:11.278404 4895 scope.go:117] "RemoveContainer" containerID="d4764678bfb8ccaff652bd6393feba7755c646f4cbc9e0f42bd81f05d4d42a8a" Dec 06 08:27:12 crc kubenswrapper[4895]: I1206 08:27:12.051541 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:27:12 crc kubenswrapper[4895]: E1206 08:27:12.052170 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:27:12 crc kubenswrapper[4895]: I1206 08:27:12.067631 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3774b045-fabb-4f71-95c9-e1ab80718d5b" path="/var/lib/kubelet/pods/3774b045-fabb-4f71-95c9-e1ab80718d5b/volumes" Dec 06 08:27:23 crc kubenswrapper[4895]: I1206 08:27:23.050977 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:27:23 crc kubenswrapper[4895]: E1206 08:27:23.052253 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:27:35 crc kubenswrapper[4895]: I1206 08:27:35.050372 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:27:35 crc kubenswrapper[4895]: E1206 08:27:35.051165 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:27:50 crc kubenswrapper[4895]: I1206 08:27:50.050020 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:27:50 crc kubenswrapper[4895]: E1206 08:27:50.050721 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:28:04 crc kubenswrapper[4895]: I1206 08:28:04.051224 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c" Dec 06 08:28:05 crc kubenswrapper[4895]: I1206 08:28:05.648410 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"3b0d89eb20cdbde9e9f345a64cd22653f2357eae30fa81206c213d523d9ec776"} Dec 06 08:30:00 crc kubenswrapper[4895]: I1206 08:30:00.149229 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f"] Dec 06 08:30:00 crc kubenswrapper[4895]: E1206 08:30:00.150061 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3774b045-fabb-4f71-95c9-e1ab80718d5b" containerName="registry-server" Dec 06 08:30:00 crc kubenswrapper[4895]: I1206 08:30:00.150078 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3774b045-fabb-4f71-95c9-e1ab80718d5b" containerName="registry-server" Dec 06 08:30:00 crc kubenswrapper[4895]: E1206 08:30:00.150092 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3774b045-fabb-4f71-95c9-e1ab80718d5b" containerName="extract-utilities" Dec 06 08:30:00 crc kubenswrapper[4895]: I1206 08:30:00.150105 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3774b045-fabb-4f71-95c9-e1ab80718d5b" containerName="extract-utilities" Dec 06 08:30:00 crc kubenswrapper[4895]: E1206 08:30:00.150126 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3774b045-fabb-4f71-95c9-e1ab80718d5b" containerName="extract-content" Dec 06 08:30:00 crc kubenswrapper[4895]: I1206 08:30:00.150139 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3774b045-fabb-4f71-95c9-e1ab80718d5b" containerName="extract-content" 
Dec 06 08:30:00 crc kubenswrapper[4895]: I1206 08:30:00.150325 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="3774b045-fabb-4f71-95c9-e1ab80718d5b" containerName="registry-server"
Dec 06 08:30:00 crc kubenswrapper[4895]: I1206 08:30:00.150944 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f"
Dec 06 08:30:00 crc kubenswrapper[4895]: I1206 08:30:00.156455 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 06 08:30:00 crc kubenswrapper[4895]: I1206 08:30:00.157064 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 06 08:30:00 crc kubenswrapper[4895]: I1206 08:30:00.161539 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f"]
Dec 06 08:30:00 crc kubenswrapper[4895]: I1206 08:30:00.247733 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/805647f2-c388-4398-bebd-1e8e86021eac-secret-volume\") pod \"collect-profiles-29416830-q8b4f\" (UID: \"805647f2-c388-4398-bebd-1e8e86021eac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f"
Dec 06 08:30:00 crc kubenswrapper[4895]: I1206 08:30:00.247791 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/805647f2-c388-4398-bebd-1e8e86021eac-config-volume\") pod \"collect-profiles-29416830-q8b4f\" (UID: \"805647f2-c388-4398-bebd-1e8e86021eac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f"
Dec 06 08:30:00 crc kubenswrapper[4895]: I1206 08:30:00.247811 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckk9n\" (UniqueName: \"kubernetes.io/projected/805647f2-c388-4398-bebd-1e8e86021eac-kube-api-access-ckk9n\") pod \"collect-profiles-29416830-q8b4f\" (UID: \"805647f2-c388-4398-bebd-1e8e86021eac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f"
Dec 06 08:30:00 crc kubenswrapper[4895]: I1206 08:30:00.349696 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/805647f2-c388-4398-bebd-1e8e86021eac-secret-volume\") pod \"collect-profiles-29416830-q8b4f\" (UID: \"805647f2-c388-4398-bebd-1e8e86021eac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f"
Dec 06 08:30:00 crc kubenswrapper[4895]: I1206 08:30:00.349744 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/805647f2-c388-4398-bebd-1e8e86021eac-config-volume\") pod \"collect-profiles-29416830-q8b4f\" (UID: \"805647f2-c388-4398-bebd-1e8e86021eac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f"
Dec 06 08:30:00 crc kubenswrapper[4895]: I1206 08:30:00.349769 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckk9n\" (UniqueName: \"kubernetes.io/projected/805647f2-c388-4398-bebd-1e8e86021eac-kube-api-access-ckk9n\") pod \"collect-profiles-29416830-q8b4f\" (UID: \"805647f2-c388-4398-bebd-1e8e86021eac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f"
Dec 06 08:30:00 crc kubenswrapper[4895]: I1206 08:30:00.350712 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/805647f2-c388-4398-bebd-1e8e86021eac-config-volume\") pod \"collect-profiles-29416830-q8b4f\" (UID: \"805647f2-c388-4398-bebd-1e8e86021eac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f"
Dec 06 08:30:00 crc kubenswrapper[4895]: I1206 08:30:00.355886 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/805647f2-c388-4398-bebd-1e8e86021eac-secret-volume\") pod \"collect-profiles-29416830-q8b4f\" (UID: \"805647f2-c388-4398-bebd-1e8e86021eac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f"
Dec 06 08:30:00 crc kubenswrapper[4895]: I1206 08:30:00.368704 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckk9n\" (UniqueName: \"kubernetes.io/projected/805647f2-c388-4398-bebd-1e8e86021eac-kube-api-access-ckk9n\") pod \"collect-profiles-29416830-q8b4f\" (UID: \"805647f2-c388-4398-bebd-1e8e86021eac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f"
Dec 06 08:30:00 crc kubenswrapper[4895]: I1206 08:30:00.472587 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f"
Dec 06 08:30:00 crc kubenswrapper[4895]: I1206 08:30:00.898113 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f"]
Dec 06 08:30:01 crc kubenswrapper[4895]: I1206 08:30:01.616965 4895 generic.go:334] "Generic (PLEG): container finished" podID="805647f2-c388-4398-bebd-1e8e86021eac" containerID="a9423150e014fd2226b8518a87c4ed9f218df17a85588a10e64848803b805f5c" exitCode=0
Dec 06 08:30:01 crc kubenswrapper[4895]: I1206 08:30:01.617070 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f" event={"ID":"805647f2-c388-4398-bebd-1e8e86021eac","Type":"ContainerDied","Data":"a9423150e014fd2226b8518a87c4ed9f218df17a85588a10e64848803b805f5c"}
Dec 06 08:30:01 crc kubenswrapper[4895]: I1206 08:30:01.617203 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f" event={"ID":"805647f2-c388-4398-bebd-1e8e86021eac","Type":"ContainerStarted","Data":"309116b9413fcf6e554988578b77be5cc7a300ed8cb18bb2f86dde8e4b50bd45"}
Dec 06 08:30:02 crc kubenswrapper[4895]: I1206 08:30:02.934654 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f"
Dec 06 08:30:02 crc kubenswrapper[4895]: I1206 08:30:02.984099 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckk9n\" (UniqueName: \"kubernetes.io/projected/805647f2-c388-4398-bebd-1e8e86021eac-kube-api-access-ckk9n\") pod \"805647f2-c388-4398-bebd-1e8e86021eac\" (UID: \"805647f2-c388-4398-bebd-1e8e86021eac\") "
Dec 06 08:30:02 crc kubenswrapper[4895]: I1206 08:30:02.984222 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/805647f2-c388-4398-bebd-1e8e86021eac-config-volume\") pod \"805647f2-c388-4398-bebd-1e8e86021eac\" (UID: \"805647f2-c388-4398-bebd-1e8e86021eac\") "
Dec 06 08:30:02 crc kubenswrapper[4895]: I1206 08:30:02.984293 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/805647f2-c388-4398-bebd-1e8e86021eac-secret-volume\") pod \"805647f2-c388-4398-bebd-1e8e86021eac\" (UID: \"805647f2-c388-4398-bebd-1e8e86021eac\") "
Dec 06 08:30:02 crc kubenswrapper[4895]: I1206 08:30:02.985104 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/805647f2-c388-4398-bebd-1e8e86021eac-config-volume" (OuterVolumeSpecName: "config-volume") pod "805647f2-c388-4398-bebd-1e8e86021eac" (UID: "805647f2-c388-4398-bebd-1e8e86021eac"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 08:30:02 crc kubenswrapper[4895]: I1206 08:30:02.990043 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/805647f2-c388-4398-bebd-1e8e86021eac-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "805647f2-c388-4398-bebd-1e8e86021eac" (UID: "805647f2-c388-4398-bebd-1e8e86021eac"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 08:30:02 crc kubenswrapper[4895]: I1206 08:30:02.990685 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/805647f2-c388-4398-bebd-1e8e86021eac-kube-api-access-ckk9n" (OuterVolumeSpecName: "kube-api-access-ckk9n") pod "805647f2-c388-4398-bebd-1e8e86021eac" (UID: "805647f2-c388-4398-bebd-1e8e86021eac"). InnerVolumeSpecName "kube-api-access-ckk9n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 08:30:03 crc kubenswrapper[4895]: I1206 08:30:03.086070 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckk9n\" (UniqueName: \"kubernetes.io/projected/805647f2-c388-4398-bebd-1e8e86021eac-kube-api-access-ckk9n\") on node \"crc\" DevicePath \"\""
Dec 06 08:30:03 crc kubenswrapper[4895]: I1206 08:30:03.086118 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/805647f2-c388-4398-bebd-1e8e86021eac-config-volume\") on node \"crc\" DevicePath \"\""
Dec 06 08:30:03 crc kubenswrapper[4895]: I1206 08:30:03.086131 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/805647f2-c388-4398-bebd-1e8e86021eac-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 06 08:30:03 crc kubenswrapper[4895]: I1206 08:30:03.635050 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f" event={"ID":"805647f2-c388-4398-bebd-1e8e86021eac","Type":"ContainerDied","Data":"309116b9413fcf6e554988578b77be5cc7a300ed8cb18bb2f86dde8e4b50bd45"}
Dec 06 08:30:03 crc kubenswrapper[4895]: I1206 08:30:03.635116 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="309116b9413fcf6e554988578b77be5cc7a300ed8cb18bb2f86dde8e4b50bd45"
Dec 06 08:30:03 crc kubenswrapper[4895]: I1206 08:30:03.635178 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f"
Dec 06 08:30:03 crc kubenswrapper[4895]: I1206 08:30:03.998342 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8"]
Dec 06 08:30:04 crc kubenswrapper[4895]: I1206 08:30:04.003055 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416785-z4mb8"]
Dec 06 08:30:04 crc kubenswrapper[4895]: I1206 08:30:04.060537 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86538cf2-c14f-4b70-9e3f-da1ea7c31973" path="/var/lib/kubelet/pods/86538cf2-c14f-4b70-9e3f-da1ea7c31973/volumes"
Dec 06 08:30:13 crc kubenswrapper[4895]: I1206 08:30:13.566536 4895 scope.go:117] "RemoveContainer" containerID="35101e5e13f77236bac4c32d0a4a241e3986abc588821873c071cde69d99c654"
Dec 06 08:30:29 crc kubenswrapper[4895]: I1206 08:30:29.696111 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 08:30:29 crc kubenswrapper[4895]: I1206 08:30:29.696754 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 08:30:59 crc kubenswrapper[4895]: I1206 08:30:59.696171 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 08:30:59 crc kubenswrapper[4895]: I1206 08:30:59.696967 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 08:31:29 crc kubenswrapper[4895]: I1206 08:31:29.695699 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 08:31:29 crc kubenswrapper[4895]: I1206 08:31:29.696299 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 08:31:29 crc kubenswrapper[4895]: I1206 08:31:29.696380 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2"
Dec 06 08:31:29 crc kubenswrapper[4895]: I1206 08:31:29.697304 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b0d89eb20cdbde9e9f345a64cd22653f2357eae30fa81206c213d523d9ec776"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 08:31:29 crc kubenswrapper[4895]: I1206 08:31:29.697374 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://3b0d89eb20cdbde9e9f345a64cd22653f2357eae30fa81206c213d523d9ec776" gracePeriod=600
Dec 06 08:31:30 crc kubenswrapper[4895]: I1206 08:31:30.331522 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="3b0d89eb20cdbde9e9f345a64cd22653f2357eae30fa81206c213d523d9ec776" exitCode=0
Dec 06 08:31:30 crc kubenswrapper[4895]: I1206 08:31:30.331562 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"3b0d89eb20cdbde9e9f345a64cd22653f2357eae30fa81206c213d523d9ec776"}
Dec 06 08:31:30 crc kubenswrapper[4895]: I1206 08:31:30.331592 4895 scope.go:117] "RemoveContainer" containerID="72264aac5b0a0d841636bd72ff407c7204809a3cde611fcd9e7224e31fa83e8c"
Dec 06 08:31:31 crc kubenswrapper[4895]: I1206 08:31:31.339828 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903"}
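The probe traffic above is plain HTTP: the kubelet GETs http://127.0.0.1:8798/health, and "connect: connection refused" simply means nothing was listening on that port. Three failures 30 seconds apart (08:30:29, 08:30:59, 08:31:29) before the kill are consistent with the default failureThreshold of 3. A minimal stand-alone reproduction of the check (Go; port and path taken from the log, the 1s timeout is an assumption):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// Issue the same request the kubelet's HTTP prober sends; a refused
// connection or a non-2xx status is what gets logged as "Probe failed".
func main() {
	client := &http.Client{Timeout: time.Second} // probe timeout: assumed
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		fmt.Println("probe failure:", err) // e.g. connect: connection refused
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe result:", resp.Status)
}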
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:33:59 crc kubenswrapper[4895]: I1206 08:33:59.696895 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:34:29 crc kubenswrapper[4895]: I1206 08:34:29.696310 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:34:29 crc kubenswrapper[4895]: I1206 08:34:29.696995 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:34:59 crc kubenswrapper[4895]: I1206 08:34:59.696077 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:34:59 crc kubenswrapper[4895]: I1206 08:34:59.696887 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:34:59 crc kubenswrapper[4895]: I1206 08:34:59.696960 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 08:34:59 crc kubenswrapper[4895]: I1206 08:34:59.697794 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 08:34:59 crc kubenswrapper[4895]: I1206 08:34:59.697860 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" gracePeriod=600 Dec 06 08:34:59 crc kubenswrapper[4895]: E1206 08:34:59.887026 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:35:00 crc 
Dec 06 08:35:00 crc kubenswrapper[4895]: I1206 08:35:00.177280 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" exitCode=0
Dec 06 08:35:00 crc kubenswrapper[4895]: I1206 08:35:00.177346 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903"}
Dec 06 08:35:00 crc kubenswrapper[4895]: I1206 08:35:00.177389 4895 scope.go:117] "RemoveContainer" containerID="3b0d89eb20cdbde9e9f345a64cd22653f2357eae30fa81206c213d523d9ec776"
Dec 06 08:35:00 crc kubenswrapper[4895]: I1206 08:35:00.178000 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903"
Dec 06 08:35:00 crc kubenswrapper[4895]: E1206 08:35:00.178333 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 08:35:13 crc kubenswrapper[4895]: I1206 08:35:13.051571 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903"
Dec 06 08:35:13 crc kubenswrapper[4895]: E1206 08:35:13.052343 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
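The "back-off 5m0s" in these CrashLoopBackOff errors is the ceiling of the kubelet's restart back-off, which starts small and doubles on each crash until it hits five minutes; while the timer runs the pod is only re-synced, not restarted, which is why "Error syncing pod, skipping" repeats below. An illustrative sketch of that schedule (the 10s initial delay is the kubelet default, assumed here; this is not the kubelet's actual code):

package main

import (
	"fmt"
	"time"
)

// Doubling back-off with a 5m cap, mirroring the kubelet's container
// restart behaviour ("back-off 5m0s restarting failed container=...").
func main() {
	backoff, limit := 10*time.Second, 5*time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: wait %v\n", restart, backoff)
		backoff *= 2
		if backoff > limit {
			backoff = limit // prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s
		}
	}
}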
Need to start a new one" pod="openshift-marketplace/redhat-operators-bdkh7" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.474227 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bdkh7"] Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.606923 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/609146a6-a222-458b-bd20-86b6308abf05-utilities\") pod \"redhat-operators-bdkh7\" (UID: \"609146a6-a222-458b-bd20-86b6308abf05\") " pod="openshift-marketplace/redhat-operators-bdkh7" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.607058 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/609146a6-a222-458b-bd20-86b6308abf05-catalog-content\") pod \"redhat-operators-bdkh7\" (UID: \"609146a6-a222-458b-bd20-86b6308abf05\") " pod="openshift-marketplace/redhat-operators-bdkh7" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.607156 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqx7d\" (UniqueName: \"kubernetes.io/projected/609146a6-a222-458b-bd20-86b6308abf05-kube-api-access-lqx7d\") pod \"redhat-operators-bdkh7\" (UID: \"609146a6-a222-458b-bd20-86b6308abf05\") " pod="openshift-marketplace/redhat-operators-bdkh7" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.635996 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ddqw8"] Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.637749 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ddqw8" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.663658 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ddqw8"] Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.708465 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/609146a6-a222-458b-bd20-86b6308abf05-utilities\") pod \"redhat-operators-bdkh7\" (UID: \"609146a6-a222-458b-bd20-86b6308abf05\") " pod="openshift-marketplace/redhat-operators-bdkh7" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.708560 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/609146a6-a222-458b-bd20-86b6308abf05-catalog-content\") pod \"redhat-operators-bdkh7\" (UID: \"609146a6-a222-458b-bd20-86b6308abf05\") " pod="openshift-marketplace/redhat-operators-bdkh7" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.708604 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqx7d\" (UniqueName: \"kubernetes.io/projected/609146a6-a222-458b-bd20-86b6308abf05-kube-api-access-lqx7d\") pod \"redhat-operators-bdkh7\" (UID: \"609146a6-a222-458b-bd20-86b6308abf05\") " pod="openshift-marketplace/redhat-operators-bdkh7" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.709330 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/609146a6-a222-458b-bd20-86b6308abf05-catalog-content\") pod \"redhat-operators-bdkh7\" (UID: \"609146a6-a222-458b-bd20-86b6308abf05\") " 
pod="openshift-marketplace/redhat-operators-bdkh7" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.709463 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/609146a6-a222-458b-bd20-86b6308abf05-utilities\") pod \"redhat-operators-bdkh7\" (UID: \"609146a6-a222-458b-bd20-86b6308abf05\") " pod="openshift-marketplace/redhat-operators-bdkh7" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.745534 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqx7d\" (UniqueName: \"kubernetes.io/projected/609146a6-a222-458b-bd20-86b6308abf05-kube-api-access-lqx7d\") pod \"redhat-operators-bdkh7\" (UID: \"609146a6-a222-458b-bd20-86b6308abf05\") " pod="openshift-marketplace/redhat-operators-bdkh7" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.809868 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45nkb\" (UniqueName: \"kubernetes.io/projected/07c32eeb-aef3-4906-8002-b79befcc6b54-kube-api-access-45nkb\") pod \"certified-operators-ddqw8\" (UID: \"07c32eeb-aef3-4906-8002-b79befcc6b54\") " pod="openshift-marketplace/certified-operators-ddqw8" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.809938 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c32eeb-aef3-4906-8002-b79befcc6b54-utilities\") pod \"certified-operators-ddqw8\" (UID: \"07c32eeb-aef3-4906-8002-b79befcc6b54\") " pod="openshift-marketplace/certified-operators-ddqw8" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.809980 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c32eeb-aef3-4906-8002-b79befcc6b54-catalog-content\") pod \"certified-operators-ddqw8\" (UID: \"07c32eeb-aef3-4906-8002-b79befcc6b54\") " pod="openshift-marketplace/certified-operators-ddqw8" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.820816 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bdkh7" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.911514 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45nkb\" (UniqueName: \"kubernetes.io/projected/07c32eeb-aef3-4906-8002-b79befcc6b54-kube-api-access-45nkb\") pod \"certified-operators-ddqw8\" (UID: \"07c32eeb-aef3-4906-8002-b79befcc6b54\") " pod="openshift-marketplace/certified-operators-ddqw8" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.911622 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c32eeb-aef3-4906-8002-b79befcc6b54-utilities\") pod \"certified-operators-ddqw8\" (UID: \"07c32eeb-aef3-4906-8002-b79befcc6b54\") " pod="openshift-marketplace/certified-operators-ddqw8" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.911682 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c32eeb-aef3-4906-8002-b79befcc6b54-catalog-content\") pod \"certified-operators-ddqw8\" (UID: \"07c32eeb-aef3-4906-8002-b79befcc6b54\") " pod="openshift-marketplace/certified-operators-ddqw8" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.912194 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c32eeb-aef3-4906-8002-b79befcc6b54-utilities\") pod \"certified-operators-ddqw8\" (UID: \"07c32eeb-aef3-4906-8002-b79befcc6b54\") " pod="openshift-marketplace/certified-operators-ddqw8" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.912224 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c32eeb-aef3-4906-8002-b79befcc6b54-catalog-content\") pod \"certified-operators-ddqw8\" (UID: \"07c32eeb-aef3-4906-8002-b79befcc6b54\") " pod="openshift-marketplace/certified-operators-ddqw8" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.929330 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45nkb\" (UniqueName: \"kubernetes.io/projected/07c32eeb-aef3-4906-8002-b79befcc6b54-kube-api-access-45nkb\") pod \"certified-operators-ddqw8\" (UID: \"07c32eeb-aef3-4906-8002-b79befcc6b54\") " pod="openshift-marketplace/certified-operators-ddqw8" Dec 06 08:35:14 crc kubenswrapper[4895]: I1206 08:35:14.975072 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ddqw8" Dec 06 08:35:15 crc kubenswrapper[4895]: I1206 08:35:15.336948 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bdkh7"] Dec 06 08:35:15 crc kubenswrapper[4895]: W1206 08:35:15.501287 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07c32eeb_aef3_4906_8002_b79befcc6b54.slice/crio-fc5eed1a7c6c98a543c58b645e8daa140d086f056ad7b188dece6841ab605820 WatchSource:0}: Error finding container fc5eed1a7c6c98a543c58b645e8daa140d086f056ad7b188dece6841ab605820: Status 404 returned error can't find the container with id fc5eed1a7c6c98a543c58b645e8daa140d086f056ad7b188dece6841ab605820 Dec 06 08:35:15 crc kubenswrapper[4895]: I1206 08:35:15.502491 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ddqw8"] Dec 06 08:35:16 crc kubenswrapper[4895]: I1206 08:35:16.311615 4895 generic.go:334] "Generic (PLEG): container finished" podID="609146a6-a222-458b-bd20-86b6308abf05" containerID="bdce497395b3a6842a11785ed079b9c8c84c6b8b60297a909cf62e4ad4321fba" exitCode=0 Dec 06 08:35:16 crc kubenswrapper[4895]: I1206 08:35:16.311691 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdkh7" event={"ID":"609146a6-a222-458b-bd20-86b6308abf05","Type":"ContainerDied","Data":"bdce497395b3a6842a11785ed079b9c8c84c6b8b60297a909cf62e4ad4321fba"} Dec 06 08:35:16 crc kubenswrapper[4895]: I1206 08:35:16.311722 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdkh7" event={"ID":"609146a6-a222-458b-bd20-86b6308abf05","Type":"ContainerStarted","Data":"6f3419336584fd676005d2bde956bfefba579dddf9752b6aea5fb56a58d72728"} Dec 06 08:35:16 crc kubenswrapper[4895]: I1206 08:35:16.314075 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 08:35:16 crc kubenswrapper[4895]: I1206 08:35:16.317276 4895 generic.go:334] "Generic (PLEG): container finished" podID="07c32eeb-aef3-4906-8002-b79befcc6b54" containerID="b5adf9a202ac768230da6c35bc36187ae3f7bad4e0f329d970cfddf02f00d3e1" exitCode=0 Dec 06 08:35:16 crc kubenswrapper[4895]: I1206 08:35:16.317446 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ddqw8" event={"ID":"07c32eeb-aef3-4906-8002-b79befcc6b54","Type":"ContainerDied","Data":"b5adf9a202ac768230da6c35bc36187ae3f7bad4e0f329d970cfddf02f00d3e1"} Dec 06 08:35:16 crc kubenswrapper[4895]: I1206 08:35:16.317511 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ddqw8" event={"ID":"07c32eeb-aef3-4906-8002-b79befcc6b54","Type":"ContainerStarted","Data":"fc5eed1a7c6c98a543c58b645e8daa140d086f056ad7b188dece6841ab605820"} Dec 06 08:35:20 crc kubenswrapper[4895]: I1206 08:35:20.356202 4895 generic.go:334] "Generic (PLEG): container finished" podID="609146a6-a222-458b-bd20-86b6308abf05" containerID="1279f64afd8d23ec3924333b5986eded1e621d41d704e0db9c71650fe152a9b8" exitCode=0 Dec 06 08:35:20 crc kubenswrapper[4895]: I1206 08:35:20.356261 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdkh7" event={"ID":"609146a6-a222-458b-bd20-86b6308abf05","Type":"ContainerDied","Data":"1279f64afd8d23ec3924333b5986eded1e621d41d704e0db9c71650fe152a9b8"} Dec 06 08:35:20 crc kubenswrapper[4895]: I1206 
08:35:20.363619 4895 generic.go:334] "Generic (PLEG): container finished" podID="07c32eeb-aef3-4906-8002-b79befcc6b54" containerID="2c9e8316d62a79e0396312b48f9ded9ac11169cf311cd4136cc3f76c609f9a9a" exitCode=0 Dec 06 08:35:20 crc kubenswrapper[4895]: I1206 08:35:20.363664 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ddqw8" event={"ID":"07c32eeb-aef3-4906-8002-b79befcc6b54","Type":"ContainerDied","Data":"2c9e8316d62a79e0396312b48f9ded9ac11169cf311cd4136cc3f76c609f9a9a"} Dec 06 08:35:22 crc kubenswrapper[4895]: I1206 08:35:22.381790 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ddqw8" event={"ID":"07c32eeb-aef3-4906-8002-b79befcc6b54","Type":"ContainerStarted","Data":"5c83bf753d384861948d95791172f4283acd6c28d1e73bc01f92b8fb05a9bf04"} Dec 06 08:35:23 crc kubenswrapper[4895]: I1206 08:35:23.396315 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdkh7" event={"ID":"609146a6-a222-458b-bd20-86b6308abf05","Type":"ContainerStarted","Data":"b4aa2cb0af23114182ec7c751e89a9aeb53a135c4474fc124f5a11918a76a06f"} Dec 06 08:35:23 crc kubenswrapper[4895]: I1206 08:35:23.423771 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bdkh7" podStartSLOduration=3.371723002 podStartE2EDuration="9.423747652s" podCreationTimestamp="2025-12-06 08:35:14 +0000 UTC" firstStartedPulling="2025-12-06 08:35:16.313790594 +0000 UTC m=+5878.715179474" lastFinishedPulling="2025-12-06 08:35:22.365815254 +0000 UTC m=+5884.767204124" observedRunningTime="2025-12-06 08:35:23.422830957 +0000 UTC m=+5885.824219827" watchObservedRunningTime="2025-12-06 08:35:23.423747652 +0000 UTC m=+5885.825136522" Dec 06 08:35:23 crc kubenswrapper[4895]: I1206 08:35:23.428426 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ddqw8" podStartSLOduration=4.225317191 podStartE2EDuration="9.428410916s" podCreationTimestamp="2025-12-06 08:35:14 +0000 UTC" firstStartedPulling="2025-12-06 08:35:16.318574572 +0000 UTC m=+5878.719963452" lastFinishedPulling="2025-12-06 08:35:21.521668307 +0000 UTC m=+5883.923057177" observedRunningTime="2025-12-06 08:35:22.417362676 +0000 UTC m=+5884.818751556" watchObservedRunningTime="2025-12-06 08:35:23.428410916 +0000 UTC m=+5885.829799786" Dec 06 08:35:24 crc kubenswrapper[4895]: I1206 08:35:24.820943 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bdkh7" Dec 06 08:35:24 crc kubenswrapper[4895]: I1206 08:35:24.822494 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bdkh7" Dec 06 08:35:24 crc kubenswrapper[4895]: I1206 08:35:24.975457 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ddqw8" Dec 06 08:35:24 crc kubenswrapper[4895]: I1206 08:35:24.975529 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ddqw8" Dec 06 08:35:25 crc kubenswrapper[4895]: I1206 08:35:25.013201 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ddqw8" Dec 06 08:35:25 crc kubenswrapper[4895]: I1206 08:35:25.868975 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bdkh7" 
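The two pod_startup_latency_tracker entries are self-consistent: podStartSLOduration is the end-to-end startup duration with the image-pull window subtracted, computed on the kubelet's monotonic clock (the m=+... readings). Checking redhat-operators-bdkh7's numbers:

package main

import "fmt"

// podStartSLOduration = podStartE2EDuration - image pull window,
// using the monotonic m=+... timestamps from the log entry above.
func main() {
	e2e := 9.423747652                      // podStartE2EDuration (s)
	pull := 5884.767204124 - 5878.715179474 // lastFinishedPulling - firstStartedPulling
	fmt.Printf("%.9f\n", e2e-pull)          // ≈ 3.371723002, as logged
}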
podUID="609146a6-a222-458b-bd20-86b6308abf05" containerName="registry-server" probeResult="failure" output=< Dec 06 08:35:25 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Dec 06 08:35:25 crc kubenswrapper[4895]: > Dec 06 08:35:26 crc kubenswrapper[4895]: I1206 08:35:26.051341 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:35:26 crc kubenswrapper[4895]: E1206 08:35:26.051902 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:35:34 crc kubenswrapper[4895]: I1206 08:35:34.906335 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bdkh7" Dec 06 08:35:34 crc kubenswrapper[4895]: I1206 08:35:34.993519 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bdkh7" Dec 06 08:35:35 crc kubenswrapper[4895]: I1206 08:35:35.043391 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ddqw8" Dec 06 08:35:35 crc kubenswrapper[4895]: I1206 08:35:35.161178 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bdkh7"] Dec 06 08:35:36 crc kubenswrapper[4895]: I1206 08:35:36.508772 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bdkh7" podUID="609146a6-a222-458b-bd20-86b6308abf05" containerName="registry-server" containerID="cri-o://b4aa2cb0af23114182ec7c751e89a9aeb53a135c4474fc124f5a11918a76a06f" gracePeriod=2 Dec 06 08:35:37 crc kubenswrapper[4895]: I1206 08:35:37.359550 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ddqw8"] Dec 06 08:35:37 crc kubenswrapper[4895]: I1206 08:35:37.359921 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ddqw8" podUID="07c32eeb-aef3-4906-8002-b79befcc6b54" containerName="registry-server" containerID="cri-o://5c83bf753d384861948d95791172f4283acd6c28d1e73bc01f92b8fb05a9bf04" gracePeriod=2 Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.054155 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:35:38 crc kubenswrapper[4895]: E1206 08:35:38.054755 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.059626 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bdkh7" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.152273 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqx7d\" (UniqueName: \"kubernetes.io/projected/609146a6-a222-458b-bd20-86b6308abf05-kube-api-access-lqx7d\") pod \"609146a6-a222-458b-bd20-86b6308abf05\" (UID: \"609146a6-a222-458b-bd20-86b6308abf05\") " Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.152375 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/609146a6-a222-458b-bd20-86b6308abf05-catalog-content\") pod \"609146a6-a222-458b-bd20-86b6308abf05\" (UID: \"609146a6-a222-458b-bd20-86b6308abf05\") " Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.152445 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/609146a6-a222-458b-bd20-86b6308abf05-utilities\") pod \"609146a6-a222-458b-bd20-86b6308abf05\" (UID: \"609146a6-a222-458b-bd20-86b6308abf05\") " Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.153430 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/609146a6-a222-458b-bd20-86b6308abf05-utilities" (OuterVolumeSpecName: "utilities") pod "609146a6-a222-458b-bd20-86b6308abf05" (UID: "609146a6-a222-458b-bd20-86b6308abf05"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.159220 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/609146a6-a222-458b-bd20-86b6308abf05-kube-api-access-lqx7d" (OuterVolumeSpecName: "kube-api-access-lqx7d") pod "609146a6-a222-458b-bd20-86b6308abf05" (UID: "609146a6-a222-458b-bd20-86b6308abf05"). InnerVolumeSpecName "kube-api-access-lqx7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.254057 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/609146a6-a222-458b-bd20-86b6308abf05-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.254091 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqx7d\" (UniqueName: \"kubernetes.io/projected/609146a6-a222-458b-bd20-86b6308abf05-kube-api-access-lqx7d\") on node \"crc\" DevicePath \"\"" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.260714 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ddqw8" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.269973 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/609146a6-a222-458b-bd20-86b6308abf05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "609146a6-a222-458b-bd20-86b6308abf05" (UID: "609146a6-a222-458b-bd20-86b6308abf05"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.355611 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c32eeb-aef3-4906-8002-b79befcc6b54-catalog-content\") pod \"07c32eeb-aef3-4906-8002-b79befcc6b54\" (UID: \"07c32eeb-aef3-4906-8002-b79befcc6b54\") " Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.355993 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c32eeb-aef3-4906-8002-b79befcc6b54-utilities\") pod \"07c32eeb-aef3-4906-8002-b79befcc6b54\" (UID: \"07c32eeb-aef3-4906-8002-b79befcc6b54\") " Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.356023 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45nkb\" (UniqueName: \"kubernetes.io/projected/07c32eeb-aef3-4906-8002-b79befcc6b54-kube-api-access-45nkb\") pod \"07c32eeb-aef3-4906-8002-b79befcc6b54\" (UID: \"07c32eeb-aef3-4906-8002-b79befcc6b54\") " Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.356315 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/609146a6-a222-458b-bd20-86b6308abf05-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.357044 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c32eeb-aef3-4906-8002-b79befcc6b54-utilities" (OuterVolumeSpecName: "utilities") pod "07c32eeb-aef3-4906-8002-b79befcc6b54" (UID: "07c32eeb-aef3-4906-8002-b79befcc6b54"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.360934 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c32eeb-aef3-4906-8002-b79befcc6b54-kube-api-access-45nkb" (OuterVolumeSpecName: "kube-api-access-45nkb") pod "07c32eeb-aef3-4906-8002-b79befcc6b54" (UID: "07c32eeb-aef3-4906-8002-b79befcc6b54"). InnerVolumeSpecName "kube-api-access-45nkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.413704 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c32eeb-aef3-4906-8002-b79befcc6b54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07c32eeb-aef3-4906-8002-b79befcc6b54" (UID: "07c32eeb-aef3-4906-8002-b79befcc6b54"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.457770 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c32eeb-aef3-4906-8002-b79befcc6b54-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.457817 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c32eeb-aef3-4906-8002-b79befcc6b54-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.457829 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45nkb\" (UniqueName: \"kubernetes.io/projected/07c32eeb-aef3-4906-8002-b79befcc6b54-kube-api-access-45nkb\") on node \"crc\" DevicePath \"\"" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.532198 4895 generic.go:334] "Generic (PLEG): container finished" podID="609146a6-a222-458b-bd20-86b6308abf05" containerID="b4aa2cb0af23114182ec7c751e89a9aeb53a135c4474fc124f5a11918a76a06f" exitCode=0 Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.532271 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdkh7" event={"ID":"609146a6-a222-458b-bd20-86b6308abf05","Type":"ContainerDied","Data":"b4aa2cb0af23114182ec7c751e89a9aeb53a135c4474fc124f5a11918a76a06f"} Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.532366 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bdkh7" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.532400 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdkh7" event={"ID":"609146a6-a222-458b-bd20-86b6308abf05","Type":"ContainerDied","Data":"6f3419336584fd676005d2bde956bfefba579dddf9752b6aea5fb56a58d72728"} Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.532435 4895 scope.go:117] "RemoveContainer" containerID="b4aa2cb0af23114182ec7c751e89a9aeb53a135c4474fc124f5a11918a76a06f" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.540425 4895 generic.go:334] "Generic (PLEG): container finished" podID="07c32eeb-aef3-4906-8002-b79befcc6b54" containerID="5c83bf753d384861948d95791172f4283acd6c28d1e73bc01f92b8fb05a9bf04" exitCode=0 Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.540482 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ddqw8" event={"ID":"07c32eeb-aef3-4906-8002-b79befcc6b54","Type":"ContainerDied","Data":"5c83bf753d384861948d95791172f4283acd6c28d1e73bc01f92b8fb05a9bf04"} Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.540511 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ddqw8" event={"ID":"07c32eeb-aef3-4906-8002-b79befcc6b54","Type":"ContainerDied","Data":"fc5eed1a7c6c98a543c58b645e8daa140d086f056ad7b188dece6841ab605820"} Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.540581 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ddqw8" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.564898 4895 scope.go:117] "RemoveContainer" containerID="1279f64afd8d23ec3924333b5986eded1e621d41d704e0db9c71650fe152a9b8" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.588199 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ddqw8"] Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.594370 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ddqw8"] Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.608264 4895 scope.go:117] "RemoveContainer" containerID="bdce497395b3a6842a11785ed079b9c8c84c6b8b60297a909cf62e4ad4321fba" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.613703 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bdkh7"] Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.619136 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bdkh7"] Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.622698 4895 scope.go:117] "RemoveContainer" containerID="b4aa2cb0af23114182ec7c751e89a9aeb53a135c4474fc124f5a11918a76a06f" Dec 06 08:35:38 crc kubenswrapper[4895]: E1206 08:35:38.623053 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4aa2cb0af23114182ec7c751e89a9aeb53a135c4474fc124f5a11918a76a06f\": container with ID starting with b4aa2cb0af23114182ec7c751e89a9aeb53a135c4474fc124f5a11918a76a06f not found: ID does not exist" containerID="b4aa2cb0af23114182ec7c751e89a9aeb53a135c4474fc124f5a11918a76a06f" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.623096 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4aa2cb0af23114182ec7c751e89a9aeb53a135c4474fc124f5a11918a76a06f"} err="failed to get container status \"b4aa2cb0af23114182ec7c751e89a9aeb53a135c4474fc124f5a11918a76a06f\": rpc error: code = NotFound desc = could not find container \"b4aa2cb0af23114182ec7c751e89a9aeb53a135c4474fc124f5a11918a76a06f\": container with ID starting with b4aa2cb0af23114182ec7c751e89a9aeb53a135c4474fc124f5a11918a76a06f not found: ID does not exist" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.623132 4895 scope.go:117] "RemoveContainer" containerID="1279f64afd8d23ec3924333b5986eded1e621d41d704e0db9c71650fe152a9b8" Dec 06 08:35:38 crc kubenswrapper[4895]: E1206 08:35:38.623508 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1279f64afd8d23ec3924333b5986eded1e621d41d704e0db9c71650fe152a9b8\": container with ID starting with 1279f64afd8d23ec3924333b5986eded1e621d41d704e0db9c71650fe152a9b8 not found: ID does not exist" containerID="1279f64afd8d23ec3924333b5986eded1e621d41d704e0db9c71650fe152a9b8" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.623576 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1279f64afd8d23ec3924333b5986eded1e621d41d704e0db9c71650fe152a9b8"} err="failed to get container status \"1279f64afd8d23ec3924333b5986eded1e621d41d704e0db9c71650fe152a9b8\": rpc error: code = NotFound desc = could not find container \"1279f64afd8d23ec3924333b5986eded1e621d41d704e0db9c71650fe152a9b8\": container with ID starting with 
1279f64afd8d23ec3924333b5986eded1e621d41d704e0db9c71650fe152a9b8 not found: ID does not exist" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.623618 4895 scope.go:117] "RemoveContainer" containerID="bdce497395b3a6842a11785ed079b9c8c84c6b8b60297a909cf62e4ad4321fba" Dec 06 08:35:38 crc kubenswrapper[4895]: E1206 08:35:38.623980 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdce497395b3a6842a11785ed079b9c8c84c6b8b60297a909cf62e4ad4321fba\": container with ID starting with bdce497395b3a6842a11785ed079b9c8c84c6b8b60297a909cf62e4ad4321fba not found: ID does not exist" containerID="bdce497395b3a6842a11785ed079b9c8c84c6b8b60297a909cf62e4ad4321fba" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.624001 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdce497395b3a6842a11785ed079b9c8c84c6b8b60297a909cf62e4ad4321fba"} err="failed to get container status \"bdce497395b3a6842a11785ed079b9c8c84c6b8b60297a909cf62e4ad4321fba\": rpc error: code = NotFound desc = could not find container \"bdce497395b3a6842a11785ed079b9c8c84c6b8b60297a909cf62e4ad4321fba\": container with ID starting with bdce497395b3a6842a11785ed079b9c8c84c6b8b60297a909cf62e4ad4321fba not found: ID does not exist" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.624016 4895 scope.go:117] "RemoveContainer" containerID="5c83bf753d384861948d95791172f4283acd6c28d1e73bc01f92b8fb05a9bf04" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.637718 4895 scope.go:117] "RemoveContainer" containerID="2c9e8316d62a79e0396312b48f9ded9ac11169cf311cd4136cc3f76c609f9a9a" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.652843 4895 scope.go:117] "RemoveContainer" containerID="b5adf9a202ac768230da6c35bc36187ae3f7bad4e0f329d970cfddf02f00d3e1" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.667524 4895 scope.go:117] "RemoveContainer" containerID="5c83bf753d384861948d95791172f4283acd6c28d1e73bc01f92b8fb05a9bf04" Dec 06 08:35:38 crc kubenswrapper[4895]: E1206 08:35:38.667874 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c83bf753d384861948d95791172f4283acd6c28d1e73bc01f92b8fb05a9bf04\": container with ID starting with 5c83bf753d384861948d95791172f4283acd6c28d1e73bc01f92b8fb05a9bf04 not found: ID does not exist" containerID="5c83bf753d384861948d95791172f4283acd6c28d1e73bc01f92b8fb05a9bf04" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.667925 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c83bf753d384861948d95791172f4283acd6c28d1e73bc01f92b8fb05a9bf04"} err="failed to get container status \"5c83bf753d384861948d95791172f4283acd6c28d1e73bc01f92b8fb05a9bf04\": rpc error: code = NotFound desc = could not find container \"5c83bf753d384861948d95791172f4283acd6c28d1e73bc01f92b8fb05a9bf04\": container with ID starting with 5c83bf753d384861948d95791172f4283acd6c28d1e73bc01f92b8fb05a9bf04 not found: ID does not exist" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.667952 4895 scope.go:117] "RemoveContainer" containerID="2c9e8316d62a79e0396312b48f9ded9ac11169cf311cd4136cc3f76c609f9a9a" Dec 06 08:35:38 crc kubenswrapper[4895]: E1206 08:35:38.668280 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c9e8316d62a79e0396312b48f9ded9ac11169cf311cd4136cc3f76c609f9a9a\": container 
with ID starting with 2c9e8316d62a79e0396312b48f9ded9ac11169cf311cd4136cc3f76c609f9a9a not found: ID does not exist" containerID="2c9e8316d62a79e0396312b48f9ded9ac11169cf311cd4136cc3f76c609f9a9a" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.668308 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c9e8316d62a79e0396312b48f9ded9ac11169cf311cd4136cc3f76c609f9a9a"} err="failed to get container status \"2c9e8316d62a79e0396312b48f9ded9ac11169cf311cd4136cc3f76c609f9a9a\": rpc error: code = NotFound desc = could not find container \"2c9e8316d62a79e0396312b48f9ded9ac11169cf311cd4136cc3f76c609f9a9a\": container with ID starting with 2c9e8316d62a79e0396312b48f9ded9ac11169cf311cd4136cc3f76c609f9a9a not found: ID does not exist" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.668326 4895 scope.go:117] "RemoveContainer" containerID="b5adf9a202ac768230da6c35bc36187ae3f7bad4e0f329d970cfddf02f00d3e1" Dec 06 08:35:38 crc kubenswrapper[4895]: E1206 08:35:38.668620 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5adf9a202ac768230da6c35bc36187ae3f7bad4e0f329d970cfddf02f00d3e1\": container with ID starting with b5adf9a202ac768230da6c35bc36187ae3f7bad4e0f329d970cfddf02f00d3e1 not found: ID does not exist" containerID="b5adf9a202ac768230da6c35bc36187ae3f7bad4e0f329d970cfddf02f00d3e1" Dec 06 08:35:38 crc kubenswrapper[4895]: I1206 08:35:38.668646 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5adf9a202ac768230da6c35bc36187ae3f7bad4e0f329d970cfddf02f00d3e1"} err="failed to get container status \"b5adf9a202ac768230da6c35bc36187ae3f7bad4e0f329d970cfddf02f00d3e1\": rpc error: code = NotFound desc = could not find container \"b5adf9a202ac768230da6c35bc36187ae3f7bad4e0f329d970cfddf02f00d3e1\": container with ID starting with b5adf9a202ac768230da6c35bc36187ae3f7bad4e0f329d970cfddf02f00d3e1 not found: ID does not exist" Dec 06 08:35:40 crc kubenswrapper[4895]: I1206 08:35:40.068631 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c32eeb-aef3-4906-8002-b79befcc6b54" path="/var/lib/kubelet/pods/07c32eeb-aef3-4906-8002-b79befcc6b54/volumes" Dec 06 08:35:40 crc kubenswrapper[4895]: I1206 08:35:40.069984 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="609146a6-a222-458b-bd20-86b6308abf05" path="/var/lib/kubelet/pods/609146a6-a222-458b-bd20-86b6308abf05/volumes" Dec 06 08:35:53 crc kubenswrapper[4895]: I1206 08:35:53.051190 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:35:53 crc kubenswrapper[4895]: E1206 08:35:53.052222 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:36:06 crc kubenswrapper[4895]: I1206 08:36:06.054094 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:36:06 crc kubenswrapper[4895]: E1206 08:36:06.054784 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
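The ContainerStatus/DeleteContainer NotFound errors above are a benign race, not a real failure: the API-driven SyncLoop REMOVE path re-requests deletion of containers that the PLEG/garbage-collection path has already removed, and CRI-O answers NotFound on the second attempt. Idempotent cleanup treats that answer as success; a sketch of the idiom (the remove callback is a stand-in for a CRI call, not the kubelet's code):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// cleanup swallows NotFound: if the runtime no longer knows the container,
// someone else already deleted it and the goal state is reached. Any other
// error is surfaced to the caller.
func cleanup(remove func(id string) error, id string) error {
	if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
		return fmt.Errorf("remove %s: %w", id, err)
	}
	return nil
}

func main() {
	gone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	fmt.Println(cleanup(gone, "b4aa2cb0")) // <nil>: already-deleted counts as success
}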
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:36:17 crc kubenswrapper[4895]: I1206 08:36:17.051284 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:36:17 crc kubenswrapper[4895]: E1206 08:36:17.052574 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:36:28 crc kubenswrapper[4895]: I1206 08:36:28.058289 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:36:28 crc kubenswrapper[4895]: E1206 08:36:28.059517 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:36:42 crc kubenswrapper[4895]: I1206 08:36:42.051077 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:36:42 crc kubenswrapper[4895]: E1206 08:36:42.052225 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:36:53 crc kubenswrapper[4895]: I1206 08:36:53.051006 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:36:53 crc kubenswrapper[4895]: E1206 08:36:53.052148 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:37:06 crc kubenswrapper[4895]: I1206 08:37:06.050511 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:37:06 crc kubenswrapper[4895]: E1206 08:37:06.051720 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:37:07 crc kubenswrapper[4895]: I1206 08:37:07.681881 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w8m58"] Dec 06 08:37:07 crc kubenswrapper[4895]: E1206 08:37:07.682300 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="609146a6-a222-458b-bd20-86b6308abf05" containerName="extract-utilities" Dec 06 08:37:07 crc kubenswrapper[4895]: I1206 08:37:07.682319 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="609146a6-a222-458b-bd20-86b6308abf05" containerName="extract-utilities" Dec 06 08:37:07 crc kubenswrapper[4895]: E1206 08:37:07.682356 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c32eeb-aef3-4906-8002-b79befcc6b54" containerName="registry-server" Dec 06 08:37:07 crc kubenswrapper[4895]: I1206 08:37:07.682367 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c32eeb-aef3-4906-8002-b79befcc6b54" containerName="registry-server" Dec 06 08:37:07 crc kubenswrapper[4895]: E1206 08:37:07.682382 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="609146a6-a222-458b-bd20-86b6308abf05" containerName="extract-content" Dec 06 08:37:07 crc kubenswrapper[4895]: I1206 08:37:07.682395 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="609146a6-a222-458b-bd20-86b6308abf05" containerName="extract-content" Dec 06 08:37:07 crc kubenswrapper[4895]: E1206 08:37:07.682416 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="609146a6-a222-458b-bd20-86b6308abf05" containerName="registry-server" Dec 06 08:37:07 crc kubenswrapper[4895]: I1206 08:37:07.682425 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="609146a6-a222-458b-bd20-86b6308abf05" containerName="registry-server" Dec 06 08:37:07 crc kubenswrapper[4895]: E1206 08:37:07.682440 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c32eeb-aef3-4906-8002-b79befcc6b54" containerName="extract-utilities" Dec 06 08:37:07 crc kubenswrapper[4895]: I1206 08:37:07.682450 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c32eeb-aef3-4906-8002-b79befcc6b54" containerName="extract-utilities" Dec 06 08:37:07 crc kubenswrapper[4895]: E1206 08:37:07.682495 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c32eeb-aef3-4906-8002-b79befcc6b54" containerName="extract-content" Dec 06 08:37:07 crc kubenswrapper[4895]: I1206 08:37:07.682509 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c32eeb-aef3-4906-8002-b79befcc6b54" containerName="extract-content" Dec 06 08:37:07 crc kubenswrapper[4895]: I1206 08:37:07.682729 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c32eeb-aef3-4906-8002-b79befcc6b54" containerName="registry-server" Dec 06 08:37:07 crc kubenswrapper[4895]: I1206 08:37:07.682747 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="609146a6-a222-458b-bd20-86b6308abf05" containerName="registry-server" Dec 06 08:37:07 crc kubenswrapper[4895]: I1206 08:37:07.684530 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w8m58" Dec 06 08:37:07 crc kubenswrapper[4895]: I1206 08:37:07.704095 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w8m58"] Dec 06 08:37:07 crc kubenswrapper[4895]: I1206 08:37:07.876256 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpksz\" (UniqueName: \"kubernetes.io/projected/ec3ac138-36fb-41e5-b8de-093605893807-kube-api-access-lpksz\") pod \"community-operators-w8m58\" (UID: \"ec3ac138-36fb-41e5-b8de-093605893807\") " pod="openshift-marketplace/community-operators-w8m58" Dec 06 08:37:07 crc kubenswrapper[4895]: I1206 08:37:07.876314 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec3ac138-36fb-41e5-b8de-093605893807-utilities\") pod \"community-operators-w8m58\" (UID: \"ec3ac138-36fb-41e5-b8de-093605893807\") " pod="openshift-marketplace/community-operators-w8m58" Dec 06 08:37:07 crc kubenswrapper[4895]: I1206 08:37:07.876336 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec3ac138-36fb-41e5-b8de-093605893807-catalog-content\") pod \"community-operators-w8m58\" (UID: \"ec3ac138-36fb-41e5-b8de-093605893807\") " pod="openshift-marketplace/community-operators-w8m58" Dec 06 08:37:07 crc kubenswrapper[4895]: I1206 08:37:07.977743 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpksz\" (UniqueName: \"kubernetes.io/projected/ec3ac138-36fb-41e5-b8de-093605893807-kube-api-access-lpksz\") pod \"community-operators-w8m58\" (UID: \"ec3ac138-36fb-41e5-b8de-093605893807\") " pod="openshift-marketplace/community-operators-w8m58" Dec 06 08:37:07 crc kubenswrapper[4895]: I1206 08:37:07.978185 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec3ac138-36fb-41e5-b8de-093605893807-utilities\") pod \"community-operators-w8m58\" (UID: \"ec3ac138-36fb-41e5-b8de-093605893807\") " pod="openshift-marketplace/community-operators-w8m58" Dec 06 08:37:07 crc kubenswrapper[4895]: I1206 08:37:07.978762 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec3ac138-36fb-41e5-b8de-093605893807-utilities\") pod \"community-operators-w8m58\" (UID: \"ec3ac138-36fb-41e5-b8de-093605893807\") " pod="openshift-marketplace/community-operators-w8m58" Dec 06 08:37:07 crc kubenswrapper[4895]: I1206 08:37:07.978811 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec3ac138-36fb-41e5-b8de-093605893807-catalog-content\") pod \"community-operators-w8m58\" (UID: \"ec3ac138-36fb-41e5-b8de-093605893807\") " pod="openshift-marketplace/community-operators-w8m58" Dec 06 08:37:07 crc kubenswrapper[4895]: I1206 08:37:07.978814 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec3ac138-36fb-41e5-b8de-093605893807-catalog-content\") pod \"community-operators-w8m58\" (UID: \"ec3ac138-36fb-41e5-b8de-093605893807\") " pod="openshift-marketplace/community-operators-w8m58" Dec 06 08:37:07 crc kubenswrapper[4895]: I1206 08:37:07.996433 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lpksz\" (UniqueName: \"kubernetes.io/projected/ec3ac138-36fb-41e5-b8de-093605893807-kube-api-access-lpksz\") pod \"community-operators-w8m58\" (UID: \"ec3ac138-36fb-41e5-b8de-093605893807\") " pod="openshift-marketplace/community-operators-w8m58" Dec 06 08:37:08 crc kubenswrapper[4895]: I1206 08:37:08.004281 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w8m58" Dec 06 08:37:08 crc kubenswrapper[4895]: I1206 08:37:08.281053 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w8m58"] Dec 06 08:37:09 crc kubenswrapper[4895]: I1206 08:37:09.304612 4895 generic.go:334] "Generic (PLEG): container finished" podID="ec3ac138-36fb-41e5-b8de-093605893807" containerID="1e7553e38f0cdfdda8d6d549ceb52ae6e7574f994bd698464b237518726cc2b2" exitCode=0 Dec 06 08:37:09 crc kubenswrapper[4895]: I1206 08:37:09.304706 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8m58" event={"ID":"ec3ac138-36fb-41e5-b8de-093605893807","Type":"ContainerDied","Data":"1e7553e38f0cdfdda8d6d549ceb52ae6e7574f994bd698464b237518726cc2b2"} Dec 06 08:37:09 crc kubenswrapper[4895]: I1206 08:37:09.305123 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8m58" event={"ID":"ec3ac138-36fb-41e5-b8de-093605893807","Type":"ContainerStarted","Data":"0f688a93ef85289cd5349ce629ae6bc1709ba3f553a7a9ba652cfbc47b8d7bae"} Dec 06 08:37:10 crc kubenswrapper[4895]: I1206 08:37:10.315164 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8m58" event={"ID":"ec3ac138-36fb-41e5-b8de-093605893807","Type":"ContainerStarted","Data":"c675af0066e457c4f037c8e3a9797383596a6fd0bdd6959bfc240c89d7d3f54a"} Dec 06 08:37:11 crc kubenswrapper[4895]: I1206 08:37:11.329862 4895 generic.go:334] "Generic (PLEG): container finished" podID="ec3ac138-36fb-41e5-b8de-093605893807" containerID="c675af0066e457c4f037c8e3a9797383596a6fd0bdd6959bfc240c89d7d3f54a" exitCode=0 Dec 06 08:37:11 crc kubenswrapper[4895]: I1206 08:37:11.329921 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8m58" event={"ID":"ec3ac138-36fb-41e5-b8de-093605893807","Type":"ContainerDied","Data":"c675af0066e457c4f037c8e3a9797383596a6fd0bdd6959bfc240c89d7d3f54a"} Dec 06 08:37:12 crc kubenswrapper[4895]: I1206 08:37:12.346384 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8m58" event={"ID":"ec3ac138-36fb-41e5-b8de-093605893807","Type":"ContainerStarted","Data":"5af68b787661a6bbf9675dc5accc44c33c6c380ec421f7a6d430c9c5e1e29c87"} Dec 06 08:37:12 crc kubenswrapper[4895]: I1206 08:37:12.372820 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w8m58" podStartSLOduration=2.9203521500000003 podStartE2EDuration="5.372801827s" podCreationTimestamp="2025-12-06 08:37:07 +0000 UTC" firstStartedPulling="2025-12-06 08:37:09.307087052 +0000 UTC m=+5991.708475922" lastFinishedPulling="2025-12-06 08:37:11.759536719 +0000 UTC m=+5994.160925599" observedRunningTime="2025-12-06 08:37:12.364456063 +0000 UTC m=+5994.765844943" watchObservedRunningTime="2025-12-06 08:37:12.372801827 +0000 UTC m=+5994.774190707" Dec 06 08:37:18 crc kubenswrapper[4895]: I1206 08:37:18.005529 4895 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w8m58" Dec 06 08:37:18 crc kubenswrapper[4895]: I1206 08:37:18.006172 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w8m58" Dec 06 08:37:18 crc kubenswrapper[4895]: I1206 08:37:18.076961 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w8m58" Dec 06 08:37:18 crc kubenswrapper[4895]: I1206 08:37:18.432462 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w8m58" Dec 06 08:37:18 crc kubenswrapper[4895]: I1206 08:37:18.488732 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w8m58"] Dec 06 08:37:20 crc kubenswrapper[4895]: I1206 08:37:20.050538 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:37:20 crc kubenswrapper[4895]: E1206 08:37:20.050820 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:37:20 crc kubenswrapper[4895]: I1206 08:37:20.405385 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w8m58" podUID="ec3ac138-36fb-41e5-b8de-093605893807" containerName="registry-server" containerID="cri-o://5af68b787661a6bbf9675dc5accc44c33c6c380ec421f7a6d430c9c5e1e29c87" gracePeriod=2 Dec 06 08:37:20 crc kubenswrapper[4895]: I1206 08:37:20.871974 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w8m58" Dec 06 08:37:20 crc kubenswrapper[4895]: I1206 08:37:20.993184 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec3ac138-36fb-41e5-b8de-093605893807-catalog-content\") pod \"ec3ac138-36fb-41e5-b8de-093605893807\" (UID: \"ec3ac138-36fb-41e5-b8de-093605893807\") " Dec 06 08:37:20 crc kubenswrapper[4895]: I1206 08:37:20.993342 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpksz\" (UniqueName: \"kubernetes.io/projected/ec3ac138-36fb-41e5-b8de-093605893807-kube-api-access-lpksz\") pod \"ec3ac138-36fb-41e5-b8de-093605893807\" (UID: \"ec3ac138-36fb-41e5-b8de-093605893807\") " Dec 06 08:37:20 crc kubenswrapper[4895]: I1206 08:37:20.993416 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec3ac138-36fb-41e5-b8de-093605893807-utilities\") pod \"ec3ac138-36fb-41e5-b8de-093605893807\" (UID: \"ec3ac138-36fb-41e5-b8de-093605893807\") " Dec 06 08:37:20 crc kubenswrapper[4895]: I1206 08:37:20.995361 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec3ac138-36fb-41e5-b8de-093605893807-utilities" (OuterVolumeSpecName: "utilities") pod "ec3ac138-36fb-41e5-b8de-093605893807" (UID: "ec3ac138-36fb-41e5-b8de-093605893807"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:37:21 crc kubenswrapper[4895]: I1206 08:37:21.000405 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec3ac138-36fb-41e5-b8de-093605893807-kube-api-access-lpksz" (OuterVolumeSpecName: "kube-api-access-lpksz") pod "ec3ac138-36fb-41e5-b8de-093605893807" (UID: "ec3ac138-36fb-41e5-b8de-093605893807"). InnerVolumeSpecName "kube-api-access-lpksz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:37:21 crc kubenswrapper[4895]: I1206 08:37:21.063074 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec3ac138-36fb-41e5-b8de-093605893807-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec3ac138-36fb-41e5-b8de-093605893807" (UID: "ec3ac138-36fb-41e5-b8de-093605893807"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:37:21 crc kubenswrapper[4895]: I1206 08:37:21.095540 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec3ac138-36fb-41e5-b8de-093605893807-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:37:21 crc kubenswrapper[4895]: I1206 08:37:21.095586 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpksz\" (UniqueName: \"kubernetes.io/projected/ec3ac138-36fb-41e5-b8de-093605893807-kube-api-access-lpksz\") on node \"crc\" DevicePath \"\"" Dec 06 08:37:21 crc kubenswrapper[4895]: I1206 08:37:21.095602 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec3ac138-36fb-41e5-b8de-093605893807-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:37:21 crc kubenswrapper[4895]: I1206 08:37:21.418731 4895 generic.go:334] "Generic (PLEG): container finished" podID="ec3ac138-36fb-41e5-b8de-093605893807" containerID="5af68b787661a6bbf9675dc5accc44c33c6c380ec421f7a6d430c9c5e1e29c87" exitCode=0 Dec 06 08:37:21 crc kubenswrapper[4895]: I1206 08:37:21.418804 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8m58" event={"ID":"ec3ac138-36fb-41e5-b8de-093605893807","Type":"ContainerDied","Data":"5af68b787661a6bbf9675dc5accc44c33c6c380ec421f7a6d430c9c5e1e29c87"} Dec 06 08:37:21 crc kubenswrapper[4895]: I1206 08:37:21.418825 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w8m58" Dec 06 08:37:21 crc kubenswrapper[4895]: I1206 08:37:21.419495 4895 scope.go:117] "RemoveContainer" containerID="5af68b787661a6bbf9675dc5accc44c33c6c380ec421f7a6d430c9c5e1e29c87" Dec 06 08:37:21 crc kubenswrapper[4895]: I1206 08:37:21.419392 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8m58" event={"ID":"ec3ac138-36fb-41e5-b8de-093605893807","Type":"ContainerDied","Data":"0f688a93ef85289cd5349ce629ae6bc1709ba3f553a7a9ba652cfbc47b8d7bae"} Dec 06 08:37:21 crc kubenswrapper[4895]: I1206 08:37:21.439070 4895 scope.go:117] "RemoveContainer" containerID="c675af0066e457c4f037c8e3a9797383596a6fd0bdd6959bfc240c89d7d3f54a" Dec 06 08:37:21 crc kubenswrapper[4895]: I1206 08:37:21.457552 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w8m58"] Dec 06 08:37:21 crc kubenswrapper[4895]: I1206 08:37:21.462375 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w8m58"] Dec 06 08:37:21 crc kubenswrapper[4895]: I1206 08:37:21.469678 4895 scope.go:117] "RemoveContainer" containerID="1e7553e38f0cdfdda8d6d549ceb52ae6e7574f994bd698464b237518726cc2b2" Dec 06 08:37:21 crc kubenswrapper[4895]: I1206 08:37:21.499719 4895 scope.go:117] "RemoveContainer" containerID="5af68b787661a6bbf9675dc5accc44c33c6c380ec421f7a6d430c9c5e1e29c87" Dec 06 08:37:21 crc kubenswrapper[4895]: E1206 08:37:21.500126 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5af68b787661a6bbf9675dc5accc44c33c6c380ec421f7a6d430c9c5e1e29c87\": container with ID starting with 5af68b787661a6bbf9675dc5accc44c33c6c380ec421f7a6d430c9c5e1e29c87 not found: ID does not exist" containerID="5af68b787661a6bbf9675dc5accc44c33c6c380ec421f7a6d430c9c5e1e29c87" Dec 06 08:37:21 crc kubenswrapper[4895]: I1206 08:37:21.500157 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5af68b787661a6bbf9675dc5accc44c33c6c380ec421f7a6d430c9c5e1e29c87"} err="failed to get container status \"5af68b787661a6bbf9675dc5accc44c33c6c380ec421f7a6d430c9c5e1e29c87\": rpc error: code = NotFound desc = could not find container \"5af68b787661a6bbf9675dc5accc44c33c6c380ec421f7a6d430c9c5e1e29c87\": container with ID starting with 5af68b787661a6bbf9675dc5accc44c33c6c380ec421f7a6d430c9c5e1e29c87 not found: ID does not exist" Dec 06 08:37:21 crc kubenswrapper[4895]: I1206 08:37:21.500179 4895 scope.go:117] "RemoveContainer" containerID="c675af0066e457c4f037c8e3a9797383596a6fd0bdd6959bfc240c89d7d3f54a" Dec 06 08:37:21 crc kubenswrapper[4895]: E1206 08:37:21.500440 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c675af0066e457c4f037c8e3a9797383596a6fd0bdd6959bfc240c89d7d3f54a\": container with ID starting with c675af0066e457c4f037c8e3a9797383596a6fd0bdd6959bfc240c89d7d3f54a not found: ID does not exist" containerID="c675af0066e457c4f037c8e3a9797383596a6fd0bdd6959bfc240c89d7d3f54a" Dec 06 08:37:21 crc kubenswrapper[4895]: I1206 08:37:21.500465 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c675af0066e457c4f037c8e3a9797383596a6fd0bdd6959bfc240c89d7d3f54a"} err="failed to get container status \"c675af0066e457c4f037c8e3a9797383596a6fd0bdd6959bfc240c89d7d3f54a\": rpc error: code = NotFound desc = could not find 
container \"c675af0066e457c4f037c8e3a9797383596a6fd0bdd6959bfc240c89d7d3f54a\": container with ID starting with c675af0066e457c4f037c8e3a9797383596a6fd0bdd6959bfc240c89d7d3f54a not found: ID does not exist" Dec 06 08:37:21 crc kubenswrapper[4895]: I1206 08:37:21.500535 4895 scope.go:117] "RemoveContainer" containerID="1e7553e38f0cdfdda8d6d549ceb52ae6e7574f994bd698464b237518726cc2b2" Dec 06 08:37:21 crc kubenswrapper[4895]: E1206 08:37:21.500949 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e7553e38f0cdfdda8d6d549ceb52ae6e7574f994bd698464b237518726cc2b2\": container with ID starting with 1e7553e38f0cdfdda8d6d549ceb52ae6e7574f994bd698464b237518726cc2b2 not found: ID does not exist" containerID="1e7553e38f0cdfdda8d6d549ceb52ae6e7574f994bd698464b237518726cc2b2" Dec 06 08:37:21 crc kubenswrapper[4895]: I1206 08:37:21.501020 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7553e38f0cdfdda8d6d549ceb52ae6e7574f994bd698464b237518726cc2b2"} err="failed to get container status \"1e7553e38f0cdfdda8d6d549ceb52ae6e7574f994bd698464b237518726cc2b2\": rpc error: code = NotFound desc = could not find container \"1e7553e38f0cdfdda8d6d549ceb52ae6e7574f994bd698464b237518726cc2b2\": container with ID starting with 1e7553e38f0cdfdda8d6d549ceb52ae6e7574f994bd698464b237518726cc2b2 not found: ID does not exist" Dec 06 08:37:22 crc kubenswrapper[4895]: I1206 08:37:22.068080 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec3ac138-36fb-41e5-b8de-093605893807" path="/var/lib/kubelet/pods/ec3ac138-36fb-41e5-b8de-093605893807/volumes" Dec 06 08:37:33 crc kubenswrapper[4895]: I1206 08:37:33.051166 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:37:33 crc kubenswrapper[4895]: E1206 08:37:33.052513 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:37:44 crc kubenswrapper[4895]: I1206 08:37:44.051001 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:37:44 crc kubenswrapper[4895]: E1206 08:37:44.052168 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:37:44 crc kubenswrapper[4895]: I1206 08:37:44.981554 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j7d2d"] Dec 06 08:37:44 crc kubenswrapper[4895]: E1206 08:37:44.981930 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec3ac138-36fb-41e5-b8de-093605893807" containerName="extract-utilities" Dec 06 08:37:44 crc kubenswrapper[4895]: I1206 08:37:44.981946 4895 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ec3ac138-36fb-41e5-b8de-093605893807" containerName="extract-utilities" Dec 06 08:37:44 crc kubenswrapper[4895]: E1206 08:37:44.981977 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec3ac138-36fb-41e5-b8de-093605893807" containerName="extract-content" Dec 06 08:37:44 crc kubenswrapper[4895]: I1206 08:37:44.981987 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3ac138-36fb-41e5-b8de-093605893807" containerName="extract-content" Dec 06 08:37:44 crc kubenswrapper[4895]: E1206 08:37:44.982009 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec3ac138-36fb-41e5-b8de-093605893807" containerName="registry-server" Dec 06 08:37:44 crc kubenswrapper[4895]: I1206 08:37:44.982018 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3ac138-36fb-41e5-b8de-093605893807" containerName="registry-server" Dec 06 08:37:44 crc kubenswrapper[4895]: I1206 08:37:44.982165 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec3ac138-36fb-41e5-b8de-093605893807" containerName="registry-server" Dec 06 08:37:44 crc kubenswrapper[4895]: I1206 08:37:44.983105 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7d2d" Dec 06 08:37:44 crc kubenswrapper[4895]: I1206 08:37:44.992531 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7d2d"] Dec 06 08:37:45 crc kubenswrapper[4895]: I1206 08:37:45.093306 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff12b8b8-3c0e-4555-b869-29ca303a623f-utilities\") pod \"redhat-marketplace-j7d2d\" (UID: \"ff12b8b8-3c0e-4555-b869-29ca303a623f\") " pod="openshift-marketplace/redhat-marketplace-j7d2d" Dec 06 08:37:45 crc kubenswrapper[4895]: I1206 08:37:45.093615 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82qlz\" (UniqueName: \"kubernetes.io/projected/ff12b8b8-3c0e-4555-b869-29ca303a623f-kube-api-access-82qlz\") pod \"redhat-marketplace-j7d2d\" (UID: \"ff12b8b8-3c0e-4555-b869-29ca303a623f\") " pod="openshift-marketplace/redhat-marketplace-j7d2d" Dec 06 08:37:45 crc kubenswrapper[4895]: I1206 08:37:45.093690 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff12b8b8-3c0e-4555-b869-29ca303a623f-catalog-content\") pod \"redhat-marketplace-j7d2d\" (UID: \"ff12b8b8-3c0e-4555-b869-29ca303a623f\") " pod="openshift-marketplace/redhat-marketplace-j7d2d" Dec 06 08:37:45 crc kubenswrapper[4895]: I1206 08:37:45.194768 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82qlz\" (UniqueName: \"kubernetes.io/projected/ff12b8b8-3c0e-4555-b869-29ca303a623f-kube-api-access-82qlz\") pod \"redhat-marketplace-j7d2d\" (UID: \"ff12b8b8-3c0e-4555-b869-29ca303a623f\") " pod="openshift-marketplace/redhat-marketplace-j7d2d" Dec 06 08:37:45 crc kubenswrapper[4895]: I1206 08:37:45.195139 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff12b8b8-3c0e-4555-b869-29ca303a623f-catalog-content\") pod \"redhat-marketplace-j7d2d\" (UID: \"ff12b8b8-3c0e-4555-b869-29ca303a623f\") " pod="openshift-marketplace/redhat-marketplace-j7d2d" Dec 06 08:37:45 crc kubenswrapper[4895]: I1206 08:37:45.195227 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff12b8b8-3c0e-4555-b869-29ca303a623f-utilities\") pod \"redhat-marketplace-j7d2d\" (UID: \"ff12b8b8-3c0e-4555-b869-29ca303a623f\") " pod="openshift-marketplace/redhat-marketplace-j7d2d" Dec 06 08:37:45 crc kubenswrapper[4895]: I1206 08:37:45.195657 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff12b8b8-3c0e-4555-b869-29ca303a623f-utilities\") pod \"redhat-marketplace-j7d2d\" (UID: \"ff12b8b8-3c0e-4555-b869-29ca303a623f\") " pod="openshift-marketplace/redhat-marketplace-j7d2d" Dec 06 08:37:45 crc kubenswrapper[4895]: I1206 08:37:45.195957 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff12b8b8-3c0e-4555-b869-29ca303a623f-catalog-content\") pod \"redhat-marketplace-j7d2d\" (UID: \"ff12b8b8-3c0e-4555-b869-29ca303a623f\") " pod="openshift-marketplace/redhat-marketplace-j7d2d" Dec 06 08:37:45 crc kubenswrapper[4895]: I1206 08:37:45.222934 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82qlz\" (UniqueName: \"kubernetes.io/projected/ff12b8b8-3c0e-4555-b869-29ca303a623f-kube-api-access-82qlz\") pod \"redhat-marketplace-j7d2d\" (UID: \"ff12b8b8-3c0e-4555-b869-29ca303a623f\") " pod="openshift-marketplace/redhat-marketplace-j7d2d" Dec 06 08:37:45 crc kubenswrapper[4895]: I1206 08:37:45.311827 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7d2d" Dec 06 08:37:45 crc kubenswrapper[4895]: I1206 08:37:45.785760 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7d2d"] Dec 06 08:37:46 crc kubenswrapper[4895]: I1206 08:37:46.629779 4895 generic.go:334] "Generic (PLEG): container finished" podID="ff12b8b8-3c0e-4555-b869-29ca303a623f" containerID="a45bfae4f080cf088d1534d76e4a531e3120735f3159d82a57980fb73c963c96" exitCode=0 Dec 06 08:37:46 crc kubenswrapper[4895]: I1206 08:37:46.629860 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7d2d" event={"ID":"ff12b8b8-3c0e-4555-b869-29ca303a623f","Type":"ContainerDied","Data":"a45bfae4f080cf088d1534d76e4a531e3120735f3159d82a57980fb73c963c96"} Dec 06 08:37:46 crc kubenswrapper[4895]: I1206 08:37:46.630178 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7d2d" event={"ID":"ff12b8b8-3c0e-4555-b869-29ca303a623f","Type":"ContainerStarted","Data":"f461ec6800dc6d9fe4d71fad785d7bd17cdb7ce63f30b0f1725d5d3e28ca5dfc"} Dec 06 08:37:47 crc kubenswrapper[4895]: I1206 08:37:47.643252 4895 generic.go:334] "Generic (PLEG): container finished" podID="ff12b8b8-3c0e-4555-b869-29ca303a623f" containerID="567ac61dda3bac75f6dd46a8829d4f7a9baf2179c8ca663cd7abebd2f3a5ca9f" exitCode=0 Dec 06 08:37:47 crc kubenswrapper[4895]: I1206 08:37:47.643300 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7d2d" event={"ID":"ff12b8b8-3c0e-4555-b869-29ca303a623f","Type":"ContainerDied","Data":"567ac61dda3bac75f6dd46a8829d4f7a9baf2179c8ca663cd7abebd2f3a5ca9f"} Dec 06 08:37:48 crc kubenswrapper[4895]: I1206 08:37:48.651947 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7d2d" 
event={"ID":"ff12b8b8-3c0e-4555-b869-29ca303a623f","Type":"ContainerStarted","Data":"eb64960d78f403d3e8211a98f255439b12cd8cdd0bca6fc39e0cea24ed808d20"} Dec 06 08:37:48 crc kubenswrapper[4895]: I1206 08:37:48.675736 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j7d2d" podStartSLOduration=3.161933038 podStartE2EDuration="4.675713924s" podCreationTimestamp="2025-12-06 08:37:44 +0000 UTC" firstStartedPulling="2025-12-06 08:37:46.632421514 +0000 UTC m=+6029.033810414" lastFinishedPulling="2025-12-06 08:37:48.14620242 +0000 UTC m=+6030.547591300" observedRunningTime="2025-12-06 08:37:48.670494324 +0000 UTC m=+6031.071883194" watchObservedRunningTime="2025-12-06 08:37:48.675713924 +0000 UTC m=+6031.077102794" Dec 06 08:37:55 crc kubenswrapper[4895]: I1206 08:37:55.312675 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j7d2d" Dec 06 08:37:55 crc kubenswrapper[4895]: I1206 08:37:55.313012 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j7d2d" Dec 06 08:37:55 crc kubenswrapper[4895]: I1206 08:37:55.367911 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j7d2d" Dec 06 08:37:55 crc kubenswrapper[4895]: I1206 08:37:55.784854 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j7d2d" Dec 06 08:37:55 crc kubenswrapper[4895]: I1206 08:37:55.836449 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7d2d"] Dec 06 08:37:57 crc kubenswrapper[4895]: I1206 08:37:57.722896 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j7d2d" podUID="ff12b8b8-3c0e-4555-b869-29ca303a623f" containerName="registry-server" containerID="cri-o://eb64960d78f403d3e8211a98f255439b12cd8cdd0bca6fc39e0cea24ed808d20" gracePeriod=2 Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.665424 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7d2d" Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.736525 4895 generic.go:334] "Generic (PLEG): container finished" podID="ff12b8b8-3c0e-4555-b869-29ca303a623f" containerID="eb64960d78f403d3e8211a98f255439b12cd8cdd0bca6fc39e0cea24ed808d20" exitCode=0 Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.736587 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7d2d" event={"ID":"ff12b8b8-3c0e-4555-b869-29ca303a623f","Type":"ContainerDied","Data":"eb64960d78f403d3e8211a98f255439b12cd8cdd0bca6fc39e0cea24ed808d20"} Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.736647 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7d2d" event={"ID":"ff12b8b8-3c0e-4555-b869-29ca303a623f","Type":"ContainerDied","Data":"f461ec6800dc6d9fe4d71fad785d7bd17cdb7ce63f30b0f1725d5d3e28ca5dfc"} Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.736650 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7d2d" Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.736689 4895 scope.go:117] "RemoveContainer" containerID="eb64960d78f403d3e8211a98f255439b12cd8cdd0bca6fc39e0cea24ed808d20" Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.764069 4895 scope.go:117] "RemoveContainer" containerID="567ac61dda3bac75f6dd46a8829d4f7a9baf2179c8ca663cd7abebd2f3a5ca9f" Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.779251 4895 scope.go:117] "RemoveContainer" containerID="a45bfae4f080cf088d1534d76e4a531e3120735f3159d82a57980fb73c963c96" Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.823869 4895 scope.go:117] "RemoveContainer" containerID="eb64960d78f403d3e8211a98f255439b12cd8cdd0bca6fc39e0cea24ed808d20" Dec 06 08:37:58 crc kubenswrapper[4895]: E1206 08:37:58.824296 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb64960d78f403d3e8211a98f255439b12cd8cdd0bca6fc39e0cea24ed808d20\": container with ID starting with eb64960d78f403d3e8211a98f255439b12cd8cdd0bca6fc39e0cea24ed808d20 not found: ID does not exist" containerID="eb64960d78f403d3e8211a98f255439b12cd8cdd0bca6fc39e0cea24ed808d20" Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.824339 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb64960d78f403d3e8211a98f255439b12cd8cdd0bca6fc39e0cea24ed808d20"} err="failed to get container status \"eb64960d78f403d3e8211a98f255439b12cd8cdd0bca6fc39e0cea24ed808d20\": rpc error: code = NotFound desc = could not find container \"eb64960d78f403d3e8211a98f255439b12cd8cdd0bca6fc39e0cea24ed808d20\": container with ID starting with eb64960d78f403d3e8211a98f255439b12cd8cdd0bca6fc39e0cea24ed808d20 not found: ID does not exist" Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.824364 4895 scope.go:117] "RemoveContainer" containerID="567ac61dda3bac75f6dd46a8829d4f7a9baf2179c8ca663cd7abebd2f3a5ca9f" Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.824686 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82qlz\" (UniqueName: \"kubernetes.io/projected/ff12b8b8-3c0e-4555-b869-29ca303a623f-kube-api-access-82qlz\") pod \"ff12b8b8-3c0e-4555-b869-29ca303a623f\" (UID: \"ff12b8b8-3c0e-4555-b869-29ca303a623f\") " Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.824798 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff12b8b8-3c0e-4555-b869-29ca303a623f-utilities\") pod \"ff12b8b8-3c0e-4555-b869-29ca303a623f\" (UID: \"ff12b8b8-3c0e-4555-b869-29ca303a623f\") " Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.824848 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff12b8b8-3c0e-4555-b869-29ca303a623f-catalog-content\") pod \"ff12b8b8-3c0e-4555-b869-29ca303a623f\" (UID: \"ff12b8b8-3c0e-4555-b869-29ca303a623f\") " Dec 06 08:37:58 crc kubenswrapper[4895]: E1206 08:37:58.824835 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"567ac61dda3bac75f6dd46a8829d4f7a9baf2179c8ca663cd7abebd2f3a5ca9f\": container with ID starting with 567ac61dda3bac75f6dd46a8829d4f7a9baf2179c8ca663cd7abebd2f3a5ca9f not found: ID does not exist" 
containerID="567ac61dda3bac75f6dd46a8829d4f7a9baf2179c8ca663cd7abebd2f3a5ca9f" Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.824896 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"567ac61dda3bac75f6dd46a8829d4f7a9baf2179c8ca663cd7abebd2f3a5ca9f"} err="failed to get container status \"567ac61dda3bac75f6dd46a8829d4f7a9baf2179c8ca663cd7abebd2f3a5ca9f\": rpc error: code = NotFound desc = could not find container \"567ac61dda3bac75f6dd46a8829d4f7a9baf2179c8ca663cd7abebd2f3a5ca9f\": container with ID starting with 567ac61dda3bac75f6dd46a8829d4f7a9baf2179c8ca663cd7abebd2f3a5ca9f not found: ID does not exist" Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.824941 4895 scope.go:117] "RemoveContainer" containerID="a45bfae4f080cf088d1534d76e4a531e3120735f3159d82a57980fb73c963c96" Dec 06 08:37:58 crc kubenswrapper[4895]: E1206 08:37:58.825396 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a45bfae4f080cf088d1534d76e4a531e3120735f3159d82a57980fb73c963c96\": container with ID starting with a45bfae4f080cf088d1534d76e4a531e3120735f3159d82a57980fb73c963c96 not found: ID does not exist" containerID="a45bfae4f080cf088d1534d76e4a531e3120735f3159d82a57980fb73c963c96" Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.825421 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a45bfae4f080cf088d1534d76e4a531e3120735f3159d82a57980fb73c963c96"} err="failed to get container status \"a45bfae4f080cf088d1534d76e4a531e3120735f3159d82a57980fb73c963c96\": rpc error: code = NotFound desc = could not find container \"a45bfae4f080cf088d1534d76e4a531e3120735f3159d82a57980fb73c963c96\": container with ID starting with a45bfae4f080cf088d1534d76e4a531e3120735f3159d82a57980fb73c963c96 not found: ID does not exist" Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.825880 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff12b8b8-3c0e-4555-b869-29ca303a623f-utilities" (OuterVolumeSpecName: "utilities") pod "ff12b8b8-3c0e-4555-b869-29ca303a623f" (UID: "ff12b8b8-3c0e-4555-b869-29ca303a623f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.832818 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff12b8b8-3c0e-4555-b869-29ca303a623f-kube-api-access-82qlz" (OuterVolumeSpecName: "kube-api-access-82qlz") pod "ff12b8b8-3c0e-4555-b869-29ca303a623f" (UID: "ff12b8b8-3c0e-4555-b869-29ca303a623f"). InnerVolumeSpecName "kube-api-access-82qlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.850589 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff12b8b8-3c0e-4555-b869-29ca303a623f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff12b8b8-3c0e-4555-b869-29ca303a623f" (UID: "ff12b8b8-3c0e-4555-b869-29ca303a623f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.926428 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff12b8b8-3c0e-4555-b869-29ca303a623f-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.926518 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff12b8b8-3c0e-4555-b869-29ca303a623f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:37:58 crc kubenswrapper[4895]: I1206 08:37:58.926537 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82qlz\" (UniqueName: \"kubernetes.io/projected/ff12b8b8-3c0e-4555-b869-29ca303a623f-kube-api-access-82qlz\") on node \"crc\" DevicePath \"\"" Dec 06 08:37:59 crc kubenswrapper[4895]: I1206 08:37:59.051530 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:37:59 crc kubenswrapper[4895]: E1206 08:37:59.052132 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:37:59 crc kubenswrapper[4895]: I1206 08:37:59.102100 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7d2d"] Dec 06 08:37:59 crc kubenswrapper[4895]: I1206 08:37:59.110554 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7d2d"] Dec 06 08:38:00 crc kubenswrapper[4895]: I1206 08:38:00.058973 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff12b8b8-3c0e-4555-b869-29ca303a623f" path="/var/lib/kubelet/pods/ff12b8b8-3c0e-4555-b869-29ca303a623f/volumes" Dec 06 08:38:12 crc kubenswrapper[4895]: I1206 08:38:12.051607 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:38:12 crc kubenswrapper[4895]: E1206 08:38:12.052267 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:38:25 crc kubenswrapper[4895]: I1206 08:38:25.050968 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:38:25 crc kubenswrapper[4895]: E1206 08:38:25.053086 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:38:37 crc kubenswrapper[4895]: I1206 08:38:37.051451 4895 
scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:38:37 crc kubenswrapper[4895]: E1206 08:38:37.053207 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:38:51 crc kubenswrapper[4895]: I1206 08:38:51.050052 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:38:51 crc kubenswrapper[4895]: E1206 08:38:51.050670 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:39:02 crc kubenswrapper[4895]: I1206 08:39:02.050993 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:39:02 crc kubenswrapper[4895]: E1206 08:39:02.051605 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:39:17 crc kubenswrapper[4895]: I1206 08:39:17.051213 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:39:17 crc kubenswrapper[4895]: E1206 08:39:17.052589 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:39:32 crc kubenswrapper[4895]: I1206 08:39:32.050672 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:39:32 crc kubenswrapper[4895]: E1206 08:39:32.051920 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:39:44 crc kubenswrapper[4895]: I1206 08:39:44.051551 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:39:44 crc kubenswrapper[4895]: E1206 08:39:44.052419 4895 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:39:59 crc kubenswrapper[4895]: I1206 08:39:59.050923 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:39:59 crc kubenswrapper[4895]: E1206 08:39:59.051956 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:40:13 crc kubenswrapper[4895]: I1206 08:40:13.050892 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:40:13 crc kubenswrapper[4895]: I1206 08:40:13.944804 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"a00fec192193020904dedce9aa39c5d891233a290dfd499a30b7f87d36b50b8c"} Dec 06 08:42:29 crc kubenswrapper[4895]: I1206 08:42:29.696094 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:42:29 crc kubenswrapper[4895]: I1206 08:42:29.696879 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:42:59 crc kubenswrapper[4895]: I1206 08:42:59.695566 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:42:59 crc kubenswrapper[4895]: I1206 08:42:59.696416 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.090945 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-ph9wt"] Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.099515 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-ph9wt"] Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.207139 4895 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["crc-storage/crc-storage-crc-kvljb"] Dec 06 08:43:07 crc kubenswrapper[4895]: E1206 08:43:07.207439 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff12b8b8-3c0e-4555-b869-29ca303a623f" containerName="registry-server" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.207459 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff12b8b8-3c0e-4555-b869-29ca303a623f" containerName="registry-server" Dec 06 08:43:07 crc kubenswrapper[4895]: E1206 08:43:07.207486 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff12b8b8-3c0e-4555-b869-29ca303a623f" containerName="extract-utilities" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.207494 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff12b8b8-3c0e-4555-b869-29ca303a623f" containerName="extract-utilities" Dec 06 08:43:07 crc kubenswrapper[4895]: E1206 08:43:07.207527 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff12b8b8-3c0e-4555-b869-29ca303a623f" containerName="extract-content" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.207536 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff12b8b8-3c0e-4555-b869-29ca303a623f" containerName="extract-content" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.207721 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff12b8b8-3c0e-4555-b869-29ca303a623f" containerName="registry-server" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.208292 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kvljb" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.211873 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.212169 4895 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-nsgz7" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.212213 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.212225 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.218281 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-kvljb"] Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.398950 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pxt7\" (UniqueName: \"kubernetes.io/projected/56d632b6-e023-4a03-a20f-b0eadb06cbd7-kube-api-access-5pxt7\") pod \"crc-storage-crc-kvljb\" (UID: \"56d632b6-e023-4a03-a20f-b0eadb06cbd7\") " pod="crc-storage/crc-storage-crc-kvljb" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.399006 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/56d632b6-e023-4a03-a20f-b0eadb06cbd7-node-mnt\") pod \"crc-storage-crc-kvljb\" (UID: \"56d632b6-e023-4a03-a20f-b0eadb06cbd7\") " pod="crc-storage/crc-storage-crc-kvljb" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.399080 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/56d632b6-e023-4a03-a20f-b0eadb06cbd7-crc-storage\") pod \"crc-storage-crc-kvljb\" (UID: 
\"56d632b6-e023-4a03-a20f-b0eadb06cbd7\") " pod="crc-storage/crc-storage-crc-kvljb" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.500373 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pxt7\" (UniqueName: \"kubernetes.io/projected/56d632b6-e023-4a03-a20f-b0eadb06cbd7-kube-api-access-5pxt7\") pod \"crc-storage-crc-kvljb\" (UID: \"56d632b6-e023-4a03-a20f-b0eadb06cbd7\") " pod="crc-storage/crc-storage-crc-kvljb" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.500444 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/56d632b6-e023-4a03-a20f-b0eadb06cbd7-node-mnt\") pod \"crc-storage-crc-kvljb\" (UID: \"56d632b6-e023-4a03-a20f-b0eadb06cbd7\") " pod="crc-storage/crc-storage-crc-kvljb" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.500847 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/56d632b6-e023-4a03-a20f-b0eadb06cbd7-node-mnt\") pod \"crc-storage-crc-kvljb\" (UID: \"56d632b6-e023-4a03-a20f-b0eadb06cbd7\") " pod="crc-storage/crc-storage-crc-kvljb" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.500538 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/56d632b6-e023-4a03-a20f-b0eadb06cbd7-crc-storage\") pod \"crc-storage-crc-kvljb\" (UID: \"56d632b6-e023-4a03-a20f-b0eadb06cbd7\") " pod="crc-storage/crc-storage-crc-kvljb" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.501595 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/56d632b6-e023-4a03-a20f-b0eadb06cbd7-crc-storage\") pod \"crc-storage-crc-kvljb\" (UID: \"56d632b6-e023-4a03-a20f-b0eadb06cbd7\") " pod="crc-storage/crc-storage-crc-kvljb" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.521619 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pxt7\" (UniqueName: \"kubernetes.io/projected/56d632b6-e023-4a03-a20f-b0eadb06cbd7-kube-api-access-5pxt7\") pod \"crc-storage-crc-kvljb\" (UID: \"56d632b6-e023-4a03-a20f-b0eadb06cbd7\") " pod="crc-storage/crc-storage-crc-kvljb" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.557820 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-kvljb" Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.980941 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-kvljb"] Dec 06 08:43:07 crc kubenswrapper[4895]: I1206 08:43:07.986352 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 08:43:08 crc kubenswrapper[4895]: I1206 08:43:08.061888 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="710d7eb5-2fea-47f4-9ed1-954731bda21a" path="/var/lib/kubelet/pods/710d7eb5-2fea-47f4-9ed1-954731bda21a/volumes" Dec 06 08:43:08 crc kubenswrapper[4895]: I1206 08:43:08.625659 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kvljb" event={"ID":"56d632b6-e023-4a03-a20f-b0eadb06cbd7","Type":"ContainerStarted","Data":"5d141ed745365b438209bf5f157ccdf273afac5ecf64fb33f8d1354002a62bf5"} Dec 06 08:43:09 crc kubenswrapper[4895]: I1206 08:43:09.642432 4895 generic.go:334] "Generic (PLEG): container finished" podID="56d632b6-e023-4a03-a20f-b0eadb06cbd7" containerID="7088bbd0c9484e70e8e87619c543636fd8fa8b73318558718ee148aa91c4b08a" exitCode=0 Dec 06 08:43:09 crc kubenswrapper[4895]: I1206 08:43:09.643674 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kvljb" event={"ID":"56d632b6-e023-4a03-a20f-b0eadb06cbd7","Type":"ContainerDied","Data":"7088bbd0c9484e70e8e87619c543636fd8fa8b73318558718ee148aa91c4b08a"} Dec 06 08:43:10 crc kubenswrapper[4895]: I1206 08:43:10.979412 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kvljb" Dec 06 08:43:11 crc kubenswrapper[4895]: I1206 08:43:11.061969 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pxt7\" (UniqueName: \"kubernetes.io/projected/56d632b6-e023-4a03-a20f-b0eadb06cbd7-kube-api-access-5pxt7\") pod \"56d632b6-e023-4a03-a20f-b0eadb06cbd7\" (UID: \"56d632b6-e023-4a03-a20f-b0eadb06cbd7\") " Dec 06 08:43:11 crc kubenswrapper[4895]: I1206 08:43:11.062023 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/56d632b6-e023-4a03-a20f-b0eadb06cbd7-crc-storage\") pod \"56d632b6-e023-4a03-a20f-b0eadb06cbd7\" (UID: \"56d632b6-e023-4a03-a20f-b0eadb06cbd7\") " Dec 06 08:43:11 crc kubenswrapper[4895]: I1206 08:43:11.062082 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/56d632b6-e023-4a03-a20f-b0eadb06cbd7-node-mnt\") pod \"56d632b6-e023-4a03-a20f-b0eadb06cbd7\" (UID: \"56d632b6-e023-4a03-a20f-b0eadb06cbd7\") " Dec 06 08:43:11 crc kubenswrapper[4895]: I1206 08:43:11.062292 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d632b6-e023-4a03-a20f-b0eadb06cbd7-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "56d632b6-e023-4a03-a20f-b0eadb06cbd7" (UID: "56d632b6-e023-4a03-a20f-b0eadb06cbd7"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 08:43:11 crc kubenswrapper[4895]: I1206 08:43:11.074760 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d632b6-e023-4a03-a20f-b0eadb06cbd7-kube-api-access-5pxt7" (OuterVolumeSpecName: "kube-api-access-5pxt7") pod "56d632b6-e023-4a03-a20f-b0eadb06cbd7" (UID: "56d632b6-e023-4a03-a20f-b0eadb06cbd7"). 
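Both halves of the kubelet volume reconciler are laid out above for crc-storage-crc-kvljb: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded on the way up; UnmountVolume started, UnmountVolume.TearDown succeeded, and finally "Volume detached" on the way down. A minimal Go sketch that extracts those transitions from a journal stream follows; the regular expression is fitted by eye to the messages above and is an assumption, not a stable kubelet log format.

```go
// Sketch: pull volume-operation transitions out of journal lines like the
// ones above. The pattern is fitted to these samples, not guaranteed.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var opRe = regexp.MustCompile(
	`(operationExecutor\.\w+|MountVolume\.SetUp succeeded|` +
		`UnmountVolume\.TearDown succeeded|Volume detached)` +
		`.*? for volume \\?"([\w./-]+)\\?"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // these journal lines are long
	for sc.Scan() {
		if m := opRe.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%-50s %s\n", m[1], m[2]) // operation, volume
		}
	}
}
```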
Dec 06 08:43:11 crc kubenswrapper[4895]: I1206 08:43:11.081115 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d632b6-e023-4a03-a20f-b0eadb06cbd7-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "56d632b6-e023-4a03-a20f-b0eadb06cbd7" (UID: "56d632b6-e023-4a03-a20f-b0eadb06cbd7"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 08:43:11 crc kubenswrapper[4895]: I1206 08:43:11.163153 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pxt7\" (UniqueName: \"kubernetes.io/projected/56d632b6-e023-4a03-a20f-b0eadb06cbd7-kube-api-access-5pxt7\") on node \"crc\" DevicePath \"\""
Dec 06 08:43:11 crc kubenswrapper[4895]: I1206 08:43:11.163190 4895 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/56d632b6-e023-4a03-a20f-b0eadb06cbd7-crc-storage\") on node \"crc\" DevicePath \"\""
Dec 06 08:43:11 crc kubenswrapper[4895]: I1206 08:43:11.163200 4895 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/56d632b6-e023-4a03-a20f-b0eadb06cbd7-node-mnt\") on node \"crc\" DevicePath \"\""
Dec 06 08:43:11 crc kubenswrapper[4895]: I1206 08:43:11.658874 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kvljb" event={"ID":"56d632b6-e023-4a03-a20f-b0eadb06cbd7","Type":"ContainerDied","Data":"5d141ed745365b438209bf5f157ccdf273afac5ecf64fb33f8d1354002a62bf5"}
Dec 06 08:43:11 crc kubenswrapper[4895]: I1206 08:43:11.658932 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d141ed745365b438209bf5f157ccdf273afac5ecf64fb33f8d1354002a62bf5"
Dec 06 08:43:11 crc kubenswrapper[4895]: I1206 08:43:11.658934 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kvljb"
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.115142 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-kvljb"]
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.123691 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-kvljb"]
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.237655 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-mtx5h"]
Dec 06 08:43:13 crc kubenswrapper[4895]: E1206 08:43:13.238179 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d632b6-e023-4a03-a20f-b0eadb06cbd7" containerName="storage"
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.238244 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d632b6-e023-4a03-a20f-b0eadb06cbd7" containerName="storage"
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.238561 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d632b6-e023-4a03-a20f-b0eadb06cbd7" containerName="storage"
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.239169 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mtx5h"
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.244716 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.244892 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.245015 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.245304 4895 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-nsgz7"
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.254927 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-mtx5h"]
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.295169 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/453f6eed-83c8-45b2-84b2-58fe5954ce66-node-mnt\") pod \"crc-storage-crc-mtx5h\" (UID: \"453f6eed-83c8-45b2-84b2-58fe5954ce66\") " pod="crc-storage/crc-storage-crc-mtx5h"
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.295223 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tmnk\" (UniqueName: \"kubernetes.io/projected/453f6eed-83c8-45b2-84b2-58fe5954ce66-kube-api-access-2tmnk\") pod \"crc-storage-crc-mtx5h\" (UID: \"453f6eed-83c8-45b2-84b2-58fe5954ce66\") " pod="crc-storage/crc-storage-crc-mtx5h"
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.295452 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/453f6eed-83c8-45b2-84b2-58fe5954ce66-crc-storage\") pod \"crc-storage-crc-mtx5h\" (UID: \"453f6eed-83c8-45b2-84b2-58fe5954ce66\") " pod="crc-storage/crc-storage-crc-mtx5h"
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.396543 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/453f6eed-83c8-45b2-84b2-58fe5954ce66-node-mnt\") pod \"crc-storage-crc-mtx5h\" (UID: \"453f6eed-83c8-45b2-84b2-58fe5954ce66\") " pod="crc-storage/crc-storage-crc-mtx5h"
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.396615 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tmnk\" (UniqueName: \"kubernetes.io/projected/453f6eed-83c8-45b2-84b2-58fe5954ce66-kube-api-access-2tmnk\") pod \"crc-storage-crc-mtx5h\" (UID: \"453f6eed-83c8-45b2-84b2-58fe5954ce66\") " pod="crc-storage/crc-storage-crc-mtx5h"
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.396664 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/453f6eed-83c8-45b2-84b2-58fe5954ce66-crc-storage\") pod \"crc-storage-crc-mtx5h\" (UID: \"453f6eed-83c8-45b2-84b2-58fe5954ce66\") " pod="crc-storage/crc-storage-crc-mtx5h"
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.396888 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/453f6eed-83c8-45b2-84b2-58fe5954ce66-node-mnt\") pod \"crc-storage-crc-mtx5h\" (UID: \"453f6eed-83c8-45b2-84b2-58fe5954ce66\") " pod="crc-storage/crc-storage-crc-mtx5h"
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.397613 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/453f6eed-83c8-45b2-84b2-58fe5954ce66-crc-storage\") pod \"crc-storage-crc-mtx5h\" (UID: \"453f6eed-83c8-45b2-84b2-58fe5954ce66\") " pod="crc-storage/crc-storage-crc-mtx5h"
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.418438 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tmnk\" (UniqueName: \"kubernetes.io/projected/453f6eed-83c8-45b2-84b2-58fe5954ce66-kube-api-access-2tmnk\") pod \"crc-storage-crc-mtx5h\" (UID: \"453f6eed-83c8-45b2-84b2-58fe5954ce66\") " pod="crc-storage/crc-storage-crc-mtx5h"
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.562683 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mtx5h"
Dec 06 08:43:13 crc kubenswrapper[4895]: I1206 08:43:13.886408 4895 scope.go:117] "RemoveContainer" containerID="ef561be9e9d7bd59c84adeee5c6906541c78ecc9cce4b63ee4f5c7fbf36f5ad4"
Dec 06 08:43:14 crc kubenswrapper[4895]: I1206 08:43:14.059332 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d632b6-e023-4a03-a20f-b0eadb06cbd7" path="/var/lib/kubelet/pods/56d632b6-e023-4a03-a20f-b0eadb06cbd7/volumes"
Dec 06 08:43:14 crc kubenswrapper[4895]: I1206 08:43:14.060872 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-mtx5h"]
Dec 06 08:43:14 crc kubenswrapper[4895]: I1206 08:43:14.695344 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mtx5h" event={"ID":"453f6eed-83c8-45b2-84b2-58fe5954ce66","Type":"ContainerStarted","Data":"52a804f501560ab0ed09685a23375254f8626690d98fcee4d33cd878147e247b"}
Dec 06 08:43:14 crc kubenswrapper[4895]: I1206 08:43:14.696419 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mtx5h" event={"ID":"453f6eed-83c8-45b2-84b2-58fe5954ce66","Type":"ContainerStarted","Data":"29f04fc3d5fd2b7c8cb78f6dced5c09ffc8670e96b5bd3f30f78a25e908e7942"}
Dec 06 08:43:14 crc kubenswrapper[4895]: I1206 08:43:14.708690 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="crc-storage/crc-storage-crc-mtx5h" podStartSLOduration=1.267101388 podStartE2EDuration="1.708672035s" podCreationTimestamp="2025-12-06 08:43:13 +0000 UTC" firstStartedPulling="2025-12-06 08:43:14.068827684 +0000 UTC m=+6356.470216564" lastFinishedPulling="2025-12-06 08:43:14.510398341 +0000 UTC m=+6356.911787211" observedRunningTime="2025-12-06 08:43:14.70845844 +0000 UTC m=+6357.109847310" watchObservedRunningTime="2025-12-06 08:43:14.708672035 +0000 UTC m=+6357.110060905"
Dec 06 08:43:15 crc kubenswrapper[4895]: I1206 08:43:15.705733 4895 generic.go:334] "Generic (PLEG): container finished" podID="453f6eed-83c8-45b2-84b2-58fe5954ce66" containerID="52a804f501560ab0ed09685a23375254f8626690d98fcee4d33cd878147e247b" exitCode=0
Dec 06 08:43:15 crc kubenswrapper[4895]: I1206 08:43:15.705781 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mtx5h" event={"ID":"453f6eed-83c8-45b2-84b2-58fe5954ce66","Type":"ContainerDied","Data":"52a804f501560ab0ed09685a23375254f8626690d98fcee4d33cd878147e247b"}
Dec 06 08:43:17 crc kubenswrapper[4895]: I1206 08:43:17.015315 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mtx5h"
Dec 06 08:43:17 crc kubenswrapper[4895]: I1206 08:43:17.145302 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tmnk\" (UniqueName: \"kubernetes.io/projected/453f6eed-83c8-45b2-84b2-58fe5954ce66-kube-api-access-2tmnk\") pod \"453f6eed-83c8-45b2-84b2-58fe5954ce66\" (UID: \"453f6eed-83c8-45b2-84b2-58fe5954ce66\") "
Dec 06 08:43:17 crc kubenswrapper[4895]: I1206 08:43:17.145700 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/453f6eed-83c8-45b2-84b2-58fe5954ce66-crc-storage\") pod \"453f6eed-83c8-45b2-84b2-58fe5954ce66\" (UID: \"453f6eed-83c8-45b2-84b2-58fe5954ce66\") "
Dec 06 08:43:17 crc kubenswrapper[4895]: I1206 08:43:17.145780 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/453f6eed-83c8-45b2-84b2-58fe5954ce66-node-mnt\") pod \"453f6eed-83c8-45b2-84b2-58fe5954ce66\" (UID: \"453f6eed-83c8-45b2-84b2-58fe5954ce66\") "
Dec 06 08:43:17 crc kubenswrapper[4895]: I1206 08:43:17.146046 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/453f6eed-83c8-45b2-84b2-58fe5954ce66-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "453f6eed-83c8-45b2-84b2-58fe5954ce66" (UID: "453f6eed-83c8-45b2-84b2-58fe5954ce66"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 08:43:17 crc kubenswrapper[4895]: I1206 08:43:17.146202 4895 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/453f6eed-83c8-45b2-84b2-58fe5954ce66-node-mnt\") on node \"crc\" DevicePath \"\""
Dec 06 08:43:17 crc kubenswrapper[4895]: I1206 08:43:17.150658 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/453f6eed-83c8-45b2-84b2-58fe5954ce66-kube-api-access-2tmnk" (OuterVolumeSpecName: "kube-api-access-2tmnk") pod "453f6eed-83c8-45b2-84b2-58fe5954ce66" (UID: "453f6eed-83c8-45b2-84b2-58fe5954ce66"). InnerVolumeSpecName "kube-api-access-2tmnk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 08:43:17 crc kubenswrapper[4895]: I1206 08:43:17.166110 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/453f6eed-83c8-45b2-84b2-58fe5954ce66-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "453f6eed-83c8-45b2-84b2-58fe5954ce66" (UID: "453f6eed-83c8-45b2-84b2-58fe5954ce66"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 08:43:17 crc kubenswrapper[4895]: I1206 08:43:17.247145 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tmnk\" (UniqueName: \"kubernetes.io/projected/453f6eed-83c8-45b2-84b2-58fe5954ce66-kube-api-access-2tmnk\") on node \"crc\" DevicePath \"\""
Dec 06 08:43:17 crc kubenswrapper[4895]: I1206 08:43:17.247418 4895 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/453f6eed-83c8-45b2-84b2-58fe5954ce66-crc-storage\") on node \"crc\" DevicePath \"\""
Dec 06 08:43:17 crc kubenswrapper[4895]: I1206 08:43:17.721374 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mtx5h" event={"ID":"453f6eed-83c8-45b2-84b2-58fe5954ce66","Type":"ContainerDied","Data":"29f04fc3d5fd2b7c8cb78f6dced5c09ffc8670e96b5bd3f30f78a25e908e7942"}
Dec 06 08:43:17 crc kubenswrapper[4895]: I1206 08:43:17.721428 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29f04fc3d5fd2b7c8cb78f6dced5c09ffc8670e96b5bd3f30f78a25e908e7942"
Dec 06 08:43:17 crc kubenswrapper[4895]: I1206 08:43:17.721441 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mtx5h"
Dec 06 08:43:29 crc kubenswrapper[4895]: I1206 08:43:29.695927 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 08:43:29 crc kubenswrapper[4895]: I1206 08:43:29.696850 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 08:43:29 crc kubenswrapper[4895]: I1206 08:43:29.696909 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2"
Dec 06 08:43:29 crc kubenswrapper[4895]: I1206 08:43:29.697592 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a00fec192193020904dedce9aa39c5d891233a290dfd499a30b7f87d36b50b8c"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 08:43:29 crc kubenswrapper[4895]: I1206 08:43:29.697658 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://a00fec192193020904dedce9aa39c5d891233a290dfd499a30b7f87d36b50b8c" gracePeriod=600
Dec 06 08:43:29 crc kubenswrapper[4895]: I1206 08:43:29.839259 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="a00fec192193020904dedce9aa39c5d891233a290dfd499a30b7f87d36b50b8c" exitCode=0
Dec 06 08:43:29 crc kubenswrapper[4895]: I1206 08:43:29.839295 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"a00fec192193020904dedce9aa39c5d891233a290dfd499a30b7f87d36b50b8c"}
event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"a00fec192193020904dedce9aa39c5d891233a290dfd499a30b7f87d36b50b8c"} Dec 06 08:43:29 crc kubenswrapper[4895]: I1206 08:43:29.839337 4895 scope.go:117] "RemoveContainer" containerID="0758cff840c1ef9a12656eb4002c159842024700a2f0a3a720f115ddfe30c903" Dec 06 08:43:30 crc kubenswrapper[4895]: I1206 08:43:30.850895 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684"} Dec 06 08:45:00 crc kubenswrapper[4895]: I1206 08:45:00.150970 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416845-kcsp6"] Dec 06 08:45:00 crc kubenswrapper[4895]: E1206 08:45:00.151971 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="453f6eed-83c8-45b2-84b2-58fe5954ce66" containerName="storage" Dec 06 08:45:00 crc kubenswrapper[4895]: I1206 08:45:00.152007 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="453f6eed-83c8-45b2-84b2-58fe5954ce66" containerName="storage" Dec 06 08:45:00 crc kubenswrapper[4895]: I1206 08:45:00.152322 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="453f6eed-83c8-45b2-84b2-58fe5954ce66" containerName="storage" Dec 06 08:45:00 crc kubenswrapper[4895]: I1206 08:45:00.153051 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416845-kcsp6" Dec 06 08:45:00 crc kubenswrapper[4895]: I1206 08:45:00.155346 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 08:45:00 crc kubenswrapper[4895]: I1206 08:45:00.155617 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 08:45:00 crc kubenswrapper[4895]: I1206 08:45:00.158542 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416845-kcsp6"] Dec 06 08:45:00 crc kubenswrapper[4895]: I1206 08:45:00.303707 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6hkt\" (UniqueName: \"kubernetes.io/projected/82be63b9-7efd-46d2-88c6-e1fd8f2b58f7-kube-api-access-w6hkt\") pod \"collect-profiles-29416845-kcsp6\" (UID: \"82be63b9-7efd-46d2-88c6-e1fd8f2b58f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416845-kcsp6" Dec 06 08:45:00 crc kubenswrapper[4895]: I1206 08:45:00.304028 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82be63b9-7efd-46d2-88c6-e1fd8f2b58f7-secret-volume\") pod \"collect-profiles-29416845-kcsp6\" (UID: \"82be63b9-7efd-46d2-88c6-e1fd8f2b58f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416845-kcsp6" Dec 06 08:45:00 crc kubenswrapper[4895]: I1206 08:45:00.304224 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82be63b9-7efd-46d2-88c6-e1fd8f2b58f7-config-volume\") pod \"collect-profiles-29416845-kcsp6\" (UID: \"82be63b9-7efd-46d2-88c6-e1fd8f2b58f7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29416845-kcsp6" Dec 06 08:45:00 crc kubenswrapper[4895]: I1206 08:45:00.406056 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82be63b9-7efd-46d2-88c6-e1fd8f2b58f7-config-volume\") pod \"collect-profiles-29416845-kcsp6\" (UID: \"82be63b9-7efd-46d2-88c6-e1fd8f2b58f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416845-kcsp6" Dec 06 08:45:00 crc kubenswrapper[4895]: I1206 08:45:00.406138 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6hkt\" (UniqueName: \"kubernetes.io/projected/82be63b9-7efd-46d2-88c6-e1fd8f2b58f7-kube-api-access-w6hkt\") pod \"collect-profiles-29416845-kcsp6\" (UID: \"82be63b9-7efd-46d2-88c6-e1fd8f2b58f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416845-kcsp6" Dec 06 08:45:00 crc kubenswrapper[4895]: I1206 08:45:00.406614 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82be63b9-7efd-46d2-88c6-e1fd8f2b58f7-secret-volume\") pod \"collect-profiles-29416845-kcsp6\" (UID: \"82be63b9-7efd-46d2-88c6-e1fd8f2b58f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416845-kcsp6" Dec 06 08:45:00 crc kubenswrapper[4895]: I1206 08:45:00.407434 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82be63b9-7efd-46d2-88c6-e1fd8f2b58f7-config-volume\") pod \"collect-profiles-29416845-kcsp6\" (UID: \"82be63b9-7efd-46d2-88c6-e1fd8f2b58f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416845-kcsp6" Dec 06 08:45:00 crc kubenswrapper[4895]: I1206 08:45:00.413619 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82be63b9-7efd-46d2-88c6-e1fd8f2b58f7-secret-volume\") pod \"collect-profiles-29416845-kcsp6\" (UID: \"82be63b9-7efd-46d2-88c6-e1fd8f2b58f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416845-kcsp6" Dec 06 08:45:00 crc kubenswrapper[4895]: I1206 08:45:00.431150 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6hkt\" (UniqueName: \"kubernetes.io/projected/82be63b9-7efd-46d2-88c6-e1fd8f2b58f7-kube-api-access-w6hkt\") pod \"collect-profiles-29416845-kcsp6\" (UID: \"82be63b9-7efd-46d2-88c6-e1fd8f2b58f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416845-kcsp6" Dec 06 08:45:00 crc kubenswrapper[4895]: I1206 08:45:00.474327 4895 util.go:30] "No sandbox for pod can be found. 
Dec 06 08:45:00 crc kubenswrapper[4895]: I1206 08:45:00.896494 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416845-kcsp6"]
Dec 06 08:45:01 crc kubenswrapper[4895]: I1206 08:45:01.563632 4895 generic.go:334] "Generic (PLEG): container finished" podID="82be63b9-7efd-46d2-88c6-e1fd8f2b58f7" containerID="eaea10e0575ae5c82698586f31e98b77e0886d804fccdcc61a7d323f5e8d8191" exitCode=0
Dec 06 08:45:01 crc kubenswrapper[4895]: I1206 08:45:01.563758 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416845-kcsp6" event={"ID":"82be63b9-7efd-46d2-88c6-e1fd8f2b58f7","Type":"ContainerDied","Data":"eaea10e0575ae5c82698586f31e98b77e0886d804fccdcc61a7d323f5e8d8191"}
Dec 06 08:45:01 crc kubenswrapper[4895]: I1206 08:45:01.563983 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416845-kcsp6" event={"ID":"82be63b9-7efd-46d2-88c6-e1fd8f2b58f7","Type":"ContainerStarted","Data":"4d14aab40af259563406e20e7331dcb47dfe2c04589fb3a5066f04fe45f9c201"}
Dec 06 08:45:02 crc kubenswrapper[4895]: I1206 08:45:02.848899 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416845-kcsp6"
Dec 06 08:45:02 crc kubenswrapper[4895]: I1206 08:45:02.951869 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82be63b9-7efd-46d2-88c6-e1fd8f2b58f7-config-volume\") pod \"82be63b9-7efd-46d2-88c6-e1fd8f2b58f7\" (UID: \"82be63b9-7efd-46d2-88c6-e1fd8f2b58f7\") "
Dec 06 08:45:02 crc kubenswrapper[4895]: I1206 08:45:02.951963 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6hkt\" (UniqueName: \"kubernetes.io/projected/82be63b9-7efd-46d2-88c6-e1fd8f2b58f7-kube-api-access-w6hkt\") pod \"82be63b9-7efd-46d2-88c6-e1fd8f2b58f7\" (UID: \"82be63b9-7efd-46d2-88c6-e1fd8f2b58f7\") "
Dec 06 08:45:02 crc kubenswrapper[4895]: I1206 08:45:02.951984 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82be63b9-7efd-46d2-88c6-e1fd8f2b58f7-secret-volume\") pod \"82be63b9-7efd-46d2-88c6-e1fd8f2b58f7\" (UID: \"82be63b9-7efd-46d2-88c6-e1fd8f2b58f7\") "
Dec 06 08:45:02 crc kubenswrapper[4895]: I1206 08:45:02.953120 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82be63b9-7efd-46d2-88c6-e1fd8f2b58f7-config-volume" (OuterVolumeSpecName: "config-volume") pod "82be63b9-7efd-46d2-88c6-e1fd8f2b58f7" (UID: "82be63b9-7efd-46d2-88c6-e1fd8f2b58f7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 08:45:02 crc kubenswrapper[4895]: I1206 08:45:02.957677 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82be63b9-7efd-46d2-88c6-e1fd8f2b58f7-kube-api-access-w6hkt" (OuterVolumeSpecName: "kube-api-access-w6hkt") pod "82be63b9-7efd-46d2-88c6-e1fd8f2b58f7" (UID: "82be63b9-7efd-46d2-88c6-e1fd8f2b58f7"). InnerVolumeSpecName "kube-api-access-w6hkt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 08:45:02 crc kubenswrapper[4895]: I1206 08:45:02.958389 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82be63b9-7efd-46d2-88c6-e1fd8f2b58f7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "82be63b9-7efd-46d2-88c6-e1fd8f2b58f7" (UID: "82be63b9-7efd-46d2-88c6-e1fd8f2b58f7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 08:45:03 crc kubenswrapper[4895]: I1206 08:45:03.054685 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82be63b9-7efd-46d2-88c6-e1fd8f2b58f7-config-volume\") on node \"crc\" DevicePath \"\""
Dec 06 08:45:03 crc kubenswrapper[4895]: I1206 08:45:03.054744 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6hkt\" (UniqueName: \"kubernetes.io/projected/82be63b9-7efd-46d2-88c6-e1fd8f2b58f7-kube-api-access-w6hkt\") on node \"crc\" DevicePath \"\""
Dec 06 08:45:03 crc kubenswrapper[4895]: I1206 08:45:03.054773 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82be63b9-7efd-46d2-88c6-e1fd8f2b58f7-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 06 08:45:03 crc kubenswrapper[4895]: I1206 08:45:03.580891 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416845-kcsp6" event={"ID":"82be63b9-7efd-46d2-88c6-e1fd8f2b58f7","Type":"ContainerDied","Data":"4d14aab40af259563406e20e7331dcb47dfe2c04589fb3a5066f04fe45f9c201"}
Dec 06 08:45:03 crc kubenswrapper[4895]: I1206 08:45:03.581221 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d14aab40af259563406e20e7331dcb47dfe2c04589fb3a5066f04fe45f9c201"
Dec 06 08:45:03 crc kubenswrapper[4895]: I1206 08:45:03.580971 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416845-kcsp6"
Dec 06 08:45:03 crc kubenswrapper[4895]: I1206 08:45:03.916063 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2"]
Dec 06 08:45:03 crc kubenswrapper[4895]: I1206 08:45:03.922095 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416800-9sww2"]
Dec 06 08:45:04 crc kubenswrapper[4895]: I1206 08:45:04.060443 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="435028a4-5fa1-4981-a7fb-37615dfd3865" path="/var/lib/kubelet/pods/435028a4-5fa1-4981-a7fb-37615dfd3865/volumes"
Dec 06 08:45:13 crc kubenswrapper[4895]: I1206 08:45:13.981152 4895 scope.go:117] "RemoveContainer" containerID="d7781bcd9c0bb3a8617f0ae2bf8ab16251137289f05ddf431aef4ca0094adfa2"
Dec 06 08:45:23 crc kubenswrapper[4895]: I1206 08:45:23.835669 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj"]
Dec 06 08:45:23 crc kubenswrapper[4895]: E1206 08:45:23.836588 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82be63b9-7efd-46d2-88c6-e1fd8f2b58f7" containerName="collect-profiles"
Dec 06 08:45:23 crc kubenswrapper[4895]: I1206 08:45:23.836603 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="82be63b9-7efd-46d2-88c6-e1fd8f2b58f7" containerName="collect-profiles"
Dec 06 08:45:23 crc kubenswrapper[4895]: I1206 08:45:23.836751 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="82be63b9-7efd-46d2-88c6-e1fd8f2b58f7" containerName="collect-profiles"
Dec 06 08:45:23 crc kubenswrapper[4895]: I1206 08:45:23.837511 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj"
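Across these short-lived pods the same PLEG sequence repeats: ContainerStarted for container and sandbox, "container finished" with exitCode=0, ContainerDied for both, then the API-side DELETE/REMOVE. A small Go tally of those events per pod; the field layout is assumed from the sample lines above, not from any documented format.

```go
// Sketch: count PLEG lifecycle events per pod from "SyncLoop (PLEG)" lines.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var plegRe = regexp.MustCompile(
	`"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=.*?"Type":"(\w+)"`)

func main() {
	counts := map[string]map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		if m := plegRe.FindStringSubmatch(sc.Text()); m != nil {
			pod, typ := m[1], m[2]
			if counts[pod] == nil {
				counts[pod] = map[string]int{}
			}
			counts[pod][typ]++
		}
	}
	for pod, c := range counts {
		fmt.Println(pod, c) // e.g. crc-storage/crc-storage-crc-mtx5h map[ContainerDied:2 ContainerStarted:2]
	}
}
```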
Dec 06 08:45:23 crc kubenswrapper[4895]: I1206 08:45:23.839970 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Dec 06 08:45:23 crc kubenswrapper[4895]: I1206 08:45:23.840334 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-9b6fz"
Dec 06 08:45:23 crc kubenswrapper[4895]: I1206 08:45:23.840504 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Dec 06 08:45:23 crc kubenswrapper[4895]: I1206 08:45:23.840779 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Dec 06 08:45:23 crc kubenswrapper[4895]: I1206 08:45:23.840939 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Dec 06 08:45:23 crc kubenswrapper[4895]: I1206 08:45:23.855209 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj"]
Dec 06 08:45:23 crc kubenswrapper[4895]: I1206 08:45:23.954190 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13252842-7480-4bf4-bed3-92d7e86de1ba-config\") pod \"dnsmasq-dns-5f6ffc7dc9-8qpjj\" (UID: \"13252842-7480-4bf4-bed3-92d7e86de1ba\") " pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj"
Dec 06 08:45:23 crc kubenswrapper[4895]: I1206 08:45:23.954677 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh8mt\" (UniqueName: \"kubernetes.io/projected/13252842-7480-4bf4-bed3-92d7e86de1ba-kube-api-access-wh8mt\") pod \"dnsmasq-dns-5f6ffc7dc9-8qpjj\" (UID: \"13252842-7480-4bf4-bed3-92d7e86de1ba\") " pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj"
Dec 06 08:45:23 crc kubenswrapper[4895]: I1206 08:45:23.954713 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13252842-7480-4bf4-bed3-92d7e86de1ba-dns-svc\") pod \"dnsmasq-dns-5f6ffc7dc9-8qpjj\" (UID: \"13252842-7480-4bf4-bed3-92d7e86de1ba\") " pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.056637 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh8mt\" (UniqueName: \"kubernetes.io/projected/13252842-7480-4bf4-bed3-92d7e86de1ba-kube-api-access-wh8mt\") pod \"dnsmasq-dns-5f6ffc7dc9-8qpjj\" (UID: \"13252842-7480-4bf4-bed3-92d7e86de1ba\") " pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.056695 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13252842-7480-4bf4-bed3-92d7e86de1ba-dns-svc\") pod \"dnsmasq-dns-5f6ffc7dc9-8qpjj\" (UID: \"13252842-7480-4bf4-bed3-92d7e86de1ba\") " pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.056745 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13252842-7480-4bf4-bed3-92d7e86de1ba-config\") pod \"dnsmasq-dns-5f6ffc7dc9-8qpjj\" (UID: \"13252842-7480-4bf4-bed3-92d7e86de1ba\") " pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.057704 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13252842-7480-4bf4-bed3-92d7e86de1ba-config\") pod \"dnsmasq-dns-5f6ffc7dc9-8qpjj\" (UID: \"13252842-7480-4bf4-bed3-92d7e86de1ba\") " pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.057734 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13252842-7480-4bf4-bed3-92d7e86de1ba-dns-svc\") pod \"dnsmasq-dns-5f6ffc7dc9-8qpjj\" (UID: \"13252842-7480-4bf4-bed3-92d7e86de1ba\") " pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.083343 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh8mt\" (UniqueName: \"kubernetes.io/projected/13252842-7480-4bf4-bed3-92d7e86de1ba-kube-api-access-wh8mt\") pod \"dnsmasq-dns-5f6ffc7dc9-8qpjj\" (UID: \"13252842-7480-4bf4-bed3-92d7e86de1ba\") " pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.155111 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.164208 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68ddc8d76c-68844"]
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.165421 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68ddc8d76c-68844"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.191385 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68ddc8d76c-68844"]
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.259338 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dc55496-9124-4b1e-9d42-0856b061b58c-config\") pod \"dnsmasq-dns-68ddc8d76c-68844\" (UID: \"5dc55496-9124-4b1e-9d42-0856b061b58c\") " pod="openstack/dnsmasq-dns-68ddc8d76c-68844"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.259400 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4pn4\" (UniqueName: \"kubernetes.io/projected/5dc55496-9124-4b1e-9d42-0856b061b58c-kube-api-access-v4pn4\") pod \"dnsmasq-dns-68ddc8d76c-68844\" (UID: \"5dc55496-9124-4b1e-9d42-0856b061b58c\") " pod="openstack/dnsmasq-dns-68ddc8d76c-68844"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.259465 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dc55496-9124-4b1e-9d42-0856b061b58c-dns-svc\") pod \"dnsmasq-dns-68ddc8d76c-68844\" (UID: \"5dc55496-9124-4b1e-9d42-0856b061b58c\") " pod="openstack/dnsmasq-dns-68ddc8d76c-68844"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.360659 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dc55496-9124-4b1e-9d42-0856b061b58c-dns-svc\") pod \"dnsmasq-dns-68ddc8d76c-68844\" (UID: \"5dc55496-9124-4b1e-9d42-0856b061b58c\") " pod="openstack/dnsmasq-dns-68ddc8d76c-68844"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.361034 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dc55496-9124-4b1e-9d42-0856b061b58c-config\") pod \"dnsmasq-dns-68ddc8d76c-68844\" (UID: \"5dc55496-9124-4b1e-9d42-0856b061b58c\") " pod="openstack/dnsmasq-dns-68ddc8d76c-68844"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.361069 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4pn4\" (UniqueName: \"kubernetes.io/projected/5dc55496-9124-4b1e-9d42-0856b061b58c-kube-api-access-v4pn4\") pod \"dnsmasq-dns-68ddc8d76c-68844\" (UID: \"5dc55496-9124-4b1e-9d42-0856b061b58c\") " pod="openstack/dnsmasq-dns-68ddc8d76c-68844"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.361996 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dc55496-9124-4b1e-9d42-0856b061b58c-dns-svc\") pod \"dnsmasq-dns-68ddc8d76c-68844\" (UID: \"5dc55496-9124-4b1e-9d42-0856b061b58c\") " pod="openstack/dnsmasq-dns-68ddc8d76c-68844"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.362126 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dc55496-9124-4b1e-9d42-0856b061b58c-config\") pod \"dnsmasq-dns-68ddc8d76c-68844\" (UID: \"5dc55496-9124-4b1e-9d42-0856b061b58c\") " pod="openstack/dnsmasq-dns-68ddc8d76c-68844"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.390639 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4pn4\" (UniqueName: \"kubernetes.io/projected/5dc55496-9124-4b1e-9d42-0856b061b58c-kube-api-access-v4pn4\") pod \"dnsmasq-dns-68ddc8d76c-68844\" (UID: \"5dc55496-9124-4b1e-9d42-0856b061b58c\") " pod="openstack/dnsmasq-dns-68ddc8d76c-68844"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.569332 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68ddc8d76c-68844"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.676631 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj"]
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.740076 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj" event={"ID":"13252842-7480-4bf4-bed3-92d7e86de1ba","Type":"ContainerStarted","Data":"2b701e7139ac8747a418d4d87ae6eee1e9ce212b553790c30cab4680cd8b881e"}
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.986132 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.988299 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.992819 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.992887 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.993439 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.993453 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Dec 06 08:45:24 crc kubenswrapper[4895]: I1206 08:45:24.993952 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zw5vd"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.014534 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.021904 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68ddc8d76c-68844"]
Dec 06 08:45:25 crc kubenswrapper[4895]: W1206 08:45:25.047338 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dc55496_9124_4b1e_9d42_0856b061b58c.slice/crio-f63b511449485e8154478efc9abf51453b0610090498e8191e95b69e971b7948 WatchSource:0}: Error finding container f63b511449485e8154478efc9abf51453b0610090498e8191e95b69e971b7948: Status 404 returned error can't find the container with id f63b511449485e8154478efc9abf51453b0610090498e8191e95b69e971b7948
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.082796 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.082839 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.082861 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.082907 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.082929 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.082947 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24lbc\" (UniqueName: \"kubernetes.io/projected/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-kube-api-access-24lbc\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.082966 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.082988 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.083029 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.184305 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.184383 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.184577 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.184618 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.184645 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.184693 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.184758 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.184787 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24lbc\" (UniqueName: \"kubernetes.io/projected/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-kube-api-access-24lbc\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.184813 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.185265 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.185764 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.185902 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.187521 4895 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.187555 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c1ae945f5829a78561c2276c0656b6b6a9ea56ba17faf9c14448e66177e2c63f/globalmount\"" pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.188407 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.190126 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.190839 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.207220 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.208813 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24lbc\" (UniqueName: \"kubernetes.io/projected/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-kube-api-access-24lbc\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.248568 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\") pod \"rabbitmq-server-0\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.325205 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.326562 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.327043 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.333176 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.333291 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.333416 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.333436 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.333748 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9trmd"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.337945 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.489526 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5d651b4-cf8f-4a0b-821e-26933be91b0a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.489568 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5d651b4-cf8f-4a0b-821e-26933be91b0a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.489611 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2pp7\" (UniqueName: \"kubernetes.io/projected/a5d651b4-cf8f-4a0b-821e-26933be91b0a-kube-api-access-d2pp7\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.489770 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5d651b4-cf8f-4a0b-821e-26933be91b0a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.489966 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5d651b4-cf8f-4a0b-821e-26933be91b0a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.489998 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5d651b4-cf8f-4a0b-821e-26933be91b0a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.490041 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5d651b4-cf8f-4a0b-821e-26933be91b0a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.490097 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5d651b4-cf8f-4a0b-821e-26933be91b0a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.490175 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.591595 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5d651b4-cf8f-4a0b-821e-26933be91b0a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.592081 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5d651b4-cf8f-4a0b-821e-26933be91b0a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.592115 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.592151 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5d651b4-cf8f-4a0b-821e-26933be91b0a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.592173 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5d651b4-cf8f-4a0b-821e-26933be91b0a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.592207 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2pp7\" (UniqueName: \"kubernetes.io/projected/a5d651b4-cf8f-4a0b-821e-26933be91b0a-kube-api-access-d2pp7\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.592256 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5d651b4-cf8f-4a0b-821e-26933be91b0a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.592307 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5d651b4-cf8f-4a0b-821e-26933be91b0a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.592329 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5d651b4-cf8f-4a0b-821e-26933be91b0a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.592825 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5d651b4-cf8f-4a0b-821e-26933be91b0a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.592912 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5d651b4-cf8f-4a0b-821e-26933be91b0a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.600953 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5d651b4-cf8f-4a0b-821e-26933be91b0a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.601081 4895 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.601119 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/33470f280f6e752ab832dd8abace1b4f365644972edc17da41d44b2954438cdf/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.601444 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5d651b4-cf8f-4a0b-821e-26933be91b0a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.602230 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5d651b4-cf8f-4a0b-821e-26933be91b0a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.604802 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5d651b4-cf8f-4a0b-821e-26933be91b0a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.606072 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5d651b4-cf8f-4a0b-821e-26933be91b0a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.609167 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2pp7\" (UniqueName: \"kubernetes.io/projected/a5d651b4-cf8f-4a0b-821e-26933be91b0a-kube-api-access-d2pp7\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.632086 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.667967 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.751540 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68ddc8d76c-68844" event={"ID":"5dc55496-9124-4b1e-9d42-0856b061b58c","Type":"ContainerStarted","Data":"f63b511449485e8154478efc9abf51453b0610090498e8191e95b69e971b7948"}
Dec 06 08:45:25 crc kubenswrapper[4895]: I1206 08:45:25.839531 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 06 08:45:25 crc kubenswrapper[4895]: W1206 08:45:25.864064 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f8e9eac_08e6_4cc4_8f0d_f577bedcc771.slice/crio-b4129fbb0a8188534ba9c7f982dbc99a275b4f54818a67c0df2d240f94eb1d52 WatchSource:0}: Error finding container b4129fbb0a8188534ba9c7f982dbc99a275b4f54818a67c0df2d240f94eb1d52: Status 404 returned error can't find the container with id b4129fbb0a8188534ba9c7f982dbc99a275b4f54818a67c0df2d240f94eb1d52
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.133807 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 06 08:45:26 crc kubenswrapper[4895]: W1206 08:45:26.142215 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5d651b4_cf8f_4a0b_821e_26933be91b0a.slice/crio-5aa31d13c15f234ec1642f0dac568ca51f6fe41f1c21ea27226a8b6f3df262da WatchSource:0}: Error finding container 5aa31d13c15f234ec1642f0dac568ca51f6fe41f1c21ea27226a8b6f3df262da: Status 404 returned error can't find the container with id 5aa31d13c15f234ec1642f0dac568ca51f6fe41f1c21ea27226a8b6f3df262da
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.451619 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.453094 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.463087 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.465354 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.465426 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.465637 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-hzmn7"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.466725 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.469643 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.607959 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1b3959bd-4eca-4e06-b552-7217aa74f883-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.608010 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b3959bd-4eca-4e06-b552-7217aa74f883-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.608038 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n774t\" (UniqueName: \"kubernetes.io/projected/1b3959bd-4eca-4e06-b552-7217aa74f883-kube-api-access-n774t\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.608067 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3959bd-4eca-4e06-b552-7217aa74f883-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.608317 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1b3959bd-4eca-4e06-b552-7217aa74f883-config-data-default\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.608520 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1b3959bd-4eca-4e06-b552-7217aa74f883-kolla-config\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.608599 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b3959bd-4eca-4e06-b552-7217aa74f883-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.608641 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-865f8cf2-d8ab-48bd-9f43-2469d44a9cc1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-865f8cf2-d8ab-48bd-9f43-2469d44a9cc1\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.709611 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1b3959bd-4eca-4e06-b552-7217aa74f883-config-data-default\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.709689 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1b3959bd-4eca-4e06-b552-7217aa74f883-kolla-config\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.709724 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b3959bd-4eca-4e06-b552-7217aa74f883-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.709743 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-865f8cf2-d8ab-48bd-9f43-2469d44a9cc1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-865f8cf2-d8ab-48bd-9f43-2469d44a9cc1\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.709821 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1b3959bd-4eca-4e06-b552-7217aa74f883-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.709838 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b3959bd-4eca-4e06-b552-7217aa74f883-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.709858 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n774t\" (UniqueName: \"kubernetes.io/projected/1b3959bd-4eca-4e06-b552-7217aa74f883-kube-api-access-n774t\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.709877 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3959bd-4eca-4e06-b552-7217aa74f883-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.710349 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1b3959bd-4eca-4e06-b552-7217aa74f883-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.711018 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1b3959bd-4eca-4e06-b552-7217aa74f883-kolla-config\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.711110 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1b3959bd-4eca-4e06-b552-7217aa74f883-config-data-default\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.712913 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b3959bd-4eca-4e06-b552-7217aa74f883-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.714924 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3959bd-4eca-4e06-b552-7217aa74f883-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.715829 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b3959bd-4eca-4e06-b552-7217aa74f883-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.720544 4895 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.720594 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-865f8cf2-d8ab-48bd-9f43-2469d44a9cc1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-865f8cf2-d8ab-48bd-9f43-2469d44a9cc1\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ebc4f469aeffa7fb413a53aff11140ef9896688c3db15f002dd1243aef6153e9/globalmount\"" pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.739281 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n774t\" (UniqueName: \"kubernetes.io/projected/1b3959bd-4eca-4e06-b552-7217aa74f883-kube-api-access-n774t\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.764076 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771","Type":"ContainerStarted","Data":"b4129fbb0a8188534ba9c7f982dbc99a275b4f54818a67c0df2d240f94eb1d52"}
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.765271 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5d651b4-cf8f-4a0b-821e-26933be91b0a","Type":"ContainerStarted","Data":"5aa31d13c15f234ec1642f0dac568ca51f6fe41f1c21ea27226a8b6f3df262da"}
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.767083 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-865f8cf2-d8ab-48bd-9f43-2469d44a9cc1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-865f8cf2-d8ab-48bd-9f43-2469d44a9cc1\") pod \"openstack-galera-0\" (UID: \"1b3959bd-4eca-4e06-b552-7217aa74f883\") " pod="openstack/openstack-galera-0"
Dec 06 08:45:26 crc kubenswrapper[4895]: I1206 08:45:26.788553 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.019637 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.031429 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.033712 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-2w57g"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.035387 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.038362 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.123316 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e1ee1786-5679-4e5b-ab42-d828e0b148a6-kolla-config\") pod \"memcached-0\" (UID: \"e1ee1786-5679-4e5b-ab42-d828e0b148a6\") " pod="openstack/memcached-0"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.123402 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk2nj\" (UniqueName: \"kubernetes.io/projected/e1ee1786-5679-4e5b-ab42-d828e0b148a6-kube-api-access-mk2nj\") pod \"memcached-0\" (UID: \"e1ee1786-5679-4e5b-ab42-d828e0b148a6\") " pod="openstack/memcached-0"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.123441 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1ee1786-5679-4e5b-ab42-d828e0b148a6-config-data\") pod \"memcached-0\" (UID: \"e1ee1786-5679-4e5b-ab42-d828e0b148a6\") " pod="openstack/memcached-0"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.224805 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e1ee1786-5679-4e5b-ab42-d828e0b148a6-kolla-config\") pod \"memcached-0\" (UID: \"e1ee1786-5679-4e5b-ab42-d828e0b148a6\") " pod="openstack/memcached-0"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.224907 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk2nj\" (UniqueName: \"kubernetes.io/projected/e1ee1786-5679-4e5b-ab42-d828e0b148a6-kube-api-access-mk2nj\") pod \"memcached-0\" (UID: \"e1ee1786-5679-4e5b-ab42-d828e0b148a6\") " pod="openstack/memcached-0"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.224953 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1ee1786-5679-4e5b-ab42-d828e0b148a6-config-data\") pod \"memcached-0\" (UID: \"e1ee1786-5679-4e5b-ab42-d828e0b148a6\") " pod="openstack/memcached-0"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.225827 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e1ee1786-5679-4e5b-ab42-d828e0b148a6-kolla-config\") pod \"memcached-0\" (UID: \"e1ee1786-5679-4e5b-ab42-d828e0b148a6\") " pod="openstack/memcached-0"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.225967 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1ee1786-5679-4e5b-ab42-d828e0b148a6-config-data\") pod \"memcached-0\" (UID: \"e1ee1786-5679-4e5b-ab42-d828e0b148a6\") " pod="openstack/memcached-0"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.268231 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk2nj\" (UniqueName: \"kubernetes.io/projected/e1ee1786-5679-4e5b-ab42-d828e0b148a6-kube-api-access-mk2nj\") pod \"memcached-0\" (UID: \"e1ee1786-5679-4e5b-ab42-d828e0b148a6\") " pod="openstack/memcached-0"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.327435 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Dec 06 08:45:27 crc kubenswrapper[4895]: W1206 08:45:27.336573 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b3959bd_4eca_4e06_b552_7217aa74f883.slice/crio-0b9c1889e9ed779d6da99c0682650dbcf97b4a6a449e740aee67b67972f1d1cd WatchSource:0}: Error finding container 0b9c1889e9ed779d6da99c0682650dbcf97b4a6a449e740aee67b67972f1d1cd: Status 404 returned error can't find the container with id 0b9c1889e9ed779d6da99c0682650dbcf97b4a6a449e740aee67b67972f1d1cd
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.361643 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.496270 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tnffw"]
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.498221 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tnffw"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.528486 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tnffw"]
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.632219 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc4bc656-402d-4496-ade5-7ff6f30105ce-catalog-content\") pod \"certified-operators-tnffw\" (UID: \"bc4bc656-402d-4496-ade5-7ff6f30105ce\") " pod="openshift-marketplace/certified-operators-tnffw"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.632332 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x696\" (UniqueName: \"kubernetes.io/projected/bc4bc656-402d-4496-ade5-7ff6f30105ce-kube-api-access-7x696\") pod \"certified-operators-tnffw\" (UID: \"bc4bc656-402d-4496-ade5-7ff6f30105ce\") " pod="openshift-marketplace/certified-operators-tnffw"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.632405 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc4bc656-402d-4496-ade5-7ff6f30105ce-utilities\") pod \"certified-operators-tnffw\" (UID: \"bc4bc656-402d-4496-ade5-7ff6f30105ce\") " pod="openshift-marketplace/certified-operators-tnffw"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.734851 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x696\" (UniqueName: \"kubernetes.io/projected/bc4bc656-402d-4496-ade5-7ff6f30105ce-kube-api-access-7x696\") pod \"certified-operators-tnffw\" (UID: \"bc4bc656-402d-4496-ade5-7ff6f30105ce\") " pod="openshift-marketplace/certified-operators-tnffw"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.735260 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc4bc656-402d-4496-ade5-7ff6f30105ce-utilities\") pod \"certified-operators-tnffw\" (UID: \"bc4bc656-402d-4496-ade5-7ff6f30105ce\") " pod="openshift-marketplace/certified-operators-tnffw"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.735346 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc4bc656-402d-4496-ade5-7ff6f30105ce-catalog-content\") pod \"certified-operators-tnffw\" (UID: \"bc4bc656-402d-4496-ade5-7ff6f30105ce\") " pod="openshift-marketplace/certified-operators-tnffw"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.736060 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc4bc656-402d-4496-ade5-7ff6f30105ce-catalog-content\") pod \"certified-operators-tnffw\" (UID: \"bc4bc656-402d-4496-ade5-7ff6f30105ce\") " pod="openshift-marketplace/certified-operators-tnffw"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.736079 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc4bc656-402d-4496-ade5-7ff6f30105ce-utilities\") pod \"certified-operators-tnffw\" (UID: \"bc4bc656-402d-4496-ade5-7ff6f30105ce\") " pod="openshift-marketplace/certified-operators-tnffw"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.754047 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x696\" (UniqueName: \"kubernetes.io/projected/bc4bc656-402d-4496-ade5-7ff6f30105ce-kube-api-access-7x696\") pod \"certified-operators-tnffw\" (UID: \"bc4bc656-402d-4496-ade5-7ff6f30105ce\") " pod="openshift-marketplace/certified-operators-tnffw"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.775672 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1b3959bd-4eca-4e06-b552-7217aa74f883","Type":"ContainerStarted","Data":"0b9c1889e9ed779d6da99c0682650dbcf97b4a6a449e740aee67b67972f1d1cd"}
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.837725 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.837815 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tnffw"
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.981833 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Dec 06 08:45:27 crc kubenswrapper[4895]: I1206 08:45:27.983244 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:27.992016 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:27.992163 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:27.992253 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:27.992384 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-mjffb"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.001519 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.039488 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7a03794-5321-4551-934e-bcf31316d825-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.039539 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4c25c04a-8021-4126-82e9-d33e1517493f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c25c04a-8021-4126-82e9-d33e1517493f\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.039583 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e7a03794-5321-4551-934e-bcf31316d825-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.039604 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a03794-5321-4551-934e-bcf31316d825-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.039629 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msfml\" (UniqueName: \"kubernetes.io/projected/e7a03794-5321-4551-934e-bcf31316d825-kube-api-access-msfml\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.039645 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7a03794-5321-4551-934e-bcf31316d825-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.039679 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e7a03794-5321-4551-934e-bcf31316d825-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.039726 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e7a03794-5321-4551-934e-bcf31316d825-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.140853 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7a03794-5321-4551-934e-bcf31316d825-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.140894 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4c25c04a-8021-4126-82e9-d33e1517493f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c25c04a-8021-4126-82e9-d33e1517493f\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.140931 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e7a03794-5321-4551-934e-bcf31316d825-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.140952 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a03794-5321-4551-934e-bcf31316d825-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.140979 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msfml\" (UniqueName: \"kubernetes.io/projected/e7a03794-5321-4551-934e-bcf31316d825-kube-api-access-msfml\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.140996 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7a03794-5321-4551-934e-bcf31316d825-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.141030 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e7a03794-5321-4551-934e-bcf31316d825-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.141122 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e7a03794-5321-4551-934e-bcf31316d825-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.143359 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e7a03794-5321-4551-934e-bcf31316d825-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.147877 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e7a03794-5321-4551-934e-bcf31316d825-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.151978 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e7a03794-5321-4551-934e-bcf31316d825-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.152932 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7a03794-5321-4551-934e-bcf31316d825-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.154265 4895 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.154557 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4c25c04a-8021-4126-82e9-d33e1517493f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c25c04a-8021-4126-82e9-d33e1517493f\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ceaebfd3697e2552b050fa4f26784fcac501daf5a402970d9cf2292a7a6c3b8/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.159053 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7a03794-5321-4551-934e-bcf31316d825-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.166656 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a03794-5321-4551-934e-bcf31316d825-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.167061 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msfml\" (UniqueName: \"kubernetes.io/projected/e7a03794-5321-4551-934e-bcf31316d825-kube-api-access-msfml\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.200984 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4c25c04a-8021-4126-82e9-d33e1517493f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c25c04a-8021-4126-82e9-d33e1517493f\") pod \"openstack-cell1-galera-0\" (UID: \"e7a03794-5321-4551-934e-bcf31316d825\") " pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.321763 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.334485 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tnffw"]
Dec 06 08:45:28 crc kubenswrapper[4895]: W1206 08:45:28.394363 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc4bc656_402d_4496_ade5_7ff6f30105ce.slice/crio-5a5ede5cf5346af21cd72623df800f173e88bad0cfde225ea00ca80298b7c717 WatchSource:0}: Error finding container 5a5ede5cf5346af21cd72623df800f173e88bad0cfde225ea00ca80298b7c717: Status 404 returned error can't find the container with id 5a5ede5cf5346af21cd72623df800f173e88bad0cfde225ea00ca80298b7c717
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.784210 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e1ee1786-5679-4e5b-ab42-d828e0b148a6","Type":"ContainerStarted","Data":"3d9bf8fb83bdbdd44a937c88378bbea1308d27d1ea80998722692743cfd045ec"}
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.786359 4895 generic.go:334] "Generic (PLEG): container finished" podID="bc4bc656-402d-4496-ade5-7ff6f30105ce" containerID="97e38df14db8ba830636d8f01cd375782098da67862c70e9b5c5119ea376b663" exitCode=0
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.786395 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnffw" event={"ID":"bc4bc656-402d-4496-ade5-7ff6f30105ce","Type":"ContainerDied","Data":"97e38df14db8ba830636d8f01cd375782098da67862c70e9b5c5119ea376b663"}
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.786616 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnffw" event={"ID":"bc4bc656-402d-4496-ade5-7ff6f30105ce","Type":"ContainerStarted","Data":"5a5ede5cf5346af21cd72623df800f173e88bad0cfde225ea00ca80298b7c717"}
Dec 06 08:45:28 crc kubenswrapper[4895]: I1206 08:45:28.877058 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Dec 06 08:45:28 crc kubenswrapper[4895]: W1206 08:45:28.883064 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7a03794_5321_4551_934e_bcf31316d825.slice/crio-5be71a29b4dd84d2a74ae784d6038a28d57b5396a22fdb406aa496964ca0e57d WatchSource:0}: Error finding container 5be71a29b4dd84d2a74ae784d6038a28d57b5396a22fdb406aa496964ca0e57d: Status 404 returned error can't find the container with id 5be71a29b4dd84d2a74ae784d6038a28d57b5396a22fdb406aa496964ca0e57d
Dec 06 08:45:29 crc kubenswrapper[4895]: I1206 08:45:29.696163 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 08:45:29 crc kubenswrapper[4895]: I1206 08:45:29.696217 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 08:45:29 crc kubenswrapper[4895]: I1206 08:45:29.799002 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e7a03794-5321-4551-934e-bcf31316d825","Type":"ContainerStarted","Data":"5be71a29b4dd84d2a74ae784d6038a28d57b5396a22fdb406aa496964ca0e57d"}
Dec 06 08:45:29 crc kubenswrapper[4895]: I1206 08:45:29.803622 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnffw" event={"ID":"bc4bc656-402d-4496-ade5-7ff6f30105ce","Type":"ContainerStarted","Data":"d73bc77b2d43921ccebeb87ee5999b660b062f312c60afdf670abd45b1bcf075"}
Dec 06 08:45:30 crc kubenswrapper[4895]: I1206 08:45:30.815203 4895 generic.go:334] "Generic (PLEG): container finished" podID="bc4bc656-402d-4496-ade5-7ff6f30105ce" containerID="d73bc77b2d43921ccebeb87ee5999b660b062f312c60afdf670abd45b1bcf075" exitCode=0
Dec 06 08:45:30 crc kubenswrapper[4895]: I1206 08:45:30.815258 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnffw" event={"ID":"bc4bc656-402d-4496-ade5-7ff6f30105ce","Type":"ContainerDied","Data":"d73bc77b2d43921ccebeb87ee5999b660b062f312c60afdf670abd45b1bcf075"}
Dec 06 08:45:31 crc kubenswrapper[4895]: I1206 08:45:31.833980 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnffw" event={"ID":"bc4bc656-402d-4496-ade5-7ff6f30105ce","Type":"ContainerStarted","Data":"449d57830abc8296294b41459f5f50941c4c0a37e521ce786b724e2fb20cc56b"}
Dec 06 08:45:31 crc kubenswrapper[4895]: I1206 08:45:31.852958 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tnffw" podStartSLOduration=2.414272504 podStartE2EDuration="4.852925266s" podCreationTimestamp="2025-12-06 08:45:27 +0000 UTC" firstStartedPulling="2025-12-06 08:45:28.788301329 +0000 UTC m=+6491.189690199" lastFinishedPulling="2025-12-06 08:45:31.226954091 +0000 UTC m=+6493.628342961" observedRunningTime="2025-12-06 08:45:31.851110207 +0000 UTC m=+6494.252499097" watchObservedRunningTime="2025-12-06 08:45:31.852925266 +0000 UTC m=+6494.254314136"
Dec 06 08:45:37 crc kubenswrapper[4895]: I1206 08:45:37.838927 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tnffw"
Dec 06 08:45:37 crc kubenswrapper[4895]: I1206 08:45:37.839460 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tnffw"
Dec 06 08:45:37 crc kubenswrapper[4895]: I1206 08:45:37.886786 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tnffw"
Dec 06 08:45:37 crc kubenswrapper[4895]: I1206 08:45:37.974043 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tnffw"
Dec 06 08:45:38 crc kubenswrapper[4895]: I1206 08:45:38.122322 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tnffw"]
Dec 06 08:45:39 crc kubenswrapper[4895]: I1206 08:45:39.914446 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tnffw" podUID="bc4bc656-402d-4496-ade5-7ff6f30105ce" containerName="registry-server" containerID="cri-o://449d57830abc8296294b41459f5f50941c4c0a37e521ce786b724e2fb20cc56b" gracePeriod=2
Dec 06 08:45:40 crc kubenswrapper[4895]: I1206 08:45:40.923014 4895 generic.go:334] "Generic (PLEG): container finished" podID="bc4bc656-402d-4496-ade5-7ff6f30105ce" containerID="449d57830abc8296294b41459f5f50941c4c0a37e521ce786b724e2fb20cc56b" exitCode=0
Dec 06 08:45:40 crc kubenswrapper[4895]: I1206 08:45:40.923059 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnffw" event={"ID":"bc4bc656-402d-4496-ade5-7ff6f30105ce","Type":"ContainerDied","Data":"449d57830abc8296294b41459f5f50941c4c0a37e521ce786b724e2fb20cc56b"}
Dec 06 08:45:47 crc kubenswrapper[4895]: E1206 08:45:47.838758 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 449d57830abc8296294b41459f5f50941c4c0a37e521ce786b724e2fb20cc56b is running failed: container process not found" containerID="449d57830abc8296294b41459f5f50941c4c0a37e521ce786b724e2fb20cc56b" cmd=["grpc_health_probe","-addr=:50051"]
Dec 06 08:45:47 crc kubenswrapper[4895]: E1206 08:45:47.839583 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 449d57830abc8296294b41459f5f50941c4c0a37e521ce786b724e2fb20cc56b is running failed: container process not found" containerID="449d57830abc8296294b41459f5f50941c4c0a37e521ce786b724e2fb20cc56b" cmd=["grpc_health_probe","-addr=:50051"]
Dec 06 08:45:47 crc kubenswrapper[4895]: E1206 08:45:47.840212 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 449d57830abc8296294b41459f5f50941c4c0a37e521ce786b724e2fb20cc56b is running failed: container process not found" containerID="449d57830abc8296294b41459f5f50941c4c0a37e521ce786b724e2fb20cc56b" cmd=["grpc_health_probe","-addr=:50051"]
Dec 06 08:45:47 crc kubenswrapper[4895]: E1206 08:45:47.840240 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 449d57830abc8296294b41459f5f50941c4c0a37e521ce786b724e2fb20cc56b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-tnffw" podUID="bc4bc656-402d-4496-ade5-7ff6f30105ce" containerName="registry-server"
Dec 06 08:45:49 crc kubenswrapper[4895]: E1206 08:45:49.200129 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:c3923531bcda0b0811b2d5053f189beb"
Dec 06 08:45:49 crc kubenswrapper[4895]: E1206 08:45:49.200203 4895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:c3923531bcda0b0811b2d5053f189beb"
Dec 06 08:45:49 crc kubenswrapper[4895]: E1206 08:45:49.200376 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:c3923531bcda0b0811b2d5053f189beb,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n774t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(1b3959bd-4eca-4e06-b552-7217aa74f883): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 06 08:45:49 crc kubenswrapper[4895]: E1206 08:45:49.201654 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="1b3959bd-4eca-4e06-b552-7217aa74f883"
Dec 06 08:45:49 crc kubenswrapper[4895]: E1206 08:45:49.237661 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:c3923531bcda0b0811b2d5053f189beb"
Dec 06 08:45:49 crc kubenswrapper[4895]: E1206 08:45:49.237729 4895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:c3923531bcda0b0811b2d5053f189beb"
Dec 06 08:45:49 crc kubenswrapper[4895]: E1206 08:45:49.237889 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:c3923531bcda0b0811b2d5053f189beb,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-msfml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(e7a03794-5321-4551-934e-bcf31316d825): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 06 08:45:49 crc kubenswrapper[4895]: E1206 08:45:49.239665 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="e7a03794-5321-4551-934e-bcf31316d825"
Dec 06 08:45:49 crc kubenswrapper[4895]: E1206 08:45:49.973063 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:c3923531bcda0b0811b2d5053f189beb"
Dec 06 08:45:49 crc kubenswrapper[4895]: E1206 08:45:49.973690 4895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:c3923531bcda0b0811b2d5053f189beb"
Dec 06 08:45:49 crc kubenswrapper[4895]: E1206 08:45:49.973970 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:c3923531bcda0b0811b2d5053f189beb,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]'
> /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2pp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(a5d651b4-cf8f-4a0b-821e-26933be91b0a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 08:45:49 crc kubenswrapper[4895]: E1206 08:45:49.975505 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="a5d651b4-cf8f-4a0b-821e-26933be91b0a" Dec 06 08:45:49 crc kubenswrapper[4895]: E1206 08:45:49.992065 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb" Dec 06 08:45:49 crc kubenswrapper[4895]: E1206 08:45:49.992123 4895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb" Dec 06 08:45:49 crc kubenswrapper[4895]: E1206 08:45:49.992290 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8chc6h5bh56fh546hb7hc8h67h5bchffh577h697h5b5h5bdh59bhf6hf4h558hb5h578h595h5cchfbh644h59ch7fh654h547h587h5cbh5d5h8fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wh8mt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f6ffc7dc9-8qpjj_openstack(13252842-7480-4bf4-bed3-92d7e86de1ba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 08:45:49 crc kubenswrapper[4895]: E1206 08:45:49.993451 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj" podUID="13252842-7480-4bf4-bed3-92d7e86de1ba" Dec 06 08:45:50 crc kubenswrapper[4895]: I1206 08:45:50.009139 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tnffw" Dec 06 08:45:50 crc kubenswrapper[4895]: I1206 08:45:50.011268 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnffw" event={"ID":"bc4bc656-402d-4496-ade5-7ff6f30105ce","Type":"ContainerDied","Data":"5a5ede5cf5346af21cd72623df800f173e88bad0cfde225ea00ca80298b7c717"} Dec 06 08:45:50 crc kubenswrapper[4895]: I1206 08:45:50.011420 4895 scope.go:117] "RemoveContainer" containerID="449d57830abc8296294b41459f5f50941c4c0a37e521ce786b724e2fb20cc56b" Dec 06 08:45:50 crc kubenswrapper[4895]: E1206 08:45:50.012286 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="e7a03794-5321-4551-934e-bcf31316d825" Dec 06 08:45:50 crc kubenswrapper[4895]: E1206 08:45:50.012434 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/openstack-galera-0" podUID="1b3959bd-4eca-4e06-b552-7217aa74f883" Dec 06 08:45:50 crc kubenswrapper[4895]: E1206 08:45:50.012463 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj" podUID="13252842-7480-4bf4-bed3-92d7e86de1ba" Dec 06 08:45:50 crc kubenswrapper[4895]: E1206 08:45:50.027176 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb" Dec 06 08:45:50 crc kubenswrapper[4895]: E1206 08:45:50.027219 4895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb" Dec 06 08:45:50 crc kubenswrapper[4895]: E1206 08:45:50.027336 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n564h564h676h699hcdh67bh66hfdh569h545h648h94h546h696h668h89h96h667h575h595h5d9h584h8dhbdh697h54bhb7h58fh5c9hd8h5cdh5c7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v4pn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-68ddc8d76c-68844_openstack(5dc55496-9124-4b1e-9d42-0856b061b58c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 08:45:50 crc kubenswrapper[4895]: E1206 08:45:50.028973 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-68ddc8d76c-68844" podUID="5dc55496-9124-4b1e-9d42-0856b061b58c" Dec 06 08:45:50 crc kubenswrapper[4895]: I1206 08:45:50.130264 4895 scope.go:117] "RemoveContainer" containerID="d73bc77b2d43921ccebeb87ee5999b660b062f312c60afdf670abd45b1bcf075" Dec 06 08:45:50 crc kubenswrapper[4895]: I1206 08:45:50.163182 4895 scope.go:117] "RemoveContainer" containerID="97e38df14db8ba830636d8f01cd375782098da67862c70e9b5c5119ea376b663" Dec 06 08:45:50 crc kubenswrapper[4895]: I1206 08:45:50.179831 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x696\" (UniqueName: \"kubernetes.io/projected/bc4bc656-402d-4496-ade5-7ff6f30105ce-kube-api-access-7x696\") pod \"bc4bc656-402d-4496-ade5-7ff6f30105ce\" (UID: \"bc4bc656-402d-4496-ade5-7ff6f30105ce\") " Dec 06 08:45:50 crc kubenswrapper[4895]: I1206 08:45:50.179911 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc4bc656-402d-4496-ade5-7ff6f30105ce-utilities\") pod \"bc4bc656-402d-4496-ade5-7ff6f30105ce\" (UID: \"bc4bc656-402d-4496-ade5-7ff6f30105ce\") " Dec 06 08:45:50 crc kubenswrapper[4895]: I1206 08:45:50.179982 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bc4bc656-402d-4496-ade5-7ff6f30105ce-catalog-content\") pod \"bc4bc656-402d-4496-ade5-7ff6f30105ce\" (UID: \"bc4bc656-402d-4496-ade5-7ff6f30105ce\") " Dec 06 08:45:50 crc kubenswrapper[4895]: I1206 08:45:50.180958 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc4bc656-402d-4496-ade5-7ff6f30105ce-utilities" (OuterVolumeSpecName: "utilities") pod "bc4bc656-402d-4496-ade5-7ff6f30105ce" (UID: "bc4bc656-402d-4496-ade5-7ff6f30105ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:45:50 crc kubenswrapper[4895]: I1206 08:45:50.183085 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc4bc656-402d-4496-ade5-7ff6f30105ce-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:45:50 crc kubenswrapper[4895]: I1206 08:45:50.192702 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc4bc656-402d-4496-ade5-7ff6f30105ce-kube-api-access-7x696" (OuterVolumeSpecName: "kube-api-access-7x696") pod "bc4bc656-402d-4496-ade5-7ff6f30105ce" (UID: "bc4bc656-402d-4496-ade5-7ff6f30105ce"). InnerVolumeSpecName "kube-api-access-7x696". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:45:50 crc kubenswrapper[4895]: I1206 08:45:50.229022 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc4bc656-402d-4496-ade5-7ff6f30105ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc4bc656-402d-4496-ade5-7ff6f30105ce" (UID: "bc4bc656-402d-4496-ade5-7ff6f30105ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:45:50 crc kubenswrapper[4895]: I1206 08:45:50.284724 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc4bc656-402d-4496-ade5-7ff6f30105ce-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:45:50 crc kubenswrapper[4895]: I1206 08:45:50.284754 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x696\" (UniqueName: \"kubernetes.io/projected/bc4bc656-402d-4496-ade5-7ff6f30105ce-kube-api-access-7x696\") on node \"crc\" DevicePath \"\"" Dec 06 08:45:51 crc kubenswrapper[4895]: I1206 08:45:51.023423 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tnffw" Dec 06 08:45:51 crc kubenswrapper[4895]: I1206 08:45:51.027774 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e1ee1786-5679-4e5b-ab42-d828e0b148a6","Type":"ContainerStarted","Data":"4c5f20d88b6efcc041a2c1993524d29cfb2908558ca3e401ccc4092d637781b2"} Dec 06 08:45:51 crc kubenswrapper[4895]: I1206 08:45:51.028064 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 06 08:45:51 crc kubenswrapper[4895]: E1206 08:45:51.030519 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/dnsmasq-dns-68ddc8d76c-68844" podUID="5dc55496-9124-4b1e-9d42-0856b061b58c" Dec 06 08:45:51 crc kubenswrapper[4895]: I1206 08:45:51.062009 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.927613634 podStartE2EDuration="24.061981713s" podCreationTimestamp="2025-12-06 08:45:27 +0000 UTC" firstStartedPulling="2025-12-06 08:45:27.849463954 +0000 UTC m=+6490.250852824" lastFinishedPulling="2025-12-06 08:45:49.983832033 +0000 UTC m=+6512.385220903" observedRunningTime="2025-12-06 08:45:51.052708275 +0000 UTC m=+6513.454097155" watchObservedRunningTime="2025-12-06 08:45:51.061981713 +0000 UTC m=+6513.463370593" Dec 06 08:45:51 crc kubenswrapper[4895]: I1206 08:45:51.099154 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tnffw"] Dec 06 08:45:51 crc kubenswrapper[4895]: I1206 08:45:51.107802 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tnffw"] Dec 06 08:45:52 crc kubenswrapper[4895]: I1206 08:45:52.040386 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5d651b4-cf8f-4a0b-821e-26933be91b0a","Type":"ContainerStarted","Data":"e56841ef76a7288499b482028860c7b498b835a2947a17991e769c441912ba33"} Dec 06 08:45:52 crc kubenswrapper[4895]: I1206 08:45:52.043152 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771","Type":"ContainerStarted","Data":"a4037b5cc459e2759dd90451518e0cff0bbd9e57ead4373a0c09013da92bbc1c"} Dec 06 08:45:52 crc kubenswrapper[4895]: I1206 08:45:52.066211 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc4bc656-402d-4496-ade5-7ff6f30105ce" path="/var/lib/kubelet/pods/bc4bc656-402d-4496-ade5-7ff6f30105ce/volumes" Dec 06 08:45:57 crc kubenswrapper[4895]: I1206 08:45:57.362838 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 06 08:45:59 crc kubenswrapper[4895]: I1206 08:45:59.696437 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:45:59 crc kubenswrapper[4895]: I1206 08:45:59.696942 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:46:08 crc kubenswrapper[4895]: I1206 08:46:08.312175 4895 generic.go:334] "Generic (PLEG): container finished" podID="13252842-7480-4bf4-bed3-92d7e86de1ba" containerID="02ba2854f8e0aa0c3acaf1de183dce67da2a15c1b592f4ce44970e445db37164" exitCode=0 Dec 06 08:46:08 crc kubenswrapper[4895]: I1206 08:46:08.312366 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj" event={"ID":"13252842-7480-4bf4-bed3-92d7e86de1ba","Type":"ContainerDied","Data":"02ba2854f8e0aa0c3acaf1de183dce67da2a15c1b592f4ce44970e445db37164"} Dec 06 08:46:08 crc kubenswrapper[4895]: I1206 08:46:08.316066 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e7a03794-5321-4551-934e-bcf31316d825","Type":"ContainerStarted","Data":"4c94c42a9ff55cb832728692f9984849c7d6ee1887850ad6075fe8d1b698d581"} Dec 06 08:46:08 crc kubenswrapper[4895]: I1206 08:46:08.318408 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1b3959bd-4eca-4e06-b552-7217aa74f883","Type":"ContainerStarted","Data":"6707c820af7d90e51899b5c5794eff090b1f88ef967e1b4cfffefbb34cb1ad6d"} Dec 06 08:46:08 crc kubenswrapper[4895]: I1206 08:46:08.325524 4895 generic.go:334] "Generic (PLEG): container finished" podID="5dc55496-9124-4b1e-9d42-0856b061b58c" containerID="7a4fb925432adf5a1e9640dcdd3de1c0444c60f699be04361111c8243aaee4ec" exitCode=0 Dec 06 08:46:08 crc kubenswrapper[4895]: I1206 08:46:08.325585 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68ddc8d76c-68844" event={"ID":"5dc55496-9124-4b1e-9d42-0856b061b58c","Type":"ContainerDied","Data":"7a4fb925432adf5a1e9640dcdd3de1c0444c60f699be04361111c8243aaee4ec"} Dec 06 08:46:09 crc kubenswrapper[4895]: I1206 08:46:09.336973 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68ddc8d76c-68844" event={"ID":"5dc55496-9124-4b1e-9d42-0856b061b58c","Type":"ContainerStarted","Data":"e3a703ee4f6c231de386294d7b169fcb4d7ea059f599255a7c69f7050a48263a"} Dec 06 08:46:09 crc kubenswrapper[4895]: I1206 08:46:09.337528 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68ddc8d76c-68844" Dec 06 08:46:09 crc kubenswrapper[4895]: I1206 08:46:09.338965 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj" event={"ID":"13252842-7480-4bf4-bed3-92d7e86de1ba","Type":"ContainerStarted","Data":"f74fa9d09360d373daef76c54a3d8604b9f17d131d0de0e04f09402aeeaa2a93"} Dec 06 08:46:09 crc kubenswrapper[4895]: I1206 08:46:09.339164 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj" Dec 06 08:46:09 crc kubenswrapper[4895]: I1206 08:46:09.372855 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68ddc8d76c-68844" podStartSLOduration=2.8185105 podStartE2EDuration="45.372825305s" podCreationTimestamp="2025-12-06 08:45:24 +0000 UTC" firstStartedPulling="2025-12-06 08:45:25.053694626 +0000 UTC m=+6487.455083496" lastFinishedPulling="2025-12-06 08:46:07.608009431 +0000 UTC m=+6530.009398301" observedRunningTime="2025-12-06 08:46:09.359390044 +0000 UTC m=+6531.760778924" watchObservedRunningTime="2025-12-06 08:46:09.372825305 +0000 UTC m=+6531.774214265" Dec 06 08:46:09 crc 
kubenswrapper[4895]: I1206 08:46:09.380441 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj" podStartSLOduration=3.544677927 podStartE2EDuration="46.380420528s" podCreationTimestamp="2025-12-06 08:45:23 +0000 UTC" firstStartedPulling="2025-12-06 08:45:24.682191055 +0000 UTC m=+6487.083579915" lastFinishedPulling="2025-12-06 08:46:07.517933636 +0000 UTC m=+6529.919322516" observedRunningTime="2025-12-06 08:46:09.376982756 +0000 UTC m=+6531.778371666" watchObservedRunningTime="2025-12-06 08:46:09.380420528 +0000 UTC m=+6531.781809398" Dec 06 08:46:11 crc kubenswrapper[4895]: I1206 08:46:11.384843 4895 generic.go:334] "Generic (PLEG): container finished" podID="e7a03794-5321-4551-934e-bcf31316d825" containerID="4c94c42a9ff55cb832728692f9984849c7d6ee1887850ad6075fe8d1b698d581" exitCode=0 Dec 06 08:46:11 crc kubenswrapper[4895]: I1206 08:46:11.384823 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e7a03794-5321-4551-934e-bcf31316d825","Type":"ContainerDied","Data":"4c94c42a9ff55cb832728692f9984849c7d6ee1887850ad6075fe8d1b698d581"} Dec 06 08:46:11 crc kubenswrapper[4895]: I1206 08:46:11.387185 4895 generic.go:334] "Generic (PLEG): container finished" podID="1b3959bd-4eca-4e06-b552-7217aa74f883" containerID="6707c820af7d90e51899b5c5794eff090b1f88ef967e1b4cfffefbb34cb1ad6d" exitCode=0 Dec 06 08:46:11 crc kubenswrapper[4895]: I1206 08:46:11.387229 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1b3959bd-4eca-4e06-b552-7217aa74f883","Type":"ContainerDied","Data":"6707c820af7d90e51899b5c5794eff090b1f88ef967e1b4cfffefbb34cb1ad6d"} Dec 06 08:46:12 crc kubenswrapper[4895]: I1206 08:46:12.400088 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e7a03794-5321-4551-934e-bcf31316d825","Type":"ContainerStarted","Data":"3df980cd8c1becf5e130b520f3209fba40cb9e6271ed9e8b3029f3dcf34fa360"} Dec 06 08:46:12 crc kubenswrapper[4895]: I1206 08:46:12.403638 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1b3959bd-4eca-4e06-b552-7217aa74f883","Type":"ContainerStarted","Data":"f10bf78dde1d4e44df55348cf8e0fc2b49494997380b10564a4b793a3f7d414d"} Dec 06 08:46:12 crc kubenswrapper[4895]: I1206 08:46:12.435515 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.804067197 podStartE2EDuration="46.43549099s" podCreationTimestamp="2025-12-06 08:45:26 +0000 UTC" firstStartedPulling="2025-12-06 08:45:28.885222698 +0000 UTC m=+6491.286611568" lastFinishedPulling="2025-12-06 08:46:07.516646491 +0000 UTC m=+6529.918035361" observedRunningTime="2025-12-06 08:46:12.430967249 +0000 UTC m=+6534.832356119" watchObservedRunningTime="2025-12-06 08:46:12.43549099 +0000 UTC m=+6534.836879870" Dec 06 08:46:12 crc kubenswrapper[4895]: I1206 08:46:12.471725 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=7.21927291 podStartE2EDuration="47.471698201s" podCreationTimestamp="2025-12-06 08:45:25 +0000 UTC" firstStartedPulling="2025-12-06 08:45:27.3555729 +0000 UTC m=+6489.756961770" lastFinishedPulling="2025-12-06 08:46:07.607998191 +0000 UTC m=+6530.009387061" observedRunningTime="2025-12-06 08:46:12.45898508 +0000 UTC m=+6534.860373970" watchObservedRunningTime="2025-12-06 08:46:12.471698201 
+0000 UTC m=+6534.873087111" Dec 06 08:46:14 crc kubenswrapper[4895]: I1206 08:46:14.156684 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj" Dec 06 08:46:14 crc kubenswrapper[4895]: I1206 08:46:14.572047 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68ddc8d76c-68844" Dec 06 08:46:14 crc kubenswrapper[4895]: I1206 08:46:14.642536 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj"] Dec 06 08:46:14 crc kubenswrapper[4895]: I1206 08:46:14.642819 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj" podUID="13252842-7480-4bf4-bed3-92d7e86de1ba" containerName="dnsmasq-dns" containerID="cri-o://f74fa9d09360d373daef76c54a3d8604b9f17d131d0de0e04f09402aeeaa2a93" gracePeriod=10 Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.129289 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj" Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.234615 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh8mt\" (UniqueName: \"kubernetes.io/projected/13252842-7480-4bf4-bed3-92d7e86de1ba-kube-api-access-wh8mt\") pod \"13252842-7480-4bf4-bed3-92d7e86de1ba\" (UID: \"13252842-7480-4bf4-bed3-92d7e86de1ba\") " Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.234675 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13252842-7480-4bf4-bed3-92d7e86de1ba-dns-svc\") pod \"13252842-7480-4bf4-bed3-92d7e86de1ba\" (UID: \"13252842-7480-4bf4-bed3-92d7e86de1ba\") " Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.234735 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13252842-7480-4bf4-bed3-92d7e86de1ba-config\") pod \"13252842-7480-4bf4-bed3-92d7e86de1ba\" (UID: \"13252842-7480-4bf4-bed3-92d7e86de1ba\") " Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.252722 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13252842-7480-4bf4-bed3-92d7e86de1ba-kube-api-access-wh8mt" (OuterVolumeSpecName: "kube-api-access-wh8mt") pod "13252842-7480-4bf4-bed3-92d7e86de1ba" (UID: "13252842-7480-4bf4-bed3-92d7e86de1ba"). InnerVolumeSpecName "kube-api-access-wh8mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.277330 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13252842-7480-4bf4-bed3-92d7e86de1ba-config" (OuterVolumeSpecName: "config") pod "13252842-7480-4bf4-bed3-92d7e86de1ba" (UID: "13252842-7480-4bf4-bed3-92d7e86de1ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.285755 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13252842-7480-4bf4-bed3-92d7e86de1ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "13252842-7480-4bf4-bed3-92d7e86de1ba" (UID: "13252842-7480-4bf4-bed3-92d7e86de1ba"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.336989 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh8mt\" (UniqueName: \"kubernetes.io/projected/13252842-7480-4bf4-bed3-92d7e86de1ba-kube-api-access-wh8mt\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.337028 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13252842-7480-4bf4-bed3-92d7e86de1ba-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.337037 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13252842-7480-4bf4-bed3-92d7e86de1ba-config\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.431207 4895 generic.go:334] "Generic (PLEG): container finished" podID="13252842-7480-4bf4-bed3-92d7e86de1ba" containerID="f74fa9d09360d373daef76c54a3d8604b9f17d131d0de0e04f09402aeeaa2a93" exitCode=0 Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.431269 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj" event={"ID":"13252842-7480-4bf4-bed3-92d7e86de1ba","Type":"ContainerDied","Data":"f74fa9d09360d373daef76c54a3d8604b9f17d131d0de0e04f09402aeeaa2a93"} Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.431310 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj" Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.431337 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj" event={"ID":"13252842-7480-4bf4-bed3-92d7e86de1ba","Type":"ContainerDied","Data":"2b701e7139ac8747a418d4d87ae6eee1e9ce212b553790c30cab4680cd8b881e"} Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.431356 4895 scope.go:117] "RemoveContainer" containerID="f74fa9d09360d373daef76c54a3d8604b9f17d131d0de0e04f09402aeeaa2a93" Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.462230 4895 scope.go:117] "RemoveContainer" containerID="02ba2854f8e0aa0c3acaf1de183dce67da2a15c1b592f4ce44970e445db37164" Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.464307 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj"] Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.470541 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f6ffc7dc9-8qpjj"] Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.490764 4895 scope.go:117] "RemoveContainer" containerID="f74fa9d09360d373daef76c54a3d8604b9f17d131d0de0e04f09402aeeaa2a93" Dec 06 08:46:15 crc kubenswrapper[4895]: E1206 08:46:15.491220 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f74fa9d09360d373daef76c54a3d8604b9f17d131d0de0e04f09402aeeaa2a93\": container with ID starting with f74fa9d09360d373daef76c54a3d8604b9f17d131d0de0e04f09402aeeaa2a93 not found: ID does not exist" containerID="f74fa9d09360d373daef76c54a3d8604b9f17d131d0de0e04f09402aeeaa2a93" Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.491266 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74fa9d09360d373daef76c54a3d8604b9f17d131d0de0e04f09402aeeaa2a93"} err="failed to get container status 
\"f74fa9d09360d373daef76c54a3d8604b9f17d131d0de0e04f09402aeeaa2a93\": rpc error: code = NotFound desc = could not find container \"f74fa9d09360d373daef76c54a3d8604b9f17d131d0de0e04f09402aeeaa2a93\": container with ID starting with f74fa9d09360d373daef76c54a3d8604b9f17d131d0de0e04f09402aeeaa2a93 not found: ID does not exist" Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.491294 4895 scope.go:117] "RemoveContainer" containerID="02ba2854f8e0aa0c3acaf1de183dce67da2a15c1b592f4ce44970e445db37164" Dec 06 08:46:15 crc kubenswrapper[4895]: E1206 08:46:15.491652 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02ba2854f8e0aa0c3acaf1de183dce67da2a15c1b592f4ce44970e445db37164\": container with ID starting with 02ba2854f8e0aa0c3acaf1de183dce67da2a15c1b592f4ce44970e445db37164 not found: ID does not exist" containerID="02ba2854f8e0aa0c3acaf1de183dce67da2a15c1b592f4ce44970e445db37164" Dec 06 08:46:15 crc kubenswrapper[4895]: I1206 08:46:15.491687 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02ba2854f8e0aa0c3acaf1de183dce67da2a15c1b592f4ce44970e445db37164"} err="failed to get container status \"02ba2854f8e0aa0c3acaf1de183dce67da2a15c1b592f4ce44970e445db37164\": rpc error: code = NotFound desc = could not find container \"02ba2854f8e0aa0c3acaf1de183dce67da2a15c1b592f4ce44970e445db37164\": container with ID starting with 02ba2854f8e0aa0c3acaf1de183dce67da2a15c1b592f4ce44970e445db37164 not found: ID does not exist" Dec 06 08:46:16 crc kubenswrapper[4895]: I1206 08:46:16.060626 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13252842-7480-4bf4-bed3-92d7e86de1ba" path="/var/lib/kubelet/pods/13252842-7480-4bf4-bed3-92d7e86de1ba/volumes" Dec 06 08:46:16 crc kubenswrapper[4895]: I1206 08:46:16.789238 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 06 08:46:16 crc kubenswrapper[4895]: I1206 08:46:16.789564 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 06 08:46:16 crc kubenswrapper[4895]: I1206 08:46:16.873701 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 06 08:46:17 crc kubenswrapper[4895]: I1206 08:46:17.564716 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 06 08:46:18 crc kubenswrapper[4895]: I1206 08:46:18.322787 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 06 08:46:18 crc kubenswrapper[4895]: I1206 08:46:18.323192 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 06 08:46:18 crc kubenswrapper[4895]: I1206 08:46:18.420353 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 06 08:46:18 crc kubenswrapper[4895]: I1206 08:46:18.551978 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 06 08:46:24 crc kubenswrapper[4895]: I1206 08:46:24.518591 4895 generic.go:334] "Generic (PLEG): container finished" podID="a5d651b4-cf8f-4a0b-821e-26933be91b0a" containerID="e56841ef76a7288499b482028860c7b498b835a2947a17991e769c441912ba33" exitCode=0 Dec 06 08:46:24 crc kubenswrapper[4895]: I1206 08:46:24.518663 4895 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5d651b4-cf8f-4a0b-821e-26933be91b0a","Type":"ContainerDied","Data":"e56841ef76a7288499b482028860c7b498b835a2947a17991e769c441912ba33"} Dec 06 08:46:24 crc kubenswrapper[4895]: I1206 08:46:24.521323 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771","Type":"ContainerDied","Data":"a4037b5cc459e2759dd90451518e0cff0bbd9e57ead4373a0c09013da92bbc1c"} Dec 06 08:46:24 crc kubenswrapper[4895]: I1206 08:46:24.521321 4895 generic.go:334] "Generic (PLEG): container finished" podID="1f8e9eac-08e6-4cc4-8f0d-f577bedcc771" containerID="a4037b5cc459e2759dd90451518e0cff0bbd9e57ead4373a0c09013da92bbc1c" exitCode=0 Dec 06 08:46:25 crc kubenswrapper[4895]: I1206 08:46:25.529462 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771","Type":"ContainerStarted","Data":"dc8814581867da797e761b913db4e13f46de0d2cf9019c389974610a250f8020"} Dec 06 08:46:25 crc kubenswrapper[4895]: I1206 08:46:25.530042 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 06 08:46:25 crc kubenswrapper[4895]: I1206 08:46:25.531647 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5d651b4-cf8f-4a0b-821e-26933be91b0a","Type":"ContainerStarted","Data":"91ec7ec37fcdeb4182c4b88bdc2b1e8216a6a3ab0255b680bc736788c9a0f432"} Dec 06 08:46:25 crc kubenswrapper[4895]: I1206 08:46:25.531843 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:25 crc kubenswrapper[4895]: I1206 08:46:25.552517 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.414058521 podStartE2EDuration="1m2.552495829s" podCreationTimestamp="2025-12-06 08:45:23 +0000 UTC" firstStartedPulling="2025-12-06 08:45:25.867866218 +0000 UTC m=+6488.269255088" lastFinishedPulling="2025-12-06 08:45:50.006303516 +0000 UTC m=+6512.407692396" observedRunningTime="2025-12-06 08:46:25.550303831 +0000 UTC m=+6547.951692711" watchObservedRunningTime="2025-12-06 08:46:25.552495829 +0000 UTC m=+6547.953884699" Dec 06 08:46:25 crc kubenswrapper[4895]: I1206 08:46:25.575508 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371975.279285 podStartE2EDuration="1m1.575489946s" podCreationTimestamp="2025-12-06 08:45:24 +0000 UTC" firstStartedPulling="2025-12-06 08:45:26.145034911 +0000 UTC m=+6488.546423771" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 08:46:25.571952061 +0000 UTC m=+6547.973340951" watchObservedRunningTime="2025-12-06 08:46:25.575489946 +0000 UTC m=+6547.976878836" Dec 06 08:46:29 crc kubenswrapper[4895]: I1206 08:46:29.696588 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:46:29 crc kubenswrapper[4895]: I1206 08:46:29.696992 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:46:29 crc kubenswrapper[4895]: I1206 08:46:29.697060 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 08:46:29 crc kubenswrapper[4895]: I1206 08:46:29.697831 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 08:46:29 crc kubenswrapper[4895]: I1206 08:46:29.697902 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" gracePeriod=600 Dec 06 08:46:29 crc kubenswrapper[4895]: E1206 08:46:29.829898 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:46:30 crc kubenswrapper[4895]: I1206 08:46:30.575202 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" exitCode=0 Dec 06 08:46:30 crc kubenswrapper[4895]: I1206 08:46:30.575259 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684"} Dec 06 08:46:30 crc kubenswrapper[4895]: I1206 08:46:30.575315 4895 scope.go:117] "RemoveContainer" containerID="a00fec192193020904dedce9aa39c5d891233a290dfd499a30b7f87d36b50b8c" Dec 06 08:46:30 crc kubenswrapper[4895]: I1206 08:46:30.575995 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:46:30 crc kubenswrapper[4895]: E1206 08:46:30.576336 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:46:35 crc kubenswrapper[4895]: I1206 08:46:35.331532 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 06 08:46:35 crc kubenswrapper[4895]: I1206 08:46:35.671707 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.061331 4895 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb"] Dec 06 08:46:40 crc kubenswrapper[4895]: E1206 08:46:40.062212 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4bc656-402d-4496-ade5-7ff6f30105ce" containerName="registry-server" Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.062231 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4bc656-402d-4496-ade5-7ff6f30105ce" containerName="registry-server" Dec 06 08:46:40 crc kubenswrapper[4895]: E1206 08:46:40.062249 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13252842-7480-4bf4-bed3-92d7e86de1ba" containerName="dnsmasq-dns" Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.062257 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="13252842-7480-4bf4-bed3-92d7e86de1ba" containerName="dnsmasq-dns" Dec 06 08:46:40 crc kubenswrapper[4895]: E1206 08:46:40.062272 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4bc656-402d-4496-ade5-7ff6f30105ce" containerName="extract-utilities" Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.062278 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4bc656-402d-4496-ade5-7ff6f30105ce" containerName="extract-utilities" Dec 06 08:46:40 crc kubenswrapper[4895]: E1206 08:46:40.062288 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4bc656-402d-4496-ade5-7ff6f30105ce" containerName="extract-content" Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.062293 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4bc656-402d-4496-ade5-7ff6f30105ce" containerName="extract-content" Dec 06 08:46:40 crc kubenswrapper[4895]: E1206 08:46:40.062310 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13252842-7480-4bf4-bed3-92d7e86de1ba" containerName="init" Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.062316 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="13252842-7480-4bf4-bed3-92d7e86de1ba" containerName="init" Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.062489 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="13252842-7480-4bf4-bed3-92d7e86de1ba" containerName="dnsmasq-dns" Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.062509 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc4bc656-402d-4496-ade5-7ff6f30105ce" containerName="registry-server" Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.063389 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb" Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.076085 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb"] Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.154984 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e-dns-svc\") pod \"dnsmasq-dns-6f7f6bbcbf-pjmtb\" (UID: \"d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e\") " pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb" Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.155064 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2wzm\" (UniqueName: \"kubernetes.io/projected/d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e-kube-api-access-j2wzm\") pod \"dnsmasq-dns-6f7f6bbcbf-pjmtb\" (UID: \"d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e\") " pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb" Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.155107 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e-config\") pod \"dnsmasq-dns-6f7f6bbcbf-pjmtb\" (UID: \"d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e\") " pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb" Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.256153 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2wzm\" (UniqueName: \"kubernetes.io/projected/d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e-kube-api-access-j2wzm\") pod \"dnsmasq-dns-6f7f6bbcbf-pjmtb\" (UID: \"d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e\") " pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb" Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.256213 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e-config\") pod \"dnsmasq-dns-6f7f6bbcbf-pjmtb\" (UID: \"d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e\") " pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb" Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.256283 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e-dns-svc\") pod \"dnsmasq-dns-6f7f6bbcbf-pjmtb\" (UID: \"d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e\") " pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb" Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.257153 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e-dns-svc\") pod \"dnsmasq-dns-6f7f6bbcbf-pjmtb\" (UID: \"d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e\") " pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb" Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.257224 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e-config\") pod \"dnsmasq-dns-6f7f6bbcbf-pjmtb\" (UID: \"d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e\") " pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb" Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.281633 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2wzm\" (UniqueName: 
\"kubernetes.io/projected/d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e-kube-api-access-j2wzm\") pod \"dnsmasq-dns-6f7f6bbcbf-pjmtb\" (UID: \"d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e\") " pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb" Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.392975 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb" Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.697117 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 08:46:40 crc kubenswrapper[4895]: I1206 08:46:40.826081 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb"] Dec 06 08:46:41 crc kubenswrapper[4895]: I1206 08:46:41.328545 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 08:46:41 crc kubenswrapper[4895]: I1206 08:46:41.705844 4895 generic.go:334] "Generic (PLEG): container finished" podID="d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e" containerID="82ceb4c618d804955a0ce7527d1a375013f7df7ca0dc2b0c408682ae5f3f06f8" exitCode=0 Dec 06 08:46:41 crc kubenswrapper[4895]: I1206 08:46:41.705909 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb" event={"ID":"d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e","Type":"ContainerDied","Data":"82ceb4c618d804955a0ce7527d1a375013f7df7ca0dc2b0c408682ae5f3f06f8"} Dec 06 08:46:41 crc kubenswrapper[4895]: I1206 08:46:41.706312 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb" event={"ID":"d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e","Type":"ContainerStarted","Data":"a3b78cc68b6560d645b6cb396dd73a4dce1e58d5768d0f685b1d5f25b6165e73"} Dec 06 08:46:42 crc kubenswrapper[4895]: I1206 08:46:42.050953 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:46:42 crc kubenswrapper[4895]: E1206 08:46:42.051536 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:46:42 crc kubenswrapper[4895]: I1206 08:46:42.561895 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="1f8e9eac-08e6-4cc4-8f0d-f577bedcc771" containerName="rabbitmq" containerID="cri-o://dc8814581867da797e761b913db4e13f46de0d2cf9019c389974610a250f8020" gracePeriod=604799 Dec 06 08:46:42 crc kubenswrapper[4895]: I1206 08:46:42.716044 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb" event={"ID":"d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e","Type":"ContainerStarted","Data":"31bb9e075211687b3e53580e31370331a172f355f1367740e8fdb9bcaaf1acc2"} Dec 06 08:46:42 crc kubenswrapper[4895]: I1206 08:46:42.716179 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb" Dec 06 08:46:43 crc kubenswrapper[4895]: I1206 08:46:43.046353 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="a5d651b4-cf8f-4a0b-821e-26933be91b0a" containerName="rabbitmq" 
containerID="cri-o://91ec7ec37fcdeb4182c4b88bdc2b1e8216a6a3ab0255b680bc736788c9a0f432" gracePeriod=604799 Dec 06 08:46:45 crc kubenswrapper[4895]: I1206 08:46:45.328262 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="1f8e9eac-08e6-4cc4-8f0d-f577bedcc771" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.251:5672: connect: connection refused" Dec 06 08:46:45 crc kubenswrapper[4895]: I1206 08:46:45.669132 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="a5d651b4-cf8f-4a0b-821e-26933be91b0a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.252:5672: connect: connection refused" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.153561 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.192845 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb" podStartSLOduration=9.192776258 podStartE2EDuration="9.192776258s" podCreationTimestamp="2025-12-06 08:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 08:46:42.741244392 +0000 UTC m=+6565.142633282" watchObservedRunningTime="2025-12-06 08:46:49.192776258 +0000 UTC m=+6571.594165128" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.322289 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-rabbitmq-confd\") pod \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.322382 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-erlang-cookie-secret\") pod \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.322594 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\") pod \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.322631 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-server-conf\") pod \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.322665 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-pod-info\") pod \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.322718 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24lbc\" (UniqueName: \"kubernetes.io/projected/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-kube-api-access-24lbc\") pod 
\"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.322742 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-plugins-conf\") pod \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.322790 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-rabbitmq-plugins\") pod \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.322810 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-rabbitmq-erlang-cookie\") pod \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\" (UID: \"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771\") " Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.323591 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "1f8e9eac-08e6-4cc4-8f0d-f577bedcc771" (UID: "1f8e9eac-08e6-4cc4-8f0d-f577bedcc771"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.324250 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "1f8e9eac-08e6-4cc4-8f0d-f577bedcc771" (UID: "1f8e9eac-08e6-4cc4-8f0d-f577bedcc771"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.324544 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "1f8e9eac-08e6-4cc4-8f0d-f577bedcc771" (UID: "1f8e9eac-08e6-4cc4-8f0d-f577bedcc771"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.328576 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-kube-api-access-24lbc" (OuterVolumeSpecName: "kube-api-access-24lbc") pod "1f8e9eac-08e6-4cc4-8f0d-f577bedcc771" (UID: "1f8e9eac-08e6-4cc4-8f0d-f577bedcc771"). InnerVolumeSpecName "kube-api-access-24lbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.336626 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "1f8e9eac-08e6-4cc4-8f0d-f577bedcc771" (UID: "1f8e9eac-08e6-4cc4-8f0d-f577bedcc771"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.336703 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-pod-info" (OuterVolumeSpecName: "pod-info") pod "1f8e9eac-08e6-4cc4-8f0d-f577bedcc771" (UID: "1f8e9eac-08e6-4cc4-8f0d-f577bedcc771"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.347118 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b" (OuterVolumeSpecName: "persistence") pod "1f8e9eac-08e6-4cc4-8f0d-f577bedcc771" (UID: "1f8e9eac-08e6-4cc4-8f0d-f577bedcc771"). InnerVolumeSpecName "pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.348642 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-server-conf" (OuterVolumeSpecName: "server-conf") pod "1f8e9eac-08e6-4cc4-8f0d-f577bedcc771" (UID: "1f8e9eac-08e6-4cc4-8f0d-f577bedcc771"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.417343 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "1f8e9eac-08e6-4cc4-8f0d-f577bedcc771" (UID: "1f8e9eac-08e6-4cc4-8f0d-f577bedcc771"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.424218 4895 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-server-conf\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.424247 4895 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-pod-info\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.424257 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24lbc\" (UniqueName: \"kubernetes.io/projected/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-kube-api-access-24lbc\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.424267 4895 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.424274 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.424284 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.424292 4895 reconciler_common.go:293] 
"Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.424300 4895 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.424345 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\") on node \"crc\" " Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.451220 4895 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.451462 4895 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b") on node "crc" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.526050 4895 reconciler_common.go:293] "Volume detached for volume \"pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.558692 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.730392 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5d651b4-cf8f-4a0b-821e-26933be91b0a-erlang-cookie-secret\") pod \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.730558 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2pp7\" (UniqueName: \"kubernetes.io/projected/a5d651b4-cf8f-4a0b-821e-26933be91b0a-kube-api-access-d2pp7\") pod \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.730585 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5d651b4-cf8f-4a0b-821e-26933be91b0a-pod-info\") pod \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.730617 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5d651b4-cf8f-4a0b-821e-26933be91b0a-rabbitmq-erlang-cookie\") pod \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.730686 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5d651b4-cf8f-4a0b-821e-26933be91b0a-server-conf\") pod \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " 
Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.730795 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\") pod \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.730829 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5d651b4-cf8f-4a0b-821e-26933be91b0a-plugins-conf\") pod \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.730858 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5d651b4-cf8f-4a0b-821e-26933be91b0a-rabbitmq-plugins\") pod \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.730911 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5d651b4-cf8f-4a0b-821e-26933be91b0a-rabbitmq-confd\") pod \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\" (UID: \"a5d651b4-cf8f-4a0b-821e-26933be91b0a\") " Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.731209 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5d651b4-cf8f-4a0b-821e-26933be91b0a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a5d651b4-cf8f-4a0b-821e-26933be91b0a" (UID: "a5d651b4-cf8f-4a0b-821e-26933be91b0a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.731449 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5d651b4-cf8f-4a0b-821e-26933be91b0a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a5d651b4-cf8f-4a0b-821e-26933be91b0a" (UID: "a5d651b4-cf8f-4a0b-821e-26933be91b0a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.731519 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5d651b4-cf8f-4a0b-821e-26933be91b0a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a5d651b4-cf8f-4a0b-821e-26933be91b0a" (UID: "a5d651b4-cf8f-4a0b-821e-26933be91b0a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.734626 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a5d651b4-cf8f-4a0b-821e-26933be91b0a-pod-info" (OuterVolumeSpecName: "pod-info") pod "a5d651b4-cf8f-4a0b-821e-26933be91b0a" (UID: "a5d651b4-cf8f-4a0b-821e-26933be91b0a"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.734860 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5d651b4-cf8f-4a0b-821e-26933be91b0a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a5d651b4-cf8f-4a0b-821e-26933be91b0a" (UID: "a5d651b4-cf8f-4a0b-821e-26933be91b0a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.735508 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5d651b4-cf8f-4a0b-821e-26933be91b0a-kube-api-access-d2pp7" (OuterVolumeSpecName: "kube-api-access-d2pp7") pod "a5d651b4-cf8f-4a0b-821e-26933be91b0a" (UID: "a5d651b4-cf8f-4a0b-821e-26933be91b0a"). InnerVolumeSpecName "kube-api-access-d2pp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.742964 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312" (OuterVolumeSpecName: "persistence") pod "a5d651b4-cf8f-4a0b-821e-26933be91b0a" (UID: "a5d651b4-cf8f-4a0b-821e-26933be91b0a"). InnerVolumeSpecName "pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.757117 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5d651b4-cf8f-4a0b-821e-26933be91b0a-server-conf" (OuterVolumeSpecName: "server-conf") pod "a5d651b4-cf8f-4a0b-821e-26933be91b0a" (UID: "a5d651b4-cf8f-4a0b-821e-26933be91b0a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.792171 4895 generic.go:334] "Generic (PLEG): container finished" podID="1f8e9eac-08e6-4cc4-8f0d-f577bedcc771" containerID="dc8814581867da797e761b913db4e13f46de0d2cf9019c389974610a250f8020" exitCode=0 Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.792244 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771","Type":"ContainerDied","Data":"dc8814581867da797e761b913db4e13f46de0d2cf9019c389974610a250f8020"} Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.792279 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1f8e9eac-08e6-4cc4-8f0d-f577bedcc771","Type":"ContainerDied","Data":"b4129fbb0a8188534ba9c7f982dbc99a275b4f54818a67c0df2d240f94eb1d52"} Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.792310 4895 scope.go:117] "RemoveContainer" containerID="dc8814581867da797e761b913db4e13f46de0d2cf9019c389974610a250f8020" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.792457 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.796382 4895 generic.go:334] "Generic (PLEG): container finished" podID="a5d651b4-cf8f-4a0b-821e-26933be91b0a" containerID="91ec7ec37fcdeb4182c4b88bdc2b1e8216a6a3ab0255b680bc736788c9a0f432" exitCode=0 Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.796431 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5d651b4-cf8f-4a0b-821e-26933be91b0a","Type":"ContainerDied","Data":"91ec7ec37fcdeb4182c4b88bdc2b1e8216a6a3ab0255b680bc736788c9a0f432"} Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.796463 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5d651b4-cf8f-4a0b-821e-26933be91b0a","Type":"ContainerDied","Data":"5aa31d13c15f234ec1642f0dac568ca51f6fe41f1c21ea27226a8b6f3df262da"} Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.796566 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.820455 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5d651b4-cf8f-4a0b-821e-26933be91b0a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a5d651b4-cf8f-4a0b-821e-26933be91b0a" (UID: "a5d651b4-cf8f-4a0b-821e-26933be91b0a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.832819 4895 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5d651b4-cf8f-4a0b-821e-26933be91b0a-server-conf\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.832890 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\") on node \"crc\" " Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.832904 4895 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5d651b4-cf8f-4a0b-821e-26933be91b0a-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.832913 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5d651b4-cf8f-4a0b-821e-26933be91b0a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.832922 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5d651b4-cf8f-4a0b-821e-26933be91b0a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.832930 4895 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5d651b4-cf8f-4a0b-821e-26933be91b0a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.832954 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2pp7\" (UniqueName: \"kubernetes.io/projected/a5d651b4-cf8f-4a0b-821e-26933be91b0a-kube-api-access-d2pp7\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 
08:46:49.832963 4895 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5d651b4-cf8f-4a0b-821e-26933be91b0a-pod-info\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.832972 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5d651b4-cf8f-4a0b-821e-26933be91b0a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.848224 4895 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.848499 4895 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312") on node "crc" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.876597 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.883221 4895 scope.go:117] "RemoveContainer" containerID="a4037b5cc459e2759dd90451518e0cff0bbd9e57ead4373a0c09013da92bbc1c" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.895647 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.902605 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 08:46:49 crc kubenswrapper[4895]: E1206 08:46:49.903068 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d651b4-cf8f-4a0b-821e-26933be91b0a" containerName="setup-container" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.903094 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d651b4-cf8f-4a0b-821e-26933be91b0a" containerName="setup-container" Dec 06 08:46:49 crc kubenswrapper[4895]: E1206 08:46:49.903115 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8e9eac-08e6-4cc4-8f0d-f577bedcc771" containerName="rabbitmq" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.903125 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8e9eac-08e6-4cc4-8f0d-f577bedcc771" containerName="rabbitmq" Dec 06 08:46:49 crc kubenswrapper[4895]: E1206 08:46:49.903149 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d651b4-cf8f-4a0b-821e-26933be91b0a" containerName="rabbitmq" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.903158 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d651b4-cf8f-4a0b-821e-26933be91b0a" containerName="rabbitmq" Dec 06 08:46:49 crc kubenswrapper[4895]: E1206 08:46:49.903172 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8e9eac-08e6-4cc4-8f0d-f577bedcc771" containerName="setup-container" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.903180 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8e9eac-08e6-4cc4-8f0d-f577bedcc771" containerName="setup-container" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.903429 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5d651b4-cf8f-4a0b-821e-26933be91b0a" containerName="rabbitmq" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.903459 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8e9eac-08e6-4cc4-8f0d-f577bedcc771" 
containerName="rabbitmq" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.905102 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.907838 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.908179 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.908325 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.908448 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.908681 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zw5vd" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.923055 4895 scope.go:117] "RemoveContainer" containerID="dc8814581867da797e761b913db4e13f46de0d2cf9019c389974610a250f8020" Dec 06 08:46:49 crc kubenswrapper[4895]: E1206 08:46:49.923981 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc8814581867da797e761b913db4e13f46de0d2cf9019c389974610a250f8020\": container with ID starting with dc8814581867da797e761b913db4e13f46de0d2cf9019c389974610a250f8020 not found: ID does not exist" containerID="dc8814581867da797e761b913db4e13f46de0d2cf9019c389974610a250f8020" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.924136 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc8814581867da797e761b913db4e13f46de0d2cf9019c389974610a250f8020"} err="failed to get container status \"dc8814581867da797e761b913db4e13f46de0d2cf9019c389974610a250f8020\": rpc error: code = NotFound desc = could not find container \"dc8814581867da797e761b913db4e13f46de0d2cf9019c389974610a250f8020\": container with ID starting with dc8814581867da797e761b913db4e13f46de0d2cf9019c389974610a250f8020 not found: ID does not exist" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.924246 4895 scope.go:117] "RemoveContainer" containerID="a4037b5cc459e2759dd90451518e0cff0bbd9e57ead4373a0c09013da92bbc1c" Dec 06 08:46:49 crc kubenswrapper[4895]: E1206 08:46:49.924733 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4037b5cc459e2759dd90451518e0cff0bbd9e57ead4373a0c09013da92bbc1c\": container with ID starting with a4037b5cc459e2759dd90451518e0cff0bbd9e57ead4373a0c09013da92bbc1c not found: ID does not exist" containerID="a4037b5cc459e2759dd90451518e0cff0bbd9e57ead4373a0c09013da92bbc1c" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.924845 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4037b5cc459e2759dd90451518e0cff0bbd9e57ead4373a0c09013da92bbc1c"} err="failed to get container status \"a4037b5cc459e2759dd90451518e0cff0bbd9e57ead4373a0c09013da92bbc1c\": rpc error: code = NotFound desc = could not find container \"a4037b5cc459e2759dd90451518e0cff0bbd9e57ead4373a0c09013da92bbc1c\": container with ID starting with a4037b5cc459e2759dd90451518e0cff0bbd9e57ead4373a0c09013da92bbc1c not found: ID does not exist" Dec 06 08:46:49 crc 
kubenswrapper[4895]: I1206 08:46:49.924958 4895 scope.go:117] "RemoveContainer" containerID="91ec7ec37fcdeb4182c4b88bdc2b1e8216a6a3ab0255b680bc736788c9a0f432" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.931747 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.933940 4895 reconciler_common.go:293] "Volume detached for volume \"pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.962087 4895 scope.go:117] "RemoveContainer" containerID="e56841ef76a7288499b482028860c7b498b835a2947a17991e769c441912ba33" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.981984 4895 scope.go:117] "RemoveContainer" containerID="91ec7ec37fcdeb4182c4b88bdc2b1e8216a6a3ab0255b680bc736788c9a0f432" Dec 06 08:46:49 crc kubenswrapper[4895]: E1206 08:46:49.982299 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91ec7ec37fcdeb4182c4b88bdc2b1e8216a6a3ab0255b680bc736788c9a0f432\": container with ID starting with 91ec7ec37fcdeb4182c4b88bdc2b1e8216a6a3ab0255b680bc736788c9a0f432 not found: ID does not exist" containerID="91ec7ec37fcdeb4182c4b88bdc2b1e8216a6a3ab0255b680bc736788c9a0f432" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.982330 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91ec7ec37fcdeb4182c4b88bdc2b1e8216a6a3ab0255b680bc736788c9a0f432"} err="failed to get container status \"91ec7ec37fcdeb4182c4b88bdc2b1e8216a6a3ab0255b680bc736788c9a0f432\": rpc error: code = NotFound desc = could not find container \"91ec7ec37fcdeb4182c4b88bdc2b1e8216a6a3ab0255b680bc736788c9a0f432\": container with ID starting with 91ec7ec37fcdeb4182c4b88bdc2b1e8216a6a3ab0255b680bc736788c9a0f432 not found: ID does not exist" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.982352 4895 scope.go:117] "RemoveContainer" containerID="e56841ef76a7288499b482028860c7b498b835a2947a17991e769c441912ba33" Dec 06 08:46:49 crc kubenswrapper[4895]: E1206 08:46:49.982696 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e56841ef76a7288499b482028860c7b498b835a2947a17991e769c441912ba33\": container with ID starting with e56841ef76a7288499b482028860c7b498b835a2947a17991e769c441912ba33 not found: ID does not exist" containerID="e56841ef76a7288499b482028860c7b498b835a2947a17991e769c441912ba33" Dec 06 08:46:49 crc kubenswrapper[4895]: I1206 08:46:49.982717 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e56841ef76a7288499b482028860c7b498b835a2947a17991e769c441912ba33"} err="failed to get container status \"e56841ef76a7288499b482028860c7b498b835a2947a17991e769c441912ba33\": rpc error: code = NotFound desc = could not find container \"e56841ef76a7288499b482028860c7b498b835a2947a17991e769c441912ba33\": container with ID starting with e56841ef76a7288499b482028860c7b498b835a2947a17991e769c441912ba33 not found: ID does not exist" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.035600 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/27ea9905-46c5-48e1-a558-7c8e87a4cea7-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.035656 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwcd9\" (UniqueName: \"kubernetes.io/projected/27ea9905-46c5-48e1-a558-7c8e87a4cea7-kube-api-access-cwcd9\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.035773 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/27ea9905-46c5-48e1-a558-7c8e87a4cea7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.035980 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/27ea9905-46c5-48e1-a558-7c8e87a4cea7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.036050 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/27ea9905-46c5-48e1-a558-7c8e87a4cea7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.036086 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/27ea9905-46c5-48e1-a558-7c8e87a4cea7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.036205 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/27ea9905-46c5-48e1-a558-7c8e87a4cea7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.036253 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.036403 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/27ea9905-46c5-48e1-a558-7c8e87a4cea7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.058711 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f8e9eac-08e6-4cc4-8f0d-f577bedcc771" path="/var/lib/kubelet/pods/1f8e9eac-08e6-4cc4-8f0d-f577bedcc771/volumes" Dec 06 08:46:50 crc 
kubenswrapper[4895]: I1206 08:46:50.116839 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.126811 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.139614 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/27ea9905-46c5-48e1-a558-7c8e87a4cea7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.139697 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/27ea9905-46c5-48e1-a558-7c8e87a4cea7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.139744 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/27ea9905-46c5-48e1-a558-7c8e87a4cea7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.139804 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/27ea9905-46c5-48e1-a558-7c8e87a4cea7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.139846 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.139942 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/27ea9905-46c5-48e1-a558-7c8e87a4cea7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.139979 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/27ea9905-46c5-48e1-a558-7c8e87a4cea7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.140030 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwcd9\" (UniqueName: \"kubernetes.io/projected/27ea9905-46c5-48e1-a558-7c8e87a4cea7-kube-api-access-cwcd9\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.140062 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/27ea9905-46c5-48e1-a558-7c8e87a4cea7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.140874 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/27ea9905-46c5-48e1-a558-7c8e87a4cea7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.141147 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/27ea9905-46c5-48e1-a558-7c8e87a4cea7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.141151 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/27ea9905-46c5-48e1-a558-7c8e87a4cea7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.142538 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/27ea9905-46c5-48e1-a558-7c8e87a4cea7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.147773 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/27ea9905-46c5-48e1-a558-7c8e87a4cea7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.161541 4895 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.161590 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c1ae945f5829a78561c2276c0656b6b6a9ea56ba17faf9c14448e66177e2c63f/globalmount\"" pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.163198 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/27ea9905-46c5-48e1-a558-7c8e87a4cea7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.163961 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/27ea9905-46c5-48e1-a558-7c8e87a4cea7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.167341 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwcd9\" (UniqueName: \"kubernetes.io/projected/27ea9905-46c5-48e1-a558-7c8e87a4cea7-kube-api-access-cwcd9\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.178015 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.179317 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.183140 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9trmd" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.183360 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.183559 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.183743 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.187729 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.202769 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.217798 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0320727-988c-454c-ba8f-2cbf0ea1299b\") pod \"rabbitmq-server-0\" (UID: \"27ea9905-46c5-48e1-a558-7c8e87a4cea7\") " pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.264132 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.342521 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/996212ae-f3a7-4f9d-ade6-6f82051b6561-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.342565 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/996212ae-f3a7-4f9d-ade6-6f82051b6561-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.342600 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.342643 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/996212ae-f3a7-4f9d-ade6-6f82051b6561-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.342720 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/996212ae-f3a7-4f9d-ade6-6f82051b6561-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.342748 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/996212ae-f3a7-4f9d-ade6-6f82051b6561-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.342790 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/996212ae-f3a7-4f9d-ade6-6f82051b6561-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.342837 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/996212ae-f3a7-4f9d-ade6-6f82051b6561-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.342862 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq82k\" (UniqueName: \"kubernetes.io/projected/996212ae-f3a7-4f9d-ade6-6f82051b6561-kube-api-access-pq82k\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.395712 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.443797 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/996212ae-f3a7-4f9d-ade6-6f82051b6561-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.443843 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq82k\" (UniqueName: \"kubernetes.io/projected/996212ae-f3a7-4f9d-ade6-6f82051b6561-kube-api-access-pq82k\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.443873 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/996212ae-f3a7-4f9d-ade6-6f82051b6561-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.443890 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/996212ae-f3a7-4f9d-ade6-6f82051b6561-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc 
kubenswrapper[4895]: I1206 08:46:50.443910 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.443938 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/996212ae-f3a7-4f9d-ade6-6f82051b6561-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.443988 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/996212ae-f3a7-4f9d-ade6-6f82051b6561-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.444007 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/996212ae-f3a7-4f9d-ade6-6f82051b6561-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.444036 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/996212ae-f3a7-4f9d-ade6-6f82051b6561-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.444615 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/996212ae-f3a7-4f9d-ade6-6f82051b6561-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.444945 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/996212ae-f3a7-4f9d-ade6-6f82051b6561-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.445735 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/996212ae-f3a7-4f9d-ade6-6f82051b6561-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.446052 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/996212ae-f3a7-4f9d-ade6-6f82051b6561-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.449922 4895 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/996212ae-f3a7-4f9d-ade6-6f82051b6561-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.450054 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/996212ae-f3a7-4f9d-ade6-6f82051b6561-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.450154 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/996212ae-f3a7-4f9d-ade6-6f82051b6561-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.453102 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68ddc8d76c-68844"] Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.453378 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68ddc8d76c-68844" podUID="5dc55496-9124-4b1e-9d42-0856b061b58c" containerName="dnsmasq-dns" containerID="cri-o://e3a703ee4f6c231de386294d7b169fcb4d7ea059f599255a7c69f7050a48263a" gracePeriod=10 Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.457020 4895 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.457071 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/33470f280f6e752ab832dd8abace1b4f365644972edc17da41d44b2954438cdf/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.464556 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq82k\" (UniqueName: \"kubernetes.io/projected/996212ae-f3a7-4f9d-ade6-6f82051b6561-kube-api-access-pq82k\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.496844 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc06831d-61e6-43ee-abb4-dc9ea8a14312\") pod \"rabbitmq-cell1-server-0\" (UID: \"996212ae-f3a7-4f9d-ade6-6f82051b6561\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.513029 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.750977 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 08:46:50 crc kubenswrapper[4895]: W1206 08:46:50.756237 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27ea9905_46c5_48e1_a558_7c8e87a4cea7.slice/crio-2541a1f843fad68bc263febc5c14597c97541062b4eb703e7d56a68a04ebd964 WatchSource:0}: Error finding container 2541a1f843fad68bc263febc5c14597c97541062b4eb703e7d56a68a04ebd964: Status 404 returned error can't find the container with id 2541a1f843fad68bc263febc5c14597c97541062b4eb703e7d56a68a04ebd964 Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.805911 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"27ea9905-46c5-48e1-a558-7c8e87a4cea7","Type":"ContainerStarted","Data":"2541a1f843fad68bc263febc5c14597c97541062b4eb703e7d56a68a04ebd964"} Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.808858 4895 generic.go:334] "Generic (PLEG): container finished" podID="5dc55496-9124-4b1e-9d42-0856b061b58c" containerID="e3a703ee4f6c231de386294d7b169fcb4d7ea059f599255a7c69f7050a48263a" exitCode=0 Dec 06 08:46:50 crc kubenswrapper[4895]: I1206 08:46:50.809021 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68ddc8d76c-68844" event={"ID":"5dc55496-9124-4b1e-9d42-0856b061b58c","Type":"ContainerDied","Data":"e3a703ee4f6c231de386294d7b169fcb4d7ea059f599255a7c69f7050a48263a"} Dec 06 08:46:51 crc kubenswrapper[4895]: I1206 08:46:51.001939 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68ddc8d76c-68844" Dec 06 08:46:51 crc kubenswrapper[4895]: I1206 08:46:51.012306 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 08:46:51 crc kubenswrapper[4895]: I1206 08:46:51.156129 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dc55496-9124-4b1e-9d42-0856b061b58c-config\") pod \"5dc55496-9124-4b1e-9d42-0856b061b58c\" (UID: \"5dc55496-9124-4b1e-9d42-0856b061b58c\") " Dec 06 08:46:51 crc kubenswrapper[4895]: I1206 08:46:51.156249 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dc55496-9124-4b1e-9d42-0856b061b58c-dns-svc\") pod \"5dc55496-9124-4b1e-9d42-0856b061b58c\" (UID: \"5dc55496-9124-4b1e-9d42-0856b061b58c\") " Dec 06 08:46:51 crc kubenswrapper[4895]: I1206 08:46:51.156343 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4pn4\" (UniqueName: \"kubernetes.io/projected/5dc55496-9124-4b1e-9d42-0856b061b58c-kube-api-access-v4pn4\") pod \"5dc55496-9124-4b1e-9d42-0856b061b58c\" (UID: \"5dc55496-9124-4b1e-9d42-0856b061b58c\") " Dec 06 08:46:51 crc kubenswrapper[4895]: I1206 08:46:51.160410 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dc55496-9124-4b1e-9d42-0856b061b58c-kube-api-access-v4pn4" (OuterVolumeSpecName: "kube-api-access-v4pn4") pod "5dc55496-9124-4b1e-9d42-0856b061b58c" (UID: "5dc55496-9124-4b1e-9d42-0856b061b58c"). InnerVolumeSpecName "kube-api-access-v4pn4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:46:51 crc kubenswrapper[4895]: I1206 08:46:51.190254 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dc55496-9124-4b1e-9d42-0856b061b58c-config" (OuterVolumeSpecName: "config") pod "5dc55496-9124-4b1e-9d42-0856b061b58c" (UID: "5dc55496-9124-4b1e-9d42-0856b061b58c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 08:46:51 crc kubenswrapper[4895]: I1206 08:46:51.199182 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dc55496-9124-4b1e-9d42-0856b061b58c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5dc55496-9124-4b1e-9d42-0856b061b58c" (UID: "5dc55496-9124-4b1e-9d42-0856b061b58c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 08:46:51 crc kubenswrapper[4895]: I1206 08:46:51.259585 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dc55496-9124-4b1e-9d42-0856b061b58c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:51 crc kubenswrapper[4895]: I1206 08:46:51.259621 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4pn4\" (UniqueName: \"kubernetes.io/projected/5dc55496-9124-4b1e-9d42-0856b061b58c-kube-api-access-v4pn4\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:51 crc kubenswrapper[4895]: I1206 08:46:51.259641 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dc55496-9124-4b1e-9d42-0856b061b58c-config\") on node \"crc\" DevicePath \"\"" Dec 06 08:46:51 crc kubenswrapper[4895]: I1206 08:46:51.831787 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68ddc8d76c-68844" Dec 06 08:46:51 crc kubenswrapper[4895]: I1206 08:46:51.831757 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68ddc8d76c-68844" event={"ID":"5dc55496-9124-4b1e-9d42-0856b061b58c","Type":"ContainerDied","Data":"f63b511449485e8154478efc9abf51453b0610090498e8191e95b69e971b7948"} Dec 06 08:46:51 crc kubenswrapper[4895]: I1206 08:46:51.832966 4895 scope.go:117] "RemoveContainer" containerID="e3a703ee4f6c231de386294d7b169fcb4d7ea059f599255a7c69f7050a48263a" Dec 06 08:46:51 crc kubenswrapper[4895]: I1206 08:46:51.838521 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"996212ae-f3a7-4f9d-ade6-6f82051b6561","Type":"ContainerStarted","Data":"cc89b6ec42cd58d861a5b325953ad69e6a14cbd724af97190e5f4787dedd9774"} Dec 06 08:46:51 crc kubenswrapper[4895]: I1206 08:46:51.871431 4895 scope.go:117] "RemoveContainer" containerID="7a4fb925432adf5a1e9640dcdd3de1c0444c60f699be04361111c8243aaee4ec" Dec 06 08:46:51 crc kubenswrapper[4895]: I1206 08:46:51.885294 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68ddc8d76c-68844"] Dec 06 08:46:51 crc kubenswrapper[4895]: I1206 08:46:51.890200 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68ddc8d76c-68844"] Dec 06 08:46:52 crc kubenswrapper[4895]: I1206 08:46:52.070513 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dc55496-9124-4b1e-9d42-0856b061b58c" path="/var/lib/kubelet/pods/5dc55496-9124-4b1e-9d42-0856b061b58c/volumes" Dec 06 08:46:52 crc kubenswrapper[4895]: I1206 08:46:52.071515 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5d651b4-cf8f-4a0b-821e-26933be91b0a" path="/var/lib/kubelet/pods/a5d651b4-cf8f-4a0b-821e-26933be91b0a/volumes" Dec 06 08:46:52 crc kubenswrapper[4895]: I1206 08:46:52.851551 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"27ea9905-46c5-48e1-a558-7c8e87a4cea7","Type":"ContainerStarted","Data":"8c999c9339480b1a3e556df3478ff41b994e1628701e723234d15b15ce7ae71f"} Dec 06 08:46:52 crc kubenswrapper[4895]: I1206 08:46:52.858677 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"996212ae-f3a7-4f9d-ade6-6f82051b6561","Type":"ContainerStarted","Data":"2445ef48c3bd413fc8d16ebdf5965d912ad6b470c6aa2d45a1d55f7976d3de85"} Dec 06 08:46:57 crc kubenswrapper[4895]: I1206 08:46:57.050245 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:46:57 crc kubenswrapper[4895]: E1206 08:46:57.051074 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:47:10 crc kubenswrapper[4895]: I1206 08:47:10.051061 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:47:10 crc kubenswrapper[4895]: E1206 08:47:10.052006 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:47:12 crc kubenswrapper[4895]: I1206 08:47:12.977446 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2l4b7"] Dec 06 08:47:12 crc kubenswrapper[4895]: E1206 08:47:12.978353 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dc55496-9124-4b1e-9d42-0856b061b58c" containerName="init" Dec 06 08:47:12 crc kubenswrapper[4895]: I1206 08:47:12.978386 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dc55496-9124-4b1e-9d42-0856b061b58c" containerName="init" Dec 06 08:47:12 crc kubenswrapper[4895]: E1206 08:47:12.978410 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dc55496-9124-4b1e-9d42-0856b061b58c" containerName="dnsmasq-dns" Dec 06 08:47:12 crc kubenswrapper[4895]: I1206 08:47:12.978418 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dc55496-9124-4b1e-9d42-0856b061b58c" containerName="dnsmasq-dns" Dec 06 08:47:12 crc kubenswrapper[4895]: I1206 08:47:12.978834 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dc55496-9124-4b1e-9d42-0856b061b58c" containerName="dnsmasq-dns" Dec 06 08:47:12 crc kubenswrapper[4895]: I1206 08:47:12.981036 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2l4b7" Dec 06 08:47:12 crc kubenswrapper[4895]: I1206 08:47:12.990136 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2l4b7"] Dec 06 08:47:13 crc kubenswrapper[4895]: I1206 08:47:13.121026 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/308a4a96-d48d-417e-959f-6d2e7e41169b-catalog-content\") pod \"community-operators-2l4b7\" (UID: \"308a4a96-d48d-417e-959f-6d2e7e41169b\") " pod="openshift-marketplace/community-operators-2l4b7" Dec 06 08:47:13 crc kubenswrapper[4895]: I1206 08:47:13.121111 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/308a4a96-d48d-417e-959f-6d2e7e41169b-utilities\") pod \"community-operators-2l4b7\" (UID: \"308a4a96-d48d-417e-959f-6d2e7e41169b\") " pod="openshift-marketplace/community-operators-2l4b7" Dec 06 08:47:13 crc kubenswrapper[4895]: I1206 08:47:13.121253 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8lt7\" (UniqueName: \"kubernetes.io/projected/308a4a96-d48d-417e-959f-6d2e7e41169b-kube-api-access-b8lt7\") pod \"community-operators-2l4b7\" (UID: \"308a4a96-d48d-417e-959f-6d2e7e41169b\") " pod="openshift-marketplace/community-operators-2l4b7" Dec 06 08:47:13 crc kubenswrapper[4895]: I1206 08:47:13.222432 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/308a4a96-d48d-417e-959f-6d2e7e41169b-utilities\") pod \"community-operators-2l4b7\" (UID: \"308a4a96-d48d-417e-959f-6d2e7e41169b\") " pod="openshift-marketplace/community-operators-2l4b7" Dec 06 08:47:13 crc kubenswrapper[4895]: I1206 08:47:13.222576 4895 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-b8lt7\" (UniqueName: \"kubernetes.io/projected/308a4a96-d48d-417e-959f-6d2e7e41169b-kube-api-access-b8lt7\") pod \"community-operators-2l4b7\" (UID: \"308a4a96-d48d-417e-959f-6d2e7e41169b\") " pod="openshift-marketplace/community-operators-2l4b7" Dec 06 08:47:13 crc kubenswrapper[4895]: I1206 08:47:13.222650 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/308a4a96-d48d-417e-959f-6d2e7e41169b-catalog-content\") pod \"community-operators-2l4b7\" (UID: \"308a4a96-d48d-417e-959f-6d2e7e41169b\") " pod="openshift-marketplace/community-operators-2l4b7" Dec 06 08:47:13 crc kubenswrapper[4895]: I1206 08:47:13.223086 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/308a4a96-d48d-417e-959f-6d2e7e41169b-utilities\") pod \"community-operators-2l4b7\" (UID: \"308a4a96-d48d-417e-959f-6d2e7e41169b\") " pod="openshift-marketplace/community-operators-2l4b7" Dec 06 08:47:13 crc kubenswrapper[4895]: I1206 08:47:13.223096 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/308a4a96-d48d-417e-959f-6d2e7e41169b-catalog-content\") pod \"community-operators-2l4b7\" (UID: \"308a4a96-d48d-417e-959f-6d2e7e41169b\") " pod="openshift-marketplace/community-operators-2l4b7" Dec 06 08:47:13 crc kubenswrapper[4895]: I1206 08:47:13.241827 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8lt7\" (UniqueName: \"kubernetes.io/projected/308a4a96-d48d-417e-959f-6d2e7e41169b-kube-api-access-b8lt7\") pod \"community-operators-2l4b7\" (UID: \"308a4a96-d48d-417e-959f-6d2e7e41169b\") " pod="openshift-marketplace/community-operators-2l4b7" Dec 06 08:47:13 crc kubenswrapper[4895]: I1206 08:47:13.300817 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2l4b7" Dec 06 08:47:13 crc kubenswrapper[4895]: I1206 08:47:13.887618 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2l4b7"] Dec 06 08:47:13 crc kubenswrapper[4895]: W1206 08:47:13.893716 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod308a4a96_d48d_417e_959f_6d2e7e41169b.slice/crio-3f340d00c7fb493225723d1703a5dc27ea8efbfb4d0e8d36c32f3fc53778b2a8 WatchSource:0}: Error finding container 3f340d00c7fb493225723d1703a5dc27ea8efbfb4d0e8d36c32f3fc53778b2a8: Status 404 returned error can't find the container with id 3f340d00c7fb493225723d1703a5dc27ea8efbfb4d0e8d36c32f3fc53778b2a8 Dec 06 08:47:14 crc kubenswrapper[4895]: I1206 08:47:14.024028 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2l4b7" event={"ID":"308a4a96-d48d-417e-959f-6d2e7e41169b","Type":"ContainerStarted","Data":"3f340d00c7fb493225723d1703a5dc27ea8efbfb4d0e8d36c32f3fc53778b2a8"} Dec 06 08:47:15 crc kubenswrapper[4895]: I1206 08:47:15.032233 4895 generic.go:334] "Generic (PLEG): container finished" podID="308a4a96-d48d-417e-959f-6d2e7e41169b" containerID="f90b130272c124607e7f33eedbb73e0c784ae09137c3bd375911db2db7a0faf7" exitCode=0 Dec 06 08:47:15 crc kubenswrapper[4895]: I1206 08:47:15.032337 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2l4b7" event={"ID":"308a4a96-d48d-417e-959f-6d2e7e41169b","Type":"ContainerDied","Data":"f90b130272c124607e7f33eedbb73e0c784ae09137c3bd375911db2db7a0faf7"} Dec 06 08:47:16 crc kubenswrapper[4895]: I1206 08:47:16.041447 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2l4b7" event={"ID":"308a4a96-d48d-417e-959f-6d2e7e41169b","Type":"ContainerStarted","Data":"2edc0bf3a9e52cdb7001cc70d1fa65d35c3e094699e4e748100a4afa0bb20ad2"} Dec 06 08:47:16 crc kubenswrapper[4895]: I1206 08:47:16.173464 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lr229"] Dec 06 08:47:16 crc kubenswrapper[4895]: I1206 08:47:16.178729 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lr229" Dec 06 08:47:16 crc kubenswrapper[4895]: I1206 08:47:16.185206 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lr229"] Dec 06 08:47:16 crc kubenswrapper[4895]: I1206 08:47:16.268127 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef7274e1-6476-423a-a8b7-a2eb1a416bcd-utilities\") pod \"redhat-operators-lr229\" (UID: \"ef7274e1-6476-423a-a8b7-a2eb1a416bcd\") " pod="openshift-marketplace/redhat-operators-lr229" Dec 06 08:47:16 crc kubenswrapper[4895]: I1206 08:47:16.268196 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gqz9\" (UniqueName: \"kubernetes.io/projected/ef7274e1-6476-423a-a8b7-a2eb1a416bcd-kube-api-access-7gqz9\") pod \"redhat-operators-lr229\" (UID: \"ef7274e1-6476-423a-a8b7-a2eb1a416bcd\") " pod="openshift-marketplace/redhat-operators-lr229" Dec 06 08:47:16 crc kubenswrapper[4895]: I1206 08:47:16.268222 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef7274e1-6476-423a-a8b7-a2eb1a416bcd-catalog-content\") pod \"redhat-operators-lr229\" (UID: \"ef7274e1-6476-423a-a8b7-a2eb1a416bcd\") " pod="openshift-marketplace/redhat-operators-lr229" Dec 06 08:47:16 crc kubenswrapper[4895]: I1206 08:47:16.369195 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef7274e1-6476-423a-a8b7-a2eb1a416bcd-catalog-content\") pod \"redhat-operators-lr229\" (UID: \"ef7274e1-6476-423a-a8b7-a2eb1a416bcd\") " pod="openshift-marketplace/redhat-operators-lr229" Dec 06 08:47:16 crc kubenswrapper[4895]: I1206 08:47:16.369327 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef7274e1-6476-423a-a8b7-a2eb1a416bcd-utilities\") pod \"redhat-operators-lr229\" (UID: \"ef7274e1-6476-423a-a8b7-a2eb1a416bcd\") " pod="openshift-marketplace/redhat-operators-lr229" Dec 06 08:47:16 crc kubenswrapper[4895]: I1206 08:47:16.369377 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gqz9\" (UniqueName: \"kubernetes.io/projected/ef7274e1-6476-423a-a8b7-a2eb1a416bcd-kube-api-access-7gqz9\") pod \"redhat-operators-lr229\" (UID: \"ef7274e1-6476-423a-a8b7-a2eb1a416bcd\") " pod="openshift-marketplace/redhat-operators-lr229" Dec 06 08:47:16 crc kubenswrapper[4895]: I1206 08:47:16.370131 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef7274e1-6476-423a-a8b7-a2eb1a416bcd-catalog-content\") pod \"redhat-operators-lr229\" (UID: \"ef7274e1-6476-423a-a8b7-a2eb1a416bcd\") " pod="openshift-marketplace/redhat-operators-lr229" Dec 06 08:47:16 crc kubenswrapper[4895]: I1206 08:47:16.370154 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef7274e1-6476-423a-a8b7-a2eb1a416bcd-utilities\") pod \"redhat-operators-lr229\" (UID: \"ef7274e1-6476-423a-a8b7-a2eb1a416bcd\") " pod="openshift-marketplace/redhat-operators-lr229" Dec 06 08:47:16 crc kubenswrapper[4895]: I1206 08:47:16.389092 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7gqz9\" (UniqueName: \"kubernetes.io/projected/ef7274e1-6476-423a-a8b7-a2eb1a416bcd-kube-api-access-7gqz9\") pod \"redhat-operators-lr229\" (UID: \"ef7274e1-6476-423a-a8b7-a2eb1a416bcd\") " pod="openshift-marketplace/redhat-operators-lr229" Dec 06 08:47:16 crc kubenswrapper[4895]: I1206 08:47:16.551260 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lr229" Dec 06 08:47:16 crc kubenswrapper[4895]: I1206 08:47:16.972273 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lr229"] Dec 06 08:47:16 crc kubenswrapper[4895]: W1206 08:47:16.975135 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef7274e1_6476_423a_a8b7_a2eb1a416bcd.slice/crio-1d3490b9966a965e59b5838b08ca6a3922fd7c6e4609cc8613377f00104f1a5a WatchSource:0}: Error finding container 1d3490b9966a965e59b5838b08ca6a3922fd7c6e4609cc8613377f00104f1a5a: Status 404 returned error can't find the container with id 1d3490b9966a965e59b5838b08ca6a3922fd7c6e4609cc8613377f00104f1a5a Dec 06 08:47:17 crc kubenswrapper[4895]: I1206 08:47:17.049373 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lr229" event={"ID":"ef7274e1-6476-423a-a8b7-a2eb1a416bcd","Type":"ContainerStarted","Data":"1d3490b9966a965e59b5838b08ca6a3922fd7c6e4609cc8613377f00104f1a5a"} Dec 06 08:47:17 crc kubenswrapper[4895]: I1206 08:47:17.051623 4895 generic.go:334] "Generic (PLEG): container finished" podID="308a4a96-d48d-417e-959f-6d2e7e41169b" containerID="2edc0bf3a9e52cdb7001cc70d1fa65d35c3e094699e4e748100a4afa0bb20ad2" exitCode=0 Dec 06 08:47:17 crc kubenswrapper[4895]: I1206 08:47:17.051793 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2l4b7" event={"ID":"308a4a96-d48d-417e-959f-6d2e7e41169b","Type":"ContainerDied","Data":"2edc0bf3a9e52cdb7001cc70d1fa65d35c3e094699e4e748100a4afa0bb20ad2"} Dec 06 08:47:18 crc kubenswrapper[4895]: I1206 08:47:18.065192 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2l4b7" event={"ID":"308a4a96-d48d-417e-959f-6d2e7e41169b","Type":"ContainerStarted","Data":"96137fc87f43e7c29a6a5725ca745ac452c8d5a13a17f1983d8ab6c21708c871"} Dec 06 08:47:18 crc kubenswrapper[4895]: I1206 08:47:18.067735 4895 generic.go:334] "Generic (PLEG): container finished" podID="ef7274e1-6476-423a-a8b7-a2eb1a416bcd" containerID="e0ef9994d87d9e2f77d04d5ef9265a6b8bf69458f02bbeeb7e0573efda8e5b2d" exitCode=0 Dec 06 08:47:18 crc kubenswrapper[4895]: I1206 08:47:18.067805 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lr229" event={"ID":"ef7274e1-6476-423a-a8b7-a2eb1a416bcd","Type":"ContainerDied","Data":"e0ef9994d87d9e2f77d04d5ef9265a6b8bf69458f02bbeeb7e0573efda8e5b2d"} Dec 06 08:47:18 crc kubenswrapper[4895]: I1206 08:47:18.115138 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2l4b7" podStartSLOduration=3.703824726 podStartE2EDuration="6.115093204s" podCreationTimestamp="2025-12-06 08:47:12 +0000 UTC" firstStartedPulling="2025-12-06 08:47:15.033679627 +0000 UTC m=+6597.435068497" lastFinishedPulling="2025-12-06 08:47:17.444948105 +0000 UTC m=+6599.846336975" observedRunningTime="2025-12-06 08:47:18.111120408 +0000 UTC m=+6600.512509278" watchObservedRunningTime="2025-12-06 
08:47:18.115093204 +0000 UTC m=+6600.516482074" Dec 06 08:47:20 crc kubenswrapper[4895]: I1206 08:47:20.082860 4895 generic.go:334] "Generic (PLEG): container finished" podID="ef7274e1-6476-423a-a8b7-a2eb1a416bcd" containerID="3024bc9e7a5dc97d935f92ae5102e4f135445cc55c52270e98f1de8aec9d6b1f" exitCode=0 Dec 06 08:47:20 crc kubenswrapper[4895]: I1206 08:47:20.082975 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lr229" event={"ID":"ef7274e1-6476-423a-a8b7-a2eb1a416bcd","Type":"ContainerDied","Data":"3024bc9e7a5dc97d935f92ae5102e4f135445cc55c52270e98f1de8aec9d6b1f"} Dec 06 08:47:21 crc kubenswrapper[4895]: I1206 08:47:21.091913 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lr229" event={"ID":"ef7274e1-6476-423a-a8b7-a2eb1a416bcd","Type":"ContainerStarted","Data":"93fffec0fa2791358b129262c8587ded277faec0f7e3088366311d73e57f0966"} Dec 06 08:47:21 crc kubenswrapper[4895]: I1206 08:47:21.115707 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lr229" podStartSLOduration=2.608349382 podStartE2EDuration="5.115687045s" podCreationTimestamp="2025-12-06 08:47:16 +0000 UTC" firstStartedPulling="2025-12-06 08:47:18.069688677 +0000 UTC m=+6600.471077547" lastFinishedPulling="2025-12-06 08:47:20.57702633 +0000 UTC m=+6602.978415210" observedRunningTime="2025-12-06 08:47:21.111893593 +0000 UTC m=+6603.513282473" watchObservedRunningTime="2025-12-06 08:47:21.115687045 +0000 UTC m=+6603.517075915" Dec 06 08:47:23 crc kubenswrapper[4895]: I1206 08:47:23.301857 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2l4b7" Dec 06 08:47:23 crc kubenswrapper[4895]: I1206 08:47:23.301917 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2l4b7" Dec 06 08:47:23 crc kubenswrapper[4895]: I1206 08:47:23.340975 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2l4b7" Dec 06 08:47:24 crc kubenswrapper[4895]: I1206 08:47:24.179825 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2l4b7" Dec 06 08:47:25 crc kubenswrapper[4895]: I1206 08:47:25.050501 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:47:25 crc kubenswrapper[4895]: E1206 08:47:25.050734 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:47:26 crc kubenswrapper[4895]: E1206 08:47:26.415973 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27ea9905_46c5_48e1_a558_7c8e87a4cea7.slice/crio-conmon-8c999c9339480b1a3e556df3478ff41b994e1628701e723234d15b15ce7ae71f.scope\": RecentStats: unable to find data in memory cache]" Dec 06 08:47:26 crc kubenswrapper[4895]: I1206 08:47:26.552294 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-lr229" Dec 06 08:47:26 crc kubenswrapper[4895]: I1206 08:47:26.554164 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lr229" Dec 06 08:47:26 crc kubenswrapper[4895]: I1206 08:47:26.617123 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lr229" Dec 06 08:47:27 crc kubenswrapper[4895]: I1206 08:47:27.136302 4895 generic.go:334] "Generic (PLEG): container finished" podID="27ea9905-46c5-48e1-a558-7c8e87a4cea7" containerID="8c999c9339480b1a3e556df3478ff41b994e1628701e723234d15b15ce7ae71f" exitCode=0 Dec 06 08:47:27 crc kubenswrapper[4895]: I1206 08:47:27.136380 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"27ea9905-46c5-48e1-a558-7c8e87a4cea7","Type":"ContainerDied","Data":"8c999c9339480b1a3e556df3478ff41b994e1628701e723234d15b15ce7ae71f"} Dec 06 08:47:27 crc kubenswrapper[4895]: I1206 08:47:27.137771 4895 generic.go:334] "Generic (PLEG): container finished" podID="996212ae-f3a7-4f9d-ade6-6f82051b6561" containerID="2445ef48c3bd413fc8d16ebdf5965d912ad6b470c6aa2d45a1d55f7976d3de85" exitCode=0 Dec 06 08:47:27 crc kubenswrapper[4895]: I1206 08:47:27.137804 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"996212ae-f3a7-4f9d-ade6-6f82051b6561","Type":"ContainerDied","Data":"2445ef48c3bd413fc8d16ebdf5965d912ad6b470c6aa2d45a1d55f7976d3de85"} Dec 06 08:47:27 crc kubenswrapper[4895]: I1206 08:47:27.192705 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lr229" Dec 06 08:47:28 crc kubenswrapper[4895]: I1206 08:47:28.147865 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"27ea9905-46c5-48e1-a558-7c8e87a4cea7","Type":"ContainerStarted","Data":"c38f9b34ccb6c55d135b017dd5499709317ff71b339226b3e10e3e3002cdda30"} Dec 06 08:47:28 crc kubenswrapper[4895]: I1206 08:47:28.152488 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 06 08:47:28 crc kubenswrapper[4895]: I1206 08:47:28.154306 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"996212ae-f3a7-4f9d-ade6-6f82051b6561","Type":"ContainerStarted","Data":"59bc22a1ad58bc3ce10a6a55875981f4cbc7834b792a6b399fbe66fdbd7e9c81"} Dec 06 08:47:28 crc kubenswrapper[4895]: I1206 08:47:28.154850 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:47:28 crc kubenswrapper[4895]: I1206 08:47:28.158165 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2l4b7"] Dec 06 08:47:28 crc kubenswrapper[4895]: I1206 08:47:28.158369 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2l4b7" podUID="308a4a96-d48d-417e-959f-6d2e7e41169b" containerName="registry-server" containerID="cri-o://96137fc87f43e7c29a6a5725ca745ac452c8d5a13a17f1983d8ab6c21708c871" gracePeriod=2 Dec 06 08:47:28 crc kubenswrapper[4895]: I1206 08:47:28.190959 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.190936815 podStartE2EDuration="39.190936815s" podCreationTimestamp="2025-12-06 08:46:49 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 08:47:28.181504043 +0000 UTC m=+6610.582892913" watchObservedRunningTime="2025-12-06 08:47:28.190936815 +0000 UTC m=+6610.592325695" Dec 06 08:47:28 crc kubenswrapper[4895]: I1206 08:47:28.210264 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.210250813 podStartE2EDuration="38.210250813s" podCreationTimestamp="2025-12-06 08:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 08:47:28.205099216 +0000 UTC m=+6610.606488096" watchObservedRunningTime="2025-12-06 08:47:28.210250813 +0000 UTC m=+6610.611639693" Dec 06 08:47:28 crc kubenswrapper[4895]: I1206 08:47:28.539232 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2l4b7" Dec 06 08:47:28 crc kubenswrapper[4895]: I1206 08:47:28.583750 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/308a4a96-d48d-417e-959f-6d2e7e41169b-catalog-content\") pod \"308a4a96-d48d-417e-959f-6d2e7e41169b\" (UID: \"308a4a96-d48d-417e-959f-6d2e7e41169b\") " Dec 06 08:47:28 crc kubenswrapper[4895]: I1206 08:47:28.583809 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8lt7\" (UniqueName: \"kubernetes.io/projected/308a4a96-d48d-417e-959f-6d2e7e41169b-kube-api-access-b8lt7\") pod \"308a4a96-d48d-417e-959f-6d2e7e41169b\" (UID: \"308a4a96-d48d-417e-959f-6d2e7e41169b\") " Dec 06 08:47:28 crc kubenswrapper[4895]: I1206 08:47:28.583859 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/308a4a96-d48d-417e-959f-6d2e7e41169b-utilities\") pod \"308a4a96-d48d-417e-959f-6d2e7e41169b\" (UID: \"308a4a96-d48d-417e-959f-6d2e7e41169b\") " Dec 06 08:47:28 crc kubenswrapper[4895]: I1206 08:47:28.585094 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308a4a96-d48d-417e-959f-6d2e7e41169b-utilities" (OuterVolumeSpecName: "utilities") pod "308a4a96-d48d-417e-959f-6d2e7e41169b" (UID: "308a4a96-d48d-417e-959f-6d2e7e41169b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:47:28 crc kubenswrapper[4895]: I1206 08:47:28.588693 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308a4a96-d48d-417e-959f-6d2e7e41169b-kube-api-access-b8lt7" (OuterVolumeSpecName: "kube-api-access-b8lt7") pod "308a4a96-d48d-417e-959f-6d2e7e41169b" (UID: "308a4a96-d48d-417e-959f-6d2e7e41169b"). InnerVolumeSpecName "kube-api-access-b8lt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:47:28 crc kubenswrapper[4895]: I1206 08:47:28.638446 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308a4a96-d48d-417e-959f-6d2e7e41169b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "308a4a96-d48d-417e-959f-6d2e7e41169b" (UID: "308a4a96-d48d-417e-959f-6d2e7e41169b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:47:28 crc kubenswrapper[4895]: I1206 08:47:28.686276 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/308a4a96-d48d-417e-959f-6d2e7e41169b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:47:28 crc kubenswrapper[4895]: I1206 08:47:28.686608 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8lt7\" (UniqueName: \"kubernetes.io/projected/308a4a96-d48d-417e-959f-6d2e7e41169b-kube-api-access-b8lt7\") on node \"crc\" DevicePath \"\"" Dec 06 08:47:28 crc kubenswrapper[4895]: I1206 08:47:28.686700 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/308a4a96-d48d-417e-959f-6d2e7e41169b-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:47:29 crc kubenswrapper[4895]: I1206 08:47:29.161972 4895 generic.go:334] "Generic (PLEG): container finished" podID="308a4a96-d48d-417e-959f-6d2e7e41169b" containerID="96137fc87f43e7c29a6a5725ca745ac452c8d5a13a17f1983d8ab6c21708c871" exitCode=0 Dec 06 08:47:29 crc kubenswrapper[4895]: I1206 08:47:29.162026 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2l4b7" Dec 06 08:47:29 crc kubenswrapper[4895]: I1206 08:47:29.162043 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2l4b7" event={"ID":"308a4a96-d48d-417e-959f-6d2e7e41169b","Type":"ContainerDied","Data":"96137fc87f43e7c29a6a5725ca745ac452c8d5a13a17f1983d8ab6c21708c871"} Dec 06 08:47:29 crc kubenswrapper[4895]: I1206 08:47:29.162356 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2l4b7" event={"ID":"308a4a96-d48d-417e-959f-6d2e7e41169b","Type":"ContainerDied","Data":"3f340d00c7fb493225723d1703a5dc27ea8efbfb4d0e8d36c32f3fc53778b2a8"} Dec 06 08:47:29 crc kubenswrapper[4895]: I1206 08:47:29.162372 4895 scope.go:117] "RemoveContainer" containerID="96137fc87f43e7c29a6a5725ca745ac452c8d5a13a17f1983d8ab6c21708c871" Dec 06 08:47:29 crc kubenswrapper[4895]: I1206 08:47:29.192357 4895 scope.go:117] "RemoveContainer" containerID="2edc0bf3a9e52cdb7001cc70d1fa65d35c3e094699e4e748100a4afa0bb20ad2" Dec 06 08:47:29 crc kubenswrapper[4895]: I1206 08:47:29.200623 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2l4b7"] Dec 06 08:47:29 crc kubenswrapper[4895]: I1206 08:47:29.210975 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2l4b7"] Dec 06 08:47:29 crc kubenswrapper[4895]: I1206 08:47:29.219603 4895 scope.go:117] "RemoveContainer" containerID="f90b130272c124607e7f33eedbb73e0c784ae09137c3bd375911db2db7a0faf7" Dec 06 08:47:29 crc kubenswrapper[4895]: I1206 08:47:29.250447 4895 scope.go:117] "RemoveContainer" containerID="96137fc87f43e7c29a6a5725ca745ac452c8d5a13a17f1983d8ab6c21708c871" Dec 06 08:47:29 crc kubenswrapper[4895]: E1206 08:47:29.250892 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96137fc87f43e7c29a6a5725ca745ac452c8d5a13a17f1983d8ab6c21708c871\": container with ID starting with 96137fc87f43e7c29a6a5725ca745ac452c8d5a13a17f1983d8ab6c21708c871 not found: ID does not exist" containerID="96137fc87f43e7c29a6a5725ca745ac452c8d5a13a17f1983d8ab6c21708c871" Dec 06 08:47:29 crc kubenswrapper[4895]: I1206 08:47:29.250950 
4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96137fc87f43e7c29a6a5725ca745ac452c8d5a13a17f1983d8ab6c21708c871"} err="failed to get container status \"96137fc87f43e7c29a6a5725ca745ac452c8d5a13a17f1983d8ab6c21708c871\": rpc error: code = NotFound desc = could not find container \"96137fc87f43e7c29a6a5725ca745ac452c8d5a13a17f1983d8ab6c21708c871\": container with ID starting with 96137fc87f43e7c29a6a5725ca745ac452c8d5a13a17f1983d8ab6c21708c871 not found: ID does not exist" Dec 06 08:47:29 crc kubenswrapper[4895]: I1206 08:47:29.250980 4895 scope.go:117] "RemoveContainer" containerID="2edc0bf3a9e52cdb7001cc70d1fa65d35c3e094699e4e748100a4afa0bb20ad2" Dec 06 08:47:29 crc kubenswrapper[4895]: E1206 08:47:29.251390 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2edc0bf3a9e52cdb7001cc70d1fa65d35c3e094699e4e748100a4afa0bb20ad2\": container with ID starting with 2edc0bf3a9e52cdb7001cc70d1fa65d35c3e094699e4e748100a4afa0bb20ad2 not found: ID does not exist" containerID="2edc0bf3a9e52cdb7001cc70d1fa65d35c3e094699e4e748100a4afa0bb20ad2" Dec 06 08:47:29 crc kubenswrapper[4895]: I1206 08:47:29.251431 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2edc0bf3a9e52cdb7001cc70d1fa65d35c3e094699e4e748100a4afa0bb20ad2"} err="failed to get container status \"2edc0bf3a9e52cdb7001cc70d1fa65d35c3e094699e4e748100a4afa0bb20ad2\": rpc error: code = NotFound desc = could not find container \"2edc0bf3a9e52cdb7001cc70d1fa65d35c3e094699e4e748100a4afa0bb20ad2\": container with ID starting with 2edc0bf3a9e52cdb7001cc70d1fa65d35c3e094699e4e748100a4afa0bb20ad2 not found: ID does not exist" Dec 06 08:47:29 crc kubenswrapper[4895]: I1206 08:47:29.251449 4895 scope.go:117] "RemoveContainer" containerID="f90b130272c124607e7f33eedbb73e0c784ae09137c3bd375911db2db7a0faf7" Dec 06 08:47:29 crc kubenswrapper[4895]: E1206 08:47:29.251720 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f90b130272c124607e7f33eedbb73e0c784ae09137c3bd375911db2db7a0faf7\": container with ID starting with f90b130272c124607e7f33eedbb73e0c784ae09137c3bd375911db2db7a0faf7 not found: ID does not exist" containerID="f90b130272c124607e7f33eedbb73e0c784ae09137c3bd375911db2db7a0faf7" Dec 06 08:47:29 crc kubenswrapper[4895]: I1206 08:47:29.251767 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f90b130272c124607e7f33eedbb73e0c784ae09137c3bd375911db2db7a0faf7"} err="failed to get container status \"f90b130272c124607e7f33eedbb73e0c784ae09137c3bd375911db2db7a0faf7\": rpc error: code = NotFound desc = could not find container \"f90b130272c124607e7f33eedbb73e0c784ae09137c3bd375911db2db7a0faf7\": container with ID starting with f90b130272c124607e7f33eedbb73e0c784ae09137c3bd375911db2db7a0faf7 not found: ID does not exist" Dec 06 08:47:30 crc kubenswrapper[4895]: I1206 08:47:30.061751 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308a4a96-d48d-417e-959f-6d2e7e41169b" path="/var/lib/kubelet/pods/308a4a96-d48d-417e-959f-6d2e7e41169b/volumes" Dec 06 08:47:31 crc kubenswrapper[4895]: I1206 08:47:31.757035 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lr229"] Dec 06 08:47:31 crc kubenswrapper[4895]: I1206 08:47:31.758390 4895 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-operators-lr229" podUID="ef7274e1-6476-423a-a8b7-a2eb1a416bcd" containerName="registry-server" containerID="cri-o://93fffec0fa2791358b129262c8587ded277faec0f7e3088366311d73e57f0966" gracePeriod=2 Dec 06 08:47:32 crc kubenswrapper[4895]: I1206 08:47:32.188300 4895 generic.go:334] "Generic (PLEG): container finished" podID="ef7274e1-6476-423a-a8b7-a2eb1a416bcd" containerID="93fffec0fa2791358b129262c8587ded277faec0f7e3088366311d73e57f0966" exitCode=0 Dec 06 08:47:32 crc kubenswrapper[4895]: I1206 08:47:32.188337 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lr229" event={"ID":"ef7274e1-6476-423a-a8b7-a2eb1a416bcd","Type":"ContainerDied","Data":"93fffec0fa2791358b129262c8587ded277faec0f7e3088366311d73e57f0966"} Dec 06 08:47:32 crc kubenswrapper[4895]: I1206 08:47:32.188735 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lr229" event={"ID":"ef7274e1-6476-423a-a8b7-a2eb1a416bcd","Type":"ContainerDied","Data":"1d3490b9966a965e59b5838b08ca6a3922fd7c6e4609cc8613377f00104f1a5a"} Dec 06 08:47:32 crc kubenswrapper[4895]: I1206 08:47:32.188749 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d3490b9966a965e59b5838b08ca6a3922fd7c6e4609cc8613377f00104f1a5a" Dec 06 08:47:32 crc kubenswrapper[4895]: I1206 08:47:32.192018 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lr229" Dec 06 08:47:32 crc kubenswrapper[4895]: I1206 08:47:32.251008 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gqz9\" (UniqueName: \"kubernetes.io/projected/ef7274e1-6476-423a-a8b7-a2eb1a416bcd-kube-api-access-7gqz9\") pod \"ef7274e1-6476-423a-a8b7-a2eb1a416bcd\" (UID: \"ef7274e1-6476-423a-a8b7-a2eb1a416bcd\") " Dec 06 08:47:32 crc kubenswrapper[4895]: I1206 08:47:32.251072 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef7274e1-6476-423a-a8b7-a2eb1a416bcd-utilities\") pod \"ef7274e1-6476-423a-a8b7-a2eb1a416bcd\" (UID: \"ef7274e1-6476-423a-a8b7-a2eb1a416bcd\") " Dec 06 08:47:32 crc kubenswrapper[4895]: I1206 08:47:32.251099 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef7274e1-6476-423a-a8b7-a2eb1a416bcd-catalog-content\") pod \"ef7274e1-6476-423a-a8b7-a2eb1a416bcd\" (UID: \"ef7274e1-6476-423a-a8b7-a2eb1a416bcd\") " Dec 06 08:47:32 crc kubenswrapper[4895]: I1206 08:47:32.259504 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef7274e1-6476-423a-a8b7-a2eb1a416bcd-utilities" (OuterVolumeSpecName: "utilities") pod "ef7274e1-6476-423a-a8b7-a2eb1a416bcd" (UID: "ef7274e1-6476-423a-a8b7-a2eb1a416bcd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:47:32 crc kubenswrapper[4895]: I1206 08:47:32.264908 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef7274e1-6476-423a-a8b7-a2eb1a416bcd-kube-api-access-7gqz9" (OuterVolumeSpecName: "kube-api-access-7gqz9") pod "ef7274e1-6476-423a-a8b7-a2eb1a416bcd" (UID: "ef7274e1-6476-423a-a8b7-a2eb1a416bcd"). InnerVolumeSpecName "kube-api-access-7gqz9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:47:32 crc kubenswrapper[4895]: I1206 08:47:32.353339 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gqz9\" (UniqueName: \"kubernetes.io/projected/ef7274e1-6476-423a-a8b7-a2eb1a416bcd-kube-api-access-7gqz9\") on node \"crc\" DevicePath \"\"" Dec 06 08:47:32 crc kubenswrapper[4895]: I1206 08:47:32.353407 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef7274e1-6476-423a-a8b7-a2eb1a416bcd-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:47:32 crc kubenswrapper[4895]: I1206 08:47:32.364017 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef7274e1-6476-423a-a8b7-a2eb1a416bcd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef7274e1-6476-423a-a8b7-a2eb1a416bcd" (UID: "ef7274e1-6476-423a-a8b7-a2eb1a416bcd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:47:32 crc kubenswrapper[4895]: I1206 08:47:32.455380 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef7274e1-6476-423a-a8b7-a2eb1a416bcd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:47:33 crc kubenswrapper[4895]: I1206 08:47:33.198282 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lr229" Dec 06 08:47:33 crc kubenswrapper[4895]: I1206 08:47:33.245090 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lr229"] Dec 06 08:47:33 crc kubenswrapper[4895]: I1206 08:47:33.250375 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lr229"] Dec 06 08:47:34 crc kubenswrapper[4895]: I1206 08:47:34.064019 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef7274e1-6476-423a-a8b7-a2eb1a416bcd" path="/var/lib/kubelet/pods/ef7274e1-6476-423a-a8b7-a2eb1a416bcd/volumes" Dec 06 08:47:36 crc kubenswrapper[4895]: I1206 08:47:36.050943 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:47:36 crc kubenswrapper[4895]: E1206 08:47:36.051437 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:47:40 crc kubenswrapper[4895]: I1206 08:47:40.267679 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 06 08:47:40 crc kubenswrapper[4895]: I1206 08:47:40.515508 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 08:47:46.389410 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5jr78"] Dec 06 08:47:46 crc kubenswrapper[4895]: E1206 08:47:46.390449 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="308a4a96-d48d-417e-959f-6d2e7e41169b" containerName="extract-utilities" Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 
08:47:46.390484 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="308a4a96-d48d-417e-959f-6d2e7e41169b" containerName="extract-utilities" Dec 06 08:47:46 crc kubenswrapper[4895]: E1206 08:47:46.390506 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="308a4a96-d48d-417e-959f-6d2e7e41169b" containerName="registry-server" Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 08:47:46.390515 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="308a4a96-d48d-417e-959f-6d2e7e41169b" containerName="registry-server" Dec 06 08:47:46 crc kubenswrapper[4895]: E1206 08:47:46.390541 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7274e1-6476-423a-a8b7-a2eb1a416bcd" containerName="extract-content" Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 08:47:46.390551 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7274e1-6476-423a-a8b7-a2eb1a416bcd" containerName="extract-content" Dec 06 08:47:46 crc kubenswrapper[4895]: E1206 08:47:46.390573 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7274e1-6476-423a-a8b7-a2eb1a416bcd" containerName="extract-utilities" Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 08:47:46.390581 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7274e1-6476-423a-a8b7-a2eb1a416bcd" containerName="extract-utilities" Dec 06 08:47:46 crc kubenswrapper[4895]: E1206 08:47:46.390596 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7274e1-6476-423a-a8b7-a2eb1a416bcd" containerName="registry-server" Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 08:47:46.390604 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7274e1-6476-423a-a8b7-a2eb1a416bcd" containerName="registry-server" Dec 06 08:47:46 crc kubenswrapper[4895]: E1206 08:47:46.390619 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="308a4a96-d48d-417e-959f-6d2e7e41169b" containerName="extract-content" Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 08:47:46.390627 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="308a4a96-d48d-417e-959f-6d2e7e41169b" containerName="extract-content" Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 08:47:46.390832 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef7274e1-6476-423a-a8b7-a2eb1a416bcd" containerName="registry-server" Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 08:47:46.390855 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="308a4a96-d48d-417e-959f-6d2e7e41169b" containerName="registry-server" Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 08:47:46.392359 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5jr78" Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 08:47:46.396436 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5jr78"] Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 08:47:46.508151 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l5b6\" (UniqueName: \"kubernetes.io/projected/ed6b1a35-62b4-492e-86e4-b2b7c15be0b4-kube-api-access-9l5b6\") pod \"redhat-marketplace-5jr78\" (UID: \"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4\") " pod="openshift-marketplace/redhat-marketplace-5jr78" Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 08:47:46.508229 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed6b1a35-62b4-492e-86e4-b2b7c15be0b4-utilities\") pod \"redhat-marketplace-5jr78\" (UID: \"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4\") " pod="openshift-marketplace/redhat-marketplace-5jr78" Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 08:47:46.508365 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed6b1a35-62b4-492e-86e4-b2b7c15be0b4-catalog-content\") pod \"redhat-marketplace-5jr78\" (UID: \"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4\") " pod="openshift-marketplace/redhat-marketplace-5jr78" Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 08:47:46.609693 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed6b1a35-62b4-492e-86e4-b2b7c15be0b4-catalog-content\") pod \"redhat-marketplace-5jr78\" (UID: \"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4\") " pod="openshift-marketplace/redhat-marketplace-5jr78" Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 08:47:46.609780 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l5b6\" (UniqueName: \"kubernetes.io/projected/ed6b1a35-62b4-492e-86e4-b2b7c15be0b4-kube-api-access-9l5b6\") pod \"redhat-marketplace-5jr78\" (UID: \"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4\") " pod="openshift-marketplace/redhat-marketplace-5jr78" Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 08:47:46.609824 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed6b1a35-62b4-492e-86e4-b2b7c15be0b4-utilities\") pod \"redhat-marketplace-5jr78\" (UID: \"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4\") " pod="openshift-marketplace/redhat-marketplace-5jr78" Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 08:47:46.610414 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed6b1a35-62b4-492e-86e4-b2b7c15be0b4-utilities\") pod \"redhat-marketplace-5jr78\" (UID: \"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4\") " pod="openshift-marketplace/redhat-marketplace-5jr78" Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 08:47:46.610724 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed6b1a35-62b4-492e-86e4-b2b7c15be0b4-catalog-content\") pod \"redhat-marketplace-5jr78\" (UID: \"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4\") " pod="openshift-marketplace/redhat-marketplace-5jr78" Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 08:47:46.630654 4895 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9l5b6\" (UniqueName: \"kubernetes.io/projected/ed6b1a35-62b4-492e-86e4-b2b7c15be0b4-kube-api-access-9l5b6\") pod \"redhat-marketplace-5jr78\" (UID: \"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4\") " pod="openshift-marketplace/redhat-marketplace-5jr78" Dec 06 08:47:46 crc kubenswrapper[4895]: I1206 08:47:46.715322 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5jr78" Dec 06 08:47:47 crc kubenswrapper[4895]: I1206 08:47:47.145629 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5jr78"] Dec 06 08:47:47 crc kubenswrapper[4895]: I1206 08:47:47.321682 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jr78" event={"ID":"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4","Type":"ContainerStarted","Data":"87aa1c5a294a101e66c282cf0fc875fde5b27eb80ef36a3330ee92d989fcdbb1"} Dec 06 08:47:48 crc kubenswrapper[4895]: I1206 08:47:48.346945 4895 generic.go:334] "Generic (PLEG): container finished" podID="ed6b1a35-62b4-492e-86e4-b2b7c15be0b4" containerID="b2c14d5d752f45a8429d811ebabb81daf8f113c534daa2e7ba0809ce5cddf39d" exitCode=0 Dec 06 08:47:48 crc kubenswrapper[4895]: I1206 08:47:48.347028 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jr78" event={"ID":"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4","Type":"ContainerDied","Data":"b2c14d5d752f45a8429d811ebabb81daf8f113c534daa2e7ba0809ce5cddf39d"} Dec 06 08:47:49 crc kubenswrapper[4895]: I1206 08:47:49.358002 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jr78" event={"ID":"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4","Type":"ContainerStarted","Data":"15e62f58f3730548133a06033f89c16a9c4bfa806ed056910fe892e57a9765df"} Dec 06 08:47:50 crc kubenswrapper[4895]: I1206 08:47:50.373250 4895 generic.go:334] "Generic (PLEG): container finished" podID="ed6b1a35-62b4-492e-86e4-b2b7c15be0b4" containerID="15e62f58f3730548133a06033f89c16a9c4bfa806ed056910fe892e57a9765df" exitCode=0 Dec 06 08:47:50 crc kubenswrapper[4895]: I1206 08:47:50.373355 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jr78" event={"ID":"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4","Type":"ContainerDied","Data":"15e62f58f3730548133a06033f89c16a9c4bfa806ed056910fe892e57a9765df"} Dec 06 08:47:51 crc kubenswrapper[4895]: I1206 08:47:51.050895 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:47:51 crc kubenswrapper[4895]: E1206 08:47:51.051354 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:47:51 crc kubenswrapper[4895]: I1206 08:47:51.383275 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jr78" event={"ID":"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4","Type":"ContainerStarted","Data":"a6193666c50f3a172b7b24c853afce7fcb52a44f2462c6584c24206737aa9c46"} Dec 06 08:47:51 crc kubenswrapper[4895]: I1206 08:47:51.407390 4895 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-5jr78" podStartSLOduration=3.00500753 podStartE2EDuration="5.407370499s" podCreationTimestamp="2025-12-06 08:47:46 +0000 UTC" firstStartedPulling="2025-12-06 08:47:48.349897793 +0000 UTC m=+6630.751286673" lastFinishedPulling="2025-12-06 08:47:50.752260732 +0000 UTC m=+6633.153649642" observedRunningTime="2025-12-06 08:47:51.402255392 +0000 UTC m=+6633.803644282" watchObservedRunningTime="2025-12-06 08:47:51.407370499 +0000 UTC m=+6633.808759369" Dec 06 08:47:51 crc kubenswrapper[4895]: I1206 08:47:51.656844 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1-default"] Dec 06 08:47:51 crc kubenswrapper[4895]: I1206 08:47:51.657982 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Dec 06 08:47:51 crc kubenswrapper[4895]: I1206 08:47:51.660786 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-z2v77" Dec 06 08:47:51 crc kubenswrapper[4895]: I1206 08:47:51.664046 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Dec 06 08:47:51 crc kubenswrapper[4895]: I1206 08:47:51.803870 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb2gg\" (UniqueName: \"kubernetes.io/projected/98b25ab0-c165-46aa-b5ab-d590a5cffea0-kube-api-access-tb2gg\") pod \"mariadb-client-1-default\" (UID: \"98b25ab0-c165-46aa-b5ab-d590a5cffea0\") " pod="openstack/mariadb-client-1-default" Dec 06 08:47:51 crc kubenswrapper[4895]: I1206 08:47:51.905378 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb2gg\" (UniqueName: \"kubernetes.io/projected/98b25ab0-c165-46aa-b5ab-d590a5cffea0-kube-api-access-tb2gg\") pod \"mariadb-client-1-default\" (UID: \"98b25ab0-c165-46aa-b5ab-d590a5cffea0\") " pod="openstack/mariadb-client-1-default" Dec 06 08:47:51 crc kubenswrapper[4895]: I1206 08:47:51.929770 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb2gg\" (UniqueName: \"kubernetes.io/projected/98b25ab0-c165-46aa-b5ab-d590a5cffea0-kube-api-access-tb2gg\") pod \"mariadb-client-1-default\" (UID: \"98b25ab0-c165-46aa-b5ab-d590a5cffea0\") " pod="openstack/mariadb-client-1-default" Dec 06 08:47:51 crc kubenswrapper[4895]: I1206 08:47:51.984828 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Dec 06 08:47:52 crc kubenswrapper[4895]: I1206 08:47:52.501613 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Dec 06 08:47:53 crc kubenswrapper[4895]: I1206 08:47:53.401703 4895 generic.go:334] "Generic (PLEG): container finished" podID="98b25ab0-c165-46aa-b5ab-d590a5cffea0" containerID="2b68e19433c04b0b39656404c7be8d70d7c49f6356a53ad19e5d7b035e9c4de8" exitCode=0 Dec 06 08:47:53 crc kubenswrapper[4895]: I1206 08:47:53.401794 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"98b25ab0-c165-46aa-b5ab-d590a5cffea0","Type":"ContainerDied","Data":"2b68e19433c04b0b39656404c7be8d70d7c49f6356a53ad19e5d7b035e9c4de8"} Dec 06 08:47:53 crc kubenswrapper[4895]: I1206 08:47:53.402100 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"98b25ab0-c165-46aa-b5ab-d590a5cffea0","Type":"ContainerStarted","Data":"7d79a12115664b0e2a3644dec9ea0fdefed02ff4ce1221a850585a8cd4d6d94d"} Dec 06 08:47:54 crc kubenswrapper[4895]: I1206 08:47:54.776761 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Dec 06 08:47:54 crc kubenswrapper[4895]: I1206 08:47:54.803584 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-default_98b25ab0-c165-46aa-b5ab-d590a5cffea0/mariadb-client-1-default/0.log" Dec 06 08:47:54 crc kubenswrapper[4895]: I1206 08:47:54.827214 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-default"] Dec 06 08:47:54 crc kubenswrapper[4895]: I1206 08:47:54.832836 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-default"] Dec 06 08:47:54 crc kubenswrapper[4895]: I1206 08:47:54.849998 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb2gg\" (UniqueName: \"kubernetes.io/projected/98b25ab0-c165-46aa-b5ab-d590a5cffea0-kube-api-access-tb2gg\") pod \"98b25ab0-c165-46aa-b5ab-d590a5cffea0\" (UID: \"98b25ab0-c165-46aa-b5ab-d590a5cffea0\") " Dec 06 08:47:54 crc kubenswrapper[4895]: I1206 08:47:54.855400 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98b25ab0-c165-46aa-b5ab-d590a5cffea0-kube-api-access-tb2gg" (OuterVolumeSpecName: "kube-api-access-tb2gg") pod "98b25ab0-c165-46aa-b5ab-d590a5cffea0" (UID: "98b25ab0-c165-46aa-b5ab-d590a5cffea0"). InnerVolumeSpecName "kube-api-access-tb2gg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:47:54 crc kubenswrapper[4895]: I1206 08:47:54.951836 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb2gg\" (UniqueName: \"kubernetes.io/projected/98b25ab0-c165-46aa-b5ab-d590a5cffea0-kube-api-access-tb2gg\") on node \"crc\" DevicePath \"\"" Dec 06 08:47:55 crc kubenswrapper[4895]: I1206 08:47:55.220819 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2-default"] Dec 06 08:47:55 crc kubenswrapper[4895]: E1206 08:47:55.221189 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b25ab0-c165-46aa-b5ab-d590a5cffea0" containerName="mariadb-client-1-default" Dec 06 08:47:55 crc kubenswrapper[4895]: I1206 08:47:55.221211 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b25ab0-c165-46aa-b5ab-d590a5cffea0" containerName="mariadb-client-1-default" Dec 06 08:47:55 crc kubenswrapper[4895]: I1206 08:47:55.221446 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b25ab0-c165-46aa-b5ab-d590a5cffea0" containerName="mariadb-client-1-default" Dec 06 08:47:55 crc kubenswrapper[4895]: I1206 08:47:55.222129 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Dec 06 08:47:55 crc kubenswrapper[4895]: I1206 08:47:55.228428 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Dec 06 08:47:55 crc kubenswrapper[4895]: I1206 08:47:55.358858 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk4vq\" (UniqueName: \"kubernetes.io/projected/866dbc7b-6d5c-4d8e-8e67-b2a643d8c719-kube-api-access-lk4vq\") pod \"mariadb-client-2-default\" (UID: \"866dbc7b-6d5c-4d8e-8e67-b2a643d8c719\") " pod="openstack/mariadb-client-2-default" Dec 06 08:47:55 crc kubenswrapper[4895]: I1206 08:47:55.422859 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d79a12115664b0e2a3644dec9ea0fdefed02ff4ce1221a850585a8cd4d6d94d" Dec 06 08:47:55 crc kubenswrapper[4895]: I1206 08:47:55.423007 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Dec 06 08:47:55 crc kubenswrapper[4895]: I1206 08:47:55.461032 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk4vq\" (UniqueName: \"kubernetes.io/projected/866dbc7b-6d5c-4d8e-8e67-b2a643d8c719-kube-api-access-lk4vq\") pod \"mariadb-client-2-default\" (UID: \"866dbc7b-6d5c-4d8e-8e67-b2a643d8c719\") " pod="openstack/mariadb-client-2-default" Dec 06 08:47:55 crc kubenswrapper[4895]: I1206 08:47:55.482330 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk4vq\" (UniqueName: \"kubernetes.io/projected/866dbc7b-6d5c-4d8e-8e67-b2a643d8c719-kube-api-access-lk4vq\") pod \"mariadb-client-2-default\" (UID: \"866dbc7b-6d5c-4d8e-8e67-b2a643d8c719\") " pod="openstack/mariadb-client-2-default" Dec 06 08:47:55 crc kubenswrapper[4895]: I1206 08:47:55.548441 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Dec 06 08:47:56 crc kubenswrapper[4895]: I1206 08:47:56.037052 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Dec 06 08:47:56 crc kubenswrapper[4895]: W1206 08:47:56.042771 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod866dbc7b_6d5c_4d8e_8e67_b2a643d8c719.slice/crio-a05565d1297a3aa098faea0efcbea15e616ee1d6d8042e8c00af65b12db531e3 WatchSource:0}: Error finding container a05565d1297a3aa098faea0efcbea15e616ee1d6d8042e8c00af65b12db531e3: Status 404 returned error can't find the container with id a05565d1297a3aa098faea0efcbea15e616ee1d6d8042e8c00af65b12db531e3 Dec 06 08:47:56 crc kubenswrapper[4895]: I1206 08:47:56.063930 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98b25ab0-c165-46aa-b5ab-d590a5cffea0" path="/var/lib/kubelet/pods/98b25ab0-c165-46aa-b5ab-d590a5cffea0/volumes" Dec 06 08:47:56 crc kubenswrapper[4895]: I1206 08:47:56.436054 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"866dbc7b-6d5c-4d8e-8e67-b2a643d8c719","Type":"ContainerStarted","Data":"4fb7e6fc00a474c05f0e25c04d5db576d8adab024271f36757976e73c09b8d07"} Dec 06 08:47:56 crc kubenswrapper[4895]: I1206 08:47:56.436754 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"866dbc7b-6d5c-4d8e-8e67-b2a643d8c719","Type":"ContainerStarted","Data":"a05565d1297a3aa098faea0efcbea15e616ee1d6d8042e8c00af65b12db531e3"} Dec 06 08:47:56 crc kubenswrapper[4895]: I1206 08:47:56.454867 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-2-default" podStartSLOduration=1.454840116 podStartE2EDuration="1.454840116s" podCreationTimestamp="2025-12-06 08:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 08:47:56.452695788 +0000 UTC m=+6638.854084698" watchObservedRunningTime="2025-12-06 08:47:56.454840116 +0000 UTC m=+6638.856229026" Dec 06 08:47:56 crc kubenswrapper[4895]: I1206 08:47:56.715536 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5jr78" Dec 06 08:47:56 crc kubenswrapper[4895]: I1206 08:47:56.716101 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5jr78" Dec 06 08:47:56 crc kubenswrapper[4895]: I1206 08:47:56.768483 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5jr78" Dec 06 08:47:57 crc kubenswrapper[4895]: I1206 08:47:57.453684 4895 generic.go:334] "Generic (PLEG): container finished" podID="866dbc7b-6d5c-4d8e-8e67-b2a643d8c719" containerID="4fb7e6fc00a474c05f0e25c04d5db576d8adab024271f36757976e73c09b8d07" exitCode=1 Dec 06 08:47:57 crc kubenswrapper[4895]: I1206 08:47:57.453793 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"866dbc7b-6d5c-4d8e-8e67-b2a643d8c719","Type":"ContainerDied","Data":"4fb7e6fc00a474c05f0e25c04d5db576d8adab024271f36757976e73c09b8d07"} Dec 06 08:47:57 crc kubenswrapper[4895]: I1206 08:47:57.530829 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5jr78" Dec 06 08:47:57 crc 
kubenswrapper[4895]: I1206 08:47:57.604068 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5jr78"] Dec 06 08:47:58 crc kubenswrapper[4895]: I1206 08:47:58.801959 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Dec 06 08:47:58 crc kubenswrapper[4895]: I1206 08:47:58.838937 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2-default"] Dec 06 08:47:58 crc kubenswrapper[4895]: I1206 08:47:58.844738 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2-default"] Dec 06 08:47:58 crc kubenswrapper[4895]: I1206 08:47:58.917379 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk4vq\" (UniqueName: \"kubernetes.io/projected/866dbc7b-6d5c-4d8e-8e67-b2a643d8c719-kube-api-access-lk4vq\") pod \"866dbc7b-6d5c-4d8e-8e67-b2a643d8c719\" (UID: \"866dbc7b-6d5c-4d8e-8e67-b2a643d8c719\") " Dec 06 08:47:58 crc kubenswrapper[4895]: I1206 08:47:58.925555 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/866dbc7b-6d5c-4d8e-8e67-b2a643d8c719-kube-api-access-lk4vq" (OuterVolumeSpecName: "kube-api-access-lk4vq") pod "866dbc7b-6d5c-4d8e-8e67-b2a643d8c719" (UID: "866dbc7b-6d5c-4d8e-8e67-b2a643d8c719"). InnerVolumeSpecName "kube-api-access-lk4vq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:47:59 crc kubenswrapper[4895]: I1206 08:47:59.019294 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk4vq\" (UniqueName: \"kubernetes.io/projected/866dbc7b-6d5c-4d8e-8e67-b2a643d8c719-kube-api-access-lk4vq\") on node \"crc\" DevicePath \"\"" Dec 06 08:47:59 crc kubenswrapper[4895]: I1206 08:47:59.210483 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1"] Dec 06 08:47:59 crc kubenswrapper[4895]: E1206 08:47:59.210995 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866dbc7b-6d5c-4d8e-8e67-b2a643d8c719" containerName="mariadb-client-2-default" Dec 06 08:47:59 crc kubenswrapper[4895]: I1206 08:47:59.211017 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="866dbc7b-6d5c-4d8e-8e67-b2a643d8c719" containerName="mariadb-client-2-default" Dec 06 08:47:59 crc kubenswrapper[4895]: I1206 08:47:59.211218 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="866dbc7b-6d5c-4d8e-8e67-b2a643d8c719" containerName="mariadb-client-2-default" Dec 06 08:47:59 crc kubenswrapper[4895]: I1206 08:47:59.211912 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Dec 06 08:47:59 crc kubenswrapper[4895]: I1206 08:47:59.217745 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Dec 06 08:47:59 crc kubenswrapper[4895]: I1206 08:47:59.329155 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqhzl\" (UniqueName: \"kubernetes.io/projected/c4ba581b-bb21-4be3-939d-cdab3cec3332-kube-api-access-qqhzl\") pod \"mariadb-client-1\" (UID: \"c4ba581b-bb21-4be3-939d-cdab3cec3332\") " pod="openstack/mariadb-client-1" Dec 06 08:47:59 crc kubenswrapper[4895]: I1206 08:47:59.431009 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqhzl\" (UniqueName: \"kubernetes.io/projected/c4ba581b-bb21-4be3-939d-cdab3cec3332-kube-api-access-qqhzl\") pod \"mariadb-client-1\" (UID: \"c4ba581b-bb21-4be3-939d-cdab3cec3332\") " pod="openstack/mariadb-client-1" Dec 06 08:47:59 crc kubenswrapper[4895]: I1206 08:47:59.452249 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqhzl\" (UniqueName: \"kubernetes.io/projected/c4ba581b-bb21-4be3-939d-cdab3cec3332-kube-api-access-qqhzl\") pod \"mariadb-client-1\" (UID: \"c4ba581b-bb21-4be3-939d-cdab3cec3332\") " pod="openstack/mariadb-client-1" Dec 06 08:47:59 crc kubenswrapper[4895]: I1206 08:47:59.472639 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Dec 06 08:47:59 crc kubenswrapper[4895]: I1206 08:47:59.472680 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a05565d1297a3aa098faea0efcbea15e616ee1d6d8042e8c00af65b12db531e3" Dec 06 08:47:59 crc kubenswrapper[4895]: I1206 08:47:59.472753 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5jr78" podUID="ed6b1a35-62b4-492e-86e4-b2b7c15be0b4" containerName="registry-server" containerID="cri-o://a6193666c50f3a172b7b24c853afce7fcb52a44f2462c6584c24206737aa9c46" gracePeriod=2 Dec 06 08:47:59 crc kubenswrapper[4895]: I1206 08:47:59.534249 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Dec 06 08:48:00 crc kubenswrapper[4895]: W1206 08:48:00.028188 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4ba581b_bb21_4be3_939d_cdab3cec3332.slice/crio-7cdf39101343b117549205b3bce621f32dad171bb1a2315c6803b85e1f35f7fe WatchSource:0}: Error finding container 7cdf39101343b117549205b3bce621f32dad171bb1a2315c6803b85e1f35f7fe: Status 404 returned error can't find the container with id 7cdf39101343b117549205b3bce621f32dad171bb1a2315c6803b85e1f35f7fe Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.028464 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.061443 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="866dbc7b-6d5c-4d8e-8e67-b2a643d8c719" path="/var/lib/kubelet/pods/866dbc7b-6d5c-4d8e-8e67-b2a643d8c719/volumes" Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.476545 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5jr78" Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.482840 4895 generic.go:334] "Generic (PLEG): container finished" podID="c4ba581b-bb21-4be3-939d-cdab3cec3332" containerID="d08959219f7daa763978a785fc65cce9e2d9eaa26072d90f6f94e9f0d7e33ccb" exitCode=0 Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.482922 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"c4ba581b-bb21-4be3-939d-cdab3cec3332","Type":"ContainerDied","Data":"d08959219f7daa763978a785fc65cce9e2d9eaa26072d90f6f94e9f0d7e33ccb"} Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.482955 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"c4ba581b-bb21-4be3-939d-cdab3cec3332","Type":"ContainerStarted","Data":"7cdf39101343b117549205b3bce621f32dad171bb1a2315c6803b85e1f35f7fe"} Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.485242 4895 generic.go:334] "Generic (PLEG): container finished" podID="ed6b1a35-62b4-492e-86e4-b2b7c15be0b4" containerID="a6193666c50f3a172b7b24c853afce7fcb52a44f2462c6584c24206737aa9c46" exitCode=0 Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.485267 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jr78" event={"ID":"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4","Type":"ContainerDied","Data":"a6193666c50f3a172b7b24c853afce7fcb52a44f2462c6584c24206737aa9c46"} Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.485285 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jr78" event={"ID":"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4","Type":"ContainerDied","Data":"87aa1c5a294a101e66c282cf0fc875fde5b27eb80ef36a3330ee92d989fcdbb1"} Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.485305 4895 scope.go:117] "RemoveContainer" containerID="a6193666c50f3a172b7b24c853afce7fcb52a44f2462c6584c24206737aa9c46" Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.485430 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5jr78" Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.516398 4895 scope.go:117] "RemoveContainer" containerID="15e62f58f3730548133a06033f89c16a9c4bfa806ed056910fe892e57a9765df" Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.536188 4895 scope.go:117] "RemoveContainer" containerID="b2c14d5d752f45a8429d811ebabb81daf8f113c534daa2e7ba0809ce5cddf39d" Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.550946 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed6b1a35-62b4-492e-86e4-b2b7c15be0b4-utilities\") pod \"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4\" (UID: \"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4\") " Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.552023 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed6b1a35-62b4-492e-86e4-b2b7c15be0b4-utilities" (OuterVolumeSpecName: "utilities") pod "ed6b1a35-62b4-492e-86e4-b2b7c15be0b4" (UID: "ed6b1a35-62b4-492e-86e4-b2b7c15be0b4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.552422 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l5b6\" (UniqueName: \"kubernetes.io/projected/ed6b1a35-62b4-492e-86e4-b2b7c15be0b4-kube-api-access-9l5b6\") pod \"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4\" (UID: \"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4\") " Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.552551 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed6b1a35-62b4-492e-86e4-b2b7c15be0b4-catalog-content\") pod \"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4\" (UID: \"ed6b1a35-62b4-492e-86e4-b2b7c15be0b4\") " Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.552931 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed6b1a35-62b4-492e-86e4-b2b7c15be0b4-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.558495 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed6b1a35-62b4-492e-86e4-b2b7c15be0b4-kube-api-access-9l5b6" (OuterVolumeSpecName: "kube-api-access-9l5b6") pod "ed6b1a35-62b4-492e-86e4-b2b7c15be0b4" (UID: "ed6b1a35-62b4-492e-86e4-b2b7c15be0b4"). InnerVolumeSpecName "kube-api-access-9l5b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.560966 4895 scope.go:117] "RemoveContainer" containerID="a6193666c50f3a172b7b24c853afce7fcb52a44f2462c6584c24206737aa9c46" Dec 06 08:48:00 crc kubenswrapper[4895]: E1206 08:48:00.561631 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6193666c50f3a172b7b24c853afce7fcb52a44f2462c6584c24206737aa9c46\": container with ID starting with a6193666c50f3a172b7b24c853afce7fcb52a44f2462c6584c24206737aa9c46 not found: ID does not exist" containerID="a6193666c50f3a172b7b24c853afce7fcb52a44f2462c6584c24206737aa9c46" Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.561706 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6193666c50f3a172b7b24c853afce7fcb52a44f2462c6584c24206737aa9c46"} err="failed to get container status \"a6193666c50f3a172b7b24c853afce7fcb52a44f2462c6584c24206737aa9c46\": rpc error: code = NotFound desc = could not find container \"a6193666c50f3a172b7b24c853afce7fcb52a44f2462c6584c24206737aa9c46\": container with ID starting with a6193666c50f3a172b7b24c853afce7fcb52a44f2462c6584c24206737aa9c46 not found: ID does not exist" Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.561736 4895 scope.go:117] "RemoveContainer" containerID="15e62f58f3730548133a06033f89c16a9c4bfa806ed056910fe892e57a9765df" Dec 06 08:48:00 crc kubenswrapper[4895]: E1206 08:48:00.562360 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e62f58f3730548133a06033f89c16a9c4bfa806ed056910fe892e57a9765df\": container with ID starting with 15e62f58f3730548133a06033f89c16a9c4bfa806ed056910fe892e57a9765df not found: ID does not exist" containerID="15e62f58f3730548133a06033f89c16a9c4bfa806ed056910fe892e57a9765df" Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.562388 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"15e62f58f3730548133a06033f89c16a9c4bfa806ed056910fe892e57a9765df"} err="failed to get container status \"15e62f58f3730548133a06033f89c16a9c4bfa806ed056910fe892e57a9765df\": rpc error: code = NotFound desc = could not find container \"15e62f58f3730548133a06033f89c16a9c4bfa806ed056910fe892e57a9765df\": container with ID starting with 15e62f58f3730548133a06033f89c16a9c4bfa806ed056910fe892e57a9765df not found: ID does not exist" Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.562408 4895 scope.go:117] "RemoveContainer" containerID="b2c14d5d752f45a8429d811ebabb81daf8f113c534daa2e7ba0809ce5cddf39d" Dec 06 08:48:00 crc kubenswrapper[4895]: E1206 08:48:00.562954 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2c14d5d752f45a8429d811ebabb81daf8f113c534daa2e7ba0809ce5cddf39d\": container with ID starting with b2c14d5d752f45a8429d811ebabb81daf8f113c534daa2e7ba0809ce5cddf39d not found: ID does not exist" containerID="b2c14d5d752f45a8429d811ebabb81daf8f113c534daa2e7ba0809ce5cddf39d" Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.562979 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2c14d5d752f45a8429d811ebabb81daf8f113c534daa2e7ba0809ce5cddf39d"} err="failed to get container status \"b2c14d5d752f45a8429d811ebabb81daf8f113c534daa2e7ba0809ce5cddf39d\": rpc error: code = NotFound desc = could not find container \"b2c14d5d752f45a8429d811ebabb81daf8f113c534daa2e7ba0809ce5cddf39d\": container with ID starting with b2c14d5d752f45a8429d811ebabb81daf8f113c534daa2e7ba0809ce5cddf39d not found: ID does not exist" Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.573421 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed6b1a35-62b4-492e-86e4-b2b7c15be0b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed6b1a35-62b4-492e-86e4-b2b7c15be0b4" (UID: "ed6b1a35-62b4-492e-86e4-b2b7c15be0b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.654734 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l5b6\" (UniqueName: \"kubernetes.io/projected/ed6b1a35-62b4-492e-86e4-b2b7c15be0b4-kube-api-access-9l5b6\") on node \"crc\" DevicePath \"\"" Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.655151 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed6b1a35-62b4-492e-86e4-b2b7c15be0b4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.823016 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5jr78"] Dec 06 08:48:00 crc kubenswrapper[4895]: I1206 08:48:00.830023 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5jr78"] Dec 06 08:48:01 crc kubenswrapper[4895]: I1206 08:48:01.842031 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Dec 06 08:48:01 crc kubenswrapper[4895]: I1206 08:48:01.859115 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1_c4ba581b-bb21-4be3-939d-cdab3cec3332/mariadb-client-1/0.log" Dec 06 08:48:01 crc kubenswrapper[4895]: I1206 08:48:01.884315 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1"] Dec 06 08:48:01 crc kubenswrapper[4895]: I1206 08:48:01.889785 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1"] Dec 06 08:48:01 crc kubenswrapper[4895]: I1206 08:48:01.973797 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqhzl\" (UniqueName: \"kubernetes.io/projected/c4ba581b-bb21-4be3-939d-cdab3cec3332-kube-api-access-qqhzl\") pod \"c4ba581b-bb21-4be3-939d-cdab3cec3332\" (UID: \"c4ba581b-bb21-4be3-939d-cdab3cec3332\") " Dec 06 08:48:01 crc kubenswrapper[4895]: I1206 08:48:01.979089 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4ba581b-bb21-4be3-939d-cdab3cec3332-kube-api-access-qqhzl" (OuterVolumeSpecName: "kube-api-access-qqhzl") pod "c4ba581b-bb21-4be3-939d-cdab3cec3332" (UID: "c4ba581b-bb21-4be3-939d-cdab3cec3332"). InnerVolumeSpecName "kube-api-access-qqhzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:48:02 crc kubenswrapper[4895]: I1206 08:48:02.062190 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4ba581b-bb21-4be3-939d-cdab3cec3332" path="/var/lib/kubelet/pods/c4ba581b-bb21-4be3-939d-cdab3cec3332/volumes" Dec 06 08:48:02 crc kubenswrapper[4895]: I1206 08:48:02.062906 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed6b1a35-62b4-492e-86e4-b2b7c15be0b4" path="/var/lib/kubelet/pods/ed6b1a35-62b4-492e-86e4-b2b7c15be0b4/volumes" Dec 06 08:48:02 crc kubenswrapper[4895]: I1206 08:48:02.075812 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqhzl\" (UniqueName: \"kubernetes.io/projected/c4ba581b-bb21-4be3-939d-cdab3cec3332-kube-api-access-qqhzl\") on node \"crc\" DevicePath \"\"" Dec 06 08:48:02 crc kubenswrapper[4895]: I1206 08:48:02.272006 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-default"] Dec 06 08:48:02 crc kubenswrapper[4895]: E1206 08:48:02.273443 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed6b1a35-62b4-492e-86e4-b2b7c15be0b4" containerName="registry-server" Dec 06 08:48:02 crc kubenswrapper[4895]: I1206 08:48:02.273570 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed6b1a35-62b4-492e-86e4-b2b7c15be0b4" containerName="registry-server" Dec 06 08:48:02 crc kubenswrapper[4895]: E1206 08:48:02.273671 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed6b1a35-62b4-492e-86e4-b2b7c15be0b4" containerName="extract-utilities" Dec 06 08:48:02 crc kubenswrapper[4895]: I1206 08:48:02.273733 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed6b1a35-62b4-492e-86e4-b2b7c15be0b4" containerName="extract-utilities" Dec 06 08:48:02 crc kubenswrapper[4895]: E1206 08:48:02.273783 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed6b1a35-62b4-492e-86e4-b2b7c15be0b4" containerName="extract-content" Dec 06 08:48:02 crc kubenswrapper[4895]: I1206 08:48:02.273841 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed6b1a35-62b4-492e-86e4-b2b7c15be0b4" containerName="extract-content" Dec 06 
08:48:02 crc kubenswrapper[4895]: E1206 08:48:02.275826 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ba581b-bb21-4be3-939d-cdab3cec3332" containerName="mariadb-client-1" Dec 06 08:48:02 crc kubenswrapper[4895]: I1206 08:48:02.275855 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ba581b-bb21-4be3-939d-cdab3cec3332" containerName="mariadb-client-1" Dec 06 08:48:02 crc kubenswrapper[4895]: I1206 08:48:02.276162 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4ba581b-bb21-4be3-939d-cdab3cec3332" containerName="mariadb-client-1" Dec 06 08:48:02 crc kubenswrapper[4895]: I1206 08:48:02.276177 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed6b1a35-62b4-492e-86e4-b2b7c15be0b4" containerName="registry-server" Dec 06 08:48:02 crc kubenswrapper[4895]: I1206 08:48:02.276840 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Dec 06 08:48:02 crc kubenswrapper[4895]: I1206 08:48:02.298972 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Dec 06 08:48:02 crc kubenswrapper[4895]: I1206 08:48:02.380720 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtp62\" (UniqueName: \"kubernetes.io/projected/b04d41aa-3755-4b16-bbae-f75cbba7194a-kube-api-access-dtp62\") pod \"mariadb-client-4-default\" (UID: \"b04d41aa-3755-4b16-bbae-f75cbba7194a\") " pod="openstack/mariadb-client-4-default" Dec 06 08:48:02 crc kubenswrapper[4895]: I1206 08:48:02.482079 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtp62\" (UniqueName: \"kubernetes.io/projected/b04d41aa-3755-4b16-bbae-f75cbba7194a-kube-api-access-dtp62\") pod \"mariadb-client-4-default\" (UID: \"b04d41aa-3755-4b16-bbae-f75cbba7194a\") " pod="openstack/mariadb-client-4-default" Dec 06 08:48:02 crc kubenswrapper[4895]: I1206 08:48:02.507462 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtp62\" (UniqueName: \"kubernetes.io/projected/b04d41aa-3755-4b16-bbae-f75cbba7194a-kube-api-access-dtp62\") pod \"mariadb-client-4-default\" (UID: \"b04d41aa-3755-4b16-bbae-f75cbba7194a\") " pod="openstack/mariadb-client-4-default" Dec 06 08:48:02 crc kubenswrapper[4895]: I1206 08:48:02.507973 4895 scope.go:117] "RemoveContainer" containerID="d08959219f7daa763978a785fc65cce9e2d9eaa26072d90f6f94e9f0d7e33ccb" Dec 06 08:48:02 crc kubenswrapper[4895]: I1206 08:48:02.508176 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Dec 06 08:48:02 crc kubenswrapper[4895]: I1206 08:48:02.598712 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Dec 06 08:48:03 crc kubenswrapper[4895]: I1206 08:48:03.081678 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Dec 06 08:48:03 crc kubenswrapper[4895]: I1206 08:48:03.521828 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"b04d41aa-3755-4b16-bbae-f75cbba7194a","Type":"ContainerStarted","Data":"746c5e182247b42b3e0e3fa7ca06e11b2e71fe61536da6496c94272252eeb1a5"} Dec 06 08:48:04 crc kubenswrapper[4895]: I1206 08:48:04.534965 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"b04d41aa-3755-4b16-bbae-f75cbba7194a","Type":"ContainerStarted","Data":"81e32af601f8aae1dfc19137fcfbcb3ab3e6e06374f7b28a04b22836a7246ed3"} Dec 06 08:48:04 crc kubenswrapper[4895]: I1206 08:48:04.553043 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-4-default" podStartSLOduration=2.553023478 podStartE2EDuration="2.553023478s" podCreationTimestamp="2025-12-06 08:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 08:48:04.54788644 +0000 UTC m=+6646.949275300" watchObservedRunningTime="2025-12-06 08:48:04.553023478 +0000 UTC m=+6646.954412348" Dec 06 08:48:04 crc kubenswrapper[4895]: I1206 08:48:04.601600 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-default_b04d41aa-3755-4b16-bbae-f75cbba7194a/mariadb-client-4-default/0.log" Dec 06 08:48:05 crc kubenswrapper[4895]: I1206 08:48:05.050975 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:48:05 crc kubenswrapper[4895]: E1206 08:48:05.051261 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:48:05 crc kubenswrapper[4895]: I1206 08:48:05.544531 4895 generic.go:334] "Generic (PLEG): container finished" podID="b04d41aa-3755-4b16-bbae-f75cbba7194a" containerID="81e32af601f8aae1dfc19137fcfbcb3ab3e6e06374f7b28a04b22836a7246ed3" exitCode=0 Dec 06 08:48:05 crc kubenswrapper[4895]: I1206 08:48:05.544578 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"b04d41aa-3755-4b16-bbae-f75cbba7194a","Type":"ContainerDied","Data":"81e32af601f8aae1dfc19137fcfbcb3ab3e6e06374f7b28a04b22836a7246ed3"} Dec 06 08:48:07 crc kubenswrapper[4895]: I1206 08:48:07.318313 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Dec 06 08:48:07 crc kubenswrapper[4895]: I1206 08:48:07.359818 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-default"] Dec 06 08:48:07 crc kubenswrapper[4895]: I1206 08:48:07.368863 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-default"] Dec 06 08:48:07 crc kubenswrapper[4895]: I1206 08:48:07.509301 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtp62\" (UniqueName: \"kubernetes.io/projected/b04d41aa-3755-4b16-bbae-f75cbba7194a-kube-api-access-dtp62\") pod \"b04d41aa-3755-4b16-bbae-f75cbba7194a\" (UID: \"b04d41aa-3755-4b16-bbae-f75cbba7194a\") " Dec 06 08:48:07 crc kubenswrapper[4895]: I1206 08:48:07.515793 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b04d41aa-3755-4b16-bbae-f75cbba7194a-kube-api-access-dtp62" (OuterVolumeSpecName: "kube-api-access-dtp62") pod "b04d41aa-3755-4b16-bbae-f75cbba7194a" (UID: "b04d41aa-3755-4b16-bbae-f75cbba7194a"). InnerVolumeSpecName "kube-api-access-dtp62". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:48:07 crc kubenswrapper[4895]: I1206 08:48:07.611246 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtp62\" (UniqueName: \"kubernetes.io/projected/b04d41aa-3755-4b16-bbae-f75cbba7194a-kube-api-access-dtp62\") on node \"crc\" DevicePath \"\"" Dec 06 08:48:07 crc kubenswrapper[4895]: I1206 08:48:07.652485 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="746c5e182247b42b3e0e3fa7ca06e11b2e71fe61536da6496c94272252eeb1a5" Dec 06 08:48:07 crc kubenswrapper[4895]: I1206 08:48:07.652540 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Dec 06 08:48:08 crc kubenswrapper[4895]: I1206 08:48:08.063177 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b04d41aa-3755-4b16-bbae-f75cbba7194a" path="/var/lib/kubelet/pods/b04d41aa-3755-4b16-bbae-f75cbba7194a/volumes" Dec 06 08:48:10 crc kubenswrapper[4895]: I1206 08:48:10.917686 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-default"] Dec 06 08:48:10 crc kubenswrapper[4895]: E1206 08:48:10.918609 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b04d41aa-3755-4b16-bbae-f75cbba7194a" containerName="mariadb-client-4-default" Dec 06 08:48:10 crc kubenswrapper[4895]: I1206 08:48:10.918627 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b04d41aa-3755-4b16-bbae-f75cbba7194a" containerName="mariadb-client-4-default" Dec 06 08:48:10 crc kubenswrapper[4895]: I1206 08:48:10.918837 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b04d41aa-3755-4b16-bbae-f75cbba7194a" containerName="mariadb-client-4-default" Dec 06 08:48:10 crc kubenswrapper[4895]: I1206 08:48:10.919527 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Dec 06 08:48:10 crc kubenswrapper[4895]: I1206 08:48:10.922833 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-z2v77" Dec 06 08:48:10 crc kubenswrapper[4895]: I1206 08:48:10.926071 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Dec 06 08:48:11 crc kubenswrapper[4895]: I1206 08:48:11.060595 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfzll\" (UniqueName: \"kubernetes.io/projected/b647541b-d977-48f4-ad7f-9487ebbec1c2-kube-api-access-gfzll\") pod \"mariadb-client-5-default\" (UID: \"b647541b-d977-48f4-ad7f-9487ebbec1c2\") " pod="openstack/mariadb-client-5-default" Dec 06 08:48:11 crc kubenswrapper[4895]: I1206 08:48:11.162601 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfzll\" (UniqueName: \"kubernetes.io/projected/b647541b-d977-48f4-ad7f-9487ebbec1c2-kube-api-access-gfzll\") pod \"mariadb-client-5-default\" (UID: \"b647541b-d977-48f4-ad7f-9487ebbec1c2\") " pod="openstack/mariadb-client-5-default" Dec 06 08:48:11 crc kubenswrapper[4895]: I1206 08:48:11.194057 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfzll\" (UniqueName: \"kubernetes.io/projected/b647541b-d977-48f4-ad7f-9487ebbec1c2-kube-api-access-gfzll\") pod \"mariadb-client-5-default\" (UID: \"b647541b-d977-48f4-ad7f-9487ebbec1c2\") " pod="openstack/mariadb-client-5-default" Dec 06 08:48:11 crc kubenswrapper[4895]: I1206 08:48:11.249038 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Dec 06 08:48:11 crc kubenswrapper[4895]: I1206 08:48:11.777418 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Dec 06 08:48:12 crc kubenswrapper[4895]: I1206 08:48:12.695346 4895 generic.go:334] "Generic (PLEG): container finished" podID="b647541b-d977-48f4-ad7f-9487ebbec1c2" containerID="ff426ecc8f4115ed206c6bc4becf18c94eea9d76763e4517c83a84954e9abb3e" exitCode=0 Dec 06 08:48:12 crc kubenswrapper[4895]: I1206 08:48:12.695409 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"b647541b-d977-48f4-ad7f-9487ebbec1c2","Type":"ContainerDied","Data":"ff426ecc8f4115ed206c6bc4becf18c94eea9d76763e4517c83a84954e9abb3e"} Dec 06 08:48:12 crc kubenswrapper[4895]: I1206 08:48:12.695651 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"b647541b-d977-48f4-ad7f-9487ebbec1c2","Type":"ContainerStarted","Data":"9a9e08b66a5eff18b9b10a282dda920f6a9da795a5b0f1ce77e007241bf0fe3e"} Dec 06 08:48:14 crc kubenswrapper[4895]: I1206 08:48:14.062015 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Dec 06 08:48:14 crc kubenswrapper[4895]: I1206 08:48:14.081415 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-default_b647541b-d977-48f4-ad7f-9487ebbec1c2/mariadb-client-5-default/0.log" Dec 06 08:48:14 crc kubenswrapper[4895]: I1206 08:48:14.109878 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-default"] Dec 06 08:48:14 crc kubenswrapper[4895]: I1206 08:48:14.116173 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-default"] Dec 06 08:48:14 crc kubenswrapper[4895]: I1206 08:48:14.206133 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfzll\" (UniqueName: \"kubernetes.io/projected/b647541b-d977-48f4-ad7f-9487ebbec1c2-kube-api-access-gfzll\") pod \"b647541b-d977-48f4-ad7f-9487ebbec1c2\" (UID: \"b647541b-d977-48f4-ad7f-9487ebbec1c2\") " Dec 06 08:48:14 crc kubenswrapper[4895]: I1206 08:48:14.212300 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b647541b-d977-48f4-ad7f-9487ebbec1c2-kube-api-access-gfzll" (OuterVolumeSpecName: "kube-api-access-gfzll") pod "b647541b-d977-48f4-ad7f-9487ebbec1c2" (UID: "b647541b-d977-48f4-ad7f-9487ebbec1c2"). InnerVolumeSpecName "kube-api-access-gfzll". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:48:14 crc kubenswrapper[4895]: I1206 08:48:14.236541 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-default"] Dec 06 08:48:14 crc kubenswrapper[4895]: E1206 08:48:14.236966 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b647541b-d977-48f4-ad7f-9487ebbec1c2" containerName="mariadb-client-5-default" Dec 06 08:48:14 crc kubenswrapper[4895]: I1206 08:48:14.236979 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b647541b-d977-48f4-ad7f-9487ebbec1c2" containerName="mariadb-client-5-default" Dec 06 08:48:14 crc kubenswrapper[4895]: I1206 08:48:14.237169 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b647541b-d977-48f4-ad7f-9487ebbec1c2" containerName="mariadb-client-5-default" Dec 06 08:48:14 crc kubenswrapper[4895]: I1206 08:48:14.237712 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Dec 06 08:48:14 crc kubenswrapper[4895]: I1206 08:48:14.247488 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Dec 06 08:48:14 crc kubenswrapper[4895]: I1206 08:48:14.307566 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfzll\" (UniqueName: \"kubernetes.io/projected/b647541b-d977-48f4-ad7f-9487ebbec1c2-kube-api-access-gfzll\") on node \"crc\" DevicePath \"\"" Dec 06 08:48:14 crc kubenswrapper[4895]: I1206 08:48:14.409039 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb2tn\" (UniqueName: \"kubernetes.io/projected/6b53e66e-a5a2-4459-a91e-2e4e30125d6d-kube-api-access-nb2tn\") pod \"mariadb-client-6-default\" (UID: \"6b53e66e-a5a2-4459-a91e-2e4e30125d6d\") " pod="openstack/mariadb-client-6-default" Dec 06 08:48:14 crc kubenswrapper[4895]: I1206 08:48:14.510581 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb2tn\" (UniqueName: \"kubernetes.io/projected/6b53e66e-a5a2-4459-a91e-2e4e30125d6d-kube-api-access-nb2tn\") pod \"mariadb-client-6-default\" (UID: \"6b53e66e-a5a2-4459-a91e-2e4e30125d6d\") " pod="openstack/mariadb-client-6-default" Dec 06 08:48:14 crc kubenswrapper[4895]: I1206 08:48:14.543258 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb2tn\" (UniqueName: \"kubernetes.io/projected/6b53e66e-a5a2-4459-a91e-2e4e30125d6d-kube-api-access-nb2tn\") pod \"mariadb-client-6-default\" (UID: \"6b53e66e-a5a2-4459-a91e-2e4e30125d6d\") " pod="openstack/mariadb-client-6-default" Dec 06 08:48:14 crc kubenswrapper[4895]: I1206 08:48:14.566097 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Dec 06 08:48:14 crc kubenswrapper[4895]: I1206 08:48:14.721041 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a9e08b66a5eff18b9b10a282dda920f6a9da795a5b0f1ce77e007241bf0fe3e" Dec 06 08:48:14 crc kubenswrapper[4895]: I1206 08:48:14.721152 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Dec 06 08:48:14 crc kubenswrapper[4895]: I1206 08:48:14.944950 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Dec 06 08:48:15 crc kubenswrapper[4895]: I1206 08:48:15.729712 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"6b53e66e-a5a2-4459-a91e-2e4e30125d6d","Type":"ContainerStarted","Data":"20eafc3d74934382445e5e9ca1fd9cba513ffa4b9a7710316b510e807415e671"} Dec 06 08:48:15 crc kubenswrapper[4895]: I1206 08:48:15.730121 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"6b53e66e-a5a2-4459-a91e-2e4e30125d6d","Type":"ContainerStarted","Data":"5ed2803742b16fd9bdb78a255178fbeb286776beaf93d28ac600189864168e13"} Dec 06 08:48:15 crc kubenswrapper[4895]: I1206 08:48:15.760413 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-6-default" podStartSLOduration=1.7603883489999999 podStartE2EDuration="1.760388349s" podCreationTimestamp="2025-12-06 08:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 08:48:15.753064473 +0000 UTC m=+6658.154453373" watchObservedRunningTime="2025-12-06 08:48:15.760388349 +0000 UTC m=+6658.161777219" Dec 06 08:48:16 crc kubenswrapper[4895]: I1206 08:48:16.065034 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b647541b-d977-48f4-ad7f-9487ebbec1c2" path="/var/lib/kubelet/pods/b647541b-d977-48f4-ad7f-9487ebbec1c2/volumes" Dec 06 08:48:16 crc kubenswrapper[4895]: I1206 08:48:16.741553 4895 generic.go:334] "Generic (PLEG): container finished" podID="6b53e66e-a5a2-4459-a91e-2e4e30125d6d" containerID="20eafc3d74934382445e5e9ca1fd9cba513ffa4b9a7710316b510e807415e671" exitCode=1 Dec 06 08:48:16 crc kubenswrapper[4895]: I1206 08:48:16.741637 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"6b53e66e-a5a2-4459-a91e-2e4e30125d6d","Type":"ContainerDied","Data":"20eafc3d74934382445e5e9ca1fd9cba513ffa4b9a7710316b510e807415e671"} Dec 06 08:48:18 crc kubenswrapper[4895]: I1206 08:48:18.057563 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:48:18 crc kubenswrapper[4895]: E1206 08:48:18.058335 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:48:18 crc kubenswrapper[4895]: I1206 08:48:18.097690 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Dec 06 08:48:18 crc kubenswrapper[4895]: I1206 08:48:18.143815 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-default"] Dec 06 08:48:18 crc kubenswrapper[4895]: I1206 08:48:18.145529 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-default"] Dec 06 08:48:18 crc kubenswrapper[4895]: I1206 08:48:18.177388 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb2tn\" (UniqueName: \"kubernetes.io/projected/6b53e66e-a5a2-4459-a91e-2e4e30125d6d-kube-api-access-nb2tn\") pod \"6b53e66e-a5a2-4459-a91e-2e4e30125d6d\" (UID: \"6b53e66e-a5a2-4459-a91e-2e4e30125d6d\") " Dec 06 08:48:18 crc kubenswrapper[4895]: I1206 08:48:18.184699 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b53e66e-a5a2-4459-a91e-2e4e30125d6d-kube-api-access-nb2tn" (OuterVolumeSpecName: "kube-api-access-nb2tn") pod "6b53e66e-a5a2-4459-a91e-2e4e30125d6d" (UID: "6b53e66e-a5a2-4459-a91e-2e4e30125d6d"). InnerVolumeSpecName "kube-api-access-nb2tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:48:18 crc kubenswrapper[4895]: I1206 08:48:18.275072 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-default"] Dec 06 08:48:18 crc kubenswrapper[4895]: E1206 08:48:18.275429 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b53e66e-a5a2-4459-a91e-2e4e30125d6d" containerName="mariadb-client-6-default" Dec 06 08:48:18 crc kubenswrapper[4895]: I1206 08:48:18.275452 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b53e66e-a5a2-4459-a91e-2e4e30125d6d" containerName="mariadb-client-6-default" Dec 06 08:48:18 crc kubenswrapper[4895]: I1206 08:48:18.275646 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b53e66e-a5a2-4459-a91e-2e4e30125d6d" containerName="mariadb-client-6-default" Dec 06 08:48:18 crc kubenswrapper[4895]: I1206 08:48:18.276300 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Dec 06 08:48:18 crc kubenswrapper[4895]: I1206 08:48:18.279269 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb2tn\" (UniqueName: \"kubernetes.io/projected/6b53e66e-a5a2-4459-a91e-2e4e30125d6d-kube-api-access-nb2tn\") on node \"crc\" DevicePath \"\"" Dec 06 08:48:18 crc kubenswrapper[4895]: I1206 08:48:18.285731 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Dec 06 08:48:18 crc kubenswrapper[4895]: I1206 08:48:18.380946 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74rll\" (UniqueName: \"kubernetes.io/projected/083cf172-c662-4a59-ae9c-7f2c2b4d3515-kube-api-access-74rll\") pod \"mariadb-client-7-default\" (UID: \"083cf172-c662-4a59-ae9c-7f2c2b4d3515\") " pod="openstack/mariadb-client-7-default" Dec 06 08:48:18 crc kubenswrapper[4895]: I1206 08:48:18.481847 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74rll\" (UniqueName: \"kubernetes.io/projected/083cf172-c662-4a59-ae9c-7f2c2b4d3515-kube-api-access-74rll\") pod \"mariadb-client-7-default\" (UID: \"083cf172-c662-4a59-ae9c-7f2c2b4d3515\") " pod="openstack/mariadb-client-7-default" Dec 06 08:48:18 crc kubenswrapper[4895]: I1206 08:48:18.500264 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74rll\" (UniqueName: \"kubernetes.io/projected/083cf172-c662-4a59-ae9c-7f2c2b4d3515-kube-api-access-74rll\") pod \"mariadb-client-7-default\" (UID: \"083cf172-c662-4a59-ae9c-7f2c2b4d3515\") " pod="openstack/mariadb-client-7-default" Dec 06 08:48:18 crc kubenswrapper[4895]: I1206 08:48:18.603101 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Dec 06 08:48:18 crc kubenswrapper[4895]: I1206 08:48:18.766720 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ed2803742b16fd9bdb78a255178fbeb286776beaf93d28ac600189864168e13" Dec 06 08:48:18 crc kubenswrapper[4895]: I1206 08:48:18.767069 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Dec 06 08:48:18 crc kubenswrapper[4895]: I1206 08:48:18.908228 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Dec 06 08:48:18 crc kubenswrapper[4895]: W1206 08:48:18.908957 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod083cf172_c662_4a59_ae9c_7f2c2b4d3515.slice/crio-c36192125532918a50ee7a1ffbf8929fa8ff1cddc4f9b50fb59341100fe255b5 WatchSource:0}: Error finding container c36192125532918a50ee7a1ffbf8929fa8ff1cddc4f9b50fb59341100fe255b5: Status 404 returned error can't find the container with id c36192125532918a50ee7a1ffbf8929fa8ff1cddc4f9b50fb59341100fe255b5 Dec 06 08:48:19 crc kubenswrapper[4895]: I1206 08:48:19.775521 4895 generic.go:334] "Generic (PLEG): container finished" podID="083cf172-c662-4a59-ae9c-7f2c2b4d3515" containerID="2c9c4316a2338808531dad197d2de3a7a8a9fd218b89182843bf9b1c76a12eea" exitCode=0 Dec 06 08:48:19 crc kubenswrapper[4895]: I1206 08:48:19.775602 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"083cf172-c662-4a59-ae9c-7f2c2b4d3515","Type":"ContainerDied","Data":"2c9c4316a2338808531dad197d2de3a7a8a9fd218b89182843bf9b1c76a12eea"} Dec 06 08:48:19 crc kubenswrapper[4895]: I1206 08:48:19.775681 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"083cf172-c662-4a59-ae9c-7f2c2b4d3515","Type":"ContainerStarted","Data":"c36192125532918a50ee7a1ffbf8929fa8ff1cddc4f9b50fb59341100fe255b5"} Dec 06 08:48:20 crc kubenswrapper[4895]: I1206 08:48:20.072113 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b53e66e-a5a2-4459-a91e-2e4e30125d6d" path="/var/lib/kubelet/pods/6b53e66e-a5a2-4459-a91e-2e4e30125d6d/volumes" Dec 06 08:48:21 crc kubenswrapper[4895]: I1206 08:48:21.158196 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Dec 06 08:48:21 crc kubenswrapper[4895]: I1206 08:48:21.179457 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-default_083cf172-c662-4a59-ae9c-7f2c2b4d3515/mariadb-client-7-default/0.log" Dec 06 08:48:21 crc kubenswrapper[4895]: I1206 08:48:21.211339 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-default"] Dec 06 08:48:21 crc kubenswrapper[4895]: I1206 08:48:21.218231 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-default"] Dec 06 08:48:21 crc kubenswrapper[4895]: I1206 08:48:21.230258 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74rll\" (UniqueName: \"kubernetes.io/projected/083cf172-c662-4a59-ae9c-7f2c2b4d3515-kube-api-access-74rll\") pod \"083cf172-c662-4a59-ae9c-7f2c2b4d3515\" (UID: \"083cf172-c662-4a59-ae9c-7f2c2b4d3515\") " Dec 06 08:48:21 crc kubenswrapper[4895]: I1206 08:48:21.234381 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/083cf172-c662-4a59-ae9c-7f2c2b4d3515-kube-api-access-74rll" (OuterVolumeSpecName: "kube-api-access-74rll") pod "083cf172-c662-4a59-ae9c-7f2c2b4d3515" (UID: "083cf172-c662-4a59-ae9c-7f2c2b4d3515"). InnerVolumeSpecName "kube-api-access-74rll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:48:21 crc kubenswrapper[4895]: I1206 08:48:21.331730 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74rll\" (UniqueName: \"kubernetes.io/projected/083cf172-c662-4a59-ae9c-7f2c2b4d3515-kube-api-access-74rll\") on node \"crc\" DevicePath \"\"" Dec 06 08:48:21 crc kubenswrapper[4895]: I1206 08:48:21.399528 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"] Dec 06 08:48:21 crc kubenswrapper[4895]: E1206 08:48:21.400032 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="083cf172-c662-4a59-ae9c-7f2c2b4d3515" containerName="mariadb-client-7-default" Dec 06 08:48:21 crc kubenswrapper[4895]: I1206 08:48:21.400519 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="083cf172-c662-4a59-ae9c-7f2c2b4d3515" containerName="mariadb-client-7-default" Dec 06 08:48:21 crc kubenswrapper[4895]: I1206 08:48:21.400706 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="083cf172-c662-4a59-ae9c-7f2c2b4d3515" containerName="mariadb-client-7-default" Dec 06 08:48:21 crc kubenswrapper[4895]: I1206 08:48:21.401245 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Dec 06 08:48:21 crc kubenswrapper[4895]: I1206 08:48:21.409106 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Dec 06 08:48:21 crc kubenswrapper[4895]: I1206 08:48:21.432809 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcncb\" (UniqueName: \"kubernetes.io/projected/b886101b-1b5c-4e47-9d3d-506b48a1f498-kube-api-access-rcncb\") pod \"mariadb-client-2\" (UID: \"b886101b-1b5c-4e47-9d3d-506b48a1f498\") " pod="openstack/mariadb-client-2" Dec 06 08:48:21 crc kubenswrapper[4895]: I1206 08:48:21.534088 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcncb\" (UniqueName: \"kubernetes.io/projected/b886101b-1b5c-4e47-9d3d-506b48a1f498-kube-api-access-rcncb\") pod \"mariadb-client-2\" (UID: \"b886101b-1b5c-4e47-9d3d-506b48a1f498\") " pod="openstack/mariadb-client-2" Dec 06 08:48:21 crc kubenswrapper[4895]: I1206 08:48:21.549855 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcncb\" (UniqueName: \"kubernetes.io/projected/b886101b-1b5c-4e47-9d3d-506b48a1f498-kube-api-access-rcncb\") pod \"mariadb-client-2\" (UID: \"b886101b-1b5c-4e47-9d3d-506b48a1f498\") " pod="openstack/mariadb-client-2" Dec 06 08:48:21 crc kubenswrapper[4895]: I1206 08:48:21.734698 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Dec 06 08:48:21 crc kubenswrapper[4895]: I1206 08:48:21.792059 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c36192125532918a50ee7a1ffbf8929fa8ff1cddc4f9b50fb59341100fe255b5" Dec 06 08:48:21 crc kubenswrapper[4895]: I1206 08:48:21.792163 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Dec 06 08:48:22 crc kubenswrapper[4895]: I1206 08:48:22.069150 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="083cf172-c662-4a59-ae9c-7f2c2b4d3515" path="/var/lib/kubelet/pods/083cf172-c662-4a59-ae9c-7f2c2b4d3515/volumes" Dec 06 08:48:22 crc kubenswrapper[4895]: I1206 08:48:22.147017 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Dec 06 08:48:22 crc kubenswrapper[4895]: I1206 08:48:22.803257 4895 generic.go:334] "Generic (PLEG): container finished" podID="b886101b-1b5c-4e47-9d3d-506b48a1f498" containerID="4ca45f5f164c354c1437a2c377b4c304eec86072b718004dd655677a86812002" exitCode=0 Dec 06 08:48:22 crc kubenswrapper[4895]: I1206 08:48:22.803341 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"b886101b-1b5c-4e47-9d3d-506b48a1f498","Type":"ContainerDied","Data":"4ca45f5f164c354c1437a2c377b4c304eec86072b718004dd655677a86812002"} Dec 06 08:48:22 crc kubenswrapper[4895]: I1206 08:48:22.803428 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"b886101b-1b5c-4e47-9d3d-506b48a1f498","Type":"ContainerStarted","Data":"ff5c46179c8d9c561d19cd4616164ceb8caf1d4657e72a732e19d1746594ec95"} Dec 06 08:48:24 crc kubenswrapper[4895]: I1206 08:48:24.160885 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Dec 06 08:48:24 crc kubenswrapper[4895]: I1206 08:48:24.176996 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_b886101b-1b5c-4e47-9d3d-506b48a1f498/mariadb-client-2/0.log" Dec 06 08:48:24 crc kubenswrapper[4895]: I1206 08:48:24.201360 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"] Dec 06 08:48:24 crc kubenswrapper[4895]: I1206 08:48:24.209841 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"] Dec 06 08:48:24 crc kubenswrapper[4895]: I1206 08:48:24.285502 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcncb\" (UniqueName: \"kubernetes.io/projected/b886101b-1b5c-4e47-9d3d-506b48a1f498-kube-api-access-rcncb\") pod \"b886101b-1b5c-4e47-9d3d-506b48a1f498\" (UID: \"b886101b-1b5c-4e47-9d3d-506b48a1f498\") " Dec 06 08:48:24 crc kubenswrapper[4895]: I1206 08:48:24.290614 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b886101b-1b5c-4e47-9d3d-506b48a1f498-kube-api-access-rcncb" (OuterVolumeSpecName: "kube-api-access-rcncb") pod "b886101b-1b5c-4e47-9d3d-506b48a1f498" (UID: "b886101b-1b5c-4e47-9d3d-506b48a1f498"). InnerVolumeSpecName "kube-api-access-rcncb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:48:24 crc kubenswrapper[4895]: I1206 08:48:24.387090 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcncb\" (UniqueName: \"kubernetes.io/projected/b886101b-1b5c-4e47-9d3d-506b48a1f498-kube-api-access-rcncb\") on node \"crc\" DevicePath \"\"" Dec 06 08:48:24 crc kubenswrapper[4895]: I1206 08:48:24.822279 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff5c46179c8d9c561d19cd4616164ceb8caf1d4657e72a732e19d1746594ec95" Dec 06 08:48:24 crc kubenswrapper[4895]: I1206 08:48:24.822359 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Dec 06 08:48:26 crc kubenswrapper[4895]: I1206 08:48:26.065618 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b886101b-1b5c-4e47-9d3d-506b48a1f498" path="/var/lib/kubelet/pods/b886101b-1b5c-4e47-9d3d-506b48a1f498/volumes" Dec 06 08:48:30 crc kubenswrapper[4895]: I1206 08:48:30.051231 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:48:30 crc kubenswrapper[4895]: E1206 08:48:30.052459 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:48:44 crc kubenswrapper[4895]: I1206 08:48:44.052199 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:48:44 crc kubenswrapper[4895]: E1206 08:48:44.053008 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:48:57 crc kubenswrapper[4895]: I1206 08:48:57.050573 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:48:57 crc kubenswrapper[4895]: E1206 08:48:57.051287 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:49:11 crc kubenswrapper[4895]: I1206 08:49:11.050068 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:49:11 crc kubenswrapper[4895]: E1206 08:49:11.050851 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:49:14 crc kubenswrapper[4895]: I1206 08:49:14.312088 4895 scope.go:117] "RemoveContainer" containerID="7088bbd0c9484e70e8e87619c543636fd8fa8b73318558718ee148aa91c4b08a" Dec 06 08:49:24 crc kubenswrapper[4895]: I1206 08:49:24.050602 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:49:24 crc kubenswrapper[4895]: E1206 08:49:24.051249 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:49:36 crc kubenswrapper[4895]: I1206 08:49:36.050974 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:49:36 crc kubenswrapper[4895]: E1206 08:49:36.051853 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:49:51 crc kubenswrapper[4895]: I1206 08:49:51.050913 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:49:51 crc kubenswrapper[4895]: E1206 08:49:51.051550 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:50:02 crc kubenswrapper[4895]: I1206 08:50:02.051007 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:50:02 crc kubenswrapper[4895]: E1206 08:50:02.051663 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:50:14 crc kubenswrapper[4895]: I1206 08:50:14.051368 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:50:14 crc kubenswrapper[4895]: E1206 08:50:14.053016 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:50:29 crc kubenswrapper[4895]: I1206 08:50:29.051460 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:50:29 crc kubenswrapper[4895]: E1206 08:50:29.053353 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:50:44 crc kubenswrapper[4895]: I1206 08:50:44.051818 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:50:44 crc kubenswrapper[4895]: E1206 08:50:44.052779 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:50:58 crc kubenswrapper[4895]: I1206 08:50:58.056931 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:50:58 crc kubenswrapper[4895]: E1206 08:50:58.060224 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:51:11 crc kubenswrapper[4895]: I1206 08:51:11.050784 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:51:11 crc kubenswrapper[4895]: E1206 08:51:11.051435 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:51:25 crc kubenswrapper[4895]: I1206 08:51:25.051850 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:51:25 crc kubenswrapper[4895]: E1206 08:51:25.054583 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:51:37 crc kubenswrapper[4895]: I1206 08:51:37.051259 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:51:37 crc kubenswrapper[4895]: I1206 08:51:37.607676 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"4f47c8fbdf2a09a4c030ceeb8af41e19bb9c9bbd34ca05a32fb364cd0a5c3936"} Dec 06 08:53:59 crc kubenswrapper[4895]: I1206 08:53:59.695665 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
Dec 06 08:53:59 crc kubenswrapper[4895]: I1206 08:53:59.695665 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:53:59 crc kubenswrapper[4895]: I1206 08:53:59.696374 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:54:14 crc kubenswrapper[4895]: I1206 08:54:14.466418 4895 scope.go:117] "RemoveContainer" containerID="81e32af601f8aae1dfc19137fcfbcb3ab3e6e06374f7b28a04b22836a7246ed3" Dec 06 08:54:14 crc kubenswrapper[4895]: I1206 08:54:14.494085 4895 scope.go:117] "RemoveContainer" containerID="4fb7e6fc00a474c05f0e25c04d5db576d8adab024271f36757976e73c09b8d07" Dec 06 08:54:14 crc kubenswrapper[4895]: I1206 08:54:14.514780 4895 scope.go:117] "RemoveContainer" containerID="2b68e19433c04b0b39656404c7be8d70d7c49f6356a53ad19e5d7b035e9c4de8" Dec 06 08:54:14 crc kubenswrapper[4895]: I1206 08:54:14.541990 4895 scope.go:117] "RemoveContainer" containerID="ff426ecc8f4115ed206c6bc4becf18c94eea9d76763e4517c83a84954e9abb3e" Dec 06 08:54:14 crc kubenswrapper[4895]: I1206 08:54:14.571910 4895 scope.go:117] "RemoveContainer" containerID="93fffec0fa2791358b129262c8587ded277faec0f7e3088366311d73e57f0966" Dec 06 08:54:14 crc kubenswrapper[4895]: I1206 08:54:14.601942 4895 scope.go:117] "RemoveContainer" containerID="e0ef9994d87d9e2f77d04d5ef9265a6b8bf69458f02bbeeb7e0573efda8e5b2d" Dec 06 08:54:14 crc kubenswrapper[4895]: I1206 08:54:14.632344 4895 scope.go:117] "RemoveContainer" containerID="3024bc9e7a5dc97d935f92ae5102e4f135445cc55c52270e98f1de8aec9d6b1f" Dec 06 08:54:29 crc kubenswrapper[4895]: I1206 08:54:29.696236 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:54:29 crc kubenswrapper[4895]: I1206 08:54:29.696894 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:54:59 crc kubenswrapper[4895]: I1206 08:54:59.695794 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:54:59 crc kubenswrapper[4895]: I1206 08:54:59.696534 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:54:59 crc kubenswrapper[4895]: I1206 08:54:59.696648 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 08:54:59 crc kubenswrapper[4895]: I1206 08:54:59.697761 4895 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4f47c8fbdf2a09a4c030ceeb8af41e19bb9c9bbd34ca05a32fb364cd0a5c3936"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 08:54:59 crc kubenswrapper[4895]: I1206 08:54:59.697869 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://4f47c8fbdf2a09a4c030ceeb8af41e19bb9c9bbd34ca05a32fb364cd0a5c3936" gracePeriod=600 Dec 06 08:55:00 crc kubenswrapper[4895]: I1206 08:55:00.448305 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="4f47c8fbdf2a09a4c030ceeb8af41e19bb9c9bbd34ca05a32fb364cd0a5c3936" exitCode=0 Dec 06 08:55:00 crc kubenswrapper[4895]: I1206 08:55:00.448375 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"4f47c8fbdf2a09a4c030ceeb8af41e19bb9c9bbd34ca05a32fb364cd0a5c3936"} Dec 06 08:55:00 crc kubenswrapper[4895]: I1206 08:55:00.448717 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e"} Dec 06 08:55:00 crc kubenswrapper[4895]: I1206 08:55:00.448746 4895 scope.go:117] "RemoveContainer" containerID="0dd94c5cfc7a3a90ddd52e7ff4904ea358ec60b5e330d1b0e55285e53a550684" Dec 06 08:55:14 crc kubenswrapper[4895]: I1206 08:55:14.726593 4895 scope.go:117] "RemoveContainer" containerID="2c9c4316a2338808531dad197d2de3a7a8a9fd218b89182843bf9b1c76a12eea" Dec 06 08:55:14 crc kubenswrapper[4895]: I1206 08:55:14.750742 4895 scope.go:117] "RemoveContainer" containerID="4ca45f5f164c354c1437a2c377b4c304eec86072b718004dd655677a86812002" Dec 06 08:55:14 crc kubenswrapper[4895]: I1206 08:55:14.791077 4895 scope.go:117] "RemoveContainer" containerID="20eafc3d74934382445e5e9ca1fd9cba513ffa4b9a7710316b510e807415e671" Dec 06 08:57:17 crc kubenswrapper[4895]: I1206 08:57:17.209637 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lmxp7"] Dec 06 08:57:17 crc kubenswrapper[4895]: E1206 08:57:17.211307 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b886101b-1b5c-4e47-9d3d-506b48a1f498" containerName="mariadb-client-2" Dec 06 08:57:17 crc kubenswrapper[4895]: I1206 08:57:17.211324 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b886101b-1b5c-4e47-9d3d-506b48a1f498" containerName="mariadb-client-2" Dec 06 08:57:17 crc kubenswrapper[4895]: I1206 08:57:17.211557 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b886101b-1b5c-4e47-9d3d-506b48a1f498" containerName="mariadb-client-2" Dec 06 08:57:17 crc kubenswrapper[4895]: I1206 08:57:17.212852 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lmxp7" Dec 06 08:57:17 crc kubenswrapper[4895]: I1206 08:57:17.236857 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lmxp7"] Dec 06 08:57:17 crc kubenswrapper[4895]: I1206 08:57:17.361973 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6txg\" (UniqueName: \"kubernetes.io/projected/b5cbc9b8-930c-49b2-adc3-9f23e188daaa-kube-api-access-g6txg\") pod \"community-operators-lmxp7\" (UID: \"b5cbc9b8-930c-49b2-adc3-9f23e188daaa\") " pod="openshift-marketplace/community-operators-lmxp7" Dec 06 08:57:17 crc kubenswrapper[4895]: I1206 08:57:17.362101 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5cbc9b8-930c-49b2-adc3-9f23e188daaa-catalog-content\") pod \"community-operators-lmxp7\" (UID: \"b5cbc9b8-930c-49b2-adc3-9f23e188daaa\") " pod="openshift-marketplace/community-operators-lmxp7" Dec 06 08:57:17 crc kubenswrapper[4895]: I1206 08:57:17.362148 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5cbc9b8-930c-49b2-adc3-9f23e188daaa-utilities\") pod \"community-operators-lmxp7\" (UID: \"b5cbc9b8-930c-49b2-adc3-9f23e188daaa\") " pod="openshift-marketplace/community-operators-lmxp7" Dec 06 08:57:17 crc kubenswrapper[4895]: I1206 08:57:17.463491 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5cbc9b8-930c-49b2-adc3-9f23e188daaa-catalog-content\") pod \"community-operators-lmxp7\" (UID: \"b5cbc9b8-930c-49b2-adc3-9f23e188daaa\") " pod="openshift-marketplace/community-operators-lmxp7" Dec 06 08:57:17 crc kubenswrapper[4895]: I1206 08:57:17.463584 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5cbc9b8-930c-49b2-adc3-9f23e188daaa-utilities\") pod \"community-operators-lmxp7\" (UID: \"b5cbc9b8-930c-49b2-adc3-9f23e188daaa\") " pod="openshift-marketplace/community-operators-lmxp7" Dec 06 08:57:17 crc kubenswrapper[4895]: I1206 08:57:17.463650 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6txg\" (UniqueName: \"kubernetes.io/projected/b5cbc9b8-930c-49b2-adc3-9f23e188daaa-kube-api-access-g6txg\") pod \"community-operators-lmxp7\" (UID: \"b5cbc9b8-930c-49b2-adc3-9f23e188daaa\") " pod="openshift-marketplace/community-operators-lmxp7" Dec 06 08:57:17 crc kubenswrapper[4895]: I1206 08:57:17.464040 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5cbc9b8-930c-49b2-adc3-9f23e188daaa-catalog-content\") pod \"community-operators-lmxp7\" (UID: \"b5cbc9b8-930c-49b2-adc3-9f23e188daaa\") " pod="openshift-marketplace/community-operators-lmxp7" Dec 06 08:57:17 crc kubenswrapper[4895]: I1206 08:57:17.464149 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5cbc9b8-930c-49b2-adc3-9f23e188daaa-utilities\") pod \"community-operators-lmxp7\" (UID: \"b5cbc9b8-930c-49b2-adc3-9f23e188daaa\") " pod="openshift-marketplace/community-operators-lmxp7" Dec 06 08:57:17 crc kubenswrapper[4895]: I1206 08:57:17.492682 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g6txg\" (UniqueName: \"kubernetes.io/projected/b5cbc9b8-930c-49b2-adc3-9f23e188daaa-kube-api-access-g6txg\") pod \"community-operators-lmxp7\" (UID: \"b5cbc9b8-930c-49b2-adc3-9f23e188daaa\") " pod="openshift-marketplace/community-operators-lmxp7" Dec 06 08:57:17 crc kubenswrapper[4895]: I1206 08:57:17.549606 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lmxp7" Dec 06 08:57:18 crc kubenswrapper[4895]: I1206 08:57:18.012857 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lmxp7"] Dec 06 08:57:18 crc kubenswrapper[4895]: I1206 08:57:18.696426 4895 generic.go:334] "Generic (PLEG): container finished" podID="b5cbc9b8-930c-49b2-adc3-9f23e188daaa" containerID="545e85d08ce6f4e985426e5cb8db40cc9a9fde313f4099368403ed157c482443" exitCode=0 Dec 06 08:57:18 crc kubenswrapper[4895]: I1206 08:57:18.696674 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmxp7" event={"ID":"b5cbc9b8-930c-49b2-adc3-9f23e188daaa","Type":"ContainerDied","Data":"545e85d08ce6f4e985426e5cb8db40cc9a9fde313f4099368403ed157c482443"} Dec 06 08:57:18 crc kubenswrapper[4895]: I1206 08:57:18.696739 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmxp7" event={"ID":"b5cbc9b8-930c-49b2-adc3-9f23e188daaa","Type":"ContainerStarted","Data":"9d1c47c10b5dc1ac5ccaab717dc09f3e7bea48105119db56f9d8b46b253c9296"} Dec 06 08:57:18 crc kubenswrapper[4895]: I1206 08:57:18.698790 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 08:57:19 crc kubenswrapper[4895]: I1206 08:57:19.709325 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmxp7" event={"ID":"b5cbc9b8-930c-49b2-adc3-9f23e188daaa","Type":"ContainerStarted","Data":"ccee4a9811a57baa0e81f0613e212ec1490772eae2743bb3e264bb46dd48a589"} Dec 06 08:57:20 crc kubenswrapper[4895]: I1206 08:57:20.720873 4895 generic.go:334] "Generic (PLEG): container finished" podID="b5cbc9b8-930c-49b2-adc3-9f23e188daaa" containerID="ccee4a9811a57baa0e81f0613e212ec1490772eae2743bb3e264bb46dd48a589" exitCode=0 Dec 06 08:57:20 crc kubenswrapper[4895]: I1206 08:57:20.720925 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmxp7" event={"ID":"b5cbc9b8-930c-49b2-adc3-9f23e188daaa","Type":"ContainerDied","Data":"ccee4a9811a57baa0e81f0613e212ec1490772eae2743bb3e264bb46dd48a589"} Dec 06 08:57:21 crc kubenswrapper[4895]: I1206 08:57:21.736881 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmxp7" event={"ID":"b5cbc9b8-930c-49b2-adc3-9f23e188daaa","Type":"ContainerStarted","Data":"ebcf4aa221b17084ceaa19a784011d0d81d4e2d408ffa1c10a3830b54c329903"} Dec 06 08:57:21 crc kubenswrapper[4895]: I1206 08:57:21.753037 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lmxp7" podStartSLOduration=2.34557801 podStartE2EDuration="4.753002224s" podCreationTimestamp="2025-12-06 08:57:17 +0000 UTC" firstStartedPulling="2025-12-06 08:57:18.698324653 +0000 UTC m=+7201.099713543" lastFinishedPulling="2025-12-06 08:57:21.105748887 +0000 UTC m=+7203.507137757" observedRunningTime="2025-12-06 08:57:21.751919704 +0000 UTC m=+7204.153308584" watchObservedRunningTime="2025-12-06 
Dec 06 08:57:21 crc kubenswrapper[4895]: I1206 08:57:21.753037 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lmxp7" podStartSLOduration=2.34557801 podStartE2EDuration="4.753002224s" podCreationTimestamp="2025-12-06 08:57:17 +0000 UTC" firstStartedPulling="2025-12-06 08:57:18.698324653 +0000 UTC m=+7201.099713543" lastFinishedPulling="2025-12-06 08:57:21.105748887 +0000 UTC m=+7203.507137757" observedRunningTime="2025-12-06 08:57:21.751919704 +0000 UTC m=+7204.153308584" watchObservedRunningTime="2025-12-06 08:57:21.753002224 +0000 UTC m=+7204.154391104" Dec 06 08:57:27 crc kubenswrapper[4895]: I1206 08:57:27.550916 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lmxp7" Dec 06 08:57:27 crc kubenswrapper[4895]: I1206 08:57:27.551760 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lmxp7" Dec 06 08:57:27 crc kubenswrapper[4895]: I1206 08:57:27.613821 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lmxp7" Dec 06 08:57:27 crc kubenswrapper[4895]: I1206 08:57:27.858857 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lmxp7" Dec 06 08:57:27 crc kubenswrapper[4895]: I1206 08:57:27.909339 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lmxp7"] Dec 06 08:57:29 crc kubenswrapper[4895]: I1206 08:57:29.696410 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:57:29 crc kubenswrapper[4895]: I1206 08:57:29.696505 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:57:29 crc kubenswrapper[4895]: I1206 08:57:29.812462 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lmxp7" podUID="b5cbc9b8-930c-49b2-adc3-9f23e188daaa" containerName="registry-server" containerID="cri-o://ebcf4aa221b17084ceaa19a784011d0d81d4e2d408ffa1c10a3830b54c329903" gracePeriod=2
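
The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (08:57:21.753 minus 08:57:17 is 4.753s), and podStartSLOduration is that figure minus the image-pull window (firstStartedPulling to lastFinishedPulling, about 2.407s). A stdlib-only check of the arithmetic, using the timestamps as logged:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry above.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-12-06 08:57:17 +0000 UTC")
	running := parse("2025-12-06 08:57:21.753002224 +0000 UTC")
	pullStart := parse("2025-12-06 08:57:18.698324653 +0000 UTC")
	pullEnd := parse("2025-12-06 08:57:21.105748887 +0000 UTC")

	e2e := running.Sub(created)         // podStartE2EDuration: 4.753002224s
	slo := e2e - pullEnd.Sub(pullStart) // ≈ logged podStartSLOduration=2.34557801
	// (creation time is logged at whole-second precision, hence the
	// few-nanosecond difference from the logged SLO value)
	fmt.Println("e2e:", e2e, "slo:", slo)
}
```
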
Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.310006 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lmxp7" Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.476396 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5cbc9b8-930c-49b2-adc3-9f23e188daaa-utilities\") pod \"b5cbc9b8-930c-49b2-adc3-9f23e188daaa\" (UID: \"b5cbc9b8-930c-49b2-adc3-9f23e188daaa\") " Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.476460 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5cbc9b8-930c-49b2-adc3-9f23e188daaa-catalog-content\") pod \"b5cbc9b8-930c-49b2-adc3-9f23e188daaa\" (UID: \"b5cbc9b8-930c-49b2-adc3-9f23e188daaa\") " Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.476590 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6txg\" (UniqueName: \"kubernetes.io/projected/b5cbc9b8-930c-49b2-adc3-9f23e188daaa-kube-api-access-g6txg\") pod \"b5cbc9b8-930c-49b2-adc3-9f23e188daaa\" (UID: \"b5cbc9b8-930c-49b2-adc3-9f23e188daaa\") " Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.478015 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5cbc9b8-930c-49b2-adc3-9f23e188daaa-utilities" (OuterVolumeSpecName: "utilities") pod "b5cbc9b8-930c-49b2-adc3-9f23e188daaa" (UID: "b5cbc9b8-930c-49b2-adc3-9f23e188daaa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.484984 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5cbc9b8-930c-49b2-adc3-9f23e188daaa-kube-api-access-g6txg" (OuterVolumeSpecName: "kube-api-access-g6txg") pod "b5cbc9b8-930c-49b2-adc3-9f23e188daaa" (UID: "b5cbc9b8-930c-49b2-adc3-9f23e188daaa"). InnerVolumeSpecName "kube-api-access-g6txg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.552567 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5cbc9b8-930c-49b2-adc3-9f23e188daaa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5cbc9b8-930c-49b2-adc3-9f23e188daaa" (UID: "b5cbc9b8-930c-49b2-adc3-9f23e188daaa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.579063 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6txg\" (UniqueName: \"kubernetes.io/projected/b5cbc9b8-930c-49b2-adc3-9f23e188daaa-kube-api-access-g6txg\") on node \"crc\" DevicePath \"\"" Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.579113 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5cbc9b8-930c-49b2-adc3-9f23e188daaa-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.579131 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5cbc9b8-930c-49b2-adc3-9f23e188daaa-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.822344 4895 generic.go:334] "Generic (PLEG): container finished" podID="b5cbc9b8-930c-49b2-adc3-9f23e188daaa" containerID="ebcf4aa221b17084ceaa19a784011d0d81d4e2d408ffa1c10a3830b54c329903" exitCode=0 Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.822428 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmxp7" event={"ID":"b5cbc9b8-930c-49b2-adc3-9f23e188daaa","Type":"ContainerDied","Data":"ebcf4aa221b17084ceaa19a784011d0d81d4e2d408ffa1c10a3830b54c329903"} Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.826800 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmxp7" event={"ID":"b5cbc9b8-930c-49b2-adc3-9f23e188daaa","Type":"ContainerDied","Data":"9d1c47c10b5dc1ac5ccaab717dc09f3e7bea48105119db56f9d8b46b253c9296"} Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.822558 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lmxp7" Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.826839 4895 scope.go:117] "RemoveContainer" containerID="ebcf4aa221b17084ceaa19a784011d0d81d4e2d408ffa1c10a3830b54c329903" Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.867725 4895 scope.go:117] "RemoveContainer" containerID="ccee4a9811a57baa0e81f0613e212ec1490772eae2743bb3e264bb46dd48a589" Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.872252 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lmxp7"] Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.880987 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lmxp7"] Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.887449 4895 scope.go:117] "RemoveContainer" containerID="545e85d08ce6f4e985426e5cb8db40cc9a9fde313f4099368403ed157c482443" Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.940913 4895 scope.go:117] "RemoveContainer" containerID="ebcf4aa221b17084ceaa19a784011d0d81d4e2d408ffa1c10a3830b54c329903" Dec 06 08:57:30 crc kubenswrapper[4895]: E1206 08:57:30.941420 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebcf4aa221b17084ceaa19a784011d0d81d4e2d408ffa1c10a3830b54c329903\": container with ID starting with ebcf4aa221b17084ceaa19a784011d0d81d4e2d408ffa1c10a3830b54c329903 not found: ID does not exist" containerID="ebcf4aa221b17084ceaa19a784011d0d81d4e2d408ffa1c10a3830b54c329903" Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.941539 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebcf4aa221b17084ceaa19a784011d0d81d4e2d408ffa1c10a3830b54c329903"} err="failed to get container status \"ebcf4aa221b17084ceaa19a784011d0d81d4e2d408ffa1c10a3830b54c329903\": rpc error: code = NotFound desc = could not find container \"ebcf4aa221b17084ceaa19a784011d0d81d4e2d408ffa1c10a3830b54c329903\": container with ID starting with ebcf4aa221b17084ceaa19a784011d0d81d4e2d408ffa1c10a3830b54c329903 not found: ID does not exist" Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.941602 4895 scope.go:117] "RemoveContainer" containerID="ccee4a9811a57baa0e81f0613e212ec1490772eae2743bb3e264bb46dd48a589" Dec 06 08:57:30 crc kubenswrapper[4895]: E1206 08:57:30.941908 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccee4a9811a57baa0e81f0613e212ec1490772eae2743bb3e264bb46dd48a589\": container with ID starting with ccee4a9811a57baa0e81f0613e212ec1490772eae2743bb3e264bb46dd48a589 not found: ID does not exist" containerID="ccee4a9811a57baa0e81f0613e212ec1490772eae2743bb3e264bb46dd48a589" Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.941949 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccee4a9811a57baa0e81f0613e212ec1490772eae2743bb3e264bb46dd48a589"} err="failed to get container status \"ccee4a9811a57baa0e81f0613e212ec1490772eae2743bb3e264bb46dd48a589\": rpc error: code = NotFound desc = could not find container \"ccee4a9811a57baa0e81f0613e212ec1490772eae2743bb3e264bb46dd48a589\": container with ID starting with ccee4a9811a57baa0e81f0613e212ec1490772eae2743bb3e264bb46dd48a589 not found: ID does not exist"
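
The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" entries above are the kubelet asking CRI-O about containers it has already removed; the NotFound is logged and cleanup simply continues. A sketch of that idempotent-delete pattern over a gRPC-style error (deleteFn is a hypothetical stand-in for the real CRI RemoveContainer call):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer treats a gRPC NotFound from the runtime as success:
// the container is already gone, which is exactly the situation the
// "rpc error: code = NotFound" entries above record.
func removeContainer(id string, deleteFn func(string) error) error {
	err := deleteFn(id)
	if status.Code(err) == codes.NotFound {
		fmt.Printf("container %q already removed, nothing to do\n", id)
		return nil
	}
	return err // nil on success, or a real failure to surface
}

func main() {
	gone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	if err := removeContainer("ebcf4aa221b1", gone); err != nil {
		fmt.Println("unexpected:", err)
	}
}
```
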
containerID="545e85d08ce6f4e985426e5cb8db40cc9a9fde313f4099368403ed157c482443" Dec 06 08:57:30 crc kubenswrapper[4895]: E1206 08:57:30.942427 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"545e85d08ce6f4e985426e5cb8db40cc9a9fde313f4099368403ed157c482443\": container with ID starting with 545e85d08ce6f4e985426e5cb8db40cc9a9fde313f4099368403ed157c482443 not found: ID does not exist" containerID="545e85d08ce6f4e985426e5cb8db40cc9a9fde313f4099368403ed157c482443" Dec 06 08:57:30 crc kubenswrapper[4895]: I1206 08:57:30.942485 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"545e85d08ce6f4e985426e5cb8db40cc9a9fde313f4099368403ed157c482443"} err="failed to get container status \"545e85d08ce6f4e985426e5cb8db40cc9a9fde313f4099368403ed157c482443\": rpc error: code = NotFound desc = could not find container \"545e85d08ce6f4e985426e5cb8db40cc9a9fde313f4099368403ed157c482443\": container with ID starting with 545e85d08ce6f4e985426e5cb8db40cc9a9fde313f4099368403ed157c482443 not found: ID does not exist" Dec 06 08:57:32 crc kubenswrapper[4895]: I1206 08:57:32.061364 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5cbc9b8-930c-49b2-adc3-9f23e188daaa" path="/var/lib/kubelet/pods/b5cbc9b8-930c-49b2-adc3-9f23e188daaa/volumes" Dec 06 08:57:39 crc kubenswrapper[4895]: I1206 08:57:39.227122 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2srhx"] Dec 06 08:57:39 crc kubenswrapper[4895]: E1206 08:57:39.230134 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5cbc9b8-930c-49b2-adc3-9f23e188daaa" containerName="extract-content" Dec 06 08:57:39 crc kubenswrapper[4895]: I1206 08:57:39.230156 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5cbc9b8-930c-49b2-adc3-9f23e188daaa" containerName="extract-content" Dec 06 08:57:39 crc kubenswrapper[4895]: E1206 08:57:39.230190 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5cbc9b8-930c-49b2-adc3-9f23e188daaa" containerName="extract-utilities" Dec 06 08:57:39 crc kubenswrapper[4895]: I1206 08:57:39.230199 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5cbc9b8-930c-49b2-adc3-9f23e188daaa" containerName="extract-utilities" Dec 06 08:57:39 crc kubenswrapper[4895]: E1206 08:57:39.230220 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5cbc9b8-930c-49b2-adc3-9f23e188daaa" containerName="registry-server" Dec 06 08:57:39 crc kubenswrapper[4895]: I1206 08:57:39.230228 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5cbc9b8-930c-49b2-adc3-9f23e188daaa" containerName="registry-server" Dec 06 08:57:39 crc kubenswrapper[4895]: I1206 08:57:39.230448 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5cbc9b8-930c-49b2-adc3-9f23e188daaa" containerName="registry-server" Dec 06 08:57:39 crc kubenswrapper[4895]: I1206 08:57:39.231894 4895 util.go:30] "No sandbox for pod can be found. 
Dec 06 08:57:39 crc kubenswrapper[4895]: I1206 08:57:39.231894 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2srhx" Dec 06 08:57:39 crc kubenswrapper[4895]: I1206 08:57:39.247009 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2srhx"] Dec 06 08:57:39 crc kubenswrapper[4895]: I1206 08:57:39.413526 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/763d50d1-394b-4e2c-8015-d3edfb8c6e96-catalog-content\") pod \"redhat-operators-2srhx\" (UID: \"763d50d1-394b-4e2c-8015-d3edfb8c6e96\") " pod="openshift-marketplace/redhat-operators-2srhx" Dec 06 08:57:39 crc kubenswrapper[4895]: I1206 08:57:39.413903 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/763d50d1-394b-4e2c-8015-d3edfb8c6e96-utilities\") pod \"redhat-operators-2srhx\" (UID: \"763d50d1-394b-4e2c-8015-d3edfb8c6e96\") " pod="openshift-marketplace/redhat-operators-2srhx" Dec 06 08:57:39 crc kubenswrapper[4895]: I1206 08:57:39.414293 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th96k\" (UniqueName: \"kubernetes.io/projected/763d50d1-394b-4e2c-8015-d3edfb8c6e96-kube-api-access-th96k\") pod \"redhat-operators-2srhx\" (UID: \"763d50d1-394b-4e2c-8015-d3edfb8c6e96\") " pod="openshift-marketplace/redhat-operators-2srhx" Dec 06 08:57:39 crc kubenswrapper[4895]: I1206 08:57:39.515628 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/763d50d1-394b-4e2c-8015-d3edfb8c6e96-catalog-content\") pod \"redhat-operators-2srhx\" (UID: \"763d50d1-394b-4e2c-8015-d3edfb8c6e96\") " pod="openshift-marketplace/redhat-operators-2srhx" Dec 06 08:57:39 crc kubenswrapper[4895]: I1206 08:57:39.515686 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/763d50d1-394b-4e2c-8015-d3edfb8c6e96-utilities\") pod \"redhat-operators-2srhx\" (UID: \"763d50d1-394b-4e2c-8015-d3edfb8c6e96\") " pod="openshift-marketplace/redhat-operators-2srhx" Dec 06 08:57:39 crc kubenswrapper[4895]: I1206 08:57:39.515757 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th96k\" (UniqueName: \"kubernetes.io/projected/763d50d1-394b-4e2c-8015-d3edfb8c6e96-kube-api-access-th96k\") pod \"redhat-operators-2srhx\" (UID: \"763d50d1-394b-4e2c-8015-d3edfb8c6e96\") " pod="openshift-marketplace/redhat-operators-2srhx" Dec 06 08:57:39 crc kubenswrapper[4895]: I1206 08:57:39.516179 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/763d50d1-394b-4e2c-8015-d3edfb8c6e96-catalog-content\") pod \"redhat-operators-2srhx\" (UID: \"763d50d1-394b-4e2c-8015-d3edfb8c6e96\") " pod="openshift-marketplace/redhat-operators-2srhx" Dec 06 08:57:39 crc kubenswrapper[4895]: I1206 08:57:39.516234 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/763d50d1-394b-4e2c-8015-d3edfb8c6e96-utilities\") pod \"redhat-operators-2srhx\" (UID: \"763d50d1-394b-4e2c-8015-d3edfb8c6e96\") " pod="openshift-marketplace/redhat-operators-2srhx" Dec 06 08:57:39 crc kubenswrapper[4895]: I1206 08:57:39.563869 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-th96k\" (UniqueName: \"kubernetes.io/projected/763d50d1-394b-4e2c-8015-d3edfb8c6e96-kube-api-access-th96k\") pod \"redhat-operators-2srhx\" (UID: \"763d50d1-394b-4e2c-8015-d3edfb8c6e96\") " pod="openshift-marketplace/redhat-operators-2srhx" Dec 06 08:57:39 crc kubenswrapper[4895]: I1206 08:57:39.855530 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2srhx" Dec 06 08:57:40 crc kubenswrapper[4895]: I1206 08:57:40.083637 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2srhx"] Dec 06 08:57:40 crc kubenswrapper[4895]: I1206 08:57:40.915157 4895 generic.go:334] "Generic (PLEG): container finished" podID="763d50d1-394b-4e2c-8015-d3edfb8c6e96" containerID="8db875b5971018daf9845ab2b533639e69faa3152e086985be893c5f50c44d27" exitCode=0 Dec 06 08:57:40 crc kubenswrapper[4895]: I1206 08:57:40.915239 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2srhx" event={"ID":"763d50d1-394b-4e2c-8015-d3edfb8c6e96","Type":"ContainerDied","Data":"8db875b5971018daf9845ab2b533639e69faa3152e086985be893c5f50c44d27"} Dec 06 08:57:40 crc kubenswrapper[4895]: I1206 08:57:40.915496 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2srhx" event={"ID":"763d50d1-394b-4e2c-8015-d3edfb8c6e96","Type":"ContainerStarted","Data":"b72b9978567c185d95d8041bc708ddd83dbf1a35c8b117184e5fed41ca27eb03"} Dec 06 08:57:41 crc kubenswrapper[4895]: I1206 08:57:41.928996 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2srhx" event={"ID":"763d50d1-394b-4e2c-8015-d3edfb8c6e96","Type":"ContainerStarted","Data":"bcdda53a7eca4ece6e3543e7d0f7d5207bae867f44dacc5c45852c87b77ea3bc"} Dec 06 08:57:42 crc kubenswrapper[4895]: I1206 08:57:42.944924 4895 generic.go:334] "Generic (PLEG): container finished" podID="763d50d1-394b-4e2c-8015-d3edfb8c6e96" containerID="bcdda53a7eca4ece6e3543e7d0f7d5207bae867f44dacc5c45852c87b77ea3bc" exitCode=0 Dec 06 08:57:42 crc kubenswrapper[4895]: I1206 08:57:42.944989 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2srhx" event={"ID":"763d50d1-394b-4e2c-8015-d3edfb8c6e96","Type":"ContainerDied","Data":"bcdda53a7eca4ece6e3543e7d0f7d5207bae867f44dacc5c45852c87b77ea3bc"} Dec 06 08:57:43 crc kubenswrapper[4895]: I1206 08:57:43.960624 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2srhx" event={"ID":"763d50d1-394b-4e2c-8015-d3edfb8c6e96","Type":"ContainerStarted","Data":"7ca47d46dbe4c29b8fb7df307baab144c7f965d5fe5dd043866fcedf0e4a52fe"} Dec 06 08:57:44 crc kubenswrapper[4895]: I1206 08:57:44.001575 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2srhx" podStartSLOduration=2.548783303 podStartE2EDuration="5.001544594s" podCreationTimestamp="2025-12-06 08:57:39 +0000 UTC" firstStartedPulling="2025-12-06 08:57:40.917208848 +0000 UTC m=+7223.318597718" lastFinishedPulling="2025-12-06 08:57:43.369970149 +0000 UTC m=+7225.771359009" observedRunningTime="2025-12-06 08:57:43.989564792 +0000 UTC m=+7226.390953702" watchObservedRunningTime="2025-12-06 08:57:44.001544594 +0000 UTC m=+7226.402933534" Dec 06 08:57:49 crc kubenswrapper[4895]: I1206 08:57:49.855801 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2srhx" 
Dec 06 08:57:49 crc kubenswrapper[4895]: I1206 08:57:49.856620 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2srhx" Dec 06 08:57:49 crc kubenswrapper[4895]: I1206 08:57:49.913117 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2srhx" Dec 06 08:57:50 crc kubenswrapper[4895]: I1206 08:57:50.060719 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2srhx" Dec 06 08:57:50 crc kubenswrapper[4895]: I1206 08:57:50.143127 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2srhx"] Dec 06 08:57:52 crc kubenswrapper[4895]: I1206 08:57:52.022333 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2srhx" podUID="763d50d1-394b-4e2c-8015-d3edfb8c6e96" containerName="registry-server" containerID="cri-o://7ca47d46dbe4c29b8fb7df307baab144c7f965d5fe5dd043866fcedf0e4a52fe" gracePeriod=2 Dec 06 08:57:54 crc kubenswrapper[4895]: I1206 08:57:54.038970 4895 generic.go:334] "Generic (PLEG): container finished" podID="763d50d1-394b-4e2c-8015-d3edfb8c6e96" containerID="7ca47d46dbe4c29b8fb7df307baab144c7f965d5fe5dd043866fcedf0e4a52fe" exitCode=0 Dec 06 08:57:54 crc kubenswrapper[4895]: I1206 08:57:54.039050 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2srhx" event={"ID":"763d50d1-394b-4e2c-8015-d3edfb8c6e96","Type":"ContainerDied","Data":"7ca47d46dbe4c29b8fb7df307baab144c7f965d5fe5dd043866fcedf0e4a52fe"} Dec 06 08:57:55 crc kubenswrapper[4895]: I1206 08:57:55.051293 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2srhx" event={"ID":"763d50d1-394b-4e2c-8015-d3edfb8c6e96","Type":"ContainerDied","Data":"b72b9978567c185d95d8041bc708ddd83dbf1a35c8b117184e5fed41ca27eb03"} Dec 06 08:57:55 crc kubenswrapper[4895]: I1206 08:57:55.051547 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b72b9978567c185d95d8041bc708ddd83dbf1a35c8b117184e5fed41ca27eb03" Dec 06 08:57:55 crc kubenswrapper[4895]: I1206 08:57:55.102911 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2srhx" Dec 06 08:57:55 crc kubenswrapper[4895]: I1206 08:57:55.269631 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/763d50d1-394b-4e2c-8015-d3edfb8c6e96-catalog-content\") pod \"763d50d1-394b-4e2c-8015-d3edfb8c6e96\" (UID: \"763d50d1-394b-4e2c-8015-d3edfb8c6e96\") " Dec 06 08:57:55 crc kubenswrapper[4895]: I1206 08:57:55.269846 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th96k\" (UniqueName: \"kubernetes.io/projected/763d50d1-394b-4e2c-8015-d3edfb8c6e96-kube-api-access-th96k\") pod \"763d50d1-394b-4e2c-8015-d3edfb8c6e96\" (UID: \"763d50d1-394b-4e2c-8015-d3edfb8c6e96\") " Dec 06 08:57:55 crc kubenswrapper[4895]: I1206 08:57:55.269946 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/763d50d1-394b-4e2c-8015-d3edfb8c6e96-utilities\") pod \"763d50d1-394b-4e2c-8015-d3edfb8c6e96\" (UID: \"763d50d1-394b-4e2c-8015-d3edfb8c6e96\") " Dec 06 08:57:55 crc kubenswrapper[4895]: I1206 08:57:55.272061 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/763d50d1-394b-4e2c-8015-d3edfb8c6e96-utilities" (OuterVolumeSpecName: "utilities") pod "763d50d1-394b-4e2c-8015-d3edfb8c6e96" (UID: "763d50d1-394b-4e2c-8015-d3edfb8c6e96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:57:55 crc kubenswrapper[4895]: I1206 08:57:55.278707 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/763d50d1-394b-4e2c-8015-d3edfb8c6e96-kube-api-access-th96k" (OuterVolumeSpecName: "kube-api-access-th96k") pod "763d50d1-394b-4e2c-8015-d3edfb8c6e96" (UID: "763d50d1-394b-4e2c-8015-d3edfb8c6e96"). InnerVolumeSpecName "kube-api-access-th96k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:57:55 crc kubenswrapper[4895]: I1206 08:57:55.371572 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th96k\" (UniqueName: \"kubernetes.io/projected/763d50d1-394b-4e2c-8015-d3edfb8c6e96-kube-api-access-th96k\") on node \"crc\" DevicePath \"\"" Dec 06 08:57:55 crc kubenswrapper[4895]: I1206 08:57:55.371615 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/763d50d1-394b-4e2c-8015-d3edfb8c6e96-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:57:55 crc kubenswrapper[4895]: I1206 08:57:55.404516 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/763d50d1-394b-4e2c-8015-d3edfb8c6e96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "763d50d1-394b-4e2c-8015-d3edfb8c6e96" (UID: "763d50d1-394b-4e2c-8015-d3edfb8c6e96"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:57:55 crc kubenswrapper[4895]: I1206 08:57:55.473335 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/763d50d1-394b-4e2c-8015-d3edfb8c6e96-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:57:56 crc kubenswrapper[4895]: I1206 08:57:56.058716 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2srhx" Dec 06 08:57:56 crc kubenswrapper[4895]: I1206 08:57:56.093515 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2srhx"] Dec 06 08:57:56 crc kubenswrapper[4895]: I1206 08:57:56.099114 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2srhx"] Dec 06 08:57:56 crc kubenswrapper[4895]: I1206 08:57:56.908396 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sxjq7"] Dec 06 08:57:56 crc kubenswrapper[4895]: E1206 08:57:56.909143 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="763d50d1-394b-4e2c-8015-d3edfb8c6e96" containerName="extract-content" Dec 06 08:57:56 crc kubenswrapper[4895]: I1206 08:57:56.909164 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="763d50d1-394b-4e2c-8015-d3edfb8c6e96" containerName="extract-content" Dec 06 08:57:56 crc kubenswrapper[4895]: E1206 08:57:56.909180 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="763d50d1-394b-4e2c-8015-d3edfb8c6e96" containerName="extract-utilities" Dec 06 08:57:56 crc kubenswrapper[4895]: I1206 08:57:56.909187 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="763d50d1-394b-4e2c-8015-d3edfb8c6e96" containerName="extract-utilities" Dec 06 08:57:56 crc kubenswrapper[4895]: E1206 08:57:56.909210 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="763d50d1-394b-4e2c-8015-d3edfb8c6e96" containerName="registry-server" Dec 06 08:57:56 crc kubenswrapper[4895]: I1206 08:57:56.909218 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="763d50d1-394b-4e2c-8015-d3edfb8c6e96" containerName="registry-server" Dec 06 08:57:56 crc kubenswrapper[4895]: I1206 08:57:56.909410 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="763d50d1-394b-4e2c-8015-d3edfb8c6e96" containerName="registry-server" Dec 06 08:57:56 crc kubenswrapper[4895]: I1206 08:57:56.910765 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxjq7"
Dec 06 08:57:56 crc kubenswrapper[4895]: I1206 08:57:56.927552 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxjq7"]
Dec 06 08:57:57 crc kubenswrapper[4895]: I1206 08:57:57.096736 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598ac0b4-5fa9-4e97-9939-a721ba0e6e0f-catalog-content\") pod \"redhat-marketplace-sxjq7\" (UID: \"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f\") " pod="openshift-marketplace/redhat-marketplace-sxjq7"
Dec 06 08:57:57 crc kubenswrapper[4895]: I1206 08:57:57.096783 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598ac0b4-5fa9-4e97-9939-a721ba0e6e0f-utilities\") pod \"redhat-marketplace-sxjq7\" (UID: \"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f\") " pod="openshift-marketplace/redhat-marketplace-sxjq7"
Dec 06 08:57:57 crc kubenswrapper[4895]: I1206 08:57:57.096849 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fp5c\" (UniqueName: \"kubernetes.io/projected/598ac0b4-5fa9-4e97-9939-a721ba0e6e0f-kube-api-access-7fp5c\") pod \"redhat-marketplace-sxjq7\" (UID: \"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f\") " pod="openshift-marketplace/redhat-marketplace-sxjq7"
Dec 06 08:57:57 crc kubenswrapper[4895]: I1206 08:57:57.198489 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fp5c\" (UniqueName: \"kubernetes.io/projected/598ac0b4-5fa9-4e97-9939-a721ba0e6e0f-kube-api-access-7fp5c\") pod \"redhat-marketplace-sxjq7\" (UID: \"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f\") " pod="openshift-marketplace/redhat-marketplace-sxjq7"
Dec 06 08:57:57 crc kubenswrapper[4895]: I1206 08:57:57.198664 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598ac0b4-5fa9-4e97-9939-a721ba0e6e0f-catalog-content\") pod \"redhat-marketplace-sxjq7\" (UID: \"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f\") " pod="openshift-marketplace/redhat-marketplace-sxjq7"
Dec 06 08:57:57 crc kubenswrapper[4895]: I1206 08:57:57.198685 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598ac0b4-5fa9-4e97-9939-a721ba0e6e0f-utilities\") pod \"redhat-marketplace-sxjq7\" (UID: \"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f\") " pod="openshift-marketplace/redhat-marketplace-sxjq7"
Dec 06 08:57:57 crc kubenswrapper[4895]: I1206 08:57:57.199162 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598ac0b4-5fa9-4e97-9939-a721ba0e6e0f-catalog-content\") pod \"redhat-marketplace-sxjq7\" (UID: \"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f\") " pod="openshift-marketplace/redhat-marketplace-sxjq7"
Dec 06 08:57:57 crc kubenswrapper[4895]: I1206 08:57:57.199214 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598ac0b4-5fa9-4e97-9939-a721ba0e6e0f-utilities\") pod \"redhat-marketplace-sxjq7\" (UID: \"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f\") " pod="openshift-marketplace/redhat-marketplace-sxjq7"
Dec 06 08:57:57 crc kubenswrapper[4895]: I1206 08:57:57.223345 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fp5c\" (UniqueName: \"kubernetes.io/projected/598ac0b4-5fa9-4e97-9939-a721ba0e6e0f-kube-api-access-7fp5c\") pod \"redhat-marketplace-sxjq7\" (UID: \"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f\") " pod="openshift-marketplace/redhat-marketplace-sxjq7"
Dec 06 08:57:57 crc kubenswrapper[4895]: I1206 08:57:57.229293 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxjq7"
Dec 06 08:57:57 crc kubenswrapper[4895]: I1206 08:57:57.712204 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxjq7"]
Dec 06 08:57:58 crc kubenswrapper[4895]: I1206 08:57:58.059635 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="763d50d1-394b-4e2c-8015-d3edfb8c6e96" path="/var/lib/kubelet/pods/763d50d1-394b-4e2c-8015-d3edfb8c6e96/volumes"
Dec 06 08:57:58 crc kubenswrapper[4895]: I1206 08:57:58.072422 4895 generic.go:334] "Generic (PLEG): container finished" podID="598ac0b4-5fa9-4e97-9939-a721ba0e6e0f" containerID="14096f1d9261152971927b259c23cb991140f5bc6be3c4eb5214444d45da3cf6" exitCode=0
Dec 06 08:57:58 crc kubenswrapper[4895]: I1206 08:57:58.072467 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxjq7" event={"ID":"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f","Type":"ContainerDied","Data":"14096f1d9261152971927b259c23cb991140f5bc6be3c4eb5214444d45da3cf6"}
Dec 06 08:57:58 crc kubenswrapper[4895]: I1206 08:57:58.072515 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxjq7" event={"ID":"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f","Type":"ContainerStarted","Data":"4d4a1fbdb19ed4a03692db2eae195eacb468f78b91ee5eb831411b7349cf633b"}
Dec 06 08:57:59 crc kubenswrapper[4895]: I1206 08:57:59.081989 4895 generic.go:334] "Generic (PLEG): container finished" podID="598ac0b4-5fa9-4e97-9939-a721ba0e6e0f" containerID="c0bb20cd31ccefca50eb0a980954cfe0543197e42d1b61a59e45d66a98eb2e3b" exitCode=0
Dec 06 08:57:59 crc kubenswrapper[4895]: I1206 08:57:59.082108 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxjq7" event={"ID":"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f","Type":"ContainerDied","Data":"c0bb20cd31ccefca50eb0a980954cfe0543197e42d1b61a59e45d66a98eb2e3b"}
Dec 06 08:57:59 crc kubenswrapper[4895]: I1206 08:57:59.700097 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 08:57:59 crc kubenswrapper[4895]: I1206 08:57:59.700507 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 08:58:00 crc kubenswrapper[4895]: I1206 08:58:00.091801 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxjq7" event={"ID":"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f","Type":"ContainerStarted","Data":"7893ab99197be086ee37b56198f78abee1415e16fb22a083a1f6a092b68200b8"}
Dec 06 08:58:00 crc kubenswrapper[4895]: I1206 08:58:00.111091 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sxjq7" podStartSLOduration=2.40540936 podStartE2EDuration="4.111072017s" podCreationTimestamp="2025-12-06 08:57:56 +0000 UTC" firstStartedPulling="2025-12-06 08:57:58.074104667 +0000 UTC m=+7240.475493537" lastFinishedPulling="2025-12-06 08:57:59.779767324 +0000 UTC m=+7242.181156194" observedRunningTime="2025-12-06 08:58:00.109090384 +0000 UTC m=+7242.510479274" watchObservedRunningTime="2025-12-06 08:58:00.111072017 +0000 UTC m=+7242.512460887"
Dec 06 08:58:07 crc kubenswrapper[4895]: I1206 08:58:07.229506 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sxjq7"
Dec 06 08:58:07 crc kubenswrapper[4895]: I1206 08:58:07.230006 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sxjq7"
Dec 06 08:58:07 crc kubenswrapper[4895]: I1206 08:58:07.270903 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sxjq7"
Dec 06 08:58:08 crc kubenswrapper[4895]: I1206 08:58:08.212865 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sxjq7"
Dec 06 08:58:08 crc kubenswrapper[4895]: I1206 08:58:08.262059 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxjq7"]
Dec 06 08:58:10 crc kubenswrapper[4895]: I1206 08:58:10.176325 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sxjq7" podUID="598ac0b4-5fa9-4e97-9939-a721ba0e6e0f" containerName="registry-server" containerID="cri-o://7893ab99197be086ee37b56198f78abee1415e16fb22a083a1f6a092b68200b8" gracePeriod=2
Dec 06 08:58:10 crc kubenswrapper[4895]: I1206 08:58:10.548625 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxjq7"
Dec 06 08:58:10 crc kubenswrapper[4895]: I1206 08:58:10.571510 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598ac0b4-5fa9-4e97-9939-a721ba0e6e0f-catalog-content\") pod \"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f\" (UID: \"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f\") "
Dec 06 08:58:10 crc kubenswrapper[4895]: I1206 08:58:10.571594 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598ac0b4-5fa9-4e97-9939-a721ba0e6e0f-utilities\") pod \"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f\" (UID: \"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f\") "
Dec 06 08:58:10 crc kubenswrapper[4895]: I1206 08:58:10.571629 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fp5c\" (UniqueName: \"kubernetes.io/projected/598ac0b4-5fa9-4e97-9939-a721ba0e6e0f-kube-api-access-7fp5c\") pod \"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f\" (UID: \"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f\") "
Dec 06 08:58:10 crc kubenswrapper[4895]: I1206 08:58:10.573085 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/598ac0b4-5fa9-4e97-9939-a721ba0e6e0f-utilities" (OuterVolumeSpecName: "utilities") pod "598ac0b4-5fa9-4e97-9939-a721ba0e6e0f" (UID: "598ac0b4-5fa9-4e97-9939-a721ba0e6e0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 08:58:10 crc kubenswrapper[4895]: I1206 08:58:10.577906 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/598ac0b4-5fa9-4e97-9939-a721ba0e6e0f-kube-api-access-7fp5c" (OuterVolumeSpecName: "kube-api-access-7fp5c") pod "598ac0b4-5fa9-4e97-9939-a721ba0e6e0f" (UID: "598ac0b4-5fa9-4e97-9939-a721ba0e6e0f"). InnerVolumeSpecName "kube-api-access-7fp5c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 08:58:10 crc kubenswrapper[4895]: I1206 08:58:10.594123 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/598ac0b4-5fa9-4e97-9939-a721ba0e6e0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "598ac0b4-5fa9-4e97-9939-a721ba0e6e0f" (UID: "598ac0b4-5fa9-4e97-9939-a721ba0e6e0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 08:58:10 crc kubenswrapper[4895]: I1206 08:58:10.673098 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fp5c\" (UniqueName: \"kubernetes.io/projected/598ac0b4-5fa9-4e97-9939-a721ba0e6e0f-kube-api-access-7fp5c\") on node \"crc\" DevicePath \"\""
Dec 06 08:58:10 crc kubenswrapper[4895]: I1206 08:58:10.673155 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598ac0b4-5fa9-4e97-9939-a721ba0e6e0f-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 08:58:10 crc kubenswrapper[4895]: I1206 08:58:10.673165 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598ac0b4-5fa9-4e97-9939-a721ba0e6e0f-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 08:58:11 crc kubenswrapper[4895]: I1206 08:58:11.197433 4895 generic.go:334] "Generic (PLEG): container finished" podID="598ac0b4-5fa9-4e97-9939-a721ba0e6e0f" containerID="7893ab99197be086ee37b56198f78abee1415e16fb22a083a1f6a092b68200b8" exitCode=0
Dec 06 08:58:11 crc kubenswrapper[4895]: I1206 08:58:11.197518 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxjq7"
Dec 06 08:58:11 crc kubenswrapper[4895]: I1206 08:58:11.197518 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxjq7" event={"ID":"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f","Type":"ContainerDied","Data":"7893ab99197be086ee37b56198f78abee1415e16fb22a083a1f6a092b68200b8"}
Dec 06 08:58:11 crc kubenswrapper[4895]: I1206 08:58:11.197571 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxjq7" event={"ID":"598ac0b4-5fa9-4e97-9939-a721ba0e6e0f","Type":"ContainerDied","Data":"4d4a1fbdb19ed4a03692db2eae195eacb468f78b91ee5eb831411b7349cf633b"}
Dec 06 08:58:11 crc kubenswrapper[4895]: I1206 08:58:11.197597 4895 scope.go:117] "RemoveContainer" containerID="7893ab99197be086ee37b56198f78abee1415e16fb22a083a1f6a092b68200b8"
Dec 06 08:58:11 crc kubenswrapper[4895]: I1206 08:58:11.233896 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxjq7"]
Dec 06 08:58:11 crc kubenswrapper[4895]: I1206 08:58:11.238107 4895 scope.go:117] "RemoveContainer" containerID="c0bb20cd31ccefca50eb0a980954cfe0543197e42d1b61a59e45d66a98eb2e3b"
Dec 06 08:58:11 crc kubenswrapper[4895]: I1206 08:58:11.242282 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxjq7"]
Dec 06 08:58:11 crc kubenswrapper[4895]: I1206 08:58:11.287616 4895 scope.go:117] "RemoveContainer" containerID="14096f1d9261152971927b259c23cb991140f5bc6be3c4eb5214444d45da3cf6"
Dec 06 08:58:11 crc kubenswrapper[4895]: I1206 08:58:11.324673 4895 scope.go:117] "RemoveContainer" containerID="7893ab99197be086ee37b56198f78abee1415e16fb22a083a1f6a092b68200b8"
Dec 06 08:58:11 crc kubenswrapper[4895]: E1206 08:58:11.334030 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7893ab99197be086ee37b56198f78abee1415e16fb22a083a1f6a092b68200b8\": container with ID starting with 7893ab99197be086ee37b56198f78abee1415e16fb22a083a1f6a092b68200b8 not found: ID does not exist" containerID="7893ab99197be086ee37b56198f78abee1415e16fb22a083a1f6a092b68200b8"
Dec 06 08:58:11 crc kubenswrapper[4895]: I1206 08:58:11.334067 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7893ab99197be086ee37b56198f78abee1415e16fb22a083a1f6a092b68200b8"} err="failed to get container status \"7893ab99197be086ee37b56198f78abee1415e16fb22a083a1f6a092b68200b8\": rpc error: code = NotFound desc = could not find container \"7893ab99197be086ee37b56198f78abee1415e16fb22a083a1f6a092b68200b8\": container with ID starting with 7893ab99197be086ee37b56198f78abee1415e16fb22a083a1f6a092b68200b8 not found: ID does not exist"
Dec 06 08:58:11 crc kubenswrapper[4895]: I1206 08:58:11.334099 4895 scope.go:117] "RemoveContainer" containerID="c0bb20cd31ccefca50eb0a980954cfe0543197e42d1b61a59e45d66a98eb2e3b"
Dec 06 08:58:11 crc kubenswrapper[4895]: E1206 08:58:11.339616 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0bb20cd31ccefca50eb0a980954cfe0543197e42d1b61a59e45d66a98eb2e3b\": container with ID starting with c0bb20cd31ccefca50eb0a980954cfe0543197e42d1b61a59e45d66a98eb2e3b not found: ID does not exist" containerID="c0bb20cd31ccefca50eb0a980954cfe0543197e42d1b61a59e45d66a98eb2e3b"
Dec 06 08:58:11 crc kubenswrapper[4895]: I1206 08:58:11.339662 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0bb20cd31ccefca50eb0a980954cfe0543197e42d1b61a59e45d66a98eb2e3b"} err="failed to get container status \"c0bb20cd31ccefca50eb0a980954cfe0543197e42d1b61a59e45d66a98eb2e3b\": rpc error: code = NotFound desc = could not find container \"c0bb20cd31ccefca50eb0a980954cfe0543197e42d1b61a59e45d66a98eb2e3b\": container with ID starting with c0bb20cd31ccefca50eb0a980954cfe0543197e42d1b61a59e45d66a98eb2e3b not found: ID does not exist"
Dec 06 08:58:11 crc kubenswrapper[4895]: I1206 08:58:11.339689 4895 scope.go:117] "RemoveContainer" containerID="14096f1d9261152971927b259c23cb991140f5bc6be3c4eb5214444d45da3cf6"
Dec 06 08:58:11 crc kubenswrapper[4895]: E1206 08:58:11.340094 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14096f1d9261152971927b259c23cb991140f5bc6be3c4eb5214444d45da3cf6\": container with ID starting with 14096f1d9261152971927b259c23cb991140f5bc6be3c4eb5214444d45da3cf6 not found: ID does not exist" containerID="14096f1d9261152971927b259c23cb991140f5bc6be3c4eb5214444d45da3cf6"
Dec 06 08:58:11 crc kubenswrapper[4895]: I1206 08:58:11.340137 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14096f1d9261152971927b259c23cb991140f5bc6be3c4eb5214444d45da3cf6"} err="failed to get container status \"14096f1d9261152971927b259c23cb991140f5bc6be3c4eb5214444d45da3cf6\": rpc error: code = NotFound desc = could not find container \"14096f1d9261152971927b259c23cb991140f5bc6be3c4eb5214444d45da3cf6\": container with ID starting with 14096f1d9261152971927b259c23cb991140f5bc6be3c4eb5214444d45da3cf6 not found: ID does not exist"
Dec 06 08:58:12 crc kubenswrapper[4895]: I1206 08:58:12.059180 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="598ac0b4-5fa9-4e97-9939-a721ba0e6e0f" path="/var/lib/kubelet/pods/598ac0b4-5fa9-4e97-9939-a721ba0e6e0f/volumes"
Dec 06 08:58:12 crc kubenswrapper[4895]: I1206 08:58:12.523821 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"]
Dec 06 08:58:12 crc kubenswrapper[4895]: E1206 08:58:12.524148 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598ac0b4-5fa9-4e97-9939-a721ba0e6e0f" containerName="registry-server"
Dec 06 08:58:12 crc kubenswrapper[4895]: I1206 08:58:12.524162 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="598ac0b4-5fa9-4e97-9939-a721ba0e6e0f" containerName="registry-server"
Dec 06 08:58:12 crc kubenswrapper[4895]: E1206 08:58:12.524178 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598ac0b4-5fa9-4e97-9939-a721ba0e6e0f" containerName="extract-content"
Dec 06 08:58:12 crc kubenswrapper[4895]: I1206 08:58:12.524185 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="598ac0b4-5fa9-4e97-9939-a721ba0e6e0f" containerName="extract-content"
Dec 06 08:58:12 crc kubenswrapper[4895]: E1206 08:58:12.524208 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598ac0b4-5fa9-4e97-9939-a721ba0e6e0f" containerName="extract-utilities"
Dec 06 08:58:12 crc kubenswrapper[4895]: I1206 08:58:12.524215 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="598ac0b4-5fa9-4e97-9939-a721ba0e6e0f" containerName="extract-utilities"
Dec 06 08:58:12 crc kubenswrapper[4895]: I1206 08:58:12.524359 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="598ac0b4-5fa9-4e97-9939-a721ba0e6e0f" containerName="registry-server"
Dec 06 08:58:12 crc kubenswrapper[4895]: I1206 08:58:12.524864 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Dec 06 08:58:12 crc kubenswrapper[4895]: I1206 08:58:12.527458 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-z2v77"
Dec 06 08:58:12 crc kubenswrapper[4895]: I1206 08:58:12.533345 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Dec 06 08:58:12 crc kubenswrapper[4895]: I1206 08:58:12.711167 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8vw2\" (UniqueName: \"kubernetes.io/projected/b60168b3-093c-4593-aa01-4840a4a50963-kube-api-access-l8vw2\") pod \"mariadb-copy-data\" (UID: \"b60168b3-093c-4593-aa01-4840a4a50963\") " pod="openstack/mariadb-copy-data"
Dec 06 08:58:12 crc kubenswrapper[4895]: I1206 08:58:12.711225 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0b333292-9471-4959-b879-0cd798ecffba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b333292-9471-4959-b879-0cd798ecffba\") pod \"mariadb-copy-data\" (UID: \"b60168b3-093c-4593-aa01-4840a4a50963\") " pod="openstack/mariadb-copy-data"
Dec 06 08:58:12 crc kubenswrapper[4895]: I1206 08:58:12.812715 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8vw2\" (UniqueName: \"kubernetes.io/projected/b60168b3-093c-4593-aa01-4840a4a50963-kube-api-access-l8vw2\") pod \"mariadb-copy-data\" (UID: \"b60168b3-093c-4593-aa01-4840a4a50963\") " pod="openstack/mariadb-copy-data"
Dec 06 08:58:12 crc kubenswrapper[4895]: I1206 08:58:12.812755 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0b333292-9471-4959-b879-0cd798ecffba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b333292-9471-4959-b879-0cd798ecffba\") pod \"mariadb-copy-data\" (UID: \"b60168b3-093c-4593-aa01-4840a4a50963\") " pod="openstack/mariadb-copy-data"
Dec 06 08:58:12 crc kubenswrapper[4895]: I1206 08:58:12.815403 4895 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 06 08:58:12 crc kubenswrapper[4895]: I1206 08:58:12.815446 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0b333292-9471-4959-b879-0cd798ecffba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b333292-9471-4959-b879-0cd798ecffba\") pod \"mariadb-copy-data\" (UID: \"b60168b3-093c-4593-aa01-4840a4a50963\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d7dd567cef2c8d9c1d73d546fa4cde4d2d6f944cfcc318a2e7171aaf47ad411f/globalmount\"" pod="openstack/mariadb-copy-data"
Dec 06 08:58:12 crc kubenswrapper[4895]: I1206 08:58:12.832161 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8vw2\" (UniqueName: \"kubernetes.io/projected/b60168b3-093c-4593-aa01-4840a4a50963-kube-api-access-l8vw2\") pod \"mariadb-copy-data\" (UID: \"b60168b3-093c-4593-aa01-4840a4a50963\") " pod="openstack/mariadb-copy-data"
Dec 06 08:58:12 crc kubenswrapper[4895]: I1206 08:58:12.846446 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0b333292-9471-4959-b879-0cd798ecffba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b333292-9471-4959-b879-0cd798ecffba\") pod \"mariadb-copy-data\" (UID: \"b60168b3-093c-4593-aa01-4840a4a50963\") " pod="openstack/mariadb-copy-data"
Dec 06 08:58:13 crc kubenswrapper[4895]: I1206 08:58:13.148118 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Dec 06 08:58:13 crc kubenswrapper[4895]: I1206 08:58:13.672792 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Dec 06 08:58:14 crc kubenswrapper[4895]: I1206 08:58:14.225403 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"b60168b3-093c-4593-aa01-4840a4a50963","Type":"ContainerStarted","Data":"67e5925c439ebd7fa889168ad2c63edb5f1d86923e2f33362a0f615fdba0cc16"}
Dec 06 08:58:14 crc kubenswrapper[4895]: I1206 08:58:14.225758 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"b60168b3-093c-4593-aa01-4840a4a50963","Type":"ContainerStarted","Data":"6c8022644b5ebd3f6ef7720dabcf35447d806a8cb0ddb508387c34e02eaeb622"}
Dec 06 08:58:14 crc kubenswrapper[4895]: I1206 08:58:14.245284 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.245265313 podStartE2EDuration="3.245265313s" podCreationTimestamp="2025-12-06 08:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 08:58:14.244154963 +0000 UTC m=+7256.645543823" watchObservedRunningTime="2025-12-06 08:58:14.245265313 +0000 UTC m=+7256.646654193"
Dec 06 08:58:17 crc kubenswrapper[4895]: I1206 08:58:17.636650 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Dec 06 08:58:17 crc kubenswrapper[4895]: I1206 08:58:17.639603 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Dec 06 08:58:17 crc kubenswrapper[4895]: I1206 08:58:17.655725 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Dec 06 08:58:17 crc kubenswrapper[4895]: I1206 08:58:17.793087 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws5jr\" (UniqueName: \"kubernetes.io/projected/6ef1e068-309b-40b3-ac16-b3d9333ee6b0-kube-api-access-ws5jr\") pod \"mariadb-client\" (UID: \"6ef1e068-309b-40b3-ac16-b3d9333ee6b0\") " pod="openstack/mariadb-client"
Dec 06 08:58:17 crc kubenswrapper[4895]: I1206 08:58:17.895169 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws5jr\" (UniqueName: \"kubernetes.io/projected/6ef1e068-309b-40b3-ac16-b3d9333ee6b0-kube-api-access-ws5jr\") pod \"mariadb-client\" (UID: \"6ef1e068-309b-40b3-ac16-b3d9333ee6b0\") " pod="openstack/mariadb-client"
Dec 06 08:58:17 crc kubenswrapper[4895]: I1206 08:58:17.915924 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws5jr\" (UniqueName: \"kubernetes.io/projected/6ef1e068-309b-40b3-ac16-b3d9333ee6b0-kube-api-access-ws5jr\") pod \"mariadb-client\" (UID: \"6ef1e068-309b-40b3-ac16-b3d9333ee6b0\") " pod="openstack/mariadb-client"
Dec 06 08:58:17 crc kubenswrapper[4895]: I1206 08:58:17.976685 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Dec 06 08:58:18 crc kubenswrapper[4895]: I1206 08:58:18.462044 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Dec 06 08:58:18 crc kubenswrapper[4895]: W1206 08:58:18.462140 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ef1e068_309b_40b3_ac16_b3d9333ee6b0.slice/crio-2577b3661c9d99225fed99a6c8270f0b9ab9fc1fe2a55887fc63e25082275ba4 WatchSource:0}: Error finding container 2577b3661c9d99225fed99a6c8270f0b9ab9fc1fe2a55887fc63e25082275ba4: Status 404 returned error can't find the container with id 2577b3661c9d99225fed99a6c8270f0b9ab9fc1fe2a55887fc63e25082275ba4
Dec 06 08:58:19 crc kubenswrapper[4895]: I1206 08:58:19.271527 4895 generic.go:334] "Generic (PLEG): container finished" podID="6ef1e068-309b-40b3-ac16-b3d9333ee6b0" containerID="3aae9c80283bb3dbc544bbb1942cb08dc71bf73e99ee8c198ff066f30c1eb0be" exitCode=0
Dec 06 08:58:19 crc kubenswrapper[4895]: I1206 08:58:19.271715 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"6ef1e068-309b-40b3-ac16-b3d9333ee6b0","Type":"ContainerDied","Data":"3aae9c80283bb3dbc544bbb1942cb08dc71bf73e99ee8c198ff066f30c1eb0be"}
Dec 06 08:58:19 crc kubenswrapper[4895]: I1206 08:58:19.271947 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"6ef1e068-309b-40b3-ac16-b3d9333ee6b0","Type":"ContainerStarted","Data":"2577b3661c9d99225fed99a6c8270f0b9ab9fc1fe2a55887fc63e25082275ba4"}
Dec 06 08:58:20 crc kubenswrapper[4895]: I1206 08:58:20.643652 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Dec 06 08:58:20 crc kubenswrapper[4895]: I1206 08:58:20.666280 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_6ef1e068-309b-40b3-ac16-b3d9333ee6b0/mariadb-client/0.log"
Dec 06 08:58:20 crc kubenswrapper[4895]: I1206 08:58:20.696025 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Dec 06 08:58:20 crc kubenswrapper[4895]: I1206 08:58:20.705357 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Dec 06 08:58:20 crc kubenswrapper[4895]: I1206 08:58:20.741567 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws5jr\" (UniqueName: \"kubernetes.io/projected/6ef1e068-309b-40b3-ac16-b3d9333ee6b0-kube-api-access-ws5jr\") pod \"6ef1e068-309b-40b3-ac16-b3d9333ee6b0\" (UID: \"6ef1e068-309b-40b3-ac16-b3d9333ee6b0\") "
Dec 06 08:58:20 crc kubenswrapper[4895]: I1206 08:58:20.746529 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ef1e068-309b-40b3-ac16-b3d9333ee6b0-kube-api-access-ws5jr" (OuterVolumeSpecName: "kube-api-access-ws5jr") pod "6ef1e068-309b-40b3-ac16-b3d9333ee6b0" (UID: "6ef1e068-309b-40b3-ac16-b3d9333ee6b0"). InnerVolumeSpecName "kube-api-access-ws5jr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 08:58:20 crc kubenswrapper[4895]: I1206 08:58:20.822516 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Dec 06 08:58:20 crc kubenswrapper[4895]: E1206 08:58:20.823034 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ef1e068-309b-40b3-ac16-b3d9333ee6b0" containerName="mariadb-client"
Dec 06 08:58:20 crc kubenswrapper[4895]: I1206 08:58:20.823095 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ef1e068-309b-40b3-ac16-b3d9333ee6b0" containerName="mariadb-client"
Dec 06 08:58:20 crc kubenswrapper[4895]: I1206 08:58:20.823299 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ef1e068-309b-40b3-ac16-b3d9333ee6b0" containerName="mariadb-client"
Dec 06 08:58:20 crc kubenswrapper[4895]: I1206 08:58:20.824226 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Dec 06 08:58:20 crc kubenswrapper[4895]: I1206 08:58:20.830900 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Dec 06 08:58:20 crc kubenswrapper[4895]: I1206 08:58:20.843638 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws5jr\" (UniqueName: \"kubernetes.io/projected/6ef1e068-309b-40b3-ac16-b3d9333ee6b0-kube-api-access-ws5jr\") on node \"crc\" DevicePath \"\""
Dec 06 08:58:20 crc kubenswrapper[4895]: I1206 08:58:20.945335 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm6lf\" (UniqueName: \"kubernetes.io/projected/b43becdf-6153-48a7-8d00-dd65e30222f2-kube-api-access-sm6lf\") pod \"mariadb-client\" (UID: \"b43becdf-6153-48a7-8d00-dd65e30222f2\") " pod="openstack/mariadb-client"
Dec 06 08:58:21 crc kubenswrapper[4895]: I1206 08:58:21.047296 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm6lf\" (UniqueName: \"kubernetes.io/projected/b43becdf-6153-48a7-8d00-dd65e30222f2-kube-api-access-sm6lf\") pod \"mariadb-client\" (UID: \"b43becdf-6153-48a7-8d00-dd65e30222f2\") " pod="openstack/mariadb-client"
Dec 06 08:58:21 crc kubenswrapper[4895]: I1206 08:58:21.061991 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm6lf\" (UniqueName: \"kubernetes.io/projected/b43becdf-6153-48a7-8d00-dd65e30222f2-kube-api-access-sm6lf\") pod \"mariadb-client\" (UID: \"b43becdf-6153-48a7-8d00-dd65e30222f2\") " pod="openstack/mariadb-client"
Dec 06 08:58:21 crc kubenswrapper[4895]: I1206 08:58:21.145148 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Dec 06 08:58:21 crc kubenswrapper[4895]: I1206 08:58:21.290908 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2577b3661c9d99225fed99a6c8270f0b9ab9fc1fe2a55887fc63e25082275ba4"
Dec 06 08:58:21 crc kubenswrapper[4895]: I1206 08:58:21.290959 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Dec 06 08:58:21 crc kubenswrapper[4895]: I1206 08:58:21.310228 4895 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="6ef1e068-309b-40b3-ac16-b3d9333ee6b0" podUID="b43becdf-6153-48a7-8d00-dd65e30222f2"
Dec 06 08:58:21 crc kubenswrapper[4895]: I1206 08:58:21.603597 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Dec 06 08:58:21 crc kubenswrapper[4895]: W1206 08:58:21.605100 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb43becdf_6153_48a7_8d00_dd65e30222f2.slice/crio-047f1386e964522b14ae08e6b706562dba95b9be4b7c4db3d1c6325c94745af0 WatchSource:0}: Error finding container 047f1386e964522b14ae08e6b706562dba95b9be4b7c4db3d1c6325c94745af0: Status 404 returned error can't find the container with id 047f1386e964522b14ae08e6b706562dba95b9be4b7c4db3d1c6325c94745af0
Dec 06 08:58:22 crc kubenswrapper[4895]: I1206 08:58:22.060401 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ef1e068-309b-40b3-ac16-b3d9333ee6b0" path="/var/lib/kubelet/pods/6ef1e068-309b-40b3-ac16-b3d9333ee6b0/volumes"
Dec 06 08:58:22 crc kubenswrapper[4895]: I1206 08:58:22.301118 4895 generic.go:334] "Generic (PLEG): container finished" podID="b43becdf-6153-48a7-8d00-dd65e30222f2" containerID="569149fdafb34a40b6f4035758c89d1d6653337d17a9c4d9dc18223ad306b6db" exitCode=0
Dec 06 08:58:22 crc kubenswrapper[4895]: I1206 08:58:22.301183 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b43becdf-6153-48a7-8d00-dd65e30222f2","Type":"ContainerDied","Data":"569149fdafb34a40b6f4035758c89d1d6653337d17a9c4d9dc18223ad306b6db"}
Dec 06 08:58:22 crc kubenswrapper[4895]: I1206 08:58:22.301224 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b43becdf-6153-48a7-8d00-dd65e30222f2","Type":"ContainerStarted","Data":"047f1386e964522b14ae08e6b706562dba95b9be4b7c4db3d1c6325c94745af0"}
Dec 06 08:58:23 crc kubenswrapper[4895]: I1206 08:58:23.625285 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Dec 06 08:58:23 crc kubenswrapper[4895]: I1206 08:58:23.641933 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_b43becdf-6153-48a7-8d00-dd65e30222f2/mariadb-client/0.log"
Dec 06 08:58:23 crc kubenswrapper[4895]: I1206 08:58:23.671038 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Dec 06 08:58:23 crc kubenswrapper[4895]: I1206 08:58:23.676125 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Dec 06 08:58:23 crc kubenswrapper[4895]: I1206 08:58:23.793347 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm6lf\" (UniqueName: \"kubernetes.io/projected/b43becdf-6153-48a7-8d00-dd65e30222f2-kube-api-access-sm6lf\") pod \"b43becdf-6153-48a7-8d00-dd65e30222f2\" (UID: \"b43becdf-6153-48a7-8d00-dd65e30222f2\") "
Dec 06 08:58:23 crc kubenswrapper[4895]: I1206 08:58:23.805794 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b43becdf-6153-48a7-8d00-dd65e30222f2-kube-api-access-sm6lf" (OuterVolumeSpecName: "kube-api-access-sm6lf") pod "b43becdf-6153-48a7-8d00-dd65e30222f2" (UID: "b43becdf-6153-48a7-8d00-dd65e30222f2"). InnerVolumeSpecName "kube-api-access-sm6lf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 08:58:23 crc kubenswrapper[4895]: I1206 08:58:23.895077 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm6lf\" (UniqueName: \"kubernetes.io/projected/b43becdf-6153-48a7-8d00-dd65e30222f2-kube-api-access-sm6lf\") on node \"crc\" DevicePath \"\""
Dec 06 08:58:24 crc kubenswrapper[4895]: I1206 08:58:24.060339 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b43becdf-6153-48a7-8d00-dd65e30222f2" path="/var/lib/kubelet/pods/b43becdf-6153-48a7-8d00-dd65e30222f2/volumes"
Dec 06 08:58:24 crc kubenswrapper[4895]: I1206 08:58:24.317077 4895 scope.go:117] "RemoveContainer" containerID="569149fdafb34a40b6f4035758c89d1d6653337d17a9c4d9dc18223ad306b6db"
Dec 06 08:58:24 crc kubenswrapper[4895]: I1206 08:58:24.317132 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Dec 06 08:58:29 crc kubenswrapper[4895]: I1206 08:58:29.695750 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 08:58:29 crc kubenswrapper[4895]: I1206 08:58:29.696425 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 08:58:29 crc kubenswrapper[4895]: I1206 08:58:29.696497 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2"
Dec 06 08:58:29 crc kubenswrapper[4895]: I1206 08:58:29.697217 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 08:58:29 crc kubenswrapper[4895]: I1206 08:58:29.697272 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" gracePeriod=600
Dec 06 08:58:29 crc kubenswrapper[4895]: E1206 08:58:29.820031 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 08:58:30 crc kubenswrapper[4895]: I1206 08:58:30.391311 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" exitCode=0
Dec 06 08:58:30 crc kubenswrapper[4895]: I1206 08:58:30.391396 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e"}
Dec 06 08:58:30 crc kubenswrapper[4895]: I1206 08:58:30.391442 4895 scope.go:117] "RemoveContainer" containerID="4f47c8fbdf2a09a4c030ceeb8af41e19bb9c9bbd34ca05a32fb364cd0a5c3936"
Dec 06 08:58:30 crc kubenswrapper[4895]: I1206 08:58:30.392250 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e"
Dec 06 08:58:30 crc kubenswrapper[4895]: E1206 08:58:30.392701 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 08:58:46 crc kubenswrapper[4895]: I1206 08:58:46.051252 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e"
Dec 06 08:58:46 crc kubenswrapper[4895]: E1206 08:58:46.052438 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 08:58:50 crc kubenswrapper[4895]: I1206 08:58:50.942124 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Dec 06 08:58:50 crc kubenswrapper[4895]: E1206 08:58:50.942922 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b43becdf-6153-48a7-8d00-dd65e30222f2" containerName="mariadb-client"
Dec 06 08:58:50 crc kubenswrapper[4895]: I1206 08:58:50.942959 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43becdf-6153-48a7-8d00-dd65e30222f2" containerName="mariadb-client"
Dec 06 08:58:50 crc kubenswrapper[4895]: I1206 08:58:50.943123 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b43becdf-6153-48a7-8d00-dd65e30222f2" containerName="mariadb-client"
Dec 06 08:58:50 crc kubenswrapper[4895]: I1206 08:58:50.943879 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Dec 06 08:58:50 crc kubenswrapper[4895]: I1206 08:58:50.945765 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-vwbw6"
Dec 06 08:58:50 crc kubenswrapper[4895]: I1206 08:58:50.946402 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Dec 06 08:58:50 crc kubenswrapper[4895]: I1206 08:58:50.947068 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Dec 06 08:58:50 crc kubenswrapper[4895]: I1206 08:58:50.959657 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Dec 06 08:58:50 crc kubenswrapper[4895]: I1206 08:58:50.988528 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"]
Dec 06 08:58:50 crc kubenswrapper[4895]: I1206 08:58:50.989828 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"]
Dec 06 08:58:50 crc kubenswrapper[4895]: I1206 08:58:50.990780 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Dec 06 08:58:50 crc kubenswrapper[4895]: I1206 08:58:50.990931 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Dec 06 08:58:50 crc kubenswrapper[4895]: I1206 08:58:50.991308 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.005878 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.047325 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2900dffc-9406-4d7c-861c-c22163ddee06-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2900dffc-9406-4d7c-861c-c22163ddee06\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.047379 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx9mj\" (UniqueName: \"kubernetes.io/projected/2900dffc-9406-4d7c-861c-c22163ddee06-kube-api-access-bx9mj\") pod \"ovsdbserver-nb-0\" (UID: \"2900dffc-9406-4d7c-861c-c22163ddee06\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.047410 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-165f98f3-78f6-4f7d-b712-dc098f21cdf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-165f98f3-78f6-4f7d-b712-dc098f21cdf2\") pod \"ovsdbserver-nb-1\" (UID: \"fcb2414c-554f-4458-b0ae-aa4b7e928c7f\") " pod="openstack/ovsdbserver-nb-1"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.047439 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/917ae70f-856c-4a47-847b-bd0775476f16-config\") pod \"ovsdbserver-nb-2\" (UID: \"917ae70f-856c-4a47-847b-bd0775476f16\") " pod="openstack/ovsdbserver-nb-2"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.047604 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2900dffc-9406-4d7c-861c-c22163ddee06-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2900dffc-9406-4d7c-861c-c22163ddee06\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.047639 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w979l\" (UniqueName: \"kubernetes.io/projected/917ae70f-856c-4a47-847b-bd0775476f16-kube-api-access-w979l\") pod \"ovsdbserver-nb-2\" (UID: \"917ae70f-856c-4a47-847b-bd0775476f16\") " pod="openstack/ovsdbserver-nb-2"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.047717 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb2414c-554f-4458-b0ae-aa4b7e928c7f-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"fcb2414c-554f-4458-b0ae-aa4b7e928c7f\") " pod="openstack/ovsdbserver-nb-1"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.047786 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fcb2414c-554f-4458-b0ae-aa4b7e928c7f-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"fcb2414c-554f-4458-b0ae-aa4b7e928c7f\") " pod="openstack/ovsdbserver-nb-1"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.047823 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/917ae70f-856c-4a47-847b-bd0775476f16-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"917ae70f-856c-4a47-847b-bd0775476f16\") " pod="openstack/ovsdbserver-nb-2"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.047857 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcb2414c-554f-4458-b0ae-aa4b7e928c7f-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"fcb2414c-554f-4458-b0ae-aa4b7e928c7f\") " pod="openstack/ovsdbserver-nb-1"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.047893 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2fdb2228-a0cb-4e25-87cf-2798e476ef9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fdb2228-a0cb-4e25-87cf-2798e476ef9d\") pod \"ovsdbserver-nb-0\" (UID: \"2900dffc-9406-4d7c-861c-c22163ddee06\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.047934 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5d9553b7-639c-4ba0-9f87-8b4ef697b1b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d9553b7-639c-4ba0-9f87-8b4ef697b1b6\") pod \"ovsdbserver-nb-2\" (UID: \"917ae70f-856c-4a47-847b-bd0775476f16\") " pod="openstack/ovsdbserver-nb-2"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.047974 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/917ae70f-856c-4a47-847b-bd0775476f16-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"917ae70f-856c-4a47-847b-bd0775476f16\") " pod="openstack/ovsdbserver-nb-2"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.048002 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcb2414c-554f-4458-b0ae-aa4b7e928c7f-config\") pod \"ovsdbserver-nb-1\" (UID: \"fcb2414c-554f-4458-b0ae-aa4b7e928c7f\") " pod="openstack/ovsdbserver-nb-1"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.048034 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlpd7\" (UniqueName: \"kubernetes.io/projected/fcb2414c-554f-4458-b0ae-aa4b7e928c7f-kube-api-access-xlpd7\") pod \"ovsdbserver-nb-1\" (UID: \"fcb2414c-554f-4458-b0ae-aa4b7e928c7f\") " pod="openstack/ovsdbserver-nb-1"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.048075 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2900dffc-9406-4d7c-861c-c22163ddee06-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2900dffc-9406-4d7c-861c-c22163ddee06\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.048116 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/917ae70f-856c-4a47-847b-bd0775476f16-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"917ae70f-856c-4a47-847b-bd0775476f16\") " pod="openstack/ovsdbserver-nb-2"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.048197 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2900dffc-9406-4d7c-861c-c22163ddee06-config\") pod \"ovsdbserver-nb-0\" (UID: \"2900dffc-9406-4d7c-861c-c22163ddee06\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.149118 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5d9553b7-639c-4ba0-9f87-8b4ef697b1b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d9553b7-639c-4ba0-9f87-8b4ef697b1b6\") pod \"ovsdbserver-nb-2\" (UID: \"917ae70f-856c-4a47-847b-bd0775476f16\") " pod="openstack/ovsdbserver-nb-2"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.149183 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/917ae70f-856c-4a47-847b-bd0775476f16-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"917ae70f-856c-4a47-847b-bd0775476f16\") " pod="openstack/ovsdbserver-nb-2"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.149214 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcb2414c-554f-4458-b0ae-aa4b7e928c7f-config\") pod \"ovsdbserver-nb-1\" (UID: \"fcb2414c-554f-4458-b0ae-aa4b7e928c7f\") " pod="openstack/ovsdbserver-nb-1"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.149242 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlpd7\" (UniqueName: \"kubernetes.io/projected/fcb2414c-554f-4458-b0ae-aa4b7e928c7f-kube-api-access-xlpd7\") pod \"ovsdbserver-nb-1\" (UID: \"fcb2414c-554f-4458-b0ae-aa4b7e928c7f\") " pod="openstack/ovsdbserver-nb-1"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.149274 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2900dffc-9406-4d7c-861c-c22163ddee06-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2900dffc-9406-4d7c-861c-c22163ddee06\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.149305 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/917ae70f-856c-4a47-847b-bd0775476f16-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"917ae70f-856c-4a47-847b-bd0775476f16\") " pod="openstack/ovsdbserver-nb-2"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.149363 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2900dffc-9406-4d7c-861c-c22163ddee06-config\") pod \"ovsdbserver-nb-0\" (UID: \"2900dffc-9406-4d7c-861c-c22163ddee06\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.149387 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2900dffc-9406-4d7c-861c-c22163ddee06-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2900dffc-9406-4d7c-861c-c22163ddee06\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.149413 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx9mj\" (UniqueName: \"kubernetes.io/projected/2900dffc-9406-4d7c-861c-c22163ddee06-kube-api-access-bx9mj\") pod \"ovsdbserver-nb-0\" (UID: \"2900dffc-9406-4d7c-861c-c22163ddee06\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.149439 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-165f98f3-78f6-4f7d-b712-dc098f21cdf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-165f98f3-78f6-4f7d-b712-dc098f21cdf2\") pod \"ovsdbserver-nb-1\" (UID: \"fcb2414c-554f-4458-b0ae-aa4b7e928c7f\") " pod="openstack/ovsdbserver-nb-1"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.149465 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/917ae70f-856c-4a47-847b-bd0775476f16-config\") pod \"ovsdbserver-nb-2\" (UID: \"917ae70f-856c-4a47-847b-bd0775476f16\") " pod="openstack/ovsdbserver-nb-2"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.149533 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2900dffc-9406-4d7c-861c-c22163ddee06-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2900dffc-9406-4d7c-861c-c22163ddee06\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.149555 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w979l\" (UniqueName: \"kubernetes.io/projected/917ae70f-856c-4a47-847b-bd0775476f16-kube-api-access-w979l\") pod \"ovsdbserver-nb-2\" (UID: \"917ae70f-856c-4a47-847b-bd0775476f16\") " pod="openstack/ovsdbserver-nb-2"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.149597 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb2414c-554f-4458-b0ae-aa4b7e928c7f-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"fcb2414c-554f-4458-b0ae-aa4b7e928c7f\") " pod="openstack/ovsdbserver-nb-1"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.149618 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fcb2414c-554f-4458-b0ae-aa4b7e928c7f-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"fcb2414c-554f-4458-b0ae-aa4b7e928c7f\") " pod="openstack/ovsdbserver-nb-1"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.149641 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/917ae70f-856c-4a47-847b-bd0775476f16-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"917ae70f-856c-4a47-847b-bd0775476f16\") " pod="openstack/ovsdbserver-nb-2"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.149665 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcb2414c-554f-4458-b0ae-aa4b7e928c7f-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"fcb2414c-554f-4458-b0ae-aa4b7e928c7f\") " pod="openstack/ovsdbserver-nb-1"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.149704 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2fdb2228-a0cb-4e25-87cf-2798e476ef9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fdb2228-a0cb-4e25-87cf-2798e476ef9d\") pod \"ovsdbserver-nb-0\" (UID: \"2900dffc-9406-4d7c-861c-c22163ddee06\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.150697 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/917ae70f-856c-4a47-847b-bd0775476f16-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"917ae70f-856c-4a47-847b-bd0775476f16\") " pod="openstack/ovsdbserver-nb-2"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.150930 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fcb2414c-554f-4458-b0ae-aa4b7e928c7f-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"fcb2414c-554f-4458-b0ae-aa4b7e928c7f\") " pod="openstack/ovsdbserver-nb-1"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.150991 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/917ae70f-856c-4a47-847b-bd0775476f16-config\") pod \"ovsdbserver-nb-2\" (UID: \"917ae70f-856c-4a47-847b-bd0775476f16\") " pod="openstack/ovsdbserver-nb-2"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.151543 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcb2414c-554f-4458-b0ae-aa4b7e928c7f-config\") pod \"ovsdbserver-nb-1\" (UID: \"fcb2414c-554f-4458-b0ae-aa4b7e928c7f\") " pod="openstack/ovsdbserver-nb-1"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.151675 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcb2414c-554f-4458-b0ae-aa4b7e928c7f-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"fcb2414c-554f-4458-b0ae-aa4b7e928c7f\") " pod="openstack/ovsdbserver-nb-1"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.152159 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2900dffc-9406-4d7c-861c-c22163ddee06-config\") pod \"ovsdbserver-nb-0\" (UID: \"2900dffc-9406-4d7c-861c-c22163ddee06\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.152504 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/917ae70f-856c-4a47-847b-bd0775476f16-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"917ae70f-856c-4a47-847b-bd0775476f16\") " pod="openstack/ovsdbserver-nb-2"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.153591 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2900dffc-9406-4d7c-861c-c22163ddee06-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2900dffc-9406-4d7c-861c-c22163ddee06\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.154842 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2900dffc-9406-4d7c-861c-c22163ddee06-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2900dffc-9406-4d7c-861c-c22163ddee06\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.154934 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.161551 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.163296 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb2414c-554f-4458-b0ae-aa4b7e928c7f-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"fcb2414c-554f-4458-b0ae-aa4b7e928c7f\") " pod="openstack/ovsdbserver-nb-1"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.167304 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/917ae70f-856c-4a47-847b-bd0775476f16-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"917ae70f-856c-4a47-847b-bd0775476f16\") " pod="openstack/ovsdbserver-nb-2"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.170741 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"]
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.171951 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.201692 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-gmn8s"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.201979 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.202187 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.202238 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2900dffc-9406-4d7c-861c-c22163ddee06-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2900dffc-9406-4d7c-861c-c22163ddee06\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.214538 4895 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.214580 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-165f98f3-78f6-4f7d-b712-dc098f21cdf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-165f98f3-78f6-4f7d-b712-dc098f21cdf2\") pod \"ovsdbserver-nb-1\" (UID: \"fcb2414c-554f-4458-b0ae-aa4b7e928c7f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/65a1273344ec351cea65e516f17cf1e13c641a066222e710b18c09f1944008d9/globalmount\"" pod="openstack/ovsdbserver-nb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.215101 4895 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.215156 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5d9553b7-639c-4ba0-9f87-8b4ef697b1b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d9553b7-639c-4ba0-9f87-8b4ef697b1b6\") pod \"ovsdbserver-nb-2\" (UID: \"917ae70f-856c-4a47-847b-bd0775476f16\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/202036c60cc90628c7df2a537fa85670dd1288189305782f7de7b012b0c9834e/globalmount\"" pod="openstack/ovsdbserver-nb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.216281 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w979l\" (UniqueName: \"kubernetes.io/projected/917ae70f-856c-4a47-847b-bd0775476f16-kube-api-access-w979l\") pod \"ovsdbserver-nb-2\" (UID: \"917ae70f-856c-4a47-847b-bd0775476f16\") " pod="openstack/ovsdbserver-nb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.217990 4895 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.218134 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2fdb2228-a0cb-4e25-87cf-2798e476ef9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fdb2228-a0cb-4e25-87cf-2798e476ef9d\") pod \"ovsdbserver-nb-0\" (UID: \"2900dffc-9406-4d7c-861c-c22163ddee06\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c775c0e5ecb04d4b4c26be29d701ca38b3ac2a4bce3418ed5e57dfcf4d6df836/globalmount\"" pod="openstack/ovsdbserver-nb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.318770 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx9mj\" (UniqueName: \"kubernetes.io/projected/2900dffc-9406-4d7c-861c-c22163ddee06-kube-api-access-bx9mj\") pod \"ovsdbserver-nb-0\" (UID: \"2900dffc-9406-4d7c-861c-c22163ddee06\") " pod="openstack/ovsdbserver-nb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.337938 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.339721 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.348034 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.349644 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.350455 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlpd7\" (UniqueName: \"kubernetes.io/projected/fcb2414c-554f-4458-b0ae-aa4b7e928c7f-kube-api-access-xlpd7\") pod \"ovsdbserver-nb-1\" (UID: \"fcb2414c-554f-4458-b0ae-aa4b7e928c7f\") " pod="openstack/ovsdbserver-nb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.393638 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg2fj\" (UniqueName: \"kubernetes.io/projected/9144e817-5aaa-4369-83e3-eca7e57c2b4a-kube-api-access-lg2fj\") pod \"ovsdbserver-sb-0\" (UID: \"9144e817-5aaa-4369-83e3-eca7e57c2b4a\") " pod="openstack/ovsdbserver-sb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.393958 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9144e817-5aaa-4369-83e3-eca7e57c2b4a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9144e817-5aaa-4369-83e3-eca7e57c2b4a\") " pod="openstack/ovsdbserver-sb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.394509 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9144e817-5aaa-4369-83e3-eca7e57c2b4a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9144e817-5aaa-4369-83e3-eca7e57c2b4a\") " pod="openstack/ovsdbserver-sb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.394536 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00e078e3-f605-4323-9a64-9868070a17ae-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"00e078e3-f605-4323-9a64-9868070a17ae\") " pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.394567 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00e078e3-f605-4323-9a64-9868070a17ae-config\") pod \"ovsdbserver-sb-1\" (UID: \"00e078e3-f605-4323-9a64-9868070a17ae\") " pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.394610 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9144e817-5aaa-4369-83e3-eca7e57c2b4a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9144e817-5aaa-4369-83e3-eca7e57c2b4a\") " pod="openstack/ovsdbserver-sb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.394688 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9144e817-5aaa-4369-83e3-eca7e57c2b4a-config\") pod \"ovsdbserver-sb-0\" (UID: \"9144e817-5aaa-4369-83e3-eca7e57c2b4a\") " pod="openstack/ovsdbserver-sb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.394757 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00e078e3-f605-4323-9a64-9868070a17ae-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"00e078e3-f605-4323-9a64-9868070a17ae\") " pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.394829 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xjn9\" (UniqueName: \"kubernetes.io/projected/00e078e3-f605-4323-9a64-9868070a17ae-kube-api-access-6xjn9\") pod \"ovsdbserver-sb-1\" (UID: \"00e078e3-f605-4323-9a64-9868070a17ae\") " pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.394868 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/00e078e3-f605-4323-9a64-9868070a17ae-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"00e078e3-f605-4323-9a64-9868070a17ae\") " pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.394911 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cb58a438-dd8d-4c90-b24f-d5742bb4790c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb58a438-dd8d-4c90-b24f-d5742bb4790c\") pod \"ovsdbserver-sb-0\" (UID: \"9144e817-5aaa-4369-83e3-eca7e57c2b4a\") " pod="openstack/ovsdbserver-sb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.394947 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1ca1a782-e226-4807-aeed-e3dcc827ed0a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1ca1a782-e226-4807-aeed-e3dcc827ed0a\") pod \"ovsdbserver-sb-1\" (UID: \"00e078e3-f605-4323-9a64-9868070a17ae\") " pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.396439 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.400347 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2fdb2228-a0cb-4e25-87cf-2798e476ef9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fdb2228-a0cb-4e25-87cf-2798e476ef9d\") pod \"ovsdbserver-nb-0\" (UID: \"2900dffc-9406-4d7c-861c-c22163ddee06\") " pod="openstack/ovsdbserver-nb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.404153 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-165f98f3-78f6-4f7d-b712-dc098f21cdf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-165f98f3-78f6-4f7d-b712-dc098f21cdf2\") pod \"ovsdbserver-nb-1\" (UID: \"fcb2414c-554f-4458-b0ae-aa4b7e928c7f\") " pod="openstack/ovsdbserver-nb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.404360 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5d9553b7-639c-4ba0-9f87-8b4ef697b1b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d9553b7-639c-4ba0-9f87-8b4ef697b1b6\") pod \"ovsdbserver-nb-2\" (UID: \"917ae70f-856c-4a47-847b-bd0775476f16\") " pod="openstack/ovsdbserver-nb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.496863 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9144e817-5aaa-4369-83e3-eca7e57c2b4a-config\") pod \"ovsdbserver-sb-0\" (UID: \"9144e817-5aaa-4369-83e3-eca7e57c2b4a\") " pod="openstack/ovsdbserver-sb-0" 
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.496924 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-015058c3-075a-485d-a4c9-5af4704e5e9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-015058c3-075a-485d-a4c9-5af4704e5e9d\") pod \"ovsdbserver-sb-2\" (UID: \"f03dfbd7-16e7-4669-9372-36f6adba5fab\") " pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.496959 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00e078e3-f605-4323-9a64-9868070a17ae-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"00e078e3-f605-4323-9a64-9868070a17ae\") " pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.496979 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xjn9\" (UniqueName: \"kubernetes.io/projected/00e078e3-f605-4323-9a64-9868070a17ae-kube-api-access-6xjn9\") pod \"ovsdbserver-sb-1\" (UID: \"00e078e3-f605-4323-9a64-9868070a17ae\") " pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.497001 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/00e078e3-f605-4323-9a64-9868070a17ae-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"00e078e3-f605-4323-9a64-9868070a17ae\") " pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.497033 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cb58a438-dd8d-4c90-b24f-d5742bb4790c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb58a438-dd8d-4c90-b24f-d5742bb4790c\") pod \"ovsdbserver-sb-0\" (UID: \"9144e817-5aaa-4369-83e3-eca7e57c2b4a\") " pod="openstack/ovsdbserver-sb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.497058 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7ckk\" (UniqueName: \"kubernetes.io/projected/f03dfbd7-16e7-4669-9372-36f6adba5fab-kube-api-access-t7ckk\") pod \"ovsdbserver-sb-2\" (UID: \"f03dfbd7-16e7-4669-9372-36f6adba5fab\") " pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.497079 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1ca1a782-e226-4807-aeed-e3dcc827ed0a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1ca1a782-e226-4807-aeed-e3dcc827ed0a\") pod \"ovsdbserver-sb-1\" (UID: \"00e078e3-f605-4323-9a64-9868070a17ae\") " pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.497094 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f03dfbd7-16e7-4669-9372-36f6adba5fab-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"f03dfbd7-16e7-4669-9372-36f6adba5fab\") " pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.497123 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg2fj\" (UniqueName: \"kubernetes.io/projected/9144e817-5aaa-4369-83e3-eca7e57c2b4a-kube-api-access-lg2fj\") pod \"ovsdbserver-sb-0\" (UID: \"9144e817-5aaa-4369-83e3-eca7e57c2b4a\") " pod="openstack/ovsdbserver-sb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 
08:58:51.497146 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03dfbd7-16e7-4669-9372-36f6adba5fab-config\") pod \"ovsdbserver-sb-2\" (UID: \"f03dfbd7-16e7-4669-9372-36f6adba5fab\") " pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.497165 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03dfbd7-16e7-4669-9372-36f6adba5fab-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"f03dfbd7-16e7-4669-9372-36f6adba5fab\") " pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.497184 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9144e817-5aaa-4369-83e3-eca7e57c2b4a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9144e817-5aaa-4369-83e3-eca7e57c2b4a\") " pod="openstack/ovsdbserver-sb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.497204 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9144e817-5aaa-4369-83e3-eca7e57c2b4a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9144e817-5aaa-4369-83e3-eca7e57c2b4a\") " pod="openstack/ovsdbserver-sb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.497224 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00e078e3-f605-4323-9a64-9868070a17ae-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"00e078e3-f605-4323-9a64-9868070a17ae\") " pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.497243 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00e078e3-f605-4323-9a64-9868070a17ae-config\") pod \"ovsdbserver-sb-1\" (UID: \"00e078e3-f605-4323-9a64-9868070a17ae\") " pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.497267 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9144e817-5aaa-4369-83e3-eca7e57c2b4a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9144e817-5aaa-4369-83e3-eca7e57c2b4a\") " pod="openstack/ovsdbserver-sb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.497283 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f03dfbd7-16e7-4669-9372-36f6adba5fab-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"f03dfbd7-16e7-4669-9372-36f6adba5fab\") " pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.498243 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/00e078e3-f605-4323-9a64-9868070a17ae-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"00e078e3-f605-4323-9a64-9868070a17ae\") " pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.499035 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9144e817-5aaa-4369-83e3-eca7e57c2b4a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9144e817-5aaa-4369-83e3-eca7e57c2b4a\") " 
pod="openstack/ovsdbserver-sb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.499167 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9144e817-5aaa-4369-83e3-eca7e57c2b4a-config\") pod \"ovsdbserver-sb-0\" (UID: \"9144e817-5aaa-4369-83e3-eca7e57c2b4a\") " pod="openstack/ovsdbserver-sb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.499317 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9144e817-5aaa-4369-83e3-eca7e57c2b4a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9144e817-5aaa-4369-83e3-eca7e57c2b4a\") " pod="openstack/ovsdbserver-sb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.501027 4895 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.501095 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cb58a438-dd8d-4c90-b24f-d5742bb4790c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb58a438-dd8d-4c90-b24f-d5742bb4790c\") pod \"ovsdbserver-sb-0\" (UID: \"9144e817-5aaa-4369-83e3-eca7e57c2b4a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d5101688bc3daf56bdd9e9f47f630480f8183fb4e593974e3481df356a6b047c/globalmount\"" pod="openstack/ovsdbserver-sb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.501151 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00e078e3-f605-4323-9a64-9868070a17ae-config\") pod \"ovsdbserver-sb-1\" (UID: \"00e078e3-f605-4323-9a64-9868070a17ae\") " pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.501553 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00e078e3-f605-4323-9a64-9868070a17ae-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"00e078e3-f605-4323-9a64-9868070a17ae\") " pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.502194 4895 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.502415 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1ca1a782-e226-4807-aeed-e3dcc827ed0a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1ca1a782-e226-4807-aeed-e3dcc827ed0a\") pod \"ovsdbserver-sb-1\" (UID: \"00e078e3-f605-4323-9a64-9868070a17ae\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0db98bd50fe4c1ab55d1f2a1e21144e2dbb4d1ba74aef18dfd8b67ce0c0f0a0a/globalmount\"" pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.502449 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9144e817-5aaa-4369-83e3-eca7e57c2b4a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9144e817-5aaa-4369-83e3-eca7e57c2b4a\") " pod="openstack/ovsdbserver-sb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.504216 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00e078e3-f605-4323-9a64-9868070a17ae-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"00e078e3-f605-4323-9a64-9868070a17ae\") " pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.512667 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xjn9\" (UniqueName: \"kubernetes.io/projected/00e078e3-f605-4323-9a64-9868070a17ae-kube-api-access-6xjn9\") pod \"ovsdbserver-sb-1\" (UID: \"00e078e3-f605-4323-9a64-9868070a17ae\") " pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.514197 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg2fj\" (UniqueName: \"kubernetes.io/projected/9144e817-5aaa-4369-83e3-eca7e57c2b4a-kube-api-access-lg2fj\") pod \"ovsdbserver-sb-0\" (UID: \"9144e817-5aaa-4369-83e3-eca7e57c2b4a\") " pod="openstack/ovsdbserver-sb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.528558 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1ca1a782-e226-4807-aeed-e3dcc827ed0a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1ca1a782-e226-4807-aeed-e3dcc827ed0a\") pod \"ovsdbserver-sb-1\" (UID: \"00e078e3-f605-4323-9a64-9868070a17ae\") " pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.528969 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cb58a438-dd8d-4c90-b24f-d5742bb4790c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb58a438-dd8d-4c90-b24f-d5742bb4790c\") pod \"ovsdbserver-sb-0\" (UID: \"9144e817-5aaa-4369-83e3-eca7e57c2b4a\") " pod="openstack/ovsdbserver-sb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.598953 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f03dfbd7-16e7-4669-9372-36f6adba5fab-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"f03dfbd7-16e7-4669-9372-36f6adba5fab\") " pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.599056 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-015058c3-075a-485d-a4c9-5af4704e5e9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-015058c3-075a-485d-a4c9-5af4704e5e9d\") pod \"ovsdbserver-sb-2\" (UID: 
\"f03dfbd7-16e7-4669-9372-36f6adba5fab\") " pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.599157 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7ckk\" (UniqueName: \"kubernetes.io/projected/f03dfbd7-16e7-4669-9372-36f6adba5fab-kube-api-access-t7ckk\") pod \"ovsdbserver-sb-2\" (UID: \"f03dfbd7-16e7-4669-9372-36f6adba5fab\") " pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.599182 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f03dfbd7-16e7-4669-9372-36f6adba5fab-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"f03dfbd7-16e7-4669-9372-36f6adba5fab\") " pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.599227 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03dfbd7-16e7-4669-9372-36f6adba5fab-config\") pod \"ovsdbserver-sb-2\" (UID: \"f03dfbd7-16e7-4669-9372-36f6adba5fab\") " pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.599254 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03dfbd7-16e7-4669-9372-36f6adba5fab-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"f03dfbd7-16e7-4669-9372-36f6adba5fab\") " pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.600359 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f03dfbd7-16e7-4669-9372-36f6adba5fab-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"f03dfbd7-16e7-4669-9372-36f6adba5fab\") " pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.600584 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f03dfbd7-16e7-4669-9372-36f6adba5fab-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"f03dfbd7-16e7-4669-9372-36f6adba5fab\") " pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.601175 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03dfbd7-16e7-4669-9372-36f6adba5fab-config\") pod \"ovsdbserver-sb-2\" (UID: \"f03dfbd7-16e7-4669-9372-36f6adba5fab\") " pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.603396 4895 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.603437 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-015058c3-075a-485d-a4c9-5af4704e5e9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-015058c3-075a-485d-a4c9-5af4704e5e9d\") pod \"ovsdbserver-sb-2\" (UID: \"f03dfbd7-16e7-4669-9372-36f6adba5fab\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/228644d2d005772fce116581281344ffd76e7617ac4d3f032d28457e0a5d9a8d/globalmount\"" pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.604939 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03dfbd7-16e7-4669-9372-36f6adba5fab-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"f03dfbd7-16e7-4669-9372-36f6adba5fab\") " pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.614291 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.623354 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.624820 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7ckk\" (UniqueName: \"kubernetes.io/projected/f03dfbd7-16e7-4669-9372-36f6adba5fab-kube-api-access-t7ckk\") pod \"ovsdbserver-sb-2\" (UID: \"f03dfbd7-16e7-4669-9372-36f6adba5fab\") " pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.635178 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.635510 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-015058c3-075a-485d-a4c9-5af4704e5e9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-015058c3-075a-485d-a4c9-5af4704e5e9d\") pod \"ovsdbserver-sb-2\" (UID: \"f03dfbd7-16e7-4669-9372-36f6adba5fab\") " pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.740187 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.750293 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:51 crc kubenswrapper[4895]: I1206 08:58:51.759704 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:52 crc kubenswrapper[4895]: I1206 08:58:52.246578 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Dec 06 08:58:52 crc kubenswrapper[4895]: I1206 08:58:52.311039 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 06 08:58:52 crc kubenswrapper[4895]: I1206 08:58:52.418074 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Dec 06 08:58:52 crc kubenswrapper[4895]: W1206 08:58:52.423503 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00e078e3_f605_4323_9a64_9868070a17ae.slice/crio-677d416f4a739145eaa7758c9b2f5834bda2861f02d253297bdac1bff42a98fa WatchSource:0}: Error finding container 677d416f4a739145eaa7758c9b2f5834bda2861f02d253297bdac1bff42a98fa: Status 404 returned error can't find the container with id 677d416f4a739145eaa7758c9b2f5834bda2861f02d253297bdac1bff42a98fa Dec 06 08:58:52 crc kubenswrapper[4895]: I1206 08:58:52.517015 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 06 08:58:52 crc kubenswrapper[4895]: W1206 08:58:52.523325 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9144e817_5aaa_4369_83e3_eca7e57c2b4a.slice/crio-4422ddb68332b2b478c4c34403c8eeace4fd76254f17d84f4941bf61984584b0 WatchSource:0}: Error finding container 4422ddb68332b2b478c4c34403c8eeace4fd76254f17d84f4941bf61984584b0: Status 404 returned error can't find the container with id 4422ddb68332b2b478c4c34403c8eeace4fd76254f17d84f4941bf61984584b0 Dec 06 08:58:52 crc kubenswrapper[4895]: I1206 08:58:52.574417 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"00e078e3-f605-4323-9a64-9868070a17ae","Type":"ContainerStarted","Data":"677d416f4a739145eaa7758c9b2f5834bda2861f02d253297bdac1bff42a98fa"} Dec 06 08:58:52 crc kubenswrapper[4895]: I1206 08:58:52.576192 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"fcb2414c-554f-4458-b0ae-aa4b7e928c7f","Type":"ContainerStarted","Data":"06cc699192ad6ea31b38d6aeba36893918ca73186adc234a5de3cfd84a7602be"} Dec 06 08:58:52 crc kubenswrapper[4895]: I1206 08:58:52.577897 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9144e817-5aaa-4369-83e3-eca7e57c2b4a","Type":"ContainerStarted","Data":"4422ddb68332b2b478c4c34403c8eeace4fd76254f17d84f4941bf61984584b0"} Dec 06 08:58:52 crc kubenswrapper[4895]: I1206 08:58:52.579718 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2900dffc-9406-4d7c-861c-c22163ddee06","Type":"ContainerStarted","Data":"c0263ce77bf563296d9692c97af8f5d3b1fe1ca59970c85204b36b27d740599c"} Dec 06 08:58:53 crc kubenswrapper[4895]: I1206 08:58:53.124217 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Dec 06 08:58:53 crc kubenswrapper[4895]: W1206 08:58:53.128906 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf03dfbd7_16e7_4669_9372_36f6adba5fab.slice/crio-ab3abb1093927b4e4bcaa3f08be972b7943138c0a9cdf30456e2bce906dcecb7 WatchSource:0}: Error finding container ab3abb1093927b4e4bcaa3f08be972b7943138c0a9cdf30456e2bce906dcecb7: Status 404 returned error 
can't find the container with id ab3abb1093927b4e4bcaa3f08be972b7943138c0a9cdf30456e2bce906dcecb7 Dec 06 08:58:53 crc kubenswrapper[4895]: W1206 08:58:53.306720 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod917ae70f_856c_4a47_847b_bd0775476f16.slice/crio-a73f8a2340aba23789f4e61c20bee04879cc29b942180f3f3e59f2bd4168a6ee WatchSource:0}: Error finding container a73f8a2340aba23789f4e61c20bee04879cc29b942180f3f3e59f2bd4168a6ee: Status 404 returned error can't find the container with id a73f8a2340aba23789f4e61c20bee04879cc29b942180f3f3e59f2bd4168a6ee Dec 06 08:58:53 crc kubenswrapper[4895]: I1206 08:58:53.313766 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Dec 06 08:58:53 crc kubenswrapper[4895]: I1206 08:58:53.594174 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"f03dfbd7-16e7-4669-9372-36f6adba5fab","Type":"ContainerStarted","Data":"ab3abb1093927b4e4bcaa3f08be972b7943138c0a9cdf30456e2bce906dcecb7"} Dec 06 08:58:53 crc kubenswrapper[4895]: I1206 08:58:53.595820 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"917ae70f-856c-4a47-847b-bd0775476f16","Type":"ContainerStarted","Data":"a73f8a2340aba23789f4e61c20bee04879cc29b942180f3f3e59f2bd4168a6ee"} Dec 06 08:58:57 crc kubenswrapper[4895]: I1206 08:58:57.657676 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"f03dfbd7-16e7-4669-9372-36f6adba5fab","Type":"ContainerStarted","Data":"0f63f84366ba7d27a8374425b0223b5dd326037ee9209a44e5c44daa9042e433"} Dec 06 08:58:57 crc kubenswrapper[4895]: I1206 08:58:57.658296 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"f03dfbd7-16e7-4669-9372-36f6adba5fab","Type":"ContainerStarted","Data":"7e0da9c3767aba3b1d84fccde424edaf08fc5409461fcc2c696300630687157f"} Dec 06 08:58:57 crc kubenswrapper[4895]: I1206 08:58:57.660704 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9144e817-5aaa-4369-83e3-eca7e57c2b4a","Type":"ContainerStarted","Data":"28958f64c0ef20352ee305c702b39f3d79bd2daf5c514999e76bf16ea3b0a678"} Dec 06 08:58:57 crc kubenswrapper[4895]: I1206 08:58:57.660743 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9144e817-5aaa-4369-83e3-eca7e57c2b4a","Type":"ContainerStarted","Data":"6e68e2a3a838aa9c9c318790ce47cee7f7abba561a849dace1c2e547517fd6f3"} Dec 06 08:58:57 crc kubenswrapper[4895]: I1206 08:58:57.663114 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"917ae70f-856c-4a47-847b-bd0775476f16","Type":"ContainerStarted","Data":"bf8f42ac5598a7f528bf65228b822027d28077d7e67c543c2ff3a07b3e57739e"} Dec 06 08:58:57 crc kubenswrapper[4895]: I1206 08:58:57.663145 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"917ae70f-856c-4a47-847b-bd0775476f16","Type":"ContainerStarted","Data":"3d6d18bb1db7da2e6155868463fdf0fbe25d8263f53389c9ac56ac97ccf239f7"} Dec 06 08:58:57 crc kubenswrapper[4895]: I1206 08:58:57.665569 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2900dffc-9406-4d7c-861c-c22163ddee06","Type":"ContainerStarted","Data":"7bbee4be58f6a0dcbfdcc9b9f5cc516af429a6ef3cd81d8861db05205ebdc7ea"} Dec 06 08:58:57 crc 
kubenswrapper[4895]: I1206 08:58:57.665615 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2900dffc-9406-4d7c-861c-c22163ddee06","Type":"ContainerStarted","Data":"3f438bc7aa15879ca2bea13fb0e1292f2641c11e5f143c14fbabc9a372accc43"} Dec 06 08:58:57 crc kubenswrapper[4895]: I1206 08:58:57.667784 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"00e078e3-f605-4323-9a64-9868070a17ae","Type":"ContainerStarted","Data":"9e1c7df985d5ac7c14574eb4228a7a3d450d75d7d3ecfa4b7ed6568bafecbe10"} Dec 06 08:58:57 crc kubenswrapper[4895]: I1206 08:58:57.667824 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"00e078e3-f605-4323-9a64-9868070a17ae","Type":"ContainerStarted","Data":"0e926a4c54f0c8ee4c14a2218c9cb73bf7c7d6c193878d989a49a4ba49ec52ab"} Dec 06 08:58:57 crc kubenswrapper[4895]: I1206 08:58:57.670067 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"fcb2414c-554f-4458-b0ae-aa4b7e928c7f","Type":"ContainerStarted","Data":"7a31c08549c4f8f514881d1331419b54a721a14d2f5dcc2521019690d150fff8"} Dec 06 08:58:57 crc kubenswrapper[4895]: I1206 08:58:57.670102 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"fcb2414c-554f-4458-b0ae-aa4b7e928c7f","Type":"ContainerStarted","Data":"4a665e9951201099f9eb96bba0ccd5b1329e1e79742029f5825785bc743e09ae"} Dec 06 08:58:57 crc kubenswrapper[4895]: I1206 08:58:57.686102 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.829786075 podStartE2EDuration="7.686080711s" podCreationTimestamp="2025-12-06 08:58:50 +0000 UTC" firstStartedPulling="2025-12-06 08:58:53.131079469 +0000 UTC m=+7295.532468339" lastFinishedPulling="2025-12-06 08:58:56.987374105 +0000 UTC m=+7299.388762975" observedRunningTime="2025-12-06 08:58:57.679326789 +0000 UTC m=+7300.080715669" watchObservedRunningTime="2025-12-06 08:58:57.686080711 +0000 UTC m=+7300.087469581" Dec 06 08:58:57 crc kubenswrapper[4895]: I1206 08:58:57.705421 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=5.028954937 podStartE2EDuration="8.705405259s" podCreationTimestamp="2025-12-06 08:58:49 +0000 UTC" firstStartedPulling="2025-12-06 08:58:53.311283192 +0000 UTC m=+7295.712672062" lastFinishedPulling="2025-12-06 08:58:56.987733514 +0000 UTC m=+7299.389122384" observedRunningTime="2025-12-06 08:58:57.696688515 +0000 UTC m=+7300.098077385" watchObservedRunningTime="2025-12-06 08:58:57.705405259 +0000 UTC m=+7300.106794119" Dec 06 08:58:57 crc kubenswrapper[4895]: I1206 08:58:57.720275 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.159253584 podStartE2EDuration="7.720255097s" podCreationTimestamp="2025-12-06 08:58:50 +0000 UTC" firstStartedPulling="2025-12-06 08:58:52.425958061 +0000 UTC m=+7294.827346931" lastFinishedPulling="2025-12-06 08:58:56.986959574 +0000 UTC m=+7299.388348444" observedRunningTime="2025-12-06 08:58:57.71737158 +0000 UTC m=+7300.118760450" watchObservedRunningTime="2025-12-06 08:58:57.720255097 +0000 UTC m=+7300.121643987" Dec 06 08:58:57 crc kubenswrapper[4895]: I1206 08:58:57.737464 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=4.04680036 
podStartE2EDuration="8.737441458s" podCreationTimestamp="2025-12-06 08:58:49 +0000 UTC" firstStartedPulling="2025-12-06 08:58:52.32816355 +0000 UTC m=+7294.729552420" lastFinishedPulling="2025-12-06 08:58:57.018804658 +0000 UTC m=+7299.420193518" observedRunningTime="2025-12-06 08:58:57.735053054 +0000 UTC m=+7300.136441924" watchObservedRunningTime="2025-12-06 08:58:57.737441458 +0000 UTC m=+7300.138830328" Dec 06 08:58:57 crc kubenswrapper[4895]: I1206 08:58:57.740353 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 06 08:58:57 crc kubenswrapper[4895]: I1206 08:58:57.750613 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Dec 06 08:58:57 crc kubenswrapper[4895]: I1206 08:58:57.755224 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=4.043242865 podStartE2EDuration="8.755203695s" podCreationTimestamp="2025-12-06 08:58:49 +0000 UTC" firstStartedPulling="2025-12-06 08:58:52.274982404 +0000 UTC m=+7294.676371274" lastFinishedPulling="2025-12-06 08:58:56.986943224 +0000 UTC m=+7299.388332104" observedRunningTime="2025-12-06 08:58:57.751149856 +0000 UTC m=+7300.152538726" watchObservedRunningTime="2025-12-06 08:58:57.755203695 +0000 UTC m=+7300.156592555" Dec 06 08:58:57 crc kubenswrapper[4895]: I1206 08:58:57.759933 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Dec 06 08:58:57 crc kubenswrapper[4895]: I1206 08:58:57.773790 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.314901869 podStartE2EDuration="7.773764932s" podCreationTimestamp="2025-12-06 08:58:50 +0000 UTC" firstStartedPulling="2025-12-06 08:58:52.528856761 +0000 UTC m=+7294.930245631" lastFinishedPulling="2025-12-06 08:58:56.987719814 +0000 UTC m=+7299.389108694" observedRunningTime="2025-12-06 08:58:57.76696843 +0000 UTC m=+7300.168357300" watchObservedRunningTime="2025-12-06 08:58:57.773764932 +0000 UTC m=+7300.175153802" Dec 06 08:58:59 crc kubenswrapper[4895]: I1206 08:58:59.052300 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 08:58:59 crc kubenswrapper[4895]: E1206 08:58:59.053258 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:59:00 crc kubenswrapper[4895]: I1206 08:59:00.615491 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 06 08:59:00 crc kubenswrapper[4895]: I1206 08:59:00.624107 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Dec 06 08:59:00 crc kubenswrapper[4895]: I1206 08:59:00.635919 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Dec 06 08:59:00 crc kubenswrapper[4895]: I1206 08:59:00.662731 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 06 08:59:00 crc kubenswrapper[4895]: 
I1206 08:59:00.675024 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Dec 06 08:59:00 crc kubenswrapper[4895]: I1206 08:59:00.675196 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Dec 06 08:59:00 crc kubenswrapper[4895]: I1206 08:59:00.701656 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Dec 06 08:59:00 crc kubenswrapper[4895]: I1206 08:59:00.701691 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Dec 06 08:59:00 crc kubenswrapper[4895]: I1206 08:59:00.701702 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 06 08:59:00 crc kubenswrapper[4895]: I1206 08:59:00.775234 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 06 08:59:00 crc kubenswrapper[4895]: I1206 08:59:00.775683 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 06 08:59:00 crc kubenswrapper[4895]: I1206 08:59:00.790048 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Dec 06 08:59:00 crc kubenswrapper[4895]: I1206 08:59:00.790711 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Dec 06 08:59:00 crc kubenswrapper[4895]: I1206 08:59:00.798003 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Dec 06 08:59:00 crc kubenswrapper[4895]: I1206 08:59:00.798232 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Dec 06 08:59:06 crc kubenswrapper[4895]: I1206 08:59:06.660038 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 06 08:59:06 crc kubenswrapper[4895]: I1206 08:59:06.679656 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Dec 06 08:59:06 crc kubenswrapper[4895]: I1206 08:59:06.690694 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Dec 06 08:59:06 crc kubenswrapper[4895]: I1206 08:59:06.801671 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 06 08:59:06 crc kubenswrapper[4895]: I1206 08:59:06.808307 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Dec 06 08:59:06 crc kubenswrapper[4895]: I1206 08:59:06.811719 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Dec 06 08:59:06 crc kubenswrapper[4895]: I1206 08:59:06.970157 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz"] Dec 06 08:59:06 crc kubenswrapper[4895]: I1206 08:59:06.971919 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz" Dec 06 08:59:06 crc kubenswrapper[4895]: I1206 08:59:06.976690 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 06 08:59:06 crc kubenswrapper[4895]: I1206 08:59:06.985258 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz"] Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.043520 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb0a0baa-9c23-475b-8271-e824c0a55a8a-dns-svc\") pod \"dnsmasq-dns-7f8ffbb5bc-b2vzz\" (UID: \"cb0a0baa-9c23-475b-8271-e824c0a55a8a\") " pod="openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.043579 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb0a0baa-9c23-475b-8271-e824c0a55a8a-ovsdbserver-nb\") pod \"dnsmasq-dns-7f8ffbb5bc-b2vzz\" (UID: \"cb0a0baa-9c23-475b-8271-e824c0a55a8a\") " pod="openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.043620 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0a0baa-9c23-475b-8271-e824c0a55a8a-config\") pod \"dnsmasq-dns-7f8ffbb5bc-b2vzz\" (UID: \"cb0a0baa-9c23-475b-8271-e824c0a55a8a\") " pod="openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.043822 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9szj\" (UniqueName: \"kubernetes.io/projected/cb0a0baa-9c23-475b-8271-e824c0a55a8a-kube-api-access-p9szj\") pod \"dnsmasq-dns-7f8ffbb5bc-b2vzz\" (UID: \"cb0a0baa-9c23-475b-8271-e824c0a55a8a\") " pod="openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.126510 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz"] Dec 06 08:59:07 crc kubenswrapper[4895]: E1206 08:59:07.127075 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-p9szj ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz" podUID="cb0a0baa-9c23-475b-8271-e824c0a55a8a" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.145639 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9szj\" (UniqueName: \"kubernetes.io/projected/cb0a0baa-9c23-475b-8271-e824c0a55a8a-kube-api-access-p9szj\") pod \"dnsmasq-dns-7f8ffbb5bc-b2vzz\" (UID: \"cb0a0baa-9c23-475b-8271-e824c0a55a8a\") " pod="openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.145751 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb0a0baa-9c23-475b-8271-e824c0a55a8a-dns-svc\") pod \"dnsmasq-dns-7f8ffbb5bc-b2vzz\" (UID: \"cb0a0baa-9c23-475b-8271-e824c0a55a8a\") " pod="openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.145785 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/cb0a0baa-9c23-475b-8271-e824c0a55a8a-ovsdbserver-nb\") pod \"dnsmasq-dns-7f8ffbb5bc-b2vzz\" (UID: \"cb0a0baa-9c23-475b-8271-e824c0a55a8a\") " pod="openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.145827 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0a0baa-9c23-475b-8271-e824c0a55a8a-config\") pod \"dnsmasq-dns-7f8ffbb5bc-b2vzz\" (UID: \"cb0a0baa-9c23-475b-8271-e824c0a55a8a\") " pod="openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.147140 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb0a0baa-9c23-475b-8271-e824c0a55a8a-ovsdbserver-nb\") pod \"dnsmasq-dns-7f8ffbb5bc-b2vzz\" (UID: \"cb0a0baa-9c23-475b-8271-e824c0a55a8a\") " pod="openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.148326 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0a0baa-9c23-475b-8271-e824c0a55a8a-config\") pod \"dnsmasq-dns-7f8ffbb5bc-b2vzz\" (UID: \"cb0a0baa-9c23-475b-8271-e824c0a55a8a\") " pod="openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.148353 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb0a0baa-9c23-475b-8271-e824c0a55a8a-dns-svc\") pod \"dnsmasq-dns-7f8ffbb5bc-b2vzz\" (UID: \"cb0a0baa-9c23-475b-8271-e824c0a55a8a\") " pod="openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.151775 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-668974d997-4cgm7"] Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.153207 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-668974d997-4cgm7" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.157322 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.173121 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9szj\" (UniqueName: \"kubernetes.io/projected/cb0a0baa-9c23-475b-8271-e824c0a55a8a-kube-api-access-p9szj\") pod \"dnsmasq-dns-7f8ffbb5bc-b2vzz\" (UID: \"cb0a0baa-9c23-475b-8271-e824c0a55a8a\") " pod="openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.190189 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-668974d997-4cgm7"] Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.248312 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-ovsdbserver-sb\") pod \"dnsmasq-dns-668974d997-4cgm7\" (UID: \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\") " pod="openstack/dnsmasq-dns-668974d997-4cgm7" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.248669 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpcbs\" (UniqueName: \"kubernetes.io/projected/8a2302c6-420d-434c-ba2c-bcd9c830c78b-kube-api-access-qpcbs\") pod \"dnsmasq-dns-668974d997-4cgm7\" (UID: \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\") " pod="openstack/dnsmasq-dns-668974d997-4cgm7" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.248742 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-ovsdbserver-nb\") pod \"dnsmasq-dns-668974d997-4cgm7\" (UID: \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\") " pod="openstack/dnsmasq-dns-668974d997-4cgm7" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.248778 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-config\") pod \"dnsmasq-dns-668974d997-4cgm7\" (UID: \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\") " pod="openstack/dnsmasq-dns-668974d997-4cgm7" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.248799 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-dns-svc\") pod \"dnsmasq-dns-668974d997-4cgm7\" (UID: \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\") " pod="openstack/dnsmasq-dns-668974d997-4cgm7" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.350313 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-ovsdbserver-nb\") pod \"dnsmasq-dns-668974d997-4cgm7\" (UID: \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\") " pod="openstack/dnsmasq-dns-668974d997-4cgm7" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.351232 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-ovsdbserver-nb\") pod \"dnsmasq-dns-668974d997-4cgm7\" (UID: \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\") " 
pod="openstack/dnsmasq-dns-668974d997-4cgm7" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.351402 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-config\") pod \"dnsmasq-dns-668974d997-4cgm7\" (UID: \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\") " pod="openstack/dnsmasq-dns-668974d997-4cgm7" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.352104 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-config\") pod \"dnsmasq-dns-668974d997-4cgm7\" (UID: \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\") " pod="openstack/dnsmasq-dns-668974d997-4cgm7" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.352180 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-dns-svc\") pod \"dnsmasq-dns-668974d997-4cgm7\" (UID: \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\") " pod="openstack/dnsmasq-dns-668974d997-4cgm7" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.352267 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-ovsdbserver-sb\") pod \"dnsmasq-dns-668974d997-4cgm7\" (UID: \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\") " pod="openstack/dnsmasq-dns-668974d997-4cgm7" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.352931 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-dns-svc\") pod \"dnsmasq-dns-668974d997-4cgm7\" (UID: \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\") " pod="openstack/dnsmasq-dns-668974d997-4cgm7" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.352955 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-ovsdbserver-sb\") pod \"dnsmasq-dns-668974d997-4cgm7\" (UID: \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\") " pod="openstack/dnsmasq-dns-668974d997-4cgm7" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.353069 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpcbs\" (UniqueName: \"kubernetes.io/projected/8a2302c6-420d-434c-ba2c-bcd9c830c78b-kube-api-access-qpcbs\") pod \"dnsmasq-dns-668974d997-4cgm7\" (UID: \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\") " pod="openstack/dnsmasq-dns-668974d997-4cgm7" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.373373 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpcbs\" (UniqueName: \"kubernetes.io/projected/8a2302c6-420d-434c-ba2c-bcd9c830c78b-kube-api-access-qpcbs\") pod \"dnsmasq-dns-668974d997-4cgm7\" (UID: \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\") " pod="openstack/dnsmasq-dns-668974d997-4cgm7" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.479743 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-668974d997-4cgm7" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.787469 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.799598 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.861750 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb0a0baa-9c23-475b-8271-e824c0a55a8a-ovsdbserver-nb\") pod \"cb0a0baa-9c23-475b-8271-e824c0a55a8a\" (UID: \"cb0a0baa-9c23-475b-8271-e824c0a55a8a\") " Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.862214 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0a0baa-9c23-475b-8271-e824c0a55a8a-config\") pod \"cb0a0baa-9c23-475b-8271-e824c0a55a8a\" (UID: \"cb0a0baa-9c23-475b-8271-e824c0a55a8a\") " Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.862279 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb0a0baa-9c23-475b-8271-e824c0a55a8a-dns-svc\") pod \"cb0a0baa-9c23-475b-8271-e824c0a55a8a\" (UID: \"cb0a0baa-9c23-475b-8271-e824c0a55a8a\") " Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.862299 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9szj\" (UniqueName: \"kubernetes.io/projected/cb0a0baa-9c23-475b-8271-e824c0a55a8a-kube-api-access-p9szj\") pod \"cb0a0baa-9c23-475b-8271-e824c0a55a8a\" (UID: \"cb0a0baa-9c23-475b-8271-e824c0a55a8a\") " Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.862760 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb0a0baa-9c23-475b-8271-e824c0a55a8a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cb0a0baa-9c23-475b-8271-e824c0a55a8a" (UID: "cb0a0baa-9c23-475b-8271-e824c0a55a8a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.862828 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb0a0baa-9c23-475b-8271-e824c0a55a8a-config" (OuterVolumeSpecName: "config") pod "cb0a0baa-9c23-475b-8271-e824c0a55a8a" (UID: "cb0a0baa-9c23-475b-8271-e824c0a55a8a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.863447 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb0a0baa-9c23-475b-8271-e824c0a55a8a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cb0a0baa-9c23-475b-8271-e824c0a55a8a" (UID: "cb0a0baa-9c23-475b-8271-e824c0a55a8a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.866685 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb0a0baa-9c23-475b-8271-e824c0a55a8a-kube-api-access-p9szj" (OuterVolumeSpecName: "kube-api-access-p9szj") pod "cb0a0baa-9c23-475b-8271-e824c0a55a8a" (UID: "cb0a0baa-9c23-475b-8271-e824c0a55a8a"). InnerVolumeSpecName "kube-api-access-p9szj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.922604 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-668974d997-4cgm7"] Dec 06 08:59:07 crc kubenswrapper[4895]: W1206 08:59:07.926832 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a2302c6_420d_434c_ba2c_bcd9c830c78b.slice/crio-8f456ef13ee452e484bee0886899e3bc6379701accf8f3f711777393c5b8211c WatchSource:0}: Error finding container 8f456ef13ee452e484bee0886899e3bc6379701accf8f3f711777393c5b8211c: Status 404 returned error can't find the container with id 8f456ef13ee452e484bee0886899e3bc6379701accf8f3f711777393c5b8211c Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.964702 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0a0baa-9c23-475b-8271-e824c0a55a8a-config\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.964742 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb0a0baa-9c23-475b-8271-e824c0a55a8a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.964754 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9szj\" (UniqueName: \"kubernetes.io/projected/cb0a0baa-9c23-475b-8271-e824c0a55a8a-kube-api-access-p9szj\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:07 crc kubenswrapper[4895]: I1206 08:59:07.964765 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb0a0baa-9c23-475b-8271-e824c0a55a8a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:08 crc kubenswrapper[4895]: I1206 08:59:08.797538 4895 generic.go:334] "Generic (PLEG): container finished" podID="8a2302c6-420d-434c-ba2c-bcd9c830c78b" containerID="f24f94f041a73f817a1119b179b1c13c17c721971d8599788eb150134f4d03cd" exitCode=0 Dec 06 08:59:08 crc kubenswrapper[4895]: I1206 08:59:08.797850 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz" Dec 06 08:59:08 crc kubenswrapper[4895]: I1206 08:59:08.797590 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668974d997-4cgm7" event={"ID":"8a2302c6-420d-434c-ba2c-bcd9c830c78b","Type":"ContainerDied","Data":"f24f94f041a73f817a1119b179b1c13c17c721971d8599788eb150134f4d03cd"} Dec 06 08:59:08 crc kubenswrapper[4895]: I1206 08:59:08.797906 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668974d997-4cgm7" event={"ID":"8a2302c6-420d-434c-ba2c-bcd9c830c78b","Type":"ContainerStarted","Data":"8f456ef13ee452e484bee0886899e3bc6379701accf8f3f711777393c5b8211c"} Dec 06 08:59:08 crc kubenswrapper[4895]: I1206 08:59:08.874511 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz"] Dec 06 08:59:08 crc kubenswrapper[4895]: I1206 08:59:08.881134 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f8ffbb5bc-b2vzz"] Dec 06 08:59:09 crc kubenswrapper[4895]: I1206 08:59:09.589428 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Dec 06 08:59:09 crc kubenswrapper[4895]: I1206 08:59:09.591551 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Dec 06 08:59:09 crc kubenswrapper[4895]: I1206 08:59:09.595257 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Dec 06 08:59:09 crc kubenswrapper[4895]: I1206 08:59:09.605241 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Dec 06 08:59:09 crc kubenswrapper[4895]: I1206 08:59:09.693980 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e601da3b-66d3-4777-b13b-f706562ac1df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e601da3b-66d3-4777-b13b-f706562ac1df\") pod \"ovn-copy-data\" (UID: \"9f09b933-ea93-487e-ad6b-1bba2855d42c\") " pod="openstack/ovn-copy-data" Dec 06 08:59:09 crc kubenswrapper[4895]: I1206 08:59:09.694203 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/9f09b933-ea93-487e-ad6b-1bba2855d42c-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"9f09b933-ea93-487e-ad6b-1bba2855d42c\") " pod="openstack/ovn-copy-data" Dec 06 08:59:09 crc kubenswrapper[4895]: I1206 08:59:09.694237 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pdz4\" (UniqueName: \"kubernetes.io/projected/9f09b933-ea93-487e-ad6b-1bba2855d42c-kube-api-access-9pdz4\") pod \"ovn-copy-data\" (UID: \"9f09b933-ea93-487e-ad6b-1bba2855d42c\") " pod="openstack/ovn-copy-data" Dec 06 08:59:09 crc kubenswrapper[4895]: I1206 08:59:09.795554 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e601da3b-66d3-4777-b13b-f706562ac1df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e601da3b-66d3-4777-b13b-f706562ac1df\") pod \"ovn-copy-data\" (UID: \"9f09b933-ea93-487e-ad6b-1bba2855d42c\") " pod="openstack/ovn-copy-data" Dec 06 08:59:09 crc kubenswrapper[4895]: I1206 08:59:09.795727 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/9f09b933-ea93-487e-ad6b-1bba2855d42c-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"9f09b933-ea93-487e-ad6b-1bba2855d42c\") " pod="openstack/ovn-copy-data" Dec 06 08:59:09 crc kubenswrapper[4895]: I1206 08:59:09.795769 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pdz4\" (UniqueName: \"kubernetes.io/projected/9f09b933-ea93-487e-ad6b-1bba2855d42c-kube-api-access-9pdz4\") pod \"ovn-copy-data\" (UID: \"9f09b933-ea93-487e-ad6b-1bba2855d42c\") " pod="openstack/ovn-copy-data" Dec 06 08:59:09 crc kubenswrapper[4895]: I1206 08:59:09.799463 4895 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 06 08:59:09 crc kubenswrapper[4895]: I1206 08:59:09.799528 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e601da3b-66d3-4777-b13b-f706562ac1df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e601da3b-66d3-4777-b13b-f706562ac1df\") pod \"ovn-copy-data\" (UID: \"9f09b933-ea93-487e-ad6b-1bba2855d42c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d11ad898c01949b274a22f0a1b94478651d318b822bc6b82e2b4132f97c0b5f9/globalmount\"" pod="openstack/ovn-copy-data"
Dec 06 08:59:09 crc kubenswrapper[4895]: I1206 08:59:09.806832 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/9f09b933-ea93-487e-ad6b-1bba2855d42c-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"9f09b933-ea93-487e-ad6b-1bba2855d42c\") " pod="openstack/ovn-copy-data"
Dec 06 08:59:09 crc kubenswrapper[4895]: I1206 08:59:09.815012 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668974d997-4cgm7" event={"ID":"8a2302c6-420d-434c-ba2c-bcd9c830c78b","Type":"ContainerStarted","Data":"b1a94569f61f961bda21a7e9cb91bbd211eead98247a5de1b350ee984abe7767"}
Dec 06 08:59:09 crc kubenswrapper[4895]: I1206 08:59:09.815181 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-668974d997-4cgm7"
Dec 06 08:59:09 crc kubenswrapper[4895]: I1206 08:59:09.820849 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pdz4\" (UniqueName: \"kubernetes.io/projected/9f09b933-ea93-487e-ad6b-1bba2855d42c-kube-api-access-9pdz4\") pod \"ovn-copy-data\" (UID: \"9f09b933-ea93-487e-ad6b-1bba2855d42c\") " pod="openstack/ovn-copy-data"
Dec 06 08:59:09 crc kubenswrapper[4895]: I1206 08:59:09.833384 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-668974d997-4cgm7" podStartSLOduration=2.833357088 podStartE2EDuration="2.833357088s" podCreationTimestamp="2025-12-06 08:59:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 08:59:09.830754058 +0000 UTC m=+7312.232142978" watchObservedRunningTime="2025-12-06 08:59:09.833357088 +0000 UTC m=+7312.234745998"
Dec 06 08:59:09 crc kubenswrapper[4895]: I1206 08:59:09.846447 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e601da3b-66d3-4777-b13b-f706562ac1df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e601da3b-66d3-4777-b13b-f706562ac1df\") pod \"ovn-copy-data\" (UID: \"9f09b933-ea93-487e-ad6b-1bba2855d42c\") " pod="openstack/ovn-copy-data"
Dec 06 08:59:09 crc kubenswrapper[4895]: I1206 08:59:09.927977 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Dec 06 08:59:10 crc kubenswrapper[4895]: I1206 08:59:10.059205 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb0a0baa-9c23-475b-8271-e824c0a55a8a" path="/var/lib/kubelet/pods/cb0a0baa-9c23-475b-8271-e824c0a55a8a/volumes"
Dec 06 08:59:10 crc kubenswrapper[4895]: I1206 08:59:10.466180 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"]
Dec 06 08:59:10 crc kubenswrapper[4895]: W1206 08:59:10.468393 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f09b933_ea93_487e_ad6b_1bba2855d42c.slice/crio-0d9de8e3f8292a9eb560475484bfffea4363068a129731dc96c472e26495d9db WatchSource:0}: Error finding container 0d9de8e3f8292a9eb560475484bfffea4363068a129731dc96c472e26495d9db: Status 404 returned error can't find the container with id 0d9de8e3f8292a9eb560475484bfffea4363068a129731dc96c472e26495d9db
Dec 06 08:59:10 crc kubenswrapper[4895]: I1206 08:59:10.829436 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"9f09b933-ea93-487e-ad6b-1bba2855d42c","Type":"ContainerStarted","Data":"0d9de8e3f8292a9eb560475484bfffea4363068a129731dc96c472e26495d9db"}
Dec 06 08:59:11 crc kubenswrapper[4895]: I1206 08:59:11.050624 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e"
Dec 06 08:59:11 crc kubenswrapper[4895]: E1206 08:59:11.050899 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 08:59:11 crc kubenswrapper[4895]: I1206 08:59:11.840079 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"9f09b933-ea93-487e-ad6b-1bba2855d42c","Type":"ContainerStarted","Data":"dafeae8454b76f45df7091a5dc50b1c51461f1de6001c6ca5452b3b416864495"}
Dec 06 08:59:11 crc kubenswrapper[4895]: I1206 08:59:11.865346 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.660169542 podStartE2EDuration="3.865323154s" podCreationTimestamp="2025-12-06 08:59:08 +0000 UTC" firstStartedPulling="2025-12-06 08:59:10.472171227 +0000 UTC m=+7312.873560097" lastFinishedPulling="2025-12-06 08:59:10.677324799 +0000 UTC m=+7313.078713709" observedRunningTime="2025-12-06 08:59:11.855011598 +0000 UTC m=+7314.256400478" watchObservedRunningTime="2025-12-06 08:59:11.865323154 +0000 UTC m=+7314.266712034"
Dec 06 08:59:17 crc kubenswrapper[4895]: I1206 08:59:17.481722 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-668974d997-4cgm7"
Dec 06 08:59:17 crc kubenswrapper[4895]: I1206 08:59:17.567180 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb"]
Dec 06 08:59:17 crc kubenswrapper[4895]: I1206 08:59:17.567409 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb" podUID="d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e" containerName="dnsmasq-dns" containerID="cri-o://31bb9e075211687b3e53580e31370331a172f355f1367740e8fdb9bcaaf1acc2" gracePeriod=10
Dec 06 08:59:17 crc kubenswrapper[4895]: I1206 08:59:17.905644 4895 generic.go:334] "Generic (PLEG): container finished" podID="d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e" containerID="31bb9e075211687b3e53580e31370331a172f355f1367740e8fdb9bcaaf1acc2" exitCode=0
Dec 06 08:59:17 crc kubenswrapper[4895]: I1206 08:59:17.905700 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb" event={"ID":"d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e","Type":"ContainerDied","Data":"31bb9e075211687b3e53580e31370331a172f355f1367740e8fdb9bcaaf1acc2"}
Dec 06 08:59:18 crc kubenswrapper[4895]: I1206 08:59:18.582180 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb"
Dec 06 08:59:18 crc kubenswrapper[4895]: I1206 08:59:18.695068 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2wzm\" (UniqueName: \"kubernetes.io/projected/d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e-kube-api-access-j2wzm\") pod \"d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e\" (UID: \"d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e\") "
Dec 06 08:59:18 crc kubenswrapper[4895]: I1206 08:59:18.695291 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e-dns-svc\") pod \"d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e\" (UID: \"d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e\") "
Dec 06 08:59:18 crc kubenswrapper[4895]: I1206 08:59:18.695341 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e-config\") pod \"d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e\" (UID: \"d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e\") "
Dec 06 08:59:18 crc kubenswrapper[4895]: I1206 08:59:18.712751 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e-kube-api-access-j2wzm" (OuterVolumeSpecName: "kube-api-access-j2wzm") pod "d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e" (UID: "d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e"). InnerVolumeSpecName "kube-api-access-j2wzm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 08:59:18 crc kubenswrapper[4895]: I1206 08:59:18.784102 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e" (UID: "d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 08:59:18 crc kubenswrapper[4895]: I1206 08:59:18.796964 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 06 08:59:18 crc kubenswrapper[4895]: I1206 08:59:18.797005 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2wzm\" (UniqueName: \"kubernetes.io/projected/d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e-kube-api-access-j2wzm\") on node \"crc\" DevicePath \"\""
Dec 06 08:59:18 crc kubenswrapper[4895]: I1206 08:59:18.800162 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e-config" (OuterVolumeSpecName: "config") pod "d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e" (UID: "d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 08:59:18 crc kubenswrapper[4895]: I1206 08:59:18.897567 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e-config\") on node \"crc\" DevicePath \"\""
Dec 06 08:59:18 crc kubenswrapper[4895]: I1206 08:59:18.914227 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb" event={"ID":"d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e","Type":"ContainerDied","Data":"a3b78cc68b6560d645b6cb396dd73a4dce1e58d5768d0f685b1d5f25b6165e73"}
Dec 06 08:59:18 crc kubenswrapper[4895]: I1206 08:59:18.914293 4895 scope.go:117] "RemoveContainer" containerID="31bb9e075211687b3e53580e31370331a172f355f1367740e8fdb9bcaaf1acc2"
Dec 06 08:59:18 crc kubenswrapper[4895]: I1206 08:59:18.914413 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb"
Dec 06 08:59:18 crc kubenswrapper[4895]: I1206 08:59:18.934692 4895 scope.go:117] "RemoveContainer" containerID="82ceb4c618d804955a0ce7527d1a375013f7df7ca0dc2b0c408682ae5f3f06f8"
Dec 06 08:59:18 crc kubenswrapper[4895]: I1206 08:59:18.949810 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb"]
Dec 06 08:59:18 crc kubenswrapper[4895]: I1206 08:59:18.955237 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f7f6bbcbf-pjmtb"]
Dec 06 08:59:19 crc kubenswrapper[4895]: E1206 08:59:19.021812 4895 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.132:59296->38.129.56.132:44665: read tcp 38.129.56.132:59296->38.129.56.132:44665: read: connection reset by peer
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.765357 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Dec 06 08:59:19 crc kubenswrapper[4895]: E1206 08:59:19.765788 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e" containerName="dnsmasq-dns"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.765809 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e" containerName="dnsmasq-dns"
Dec 06 08:59:19 crc kubenswrapper[4895]: E1206 08:59:19.765829 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e" containerName="init"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.765836 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e" containerName="init"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.766018 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e" containerName="dnsmasq-dns"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.767095 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.772508 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.785950 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.791990 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-74wts"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.792187 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.830895 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c244e581-3e70-4efe-84b5-3f41fb9fdaa0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c244e581-3e70-4efe-84b5-3f41fb9fdaa0\") " pod="openstack/ovn-northd-0"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.831186 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c244e581-3e70-4efe-84b5-3f41fb9fdaa0-scripts\") pod \"ovn-northd-0\" (UID: \"c244e581-3e70-4efe-84b5-3f41fb9fdaa0\") " pod="openstack/ovn-northd-0"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.831276 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn6q8\" (UniqueName: \"kubernetes.io/projected/c244e581-3e70-4efe-84b5-3f41fb9fdaa0-kube-api-access-kn6q8\") pod \"ovn-northd-0\" (UID: \"c244e581-3e70-4efe-84b5-3f41fb9fdaa0\") " pod="openstack/ovn-northd-0"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.831405 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c244e581-3e70-4efe-84b5-3f41fb9fdaa0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c244e581-3e70-4efe-84b5-3f41fb9fdaa0\") " pod="openstack/ovn-northd-0"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.831536 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c244e581-3e70-4efe-84b5-3f41fb9fdaa0-config\") pod \"ovn-northd-0\" (UID: \"c244e581-3e70-4efe-84b5-3f41fb9fdaa0\") " pod="openstack/ovn-northd-0"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.933577 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c244e581-3e70-4efe-84b5-3f41fb9fdaa0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c244e581-3e70-4efe-84b5-3f41fb9fdaa0\") " pod="openstack/ovn-northd-0"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.933626 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c244e581-3e70-4efe-84b5-3f41fb9fdaa0-scripts\") pod \"ovn-northd-0\" (UID: \"c244e581-3e70-4efe-84b5-3f41fb9fdaa0\") " pod="openstack/ovn-northd-0"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.933653 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn6q8\" (UniqueName: \"kubernetes.io/projected/c244e581-3e70-4efe-84b5-3f41fb9fdaa0-kube-api-access-kn6q8\") pod \"ovn-northd-0\" (UID: \"c244e581-3e70-4efe-84b5-3f41fb9fdaa0\") " pod="openstack/ovn-northd-0"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.933705 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c244e581-3e70-4efe-84b5-3f41fb9fdaa0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c244e581-3e70-4efe-84b5-3f41fb9fdaa0\") " pod="openstack/ovn-northd-0"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.933735 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c244e581-3e70-4efe-84b5-3f41fb9fdaa0-config\") pod \"ovn-northd-0\" (UID: \"c244e581-3e70-4efe-84b5-3f41fb9fdaa0\") " pod="openstack/ovn-northd-0"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.934295 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c244e581-3e70-4efe-84b5-3f41fb9fdaa0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c244e581-3e70-4efe-84b5-3f41fb9fdaa0\") " pod="openstack/ovn-northd-0"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.934752 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c244e581-3e70-4efe-84b5-3f41fb9fdaa0-config\") pod \"ovn-northd-0\" (UID: \"c244e581-3e70-4efe-84b5-3f41fb9fdaa0\") " pod="openstack/ovn-northd-0"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.934824 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c244e581-3e70-4efe-84b5-3f41fb9fdaa0-scripts\") pod \"ovn-northd-0\" (UID: \"c244e581-3e70-4efe-84b5-3f41fb9fdaa0\") " pod="openstack/ovn-northd-0"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.944654 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c244e581-3e70-4efe-84b5-3f41fb9fdaa0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c244e581-3e70-4efe-84b5-3f41fb9fdaa0\") " pod="openstack/ovn-northd-0"
Dec 06 08:59:19 crc kubenswrapper[4895]: I1206 08:59:19.949219 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn6q8\" (UniqueName: \"kubernetes.io/projected/c244e581-3e70-4efe-84b5-3f41fb9fdaa0-kube-api-access-kn6q8\") pod \"ovn-northd-0\" (UID: \"c244e581-3e70-4efe-84b5-3f41fb9fdaa0\") " pod="openstack/ovn-northd-0"
Dec 06 08:59:20 crc kubenswrapper[4895]: I1206 08:59:20.060401 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e" path="/var/lib/kubelet/pods/d77c7d1c-b3bb-4a47-bcf8-e6ddbaa7118e/volumes"
Dec 06 08:59:20 crc kubenswrapper[4895]: I1206 08:59:20.095227 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Dec 06 08:59:20 crc kubenswrapper[4895]: I1206 08:59:20.541869 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Dec 06 08:59:20 crc kubenswrapper[4895]: I1206 08:59:20.934911 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c244e581-3e70-4efe-84b5-3f41fb9fdaa0","Type":"ContainerStarted","Data":"b63aefb21cce509a6b16c2a1516f11a323bfdc1b0e5f8717653086b40c0e6251"}
Dec 06 08:59:21 crc kubenswrapper[4895]: I1206 08:59:21.944682 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c244e581-3e70-4efe-84b5-3f41fb9fdaa0","Type":"ContainerStarted","Data":"8e3607453a11bd566548659aafeedb75da8295e30f56a0ab227f29c5354fb787"}
Dec 06 08:59:21 crc kubenswrapper[4895]: I1206 08:59:21.944917 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c244e581-3e70-4efe-84b5-3f41fb9fdaa0","Type":"ContainerStarted","Data":"0bec56f43cfb8fac6991ffdee9eacd6fdd89e3adc4efd294339d28cd24491bbd"}
Dec 06 08:59:21 crc kubenswrapper[4895]: I1206 08:59:21.944933 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Dec 06 08:59:21 crc kubenswrapper[4895]: I1206 08:59:21.963943 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.229589745 podStartE2EDuration="2.963857445s" podCreationTimestamp="2025-12-06 08:59:19 +0000 UTC" firstStartedPulling="2025-12-06 08:59:20.549284593 +0000 UTC m=+7322.950673463" lastFinishedPulling="2025-12-06 08:59:21.283552293 +0000 UTC m=+7323.684941163" observedRunningTime="2025-12-06 08:59:21.962780086 +0000 UTC m=+7324.364169006" watchObservedRunningTime="2025-12-06 08:59:21.963857445 +0000 UTC m=+7324.365246335"
Dec 06 08:59:23 crc kubenswrapper[4895]: I1206 08:59:23.050865 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e"
Dec 06 08:59:23 crc kubenswrapper[4895]: E1206 08:59:23.051227 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.171487 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-h9dcw"]
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.173162 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-h9dcw"
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.176156 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bfa4223-c0d1-4dbd-94cc-65f200123d19-operator-scripts\") pod \"keystone-db-create-h9dcw\" (UID: \"1bfa4223-c0d1-4dbd-94cc-65f200123d19\") " pod="openstack/keystone-db-create-h9dcw"
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.176308 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6nzn\" (UniqueName: \"kubernetes.io/projected/1bfa4223-c0d1-4dbd-94cc-65f200123d19-kube-api-access-x6nzn\") pod \"keystone-db-create-h9dcw\" (UID: \"1bfa4223-c0d1-4dbd-94cc-65f200123d19\") " pod="openstack/keystone-db-create-h9dcw"
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.178295 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-h9dcw"]
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.261898 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0c77-account-create-update-tdrqp"]
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.263044 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0c77-account-create-update-tdrqp"
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.264979 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.269879 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0c77-account-create-update-tdrqp"]
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.277181 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6nzn\" (UniqueName: \"kubernetes.io/projected/1bfa4223-c0d1-4dbd-94cc-65f200123d19-kube-api-access-x6nzn\") pod \"keystone-db-create-h9dcw\" (UID: \"1bfa4223-c0d1-4dbd-94cc-65f200123d19\") " pod="openstack/keystone-db-create-h9dcw"
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.277247 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bfa4223-c0d1-4dbd-94cc-65f200123d19-operator-scripts\") pod \"keystone-db-create-h9dcw\" (UID: \"1bfa4223-c0d1-4dbd-94cc-65f200123d19\") " pod="openstack/keystone-db-create-h9dcw"
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.278261 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bfa4223-c0d1-4dbd-94cc-65f200123d19-operator-scripts\") pod \"keystone-db-create-h9dcw\" (UID: \"1bfa4223-c0d1-4dbd-94cc-65f200123d19\") " pod="openstack/keystone-db-create-h9dcw"
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.295018 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6nzn\" (UniqueName: \"kubernetes.io/projected/1bfa4223-c0d1-4dbd-94cc-65f200123d19-kube-api-access-x6nzn\") pod \"keystone-db-create-h9dcw\" (UID: \"1bfa4223-c0d1-4dbd-94cc-65f200123d19\") " pod="openstack/keystone-db-create-h9dcw"
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.379349 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt4pl\" (UniqueName: \"kubernetes.io/projected/366a0162-e23c-45f2-8c00-3718d0c8cfbf-kube-api-access-kt4pl\") pod \"keystone-0c77-account-create-update-tdrqp\" (UID: \"366a0162-e23c-45f2-8c00-3718d0c8cfbf\") " pod="openstack/keystone-0c77-account-create-update-tdrqp"
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.379408 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/366a0162-e23c-45f2-8c00-3718d0c8cfbf-operator-scripts\") pod \"keystone-0c77-account-create-update-tdrqp\" (UID: \"366a0162-e23c-45f2-8c00-3718d0c8cfbf\") " pod="openstack/keystone-0c77-account-create-update-tdrqp"
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.480877 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt4pl\" (UniqueName: \"kubernetes.io/projected/366a0162-e23c-45f2-8c00-3718d0c8cfbf-kube-api-access-kt4pl\") pod \"keystone-0c77-account-create-update-tdrqp\" (UID: \"366a0162-e23c-45f2-8c00-3718d0c8cfbf\") " pod="openstack/keystone-0c77-account-create-update-tdrqp"
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.480931 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/366a0162-e23c-45f2-8c00-3718d0c8cfbf-operator-scripts\") pod \"keystone-0c77-account-create-update-tdrqp\" (UID: \"366a0162-e23c-45f2-8c00-3718d0c8cfbf\") " pod="openstack/keystone-0c77-account-create-update-tdrqp"
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.481677 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/366a0162-e23c-45f2-8c00-3718d0c8cfbf-operator-scripts\") pod \"keystone-0c77-account-create-update-tdrqp\" (UID: \"366a0162-e23c-45f2-8c00-3718d0c8cfbf\") " pod="openstack/keystone-0c77-account-create-update-tdrqp"
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.495427 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-h9dcw"
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.502166 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt4pl\" (UniqueName: \"kubernetes.io/projected/366a0162-e23c-45f2-8c00-3718d0c8cfbf-kube-api-access-kt4pl\") pod \"keystone-0c77-account-create-update-tdrqp\" (UID: \"366a0162-e23c-45f2-8c00-3718d0c8cfbf\") " pod="openstack/keystone-0c77-account-create-update-tdrqp"
Dec 06 08:59:27 crc kubenswrapper[4895]: I1206 08:59:27.577095 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0c77-account-create-update-tdrqp"
Dec 06 08:59:28 crc kubenswrapper[4895]: I1206 08:59:28.024349 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-h9dcw"]
Dec 06 08:59:28 crc kubenswrapper[4895]: W1206 08:59:28.035877 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bfa4223_c0d1_4dbd_94cc_65f200123d19.slice/crio-382baf366959c3830b0d05120e73d97a621cddc6dcdb156534e2ca7997dc5cf2 WatchSource:0}: Error finding container 382baf366959c3830b0d05120e73d97a621cddc6dcdb156534e2ca7997dc5cf2: Status 404 returned error can't find the container with id 382baf366959c3830b0d05120e73d97a621cddc6dcdb156534e2ca7997dc5cf2
Dec 06 08:59:28 crc kubenswrapper[4895]: I1206 08:59:28.125567 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0c77-account-create-update-tdrqp"]
Dec 06 08:59:28 crc kubenswrapper[4895]: W1206 08:59:28.134976 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod366a0162_e23c_45f2_8c00_3718d0c8cfbf.slice/crio-2bfae365fc3dd1d847e93ce299e9c22f4eb505238fc87c3adc88f2df874cd676 WatchSource:0}: Error finding container 2bfae365fc3dd1d847e93ce299e9c22f4eb505238fc87c3adc88f2df874cd676: Status 404 returned error can't find the container with id 2bfae365fc3dd1d847e93ce299e9c22f4eb505238fc87c3adc88f2df874cd676
Dec 06 08:59:29 crc kubenswrapper[4895]: I1206 08:59:29.027764 4895 generic.go:334] "Generic (PLEG): container finished" podID="366a0162-e23c-45f2-8c00-3718d0c8cfbf" containerID="c7e7653d8002a71c3aeeda3730ce21b91798bf09784abfa7a1dd15aeff221918" exitCode=0
Dec 06 08:59:29 crc kubenswrapper[4895]: I1206 08:59:29.027899 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0c77-account-create-update-tdrqp" event={"ID":"366a0162-e23c-45f2-8c00-3718d0c8cfbf","Type":"ContainerDied","Data":"c7e7653d8002a71c3aeeda3730ce21b91798bf09784abfa7a1dd15aeff221918"}
Dec 06 08:59:29 crc kubenswrapper[4895]: I1206 08:59:29.028145 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0c77-account-create-update-tdrqp" event={"ID":"366a0162-e23c-45f2-8c00-3718d0c8cfbf","Type":"ContainerStarted","Data":"2bfae365fc3dd1d847e93ce299e9c22f4eb505238fc87c3adc88f2df874cd676"}
Dec 06 08:59:29 crc kubenswrapper[4895]: I1206 08:59:29.030192 4895 generic.go:334] "Generic (PLEG): container finished" podID="1bfa4223-c0d1-4dbd-94cc-65f200123d19" containerID="04f79716124a94bf7f9282bddbc1df524e710ceee64800a23322dde82c9ec760" exitCode=0
Dec 06 08:59:29 crc kubenswrapper[4895]: I1206 08:59:29.030234 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-h9dcw" event={"ID":"1bfa4223-c0d1-4dbd-94cc-65f200123d19","Type":"ContainerDied","Data":"04f79716124a94bf7f9282bddbc1df524e710ceee64800a23322dde82c9ec760"}
Dec 06 08:59:29 crc kubenswrapper[4895]: I1206 08:59:29.030255 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-h9dcw" event={"ID":"1bfa4223-c0d1-4dbd-94cc-65f200123d19","Type":"ContainerStarted","Data":"382baf366959c3830b0d05120e73d97a621cddc6dcdb156534e2ca7997dc5cf2"}
Dec 06 08:59:30 crc kubenswrapper[4895]: I1206 08:59:30.402876 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0c77-account-create-update-tdrqp"
Dec 06 08:59:30 crc kubenswrapper[4895]: I1206 08:59:30.408341 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-h9dcw"
Dec 06 08:59:30 crc kubenswrapper[4895]: I1206 08:59:30.529855 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bfa4223-c0d1-4dbd-94cc-65f200123d19-operator-scripts\") pod \"1bfa4223-c0d1-4dbd-94cc-65f200123d19\" (UID: \"1bfa4223-c0d1-4dbd-94cc-65f200123d19\") "
Dec 06 08:59:30 crc kubenswrapper[4895]: I1206 08:59:30.529914 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/366a0162-e23c-45f2-8c00-3718d0c8cfbf-operator-scripts\") pod \"366a0162-e23c-45f2-8c00-3718d0c8cfbf\" (UID: \"366a0162-e23c-45f2-8c00-3718d0c8cfbf\") "
Dec 06 08:59:30 crc kubenswrapper[4895]: I1206 08:59:30.530054 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6nzn\" (UniqueName: \"kubernetes.io/projected/1bfa4223-c0d1-4dbd-94cc-65f200123d19-kube-api-access-x6nzn\") pod \"1bfa4223-c0d1-4dbd-94cc-65f200123d19\" (UID: \"1bfa4223-c0d1-4dbd-94cc-65f200123d19\") "
Dec 06 08:59:30 crc kubenswrapper[4895]: I1206 08:59:30.530142 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt4pl\" (UniqueName: \"kubernetes.io/projected/366a0162-e23c-45f2-8c00-3718d0c8cfbf-kube-api-access-kt4pl\") pod \"366a0162-e23c-45f2-8c00-3718d0c8cfbf\" (UID: \"366a0162-e23c-45f2-8c00-3718d0c8cfbf\") "
Dec 06 08:59:30 crc kubenswrapper[4895]: I1206 08:59:30.530779 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bfa4223-c0d1-4dbd-94cc-65f200123d19-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1bfa4223-c0d1-4dbd-94cc-65f200123d19" (UID: "1bfa4223-c0d1-4dbd-94cc-65f200123d19"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 08:59:30 crc kubenswrapper[4895]: I1206 08:59:30.530944 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/366a0162-e23c-45f2-8c00-3718d0c8cfbf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "366a0162-e23c-45f2-8c00-3718d0c8cfbf" (UID: "366a0162-e23c-45f2-8c00-3718d0c8cfbf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 08:59:30 crc kubenswrapper[4895]: I1206 08:59:30.536011 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/366a0162-e23c-45f2-8c00-3718d0c8cfbf-kube-api-access-kt4pl" (OuterVolumeSpecName: "kube-api-access-kt4pl") pod "366a0162-e23c-45f2-8c00-3718d0c8cfbf" (UID: "366a0162-e23c-45f2-8c00-3718d0c8cfbf"). InnerVolumeSpecName "kube-api-access-kt4pl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 08:59:30 crc kubenswrapper[4895]: I1206 08:59:30.536735 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bfa4223-c0d1-4dbd-94cc-65f200123d19-kube-api-access-x6nzn" (OuterVolumeSpecName: "kube-api-access-x6nzn") pod "1bfa4223-c0d1-4dbd-94cc-65f200123d19" (UID: "1bfa4223-c0d1-4dbd-94cc-65f200123d19"). InnerVolumeSpecName "kube-api-access-x6nzn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 08:59:30 crc kubenswrapper[4895]: I1206 08:59:30.632082 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bfa4223-c0d1-4dbd-94cc-65f200123d19-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 08:59:30 crc kubenswrapper[4895]: I1206 08:59:30.632441 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/366a0162-e23c-45f2-8c00-3718d0c8cfbf-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 08:59:30 crc kubenswrapper[4895]: I1206 08:59:30.632454 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6nzn\" (UniqueName: \"kubernetes.io/projected/1bfa4223-c0d1-4dbd-94cc-65f200123d19-kube-api-access-x6nzn\") on node \"crc\" DevicePath \"\""
Dec 06 08:59:30 crc kubenswrapper[4895]: I1206 08:59:30.632515 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt4pl\" (UniqueName: \"kubernetes.io/projected/366a0162-e23c-45f2-8c00-3718d0c8cfbf-kube-api-access-kt4pl\") on node \"crc\" DevicePath \"\""
Dec 06 08:59:31 crc kubenswrapper[4895]: I1206 08:59:31.059143 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0c77-account-create-update-tdrqp"
Dec 06 08:59:31 crc kubenswrapper[4895]: I1206 08:59:31.059143 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0c77-account-create-update-tdrqp" event={"ID":"366a0162-e23c-45f2-8c00-3718d0c8cfbf","Type":"ContainerDied","Data":"2bfae365fc3dd1d847e93ce299e9c22f4eb505238fc87c3adc88f2df874cd676"}
Dec 06 08:59:31 crc kubenswrapper[4895]: I1206 08:59:31.059225 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bfae365fc3dd1d847e93ce299e9c22f4eb505238fc87c3adc88f2df874cd676"
Dec 06 08:59:31 crc kubenswrapper[4895]: I1206 08:59:31.060896 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-h9dcw" event={"ID":"1bfa4223-c0d1-4dbd-94cc-65f200123d19","Type":"ContainerDied","Data":"382baf366959c3830b0d05120e73d97a621cddc6dcdb156534e2ca7997dc5cf2"}
Dec 06 08:59:31 crc kubenswrapper[4895]: I1206 08:59:31.060930 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="382baf366959c3830b0d05120e73d97a621cddc6dcdb156534e2ca7997dc5cf2"
Dec 06 08:59:31 crc kubenswrapper[4895]: I1206 08:59:31.061002 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-h9dcw"
Dec 06 08:59:32 crc kubenswrapper[4895]: I1206 08:59:32.727884 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-ttzbm"]
Dec 06 08:59:32 crc kubenswrapper[4895]: E1206 08:59:32.728763 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bfa4223-c0d1-4dbd-94cc-65f200123d19" containerName="mariadb-database-create"
Dec 06 08:59:32 crc kubenswrapper[4895]: I1206 08:59:32.728786 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bfa4223-c0d1-4dbd-94cc-65f200123d19" containerName="mariadb-database-create"
Dec 06 08:59:32 crc kubenswrapper[4895]: E1206 08:59:32.728803 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="366a0162-e23c-45f2-8c00-3718d0c8cfbf" containerName="mariadb-account-create-update"
Dec 06 08:59:32 crc kubenswrapper[4895]: I1206 08:59:32.728811 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="366a0162-e23c-45f2-8c00-3718d0c8cfbf" containerName="mariadb-account-create-update"
Dec 06 08:59:32 crc kubenswrapper[4895]: I1206 08:59:32.729050 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="366a0162-e23c-45f2-8c00-3718d0c8cfbf" containerName="mariadb-account-create-update"
Dec 06 08:59:32 crc kubenswrapper[4895]: I1206 08:59:32.729068 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bfa4223-c0d1-4dbd-94cc-65f200123d19" containerName="mariadb-database-create"
Dec 06 08:59:32 crc kubenswrapper[4895]: I1206 08:59:32.729793 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ttzbm"
Dec 06 08:59:32 crc kubenswrapper[4895]: I1206 08:59:32.732552 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 06 08:59:32 crc kubenswrapper[4895]: I1206 08:59:32.732993 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zcc6v"
Dec 06 08:59:32 crc kubenswrapper[4895]: I1206 08:59:32.733098 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 06 08:59:32 crc kubenswrapper[4895]: I1206 08:59:32.733129 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 06 08:59:32 crc kubenswrapper[4895]: I1206 08:59:32.748595 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ttzbm"]
Dec 06 08:59:32 crc kubenswrapper[4895]: I1206 08:59:32.789774 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e8de9a5-f43e-4960-b492-679e9cb276f3-config-data\") pod \"keystone-db-sync-ttzbm\" (UID: \"4e8de9a5-f43e-4960-b492-679e9cb276f3\") " pod="openstack/keystone-db-sync-ttzbm"
Dec 06 08:59:32 crc kubenswrapper[4895]: I1206 08:59:32.789823 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e8de9a5-f43e-4960-b492-679e9cb276f3-combined-ca-bundle\") pod \"keystone-db-sync-ttzbm\" (UID: \"4e8de9a5-f43e-4960-b492-679e9cb276f3\") " pod="openstack/keystone-db-sync-ttzbm"
Dec 06 08:59:32 crc kubenswrapper[4895]: I1206 08:59:32.789892 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8wf8\" (UniqueName: \"kubernetes.io/projected/4e8de9a5-f43e-4960-b492-679e9cb276f3-kube-api-access-x8wf8\") pod \"keystone-db-sync-ttzbm\" (UID: \"4e8de9a5-f43e-4960-b492-679e9cb276f3\") " pod="openstack/keystone-db-sync-ttzbm"
Dec 06 08:59:32 crc kubenswrapper[4895]: I1206 08:59:32.890889 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8wf8\" (UniqueName: \"kubernetes.io/projected/4e8de9a5-f43e-4960-b492-679e9cb276f3-kube-api-access-x8wf8\") pod \"keystone-db-sync-ttzbm\" (UID: \"4e8de9a5-f43e-4960-b492-679e9cb276f3\") " pod="openstack/keystone-db-sync-ttzbm"
Dec 06 08:59:32 crc kubenswrapper[4895]: I1206 08:59:32.891097 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e8de9a5-f43e-4960-b492-679e9cb276f3-config-data\") pod \"keystone-db-sync-ttzbm\" (UID: \"4e8de9a5-f43e-4960-b492-679e9cb276f3\") " pod="openstack/keystone-db-sync-ttzbm"
Dec 06 08:59:32 crc kubenswrapper[4895]: I1206 08:59:32.891138 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e8de9a5-f43e-4960-b492-679e9cb276f3-combined-ca-bundle\") pod \"keystone-db-sync-ttzbm\" (UID: \"4e8de9a5-f43e-4960-b492-679e9cb276f3\") " pod="openstack/keystone-db-sync-ttzbm"
Dec 06 08:59:32 crc kubenswrapper[4895]: I1206 08:59:32.897220 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e8de9a5-f43e-4960-b492-679e9cb276f3-config-data\") pod \"keystone-db-sync-ttzbm\" (UID: \"4e8de9a5-f43e-4960-b492-679e9cb276f3\") " pod="openstack/keystone-db-sync-ttzbm"
Dec 06 08:59:32 crc kubenswrapper[4895]: I1206 08:59:32.897917 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e8de9a5-f43e-4960-b492-679e9cb276f3-combined-ca-bundle\") pod \"keystone-db-sync-ttzbm\" (UID: \"4e8de9a5-f43e-4960-b492-679e9cb276f3\") " pod="openstack/keystone-db-sync-ttzbm"
Dec 06 08:59:32 crc kubenswrapper[4895]: I1206 08:59:32.906527 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8wf8\" (UniqueName: \"kubernetes.io/projected/4e8de9a5-f43e-4960-b492-679e9cb276f3-kube-api-access-x8wf8\") pod \"keystone-db-sync-ttzbm\" (UID: \"4e8de9a5-f43e-4960-b492-679e9cb276f3\") " pod="openstack/keystone-db-sync-ttzbm"
Dec 06 08:59:33 crc kubenswrapper[4895]: I1206 08:59:33.098940 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ttzbm"
Dec 06 08:59:33 crc kubenswrapper[4895]: I1206 08:59:33.560095 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ttzbm"]
Dec 06 08:59:34 crc kubenswrapper[4895]: I1206 08:59:34.088656 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ttzbm" event={"ID":"4e8de9a5-f43e-4960-b492-679e9cb276f3","Type":"ContainerStarted","Data":"6f2e66084d7c388e533781d2e8fc391cd858b318579d10d2d8d9529d096aa506"}
Dec 06 08:59:35 crc kubenswrapper[4895]: I1206 08:59:35.173265 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Dec 06 08:59:37 crc kubenswrapper[4895]: I1206 08:59:37.052496 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e"
Dec 06 08:59:37 crc kubenswrapper[4895]: E1206 08:59:37.053550 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 08:59:39 crc kubenswrapper[4895]: I1206 08:59:39.131913 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ttzbm" event={"ID":"4e8de9a5-f43e-4960-b492-679e9cb276f3","Type":"ContainerStarted","Data":"09f4663736ebfe30af3ccd63aafdffd5225ad5ec08673976104a3ef926132850"}
Dec 06 08:59:39 crc kubenswrapper[4895]: I1206 08:59:39.158380 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-ttzbm" podStartSLOduration=2.391625492 podStartE2EDuration="7.158362042s" podCreationTimestamp="2025-12-06 08:59:32 +0000 UTC" firstStartedPulling="2025-12-06 08:59:33.568990434 +0000 UTC m=+7335.970379304" lastFinishedPulling="2025-12-06 08:59:38.335726974 +0000 UTC m=+7340.737115854" observedRunningTime="2025-12-06 08:59:39.15418953 +0000 UTC m=+7341.555578400" watchObservedRunningTime="2025-12-06 08:59:39.158362042 +0000 UTC m=+7341.559750912"
Dec 06 08:59:41 crc kubenswrapper[4895]: I1206 08:59:41.149940 4895 generic.go:334] "Generic (PLEG): container finished" podID="4e8de9a5-f43e-4960-b492-679e9cb276f3" containerID="09f4663736ebfe30af3ccd63aafdffd5225ad5ec08673976104a3ef926132850" exitCode=0
Dec 06 08:59:41 crc kubenswrapper[4895]: I1206 08:59:41.149986 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ttzbm" event={"ID":"4e8de9a5-f43e-4960-b492-679e9cb276f3","Type":"ContainerDied","Data":"09f4663736ebfe30af3ccd63aafdffd5225ad5ec08673976104a3ef926132850"}
Dec 06 08:59:42 crc kubenswrapper[4895]: I1206 08:59:42.463879 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ttzbm"
Dec 06 08:59:42 crc kubenswrapper[4895]: I1206 08:59:42.667525 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e8de9a5-f43e-4960-b492-679e9cb276f3-config-data\") pod \"4e8de9a5-f43e-4960-b492-679e9cb276f3\" (UID: \"4e8de9a5-f43e-4960-b492-679e9cb276f3\") "
Dec 06 08:59:42 crc kubenswrapper[4895]: I1206 08:59:42.667649 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e8de9a5-f43e-4960-b492-679e9cb276f3-combined-ca-bundle\") pod \"4e8de9a5-f43e-4960-b492-679e9cb276f3\" (UID: \"4e8de9a5-f43e-4960-b492-679e9cb276f3\") "
Dec 06 08:59:42 crc kubenswrapper[4895]: I1206 08:59:42.667824 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8wf8\" (UniqueName: \"kubernetes.io/projected/4e8de9a5-f43e-4960-b492-679e9cb276f3-kube-api-access-x8wf8\") pod \"4e8de9a5-f43e-4960-b492-679e9cb276f3\" (UID: \"4e8de9a5-f43e-4960-b492-679e9cb276f3\") "
Dec 06 08:59:42 crc kubenswrapper[4895]: I1206 08:59:42.675070 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e8de9a5-f43e-4960-b492-679e9cb276f3-kube-api-access-x8wf8" (OuterVolumeSpecName: "kube-api-access-x8wf8") pod "4e8de9a5-f43e-4960-b492-679e9cb276f3" (UID: "4e8de9a5-f43e-4960-b492-679e9cb276f3"). InnerVolumeSpecName "kube-api-access-x8wf8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 08:59:42 crc kubenswrapper[4895]: I1206 08:59:42.706366 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e8de9a5-f43e-4960-b492-679e9cb276f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e8de9a5-f43e-4960-b492-679e9cb276f3" (UID: "4e8de9a5-f43e-4960-b492-679e9cb276f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 08:59:42 crc kubenswrapper[4895]: I1206 08:59:42.711150 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e8de9a5-f43e-4960-b492-679e9cb276f3-config-data" (OuterVolumeSpecName: "config-data") pod "4e8de9a5-f43e-4960-b492-679e9cb276f3" (UID: "4e8de9a5-f43e-4960-b492-679e9cb276f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 08:59:42 crc kubenswrapper[4895]: I1206 08:59:42.769304 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e8de9a5-f43e-4960-b492-679e9cb276f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 08:59:42 crc kubenswrapper[4895]: I1206 08:59:42.769342 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8wf8\" (UniqueName: \"kubernetes.io/projected/4e8de9a5-f43e-4960-b492-679e9cb276f3-kube-api-access-x8wf8\") on node \"crc\" DevicePath \"\""
Dec 06 08:59:42 crc kubenswrapper[4895]: I1206 08:59:42.769358 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e8de9a5-f43e-4960-b492-679e9cb276f3-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.168192 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ttzbm" event={"ID":"4e8de9a5-f43e-4960-b492-679e9cb276f3","Type":"ContainerDied","Data":"6f2e66084d7c388e533781d2e8fc391cd858b318579d10d2d8d9529d096aa506"}
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.168452 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f2e66084d7c388e533781d2e8fc391cd858b318579d10d2d8d9529d096aa506"
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.168254 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ttzbm"
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.739186 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77cd747fc9-878kn"]
Dec 06 08:59:43 crc kubenswrapper[4895]: E1206 08:59:43.739504 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e8de9a5-f43e-4960-b492-679e9cb276f3" containerName="keystone-db-sync"
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.739516 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e8de9a5-f43e-4960-b492-679e9cb276f3" containerName="keystone-db-sync"
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.739667 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e8de9a5-f43e-4960-b492-679e9cb276f3" containerName="keystone-db-sync"
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.740445 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77cd747fc9-878kn"
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.759098 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-qs2gf"]
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.760574 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qs2gf"
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.770634 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.771016 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.771497 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.775121 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.775779 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zcc6v"
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.806313 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77cd747fc9-878kn"]
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.827278 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qs2gf"]
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.887938 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-fernet-keys\") pod \"keystone-bootstrap-qs2gf\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " pod="openstack/keystone-bootstrap-qs2gf"
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.887977 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-combined-ca-bundle\") pod \"keystone-bootstrap-qs2gf\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " pod="openstack/keystone-bootstrap-qs2gf"
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.888585 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-credential-keys\") pod \"keystone-bootstrap-qs2gf\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " pod="openstack/keystone-bootstrap-qs2gf"
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.888618 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-scripts\") pod \"keystone-bootstrap-qs2gf\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " pod="openstack/keystone-bootstrap-qs2gf"
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.888634 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-ovsdbserver-nb\") pod \"dnsmasq-dns-77cd747fc9-878kn\" (UID: \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\") " pod="openstack/dnsmasq-dns-77cd747fc9-878kn"
Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.888848 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-ovsdbserver-sb\") pod \"dnsmasq-dns-77cd747fc9-878kn\" (UID: \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\") "
pod="openstack/dnsmasq-dns-77cd747fc9-878kn" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.888976 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjg8j\" (UniqueName: \"kubernetes.io/projected/d691404e-dab7-4610-af38-fc965d1b8d6e-kube-api-access-qjg8j\") pod \"keystone-bootstrap-qs2gf\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " pod="openstack/keystone-bootstrap-qs2gf" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.889029 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2m5g\" (UniqueName: \"kubernetes.io/projected/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-kube-api-access-z2m5g\") pod \"dnsmasq-dns-77cd747fc9-878kn\" (UID: \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\") " pod="openstack/dnsmasq-dns-77cd747fc9-878kn" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.889080 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-dns-svc\") pod \"dnsmasq-dns-77cd747fc9-878kn\" (UID: \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\") " pod="openstack/dnsmasq-dns-77cd747fc9-878kn" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.889163 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-config-data\") pod \"keystone-bootstrap-qs2gf\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " pod="openstack/keystone-bootstrap-qs2gf" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.889191 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-config\") pod \"dnsmasq-dns-77cd747fc9-878kn\" (UID: \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\") " pod="openstack/dnsmasq-dns-77cd747fc9-878kn" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.990259 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-ovsdbserver-sb\") pod \"dnsmasq-dns-77cd747fc9-878kn\" (UID: \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\") " pod="openstack/dnsmasq-dns-77cd747fc9-878kn" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.990316 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjg8j\" (UniqueName: \"kubernetes.io/projected/d691404e-dab7-4610-af38-fc965d1b8d6e-kube-api-access-qjg8j\") pod \"keystone-bootstrap-qs2gf\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " pod="openstack/keystone-bootstrap-qs2gf" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.990342 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2m5g\" (UniqueName: \"kubernetes.io/projected/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-kube-api-access-z2m5g\") pod \"dnsmasq-dns-77cd747fc9-878kn\" (UID: \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\") " pod="openstack/dnsmasq-dns-77cd747fc9-878kn" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.990379 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-dns-svc\") pod \"dnsmasq-dns-77cd747fc9-878kn\" (UID: 
\"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\") " pod="openstack/dnsmasq-dns-77cd747fc9-878kn" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.990429 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-config-data\") pod \"keystone-bootstrap-qs2gf\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " pod="openstack/keystone-bootstrap-qs2gf" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.990448 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-config\") pod \"dnsmasq-dns-77cd747fc9-878kn\" (UID: \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\") " pod="openstack/dnsmasq-dns-77cd747fc9-878kn" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.990496 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-fernet-keys\") pod \"keystone-bootstrap-qs2gf\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " pod="openstack/keystone-bootstrap-qs2gf" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.990514 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-combined-ca-bundle\") pod \"keystone-bootstrap-qs2gf\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " pod="openstack/keystone-bootstrap-qs2gf" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.990533 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-credential-keys\") pod \"keystone-bootstrap-qs2gf\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " pod="openstack/keystone-bootstrap-qs2gf" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.990551 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-scripts\") pod \"keystone-bootstrap-qs2gf\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " pod="openstack/keystone-bootstrap-qs2gf" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.990566 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-ovsdbserver-nb\") pod \"dnsmasq-dns-77cd747fc9-878kn\" (UID: \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\") " pod="openstack/dnsmasq-dns-77cd747fc9-878kn" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.991320 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-ovsdbserver-nb\") pod \"dnsmasq-dns-77cd747fc9-878kn\" (UID: \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\") " pod="openstack/dnsmasq-dns-77cd747fc9-878kn" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.991373 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-ovsdbserver-sb\") pod \"dnsmasq-dns-77cd747fc9-878kn\" (UID: \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\") " pod="openstack/dnsmasq-dns-77cd747fc9-878kn" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.992164 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-dns-svc\") pod \"dnsmasq-dns-77cd747fc9-878kn\" (UID: \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\") " pod="openstack/dnsmasq-dns-77cd747fc9-878kn" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.992311 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-config\") pod \"dnsmasq-dns-77cd747fc9-878kn\" (UID: \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\") " pod="openstack/dnsmasq-dns-77cd747fc9-878kn" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.994696 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-config-data\") pod \"keystone-bootstrap-qs2gf\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " pod="openstack/keystone-bootstrap-qs2gf" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.995292 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-combined-ca-bundle\") pod \"keystone-bootstrap-qs2gf\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " pod="openstack/keystone-bootstrap-qs2gf" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.995321 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-scripts\") pod \"keystone-bootstrap-qs2gf\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " pod="openstack/keystone-bootstrap-qs2gf" Dec 06 08:59:43 crc kubenswrapper[4895]: I1206 08:59:43.995750 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-credential-keys\") pod \"keystone-bootstrap-qs2gf\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " pod="openstack/keystone-bootstrap-qs2gf" Dec 06 08:59:44 crc kubenswrapper[4895]: I1206 08:59:44.007651 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-fernet-keys\") pod \"keystone-bootstrap-qs2gf\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " pod="openstack/keystone-bootstrap-qs2gf" Dec 06 08:59:44 crc kubenswrapper[4895]: I1206 08:59:44.011078 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjg8j\" (UniqueName: \"kubernetes.io/projected/d691404e-dab7-4610-af38-fc965d1b8d6e-kube-api-access-qjg8j\") pod \"keystone-bootstrap-qs2gf\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " pod="openstack/keystone-bootstrap-qs2gf" Dec 06 08:59:44 crc kubenswrapper[4895]: I1206 08:59:44.014226 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2m5g\" (UniqueName: \"kubernetes.io/projected/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-kube-api-access-z2m5g\") pod \"dnsmasq-dns-77cd747fc9-878kn\" (UID: \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\") " pod="openstack/dnsmasq-dns-77cd747fc9-878kn" Dec 06 08:59:44 crc kubenswrapper[4895]: I1206 08:59:44.055768 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77cd747fc9-878kn" Dec 06 08:59:44 crc kubenswrapper[4895]: I1206 08:59:44.084862 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qs2gf" Dec 06 08:59:44 crc kubenswrapper[4895]: W1206 08:59:44.520676 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a449e9f_1b28_4dfc_a8d1_71dc342c8ca9.slice/crio-abe0c0d54827ae87ad923ce331b6317dbf3890b8bb4ce838c1c9da9f0f6291ec WatchSource:0}: Error finding container abe0c0d54827ae87ad923ce331b6317dbf3890b8bb4ce838c1c9da9f0f6291ec: Status 404 returned error can't find the container with id abe0c0d54827ae87ad923ce331b6317dbf3890b8bb4ce838c1c9da9f0f6291ec Dec 06 08:59:44 crc kubenswrapper[4895]: I1206 08:59:44.522178 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77cd747fc9-878kn"] Dec 06 08:59:44 crc kubenswrapper[4895]: I1206 08:59:44.622075 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qs2gf"] Dec 06 08:59:45 crc kubenswrapper[4895]: I1206 08:59:45.196945 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qs2gf" event={"ID":"d691404e-dab7-4610-af38-fc965d1b8d6e","Type":"ContainerStarted","Data":"5ee58780191a1eda3550d6ab2e7eb27e262674d8151ce6a2268f411d6930f311"} Dec 06 08:59:45 crc kubenswrapper[4895]: I1206 08:59:45.198422 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qs2gf" event={"ID":"d691404e-dab7-4610-af38-fc965d1b8d6e","Type":"ContainerStarted","Data":"9e15f99ff876e982d0d1b6a6af144b2610600a5713c1eee9a4aa533efd12e251"} Dec 06 08:59:45 crc kubenswrapper[4895]: I1206 08:59:45.201546 4895 generic.go:334] "Generic (PLEG): container finished" podID="5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9" containerID="ed99bbdcffa8d7f2003a09cbd95eaa574a555e0dff76a4a5a8171ddb9c832b24" exitCode=0 Dec 06 08:59:45 crc kubenswrapper[4895]: I1206 08:59:45.201915 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cd747fc9-878kn" event={"ID":"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9","Type":"ContainerDied","Data":"ed99bbdcffa8d7f2003a09cbd95eaa574a555e0dff76a4a5a8171ddb9c832b24"} Dec 06 08:59:45 crc kubenswrapper[4895]: I1206 08:59:45.202126 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cd747fc9-878kn" event={"ID":"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9","Type":"ContainerStarted","Data":"abe0c0d54827ae87ad923ce331b6317dbf3890b8bb4ce838c1c9da9f0f6291ec"} Dec 06 08:59:45 crc kubenswrapper[4895]: I1206 08:59:45.239817 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-qs2gf" podStartSLOduration=2.239798825 podStartE2EDuration="2.239798825s" podCreationTimestamp="2025-12-06 08:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 08:59:45.230628549 +0000 UTC m=+7347.632017429" watchObservedRunningTime="2025-12-06 08:59:45.239798825 +0000 UTC m=+7347.641187705" Dec 06 08:59:46 crc kubenswrapper[4895]: I1206 08:59:46.216911 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cd747fc9-878kn" event={"ID":"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9","Type":"ContainerStarted","Data":"5598e8c14406d5800dac949b7c951dc7501cfcad880a0093ba137042ec9b5dbc"} Dec 06 08:59:46 crc kubenswrapper[4895]: I1206 
08:59:46.218781 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77cd747fc9-878kn" Dec 06 08:59:46 crc kubenswrapper[4895]: I1206 08:59:46.247021 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77cd747fc9-878kn" podStartSLOduration=3.247006453 podStartE2EDuration="3.247006453s" podCreationTimestamp="2025-12-06 08:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 08:59:46.245713618 +0000 UTC m=+7348.647102498" watchObservedRunningTime="2025-12-06 08:59:46.247006453 +0000 UTC m=+7348.648395323" Dec 06 08:59:48 crc kubenswrapper[4895]: I1206 08:59:48.240712 4895 generic.go:334] "Generic (PLEG): container finished" podID="d691404e-dab7-4610-af38-fc965d1b8d6e" containerID="5ee58780191a1eda3550d6ab2e7eb27e262674d8151ce6a2268f411d6930f311" exitCode=0 Dec 06 08:59:48 crc kubenswrapper[4895]: I1206 08:59:48.240864 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qs2gf" event={"ID":"d691404e-dab7-4610-af38-fc965d1b8d6e","Type":"ContainerDied","Data":"5ee58780191a1eda3550d6ab2e7eb27e262674d8151ce6a2268f411d6930f311"} Dec 06 08:59:49 crc kubenswrapper[4895]: I1206 08:59:49.640690 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qs2gf" Dec 06 08:59:49 crc kubenswrapper[4895]: I1206 08:59:49.803997 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-combined-ca-bundle\") pod \"d691404e-dab7-4610-af38-fc965d1b8d6e\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " Dec 06 08:59:49 crc kubenswrapper[4895]: I1206 08:59:49.804052 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-fernet-keys\") pod \"d691404e-dab7-4610-af38-fc965d1b8d6e\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " Dec 06 08:59:49 crc kubenswrapper[4895]: I1206 08:59:49.804111 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjg8j\" (UniqueName: \"kubernetes.io/projected/d691404e-dab7-4610-af38-fc965d1b8d6e-kube-api-access-qjg8j\") pod \"d691404e-dab7-4610-af38-fc965d1b8d6e\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " Dec 06 08:59:49 crc kubenswrapper[4895]: I1206 08:59:49.804143 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-config-data\") pod \"d691404e-dab7-4610-af38-fc965d1b8d6e\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " Dec 06 08:59:49 crc kubenswrapper[4895]: I1206 08:59:49.804195 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-credential-keys\") pod \"d691404e-dab7-4610-af38-fc965d1b8d6e\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") " Dec 06 08:59:49 crc kubenswrapper[4895]: I1206 08:59:49.804396 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-scripts\") pod \"d691404e-dab7-4610-af38-fc965d1b8d6e\" (UID: \"d691404e-dab7-4610-af38-fc965d1b8d6e\") 
" Dec 06 08:59:49 crc kubenswrapper[4895]: I1206 08:59:49.810864 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d691404e-dab7-4610-af38-fc965d1b8d6e" (UID: "d691404e-dab7-4610-af38-fc965d1b8d6e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 08:59:49 crc kubenswrapper[4895]: I1206 08:59:49.812784 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-scripts" (OuterVolumeSpecName: "scripts") pod "d691404e-dab7-4610-af38-fc965d1b8d6e" (UID: "d691404e-dab7-4610-af38-fc965d1b8d6e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 08:59:49 crc kubenswrapper[4895]: I1206 08:59:49.820731 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d691404e-dab7-4610-af38-fc965d1b8d6e-kube-api-access-qjg8j" (OuterVolumeSpecName: "kube-api-access-qjg8j") pod "d691404e-dab7-4610-af38-fc965d1b8d6e" (UID: "d691404e-dab7-4610-af38-fc965d1b8d6e"). InnerVolumeSpecName "kube-api-access-qjg8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:59:49 crc kubenswrapper[4895]: I1206 08:59:49.823573 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d691404e-dab7-4610-af38-fc965d1b8d6e" (UID: "d691404e-dab7-4610-af38-fc965d1b8d6e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 08:59:49 crc kubenswrapper[4895]: I1206 08:59:49.836308 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-config-data" (OuterVolumeSpecName: "config-data") pod "d691404e-dab7-4610-af38-fc965d1b8d6e" (UID: "d691404e-dab7-4610-af38-fc965d1b8d6e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 08:59:49 crc kubenswrapper[4895]: I1206 08:59:49.853122 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d691404e-dab7-4610-af38-fc965d1b8d6e" (UID: "d691404e-dab7-4610-af38-fc965d1b8d6e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 08:59:49 crc kubenswrapper[4895]: I1206 08:59:49.906746 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:49 crc kubenswrapper[4895]: I1206 08:59:49.906781 4895 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:49 crc kubenswrapper[4895]: I1206 08:59:49.906792 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:49 crc kubenswrapper[4895]: I1206 08:59:49.906802 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjg8j\" (UniqueName: \"kubernetes.io/projected/d691404e-dab7-4610-af38-fc965d1b8d6e-kube-api-access-qjg8j\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:49 crc kubenswrapper[4895]: I1206 08:59:49.906811 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:49 crc kubenswrapper[4895]: I1206 08:59:49.906821 4895 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d691404e-dab7-4610-af38-fc965d1b8d6e-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.051389 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 08:59:50 crc kubenswrapper[4895]: E1206 08:59:50.051810 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.260574 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qs2gf" event={"ID":"d691404e-dab7-4610-af38-fc965d1b8d6e","Type":"ContainerDied","Data":"9e15f99ff876e982d0d1b6a6af144b2610600a5713c1eee9a4aa533efd12e251"} Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.260964 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e15f99ff876e982d0d1b6a6af144b2610600a5713c1eee9a4aa533efd12e251" Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.260650 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qs2gf" Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.346778 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-qs2gf"] Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.353215 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-qs2gf"] Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.434327 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jj4z9"] Dec 06 08:59:50 crc kubenswrapper[4895]: E1206 08:59:50.434793 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d691404e-dab7-4610-af38-fc965d1b8d6e" containerName="keystone-bootstrap" Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.434815 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d691404e-dab7-4610-af38-fc965d1b8d6e" containerName="keystone-bootstrap" Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.435080 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d691404e-dab7-4610-af38-fc965d1b8d6e" containerName="keystone-bootstrap" Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.435981 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jj4z9" Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.440423 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.440499 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.440893 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zcc6v" Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.440988 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.441209 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.445199 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jj4z9"] Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.617007 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-fernet-keys\") pod \"keystone-bootstrap-jj4z9\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " pod="openstack/keystone-bootstrap-jj4z9" Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.617089 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jc25\" (UniqueName: \"kubernetes.io/projected/1eace17d-b195-4e67-bda4-f1a2c830b508-kube-api-access-2jc25\") pod \"keystone-bootstrap-jj4z9\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " pod="openstack/keystone-bootstrap-jj4z9" Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.617314 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-credential-keys\") pod \"keystone-bootstrap-jj4z9\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " pod="openstack/keystone-bootstrap-jj4z9" Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 
Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.617424 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-config-data\") pod \"keystone-bootstrap-jj4z9\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " pod="openstack/keystone-bootstrap-jj4z9"
Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.617509 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-scripts\") pod \"keystone-bootstrap-jj4z9\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " pod="openstack/keystone-bootstrap-jj4z9"
Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.617563 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-combined-ca-bundle\") pod \"keystone-bootstrap-jj4z9\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " pod="openstack/keystone-bootstrap-jj4z9"
Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.719339 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-fernet-keys\") pod \"keystone-bootstrap-jj4z9\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " pod="openstack/keystone-bootstrap-jj4z9"
Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.719410 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jc25\" (UniqueName: \"kubernetes.io/projected/1eace17d-b195-4e67-bda4-f1a2c830b508-kube-api-access-2jc25\") pod \"keystone-bootstrap-jj4z9\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " pod="openstack/keystone-bootstrap-jj4z9"
Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.719459 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-credential-keys\") pod \"keystone-bootstrap-jj4z9\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " pod="openstack/keystone-bootstrap-jj4z9"
Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.719519 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-config-data\") pod \"keystone-bootstrap-jj4z9\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " pod="openstack/keystone-bootstrap-jj4z9"
Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.719554 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-scripts\") pod \"keystone-bootstrap-jj4z9\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " pod="openstack/keystone-bootstrap-jj4z9"
Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.719584 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-combined-ca-bundle\") pod \"keystone-bootstrap-jj4z9\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " pod="openstack/keystone-bootstrap-jj4z9"
Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.723803 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-combined-ca-bundle\") pod \"keystone-bootstrap-jj4z9\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " pod="openstack/keystone-bootstrap-jj4z9"
Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.725352 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-fernet-keys\") pod \"keystone-bootstrap-jj4z9\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " pod="openstack/keystone-bootstrap-jj4z9"
Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.725690 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-credential-keys\") pod \"keystone-bootstrap-jj4z9\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " pod="openstack/keystone-bootstrap-jj4z9"
Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.726568 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-scripts\") pod \"keystone-bootstrap-jj4z9\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " pod="openstack/keystone-bootstrap-jj4z9"
Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.732213 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-config-data\") pod \"keystone-bootstrap-jj4z9\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " pod="openstack/keystone-bootstrap-jj4z9"
Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.740441 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jc25\" (UniqueName: \"kubernetes.io/projected/1eace17d-b195-4e67-bda4-f1a2c830b508-kube-api-access-2jc25\") pod \"keystone-bootstrap-jj4z9\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " pod="openstack/keystone-bootstrap-jj4z9"
Dec 06 08:59:50 crc kubenswrapper[4895]: I1206 08:59:50.754703 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jj4z9"
Dec 06 08:59:51 crc kubenswrapper[4895]: I1206 08:59:51.251204 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jj4z9"]
Dec 06 08:59:51 crc kubenswrapper[4895]: I1206 08:59:51.269285 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jj4z9" event={"ID":"1eace17d-b195-4e67-bda4-f1a2c830b508","Type":"ContainerStarted","Data":"3b5263df5b2e247aacb3d5f974ed1348d27e2ba8e8f59f1beb59cee2930bc56a"}
Dec 06 08:59:52 crc kubenswrapper[4895]: I1206 08:59:52.061051 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d691404e-dab7-4610-af38-fc965d1b8d6e" path="/var/lib/kubelet/pods/d691404e-dab7-4610-af38-fc965d1b8d6e/volumes"
Dec 06 08:59:52 crc kubenswrapper[4895]: I1206 08:59:52.277315 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jj4z9" event={"ID":"1eace17d-b195-4e67-bda4-f1a2c830b508","Type":"ContainerStarted","Data":"039500ee577b8f415dcc704c5b2c38c647fad33cabf125a59744529b6019f04a"}
Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.063059 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77cd747fc9-878kn"
Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.094119 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jj4z9" podStartSLOduration=4.094090991 podStartE2EDuration="4.094090991s" podCreationTimestamp="2025-12-06 08:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 08:59:52.300842375 +0000 UTC m=+7354.702231245" watchObservedRunningTime="2025-12-06 08:59:54.094090991 +0000 UTC m=+7356.495479881"
Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.119900 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-668974d997-4cgm7"]
Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.120327 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-668974d997-4cgm7" podUID="8a2302c6-420d-434c-ba2c-bcd9c830c78b" containerName="dnsmasq-dns" containerID="cri-o://b1a94569f61f961bda21a7e9cb91bbd211eead98247a5de1b350ee984abe7767" gracePeriod=10
Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.303338 4895 generic.go:334] "Generic (PLEG): container finished" podID="8a2302c6-420d-434c-ba2c-bcd9c830c78b" containerID="b1a94569f61f961bda21a7e9cb91bbd211eead98247a5de1b350ee984abe7767" exitCode=0
Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.303778 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668974d997-4cgm7" event={"ID":"8a2302c6-420d-434c-ba2c-bcd9c830c78b","Type":"ContainerDied","Data":"b1a94569f61f961bda21a7e9cb91bbd211eead98247a5de1b350ee984abe7767"}
Need to start a new one" pod="openstack/dnsmasq-dns-668974d997-4cgm7" Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.799935 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpcbs\" (UniqueName: \"kubernetes.io/projected/8a2302c6-420d-434c-ba2c-bcd9c830c78b-kube-api-access-qpcbs\") pod \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\" (UID: \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\") " Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.800068 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-config\") pod \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\" (UID: \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\") " Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.800208 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-ovsdbserver-sb\") pod \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\" (UID: \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\") " Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.800299 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-dns-svc\") pod \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\" (UID: \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\") " Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.800347 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-ovsdbserver-nb\") pod \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\" (UID: \"8a2302c6-420d-434c-ba2c-bcd9c830c78b\") " Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.808120 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a2302c6-420d-434c-ba2c-bcd9c830c78b-kube-api-access-qpcbs" (OuterVolumeSpecName: "kube-api-access-qpcbs") pod "8a2302c6-420d-434c-ba2c-bcd9c830c78b" (UID: "8a2302c6-420d-434c-ba2c-bcd9c830c78b"). InnerVolumeSpecName "kube-api-access-qpcbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.847320 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8a2302c6-420d-434c-ba2c-bcd9c830c78b" (UID: "8a2302c6-420d-434c-ba2c-bcd9c830c78b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.847373 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-config" (OuterVolumeSpecName: "config") pod "8a2302c6-420d-434c-ba2c-bcd9c830c78b" (UID: "8a2302c6-420d-434c-ba2c-bcd9c830c78b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.848303 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8a2302c6-420d-434c-ba2c-bcd9c830c78b" (UID: "8a2302c6-420d-434c-ba2c-bcd9c830c78b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.850350 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8a2302c6-420d-434c-ba2c-bcd9c830c78b" (UID: "8a2302c6-420d-434c-ba2c-bcd9c830c78b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.903083 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpcbs\" (UniqueName: \"kubernetes.io/projected/8a2302c6-420d-434c-ba2c-bcd9c830c78b-kube-api-access-qpcbs\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.903138 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-config\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.903156 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.903167 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:54 crc kubenswrapper[4895]: I1206 08:59:54.903176 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a2302c6-420d-434c-ba2c-bcd9c830c78b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:55 crc kubenswrapper[4895]: I1206 08:59:55.318300 4895 generic.go:334] "Generic (PLEG): container finished" podID="1eace17d-b195-4e67-bda4-f1a2c830b508" containerID="039500ee577b8f415dcc704c5b2c38c647fad33cabf125a59744529b6019f04a" exitCode=0 Dec 06 08:59:55 crc kubenswrapper[4895]: I1206 08:59:55.318435 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jj4z9" event={"ID":"1eace17d-b195-4e67-bda4-f1a2c830b508","Type":"ContainerDied","Data":"039500ee577b8f415dcc704c5b2c38c647fad33cabf125a59744529b6019f04a"} Dec 06 08:59:55 crc kubenswrapper[4895]: I1206 08:59:55.323048 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668974d997-4cgm7" event={"ID":"8a2302c6-420d-434c-ba2c-bcd9c830c78b","Type":"ContainerDied","Data":"8f456ef13ee452e484bee0886899e3bc6379701accf8f3f711777393c5b8211c"} Dec 06 08:59:55 crc kubenswrapper[4895]: I1206 08:59:55.323115 4895 scope.go:117] "RemoveContainer" containerID="b1a94569f61f961bda21a7e9cb91bbd211eead98247a5de1b350ee984abe7767" Dec 06 08:59:55 crc kubenswrapper[4895]: I1206 08:59:55.323252 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-668974d997-4cgm7" Dec 06 08:59:55 crc kubenswrapper[4895]: I1206 08:59:55.359683 4895 scope.go:117] "RemoveContainer" containerID="f24f94f041a73f817a1119b179b1c13c17c721971d8599788eb150134f4d03cd" Dec 06 08:59:55 crc kubenswrapper[4895]: I1206 08:59:55.371553 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-668974d997-4cgm7"] Dec 06 08:59:55 crc kubenswrapper[4895]: I1206 08:59:55.380197 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-668974d997-4cgm7"] Dec 06 08:59:56 crc kubenswrapper[4895]: I1206 08:59:56.069696 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a2302c6-420d-434c-ba2c-bcd9c830c78b" path="/var/lib/kubelet/pods/8a2302c6-420d-434c-ba2c-bcd9c830c78b/volumes" Dec 06 08:59:56 crc kubenswrapper[4895]: I1206 08:59:56.665937 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jj4z9" Dec 06 08:59:56 crc kubenswrapper[4895]: I1206 08:59:56.833956 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-config-data\") pod \"1eace17d-b195-4e67-bda4-f1a2c830b508\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " Dec 06 08:59:56 crc kubenswrapper[4895]: I1206 08:59:56.834008 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-credential-keys\") pod \"1eace17d-b195-4e67-bda4-f1a2c830b508\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " Dec 06 08:59:56 crc kubenswrapper[4895]: I1206 08:59:56.834065 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-scripts\") pod \"1eace17d-b195-4e67-bda4-f1a2c830b508\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " Dec 06 08:59:56 crc kubenswrapper[4895]: I1206 08:59:56.834089 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-fernet-keys\") pod \"1eace17d-b195-4e67-bda4-f1a2c830b508\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " Dec 06 08:59:56 crc kubenswrapper[4895]: I1206 08:59:56.834155 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jc25\" (UniqueName: \"kubernetes.io/projected/1eace17d-b195-4e67-bda4-f1a2c830b508-kube-api-access-2jc25\") pod \"1eace17d-b195-4e67-bda4-f1a2c830b508\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " Dec 06 08:59:56 crc kubenswrapper[4895]: I1206 08:59:56.834289 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-combined-ca-bundle\") pod \"1eace17d-b195-4e67-bda4-f1a2c830b508\" (UID: \"1eace17d-b195-4e67-bda4-f1a2c830b508\") " Dec 06 08:59:56 crc kubenswrapper[4895]: I1206 08:59:56.838544 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1eace17d-b195-4e67-bda4-f1a2c830b508" (UID: "1eace17d-b195-4e67-bda4-f1a2c830b508"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 08:59:56 crc kubenswrapper[4895]: I1206 08:59:56.838557 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1eace17d-b195-4e67-bda4-f1a2c830b508" (UID: "1eace17d-b195-4e67-bda4-f1a2c830b508"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 08:59:56 crc kubenswrapper[4895]: I1206 08:59:56.846621 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-scripts" (OuterVolumeSpecName: "scripts") pod "1eace17d-b195-4e67-bda4-f1a2c830b508" (UID: "1eace17d-b195-4e67-bda4-f1a2c830b508"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 08:59:56 crc kubenswrapper[4895]: I1206 08:59:56.847388 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eace17d-b195-4e67-bda4-f1a2c830b508-kube-api-access-2jc25" (OuterVolumeSpecName: "kube-api-access-2jc25") pod "1eace17d-b195-4e67-bda4-f1a2c830b508" (UID: "1eace17d-b195-4e67-bda4-f1a2c830b508"). InnerVolumeSpecName "kube-api-access-2jc25". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:59:56 crc kubenswrapper[4895]: I1206 08:59:56.856701 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-config-data" (OuterVolumeSpecName: "config-data") pod "1eace17d-b195-4e67-bda4-f1a2c830b508" (UID: "1eace17d-b195-4e67-bda4-f1a2c830b508"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 08:59:56 crc kubenswrapper[4895]: I1206 08:59:56.858282 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1eace17d-b195-4e67-bda4-f1a2c830b508" (UID: "1eace17d-b195-4e67-bda4-f1a2c830b508"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 08:59:56 crc kubenswrapper[4895]: I1206 08:59:56.935722 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:56 crc kubenswrapper[4895]: I1206 08:59:56.935755 4895 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:56 crc kubenswrapper[4895]: I1206 08:59:56.935765 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:56 crc kubenswrapper[4895]: I1206 08:59:56.935773 4895 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:56 crc kubenswrapper[4895]: I1206 08:59:56.935784 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jc25\" (UniqueName: \"kubernetes.io/projected/1eace17d-b195-4e67-bda4-f1a2c830b508-kube-api-access-2jc25\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:56 crc kubenswrapper[4895]: I1206 08:59:56.935793 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eace17d-b195-4e67-bda4-f1a2c830b508-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.353237 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jj4z9" event={"ID":"1eace17d-b195-4e67-bda4-f1a2c830b508","Type":"ContainerDied","Data":"3b5263df5b2e247aacb3d5f974ed1348d27e2ba8e8f59f1beb59cee2930bc56a"} Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.353297 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b5263df5b2e247aacb3d5f974ed1348d27e2ba8e8f59f1beb59cee2930bc56a" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.353337 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jj4z9" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.537399 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7c7c85b544-hxwl4"] Dec 06 08:59:57 crc kubenswrapper[4895]: E1206 08:59:57.537830 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2302c6-420d-434c-ba2c-bcd9c830c78b" containerName="dnsmasq-dns" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.537853 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2302c6-420d-434c-ba2c-bcd9c830c78b" containerName="dnsmasq-dns" Dec 06 08:59:57 crc kubenswrapper[4895]: E1206 08:59:57.537889 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2302c6-420d-434c-ba2c-bcd9c830c78b" containerName="init" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.537898 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2302c6-420d-434c-ba2c-bcd9c830c78b" containerName="init" Dec 06 08:59:57 crc kubenswrapper[4895]: E1206 08:59:57.537912 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eace17d-b195-4e67-bda4-f1a2c830b508" containerName="keystone-bootstrap" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.537921 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eace17d-b195-4e67-bda4-f1a2c830b508" containerName="keystone-bootstrap" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.538102 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a2302c6-420d-434c-ba2c-bcd9c830c78b" containerName="dnsmasq-dns" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.538128 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eace17d-b195-4e67-bda4-f1a2c830b508" containerName="keystone-bootstrap" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.538847 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.541957 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.541971 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.542179 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zcc6v" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.542426 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.563795 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7c7c85b544-hxwl4"] Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.649948 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd6690ad-c4bc-4076-9233-e2fbdc519ae1-combined-ca-bundle\") pod \"keystone-7c7c85b544-hxwl4\" (UID: \"cd6690ad-c4bc-4076-9233-e2fbdc519ae1\") " pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.650042 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd6690ad-c4bc-4076-9233-e2fbdc519ae1-config-data\") pod \"keystone-7c7c85b544-hxwl4\" (UID: \"cd6690ad-c4bc-4076-9233-e2fbdc519ae1\") " pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.650072 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd6690ad-c4bc-4076-9233-e2fbdc519ae1-scripts\") pod \"keystone-7c7c85b544-hxwl4\" (UID: \"cd6690ad-c4bc-4076-9233-e2fbdc519ae1\") " pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.650098 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd6690ad-c4bc-4076-9233-e2fbdc519ae1-fernet-keys\") pod \"keystone-7c7c85b544-hxwl4\" (UID: \"cd6690ad-c4bc-4076-9233-e2fbdc519ae1\") " pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.650123 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd6690ad-c4bc-4076-9233-e2fbdc519ae1-credential-keys\") pod \"keystone-7c7c85b544-hxwl4\" (UID: \"cd6690ad-c4bc-4076-9233-e2fbdc519ae1\") " pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.650177 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvqr4\" (UniqueName: \"kubernetes.io/projected/cd6690ad-c4bc-4076-9233-e2fbdc519ae1-kube-api-access-wvqr4\") pod \"keystone-7c7c85b544-hxwl4\" (UID: \"cd6690ad-c4bc-4076-9233-e2fbdc519ae1\") " pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.752543 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd6690ad-c4bc-4076-9233-e2fbdc519ae1-combined-ca-bundle\") pod 
\"keystone-7c7c85b544-hxwl4\" (UID: \"cd6690ad-c4bc-4076-9233-e2fbdc519ae1\") " pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.752613 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd6690ad-c4bc-4076-9233-e2fbdc519ae1-config-data\") pod \"keystone-7c7c85b544-hxwl4\" (UID: \"cd6690ad-c4bc-4076-9233-e2fbdc519ae1\") " pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.752641 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd6690ad-c4bc-4076-9233-e2fbdc519ae1-scripts\") pod \"keystone-7c7c85b544-hxwl4\" (UID: \"cd6690ad-c4bc-4076-9233-e2fbdc519ae1\") " pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.752664 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd6690ad-c4bc-4076-9233-e2fbdc519ae1-fernet-keys\") pod \"keystone-7c7c85b544-hxwl4\" (UID: \"cd6690ad-c4bc-4076-9233-e2fbdc519ae1\") " pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.752689 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd6690ad-c4bc-4076-9233-e2fbdc519ae1-credential-keys\") pod \"keystone-7c7c85b544-hxwl4\" (UID: \"cd6690ad-c4bc-4076-9233-e2fbdc519ae1\") " pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.752740 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvqr4\" (UniqueName: \"kubernetes.io/projected/cd6690ad-c4bc-4076-9233-e2fbdc519ae1-kube-api-access-wvqr4\") pod \"keystone-7c7c85b544-hxwl4\" (UID: \"cd6690ad-c4bc-4076-9233-e2fbdc519ae1\") " pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.759071 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd6690ad-c4bc-4076-9233-e2fbdc519ae1-config-data\") pod \"keystone-7c7c85b544-hxwl4\" (UID: \"cd6690ad-c4bc-4076-9233-e2fbdc519ae1\") " pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.759508 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd6690ad-c4bc-4076-9233-e2fbdc519ae1-credential-keys\") pod \"keystone-7c7c85b544-hxwl4\" (UID: \"cd6690ad-c4bc-4076-9233-e2fbdc519ae1\") " pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.759961 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd6690ad-c4bc-4076-9233-e2fbdc519ae1-combined-ca-bundle\") pod \"keystone-7c7c85b544-hxwl4\" (UID: \"cd6690ad-c4bc-4076-9233-e2fbdc519ae1\") " pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.761780 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd6690ad-c4bc-4076-9233-e2fbdc519ae1-fernet-keys\") pod \"keystone-7c7c85b544-hxwl4\" (UID: \"cd6690ad-c4bc-4076-9233-e2fbdc519ae1\") " pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 
08:59:57.761898 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd6690ad-c4bc-4076-9233-e2fbdc519ae1-scripts\") pod \"keystone-7c7c85b544-hxwl4\" (UID: \"cd6690ad-c4bc-4076-9233-e2fbdc519ae1\") " pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.773424 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvqr4\" (UniqueName: \"kubernetes.io/projected/cd6690ad-c4bc-4076-9233-e2fbdc519ae1-kube-api-access-wvqr4\") pod \"keystone-7c7c85b544-hxwl4\" (UID: \"cd6690ad-c4bc-4076-9233-e2fbdc519ae1\") " pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:57 crc kubenswrapper[4895]: I1206 08:59:57.861181 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:58 crc kubenswrapper[4895]: I1206 08:59:58.320850 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7c7c85b544-hxwl4"] Dec 06 08:59:58 crc kubenswrapper[4895]: I1206 08:59:58.362642 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c7c85b544-hxwl4" event={"ID":"cd6690ad-c4bc-4076-9233-e2fbdc519ae1","Type":"ContainerStarted","Data":"2f1b6b81b3eb36b83b13c685515bdcb2e49ec4e9af244408f87306282e2e6d26"} Dec 06 08:59:59 crc kubenswrapper[4895]: I1206 08:59:59.381109 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c7c85b544-hxwl4" event={"ID":"cd6690ad-c4bc-4076-9233-e2fbdc519ae1","Type":"ContainerStarted","Data":"50c8a4f16103ec181a3d6db89cba45aaf14bc4283c29d2926296549e2a6b0319"} Dec 06 08:59:59 crc kubenswrapper[4895]: I1206 08:59:59.381570 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 08:59:59 crc kubenswrapper[4895]: I1206 08:59:59.407847 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7c7c85b544-hxwl4" podStartSLOduration=2.407772267 podStartE2EDuration="2.407772267s" podCreationTimestamp="2025-12-06 08:59:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 08:59:59.400589483 +0000 UTC m=+7361.801978353" watchObservedRunningTime="2025-12-06 08:59:59.407772267 +0000 UTC m=+7361.809161137" Dec 06 09:00:00 crc kubenswrapper[4895]: I1206 09:00:00.134499 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch"] Dec 06 09:00:00 crc kubenswrapper[4895]: I1206 09:00:00.136811 4895 util.go:30] "No sandbox for pod can be found. 
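
The pod_startup_latency_tracker.go:104 entry above encodes a simple relation: podStartE2EDuration is the observed running time minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). Zero-valued pull timestamps (0001-01-01), as in the keystone entry, mean no pull happened and the two durations coincide. A quick Go check of that arithmetic against the openstackclient entry that appears later in this log, where a pull did happen:

```go
package main

import (
	"fmt"
	"time"
)

// startupDurations mirrors the relation between the two durations printed
// by pod_startup_latency_tracker: E2E is observed-running minus creation,
// and the SLO duration excludes the image-pull window. Zero pull
// timestamps mean no pull contributed.
func startupDurations(created, firstPull, lastPull, running time.Time) (slo, e2e time.Duration) {
	e2e = running.Sub(created)
	slo = e2e
	if !firstPull.IsZero() {
		slo -= lastPull.Sub(firstPull)
	}
	return slo, e2e
}

func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the openstackclient entry later in this log
	// (running is its watchObservedRunningTime).
	slo, e2e := startupDurations(
		mustParse("2025-12-06T09:00:32Z"),
		mustParse("2025-12-06T09:00:32.985358592Z"),
		mustParse("2025-12-06T09:00:43.754340460Z"),
		mustParse("2025-12-06T09:00:44.820234233Z"),
	)
	fmt.Println(slo, e2e) // 2.051252365s 12.820234233s, matching the logged values
}
```
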
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch" Dec 06 09:00:00 crc kubenswrapper[4895]: I1206 09:00:00.139365 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 09:00:00 crc kubenswrapper[4895]: I1206 09:00:00.139568 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 09:00:00 crc kubenswrapper[4895]: I1206 09:00:00.154092 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch"] Dec 06 09:00:00 crc kubenswrapper[4895]: I1206 09:00:00.294229 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrwpd\" (UniqueName: \"kubernetes.io/projected/5d68b1c1-732c-476b-a7a2-44199c7d62b5-kube-api-access-hrwpd\") pod \"collect-profiles-29416860-x5fch\" (UID: \"5d68b1c1-732c-476b-a7a2-44199c7d62b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch" Dec 06 09:00:00 crc kubenswrapper[4895]: I1206 09:00:00.294436 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d68b1c1-732c-476b-a7a2-44199c7d62b5-secret-volume\") pod \"collect-profiles-29416860-x5fch\" (UID: \"5d68b1c1-732c-476b-a7a2-44199c7d62b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch" Dec 06 09:00:00 crc kubenswrapper[4895]: I1206 09:00:00.294744 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d68b1c1-732c-476b-a7a2-44199c7d62b5-config-volume\") pod \"collect-profiles-29416860-x5fch\" (UID: \"5d68b1c1-732c-476b-a7a2-44199c7d62b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch" Dec 06 09:00:00 crc kubenswrapper[4895]: I1206 09:00:00.396081 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d68b1c1-732c-476b-a7a2-44199c7d62b5-config-volume\") pod \"collect-profiles-29416860-x5fch\" (UID: \"5d68b1c1-732c-476b-a7a2-44199c7d62b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch" Dec 06 09:00:00 crc kubenswrapper[4895]: I1206 09:00:00.396256 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrwpd\" (UniqueName: \"kubernetes.io/projected/5d68b1c1-732c-476b-a7a2-44199c7d62b5-kube-api-access-hrwpd\") pod \"collect-profiles-29416860-x5fch\" (UID: \"5d68b1c1-732c-476b-a7a2-44199c7d62b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch" Dec 06 09:00:00 crc kubenswrapper[4895]: I1206 09:00:00.397202 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d68b1c1-732c-476b-a7a2-44199c7d62b5-config-volume\") pod \"collect-profiles-29416860-x5fch\" (UID: \"5d68b1c1-732c-476b-a7a2-44199c7d62b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch" Dec 06 09:00:00 crc kubenswrapper[4895]: I1206 09:00:00.397201 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d68b1c1-732c-476b-a7a2-44199c7d62b5-secret-volume\") pod 
\"collect-profiles-29416860-x5fch\" (UID: \"5d68b1c1-732c-476b-a7a2-44199c7d62b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch" Dec 06 09:00:00 crc kubenswrapper[4895]: I1206 09:00:00.407010 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d68b1c1-732c-476b-a7a2-44199c7d62b5-secret-volume\") pod \"collect-profiles-29416860-x5fch\" (UID: \"5d68b1c1-732c-476b-a7a2-44199c7d62b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch" Dec 06 09:00:00 crc kubenswrapper[4895]: I1206 09:00:00.414136 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrwpd\" (UniqueName: \"kubernetes.io/projected/5d68b1c1-732c-476b-a7a2-44199c7d62b5-kube-api-access-hrwpd\") pod \"collect-profiles-29416860-x5fch\" (UID: \"5d68b1c1-732c-476b-a7a2-44199c7d62b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch" Dec 06 09:00:00 crc kubenswrapper[4895]: I1206 09:00:00.466698 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch" Dec 06 09:00:00 crc kubenswrapper[4895]: I1206 09:00:00.879659 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch"] Dec 06 09:00:01 crc kubenswrapper[4895]: I1206 09:00:01.401534 4895 generic.go:334] "Generic (PLEG): container finished" podID="5d68b1c1-732c-476b-a7a2-44199c7d62b5" containerID="80c6ba5685fa31bfcfcad1270b79c54d569f907991bac6998b23e681034e871b" exitCode=0 Dec 06 09:00:01 crc kubenswrapper[4895]: I1206 09:00:01.401775 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch" event={"ID":"5d68b1c1-732c-476b-a7a2-44199c7d62b5","Type":"ContainerDied","Data":"80c6ba5685fa31bfcfcad1270b79c54d569f907991bac6998b23e681034e871b"} Dec 06 09:00:01 crc kubenswrapper[4895]: I1206 09:00:01.401898 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch" event={"ID":"5d68b1c1-732c-476b-a7a2-44199c7d62b5","Type":"ContainerStarted","Data":"4363f29e59d347f1e5ccb52bf9ed8ffdd272e21623ba2fb77ef3c679f15a8af8"} Dec 06 09:00:02 crc kubenswrapper[4895]: I1206 09:00:02.050690 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 09:00:02 crc kubenswrapper[4895]: E1206 09:00:02.050998 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:00:02 crc kubenswrapper[4895]: I1206 09:00:02.747246 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch" Dec 06 09:00:02 crc kubenswrapper[4895]: I1206 09:00:02.838556 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d68b1c1-732c-476b-a7a2-44199c7d62b5-secret-volume\") pod \"5d68b1c1-732c-476b-a7a2-44199c7d62b5\" (UID: \"5d68b1c1-732c-476b-a7a2-44199c7d62b5\") " Dec 06 09:00:02 crc kubenswrapper[4895]: I1206 09:00:02.838765 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d68b1c1-732c-476b-a7a2-44199c7d62b5-config-volume\") pod \"5d68b1c1-732c-476b-a7a2-44199c7d62b5\" (UID: \"5d68b1c1-732c-476b-a7a2-44199c7d62b5\") " Dec 06 09:00:02 crc kubenswrapper[4895]: I1206 09:00:02.838867 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrwpd\" (UniqueName: \"kubernetes.io/projected/5d68b1c1-732c-476b-a7a2-44199c7d62b5-kube-api-access-hrwpd\") pod \"5d68b1c1-732c-476b-a7a2-44199c7d62b5\" (UID: \"5d68b1c1-732c-476b-a7a2-44199c7d62b5\") " Dec 06 09:00:02 crc kubenswrapper[4895]: I1206 09:00:02.839608 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d68b1c1-732c-476b-a7a2-44199c7d62b5-config-volume" (OuterVolumeSpecName: "config-volume") pod "5d68b1c1-732c-476b-a7a2-44199c7d62b5" (UID: "5d68b1c1-732c-476b-a7a2-44199c7d62b5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:00:02 crc kubenswrapper[4895]: I1206 09:00:02.844600 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d68b1c1-732c-476b-a7a2-44199c7d62b5-kube-api-access-hrwpd" (OuterVolumeSpecName: "kube-api-access-hrwpd") pod "5d68b1c1-732c-476b-a7a2-44199c7d62b5" (UID: "5d68b1c1-732c-476b-a7a2-44199c7d62b5"). InnerVolumeSpecName "kube-api-access-hrwpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:00:02 crc kubenswrapper[4895]: I1206 09:00:02.847720 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d68b1c1-732c-476b-a7a2-44199c7d62b5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5d68b1c1-732c-476b-a7a2-44199c7d62b5" (UID: "5d68b1c1-732c-476b-a7a2-44199c7d62b5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:00:02 crc kubenswrapper[4895]: I1206 09:00:02.941166 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d68b1c1-732c-476b-a7a2-44199c7d62b5-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 09:00:02 crc kubenswrapper[4895]: I1206 09:00:02.941433 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrwpd\" (UniqueName: \"kubernetes.io/projected/5d68b1c1-732c-476b-a7a2-44199c7d62b5-kube-api-access-hrwpd\") on node \"crc\" DevicePath \"\"" Dec 06 09:00:02 crc kubenswrapper[4895]: I1206 09:00:02.941532 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d68b1c1-732c-476b-a7a2-44199c7d62b5-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 09:00:03 crc kubenswrapper[4895]: I1206 09:00:03.425140 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch" event={"ID":"5d68b1c1-732c-476b-a7a2-44199c7d62b5","Type":"ContainerDied","Data":"4363f29e59d347f1e5ccb52bf9ed8ffdd272e21623ba2fb77ef3c679f15a8af8"} Dec 06 09:00:03 crc kubenswrapper[4895]: I1206 09:00:03.425527 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4363f29e59d347f1e5ccb52bf9ed8ffdd272e21623ba2fb77ef3c679f15a8af8" Dec 06 09:00:03 crc kubenswrapper[4895]: I1206 09:00:03.425199 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch" Dec 06 09:00:03 crc kubenswrapper[4895]: I1206 09:00:03.810385 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8"] Dec 06 09:00:03 crc kubenswrapper[4895]: I1206 09:00:03.815842 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416815-z46r8"] Dec 06 09:00:04 crc kubenswrapper[4895]: I1206 09:00:04.063565 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71a9e240-d36a-40a1-8e3e-4995653a3015" path="/var/lib/kubelet/pods/71a9e240-d36a-40a1-8e3e-4995653a3015/volumes" Dec 06 09:00:15 crc kubenswrapper[4895]: I1206 09:00:15.007658 4895 scope.go:117] "RemoveContainer" containerID="0faaa7ac2cde97f4ef154f67f4b0120e572cffa407a4a2b061f9cbe70166db76" Dec 06 09:00:16 crc kubenswrapper[4895]: I1206 09:00:16.050869 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 09:00:16 crc kubenswrapper[4895]: E1206 09:00:16.051372 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:00:28 crc kubenswrapper[4895]: I1206 09:00:28.056851 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 09:00:28 crc kubenswrapper[4895]: E1206 09:00:28.057681 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:00:29 crc kubenswrapper[4895]: I1206 09:00:29.318008 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7c7c85b544-hxwl4" Dec 06 09:00:32 crc kubenswrapper[4895]: I1206 09:00:32.187729 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 06 09:00:32 crc kubenswrapper[4895]: E1206 09:00:32.188598 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d68b1c1-732c-476b-a7a2-44199c7d62b5" containerName="collect-profiles" Dec 06 09:00:32 crc kubenswrapper[4895]: I1206 09:00:32.188617 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d68b1c1-732c-476b-a7a2-44199c7d62b5" containerName="collect-profiles" Dec 06 09:00:32 crc kubenswrapper[4895]: I1206 09:00:32.188899 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d68b1c1-732c-476b-a7a2-44199c7d62b5" containerName="collect-profiles" Dec 06 09:00:32 crc kubenswrapper[4895]: I1206 09:00:32.189757 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 06 09:00:32 crc kubenswrapper[4895]: I1206 09:00:32.192467 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 06 09:00:32 crc kubenswrapper[4895]: I1206 09:00:32.192629 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-4zbcz" Dec 06 09:00:32 crc kubenswrapper[4895]: I1206 09:00:32.193307 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 06 09:00:32 crc kubenswrapper[4895]: I1206 09:00:32.218878 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 06 09:00:32 crc kubenswrapper[4895]: I1206 09:00:32.349129 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c202be77-0d5f-48a3-82dd-119617062782-openstack-config\") pod \"openstackclient\" (UID: \"c202be77-0d5f-48a3-82dd-119617062782\") " pod="openstack/openstackclient" Dec 06 09:00:32 crc kubenswrapper[4895]: I1206 09:00:32.349533 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c202be77-0d5f-48a3-82dd-119617062782-openstack-config-secret\") pod \"openstackclient\" (UID: \"c202be77-0d5f-48a3-82dd-119617062782\") " pod="openstack/openstackclient" Dec 06 09:00:32 crc kubenswrapper[4895]: I1206 09:00:32.349586 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc4vb\" (UniqueName: \"kubernetes.io/projected/c202be77-0d5f-48a3-82dd-119617062782-kube-api-access-gc4vb\") pod \"openstackclient\" (UID: \"c202be77-0d5f-48a3-82dd-119617062782\") " pod="openstack/openstackclient" Dec 06 09:00:32 crc kubenswrapper[4895]: I1206 09:00:32.451107 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc4vb\" (UniqueName: \"kubernetes.io/projected/c202be77-0d5f-48a3-82dd-119617062782-kube-api-access-gc4vb\") pod \"openstackclient\" (UID: \"c202be77-0d5f-48a3-82dd-119617062782\") " 
pod="openstack/openstackclient" Dec 06 09:00:32 crc kubenswrapper[4895]: I1206 09:00:32.451263 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c202be77-0d5f-48a3-82dd-119617062782-openstack-config\") pod \"openstackclient\" (UID: \"c202be77-0d5f-48a3-82dd-119617062782\") " pod="openstack/openstackclient" Dec 06 09:00:32 crc kubenswrapper[4895]: I1206 09:00:32.451312 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c202be77-0d5f-48a3-82dd-119617062782-openstack-config-secret\") pod \"openstackclient\" (UID: \"c202be77-0d5f-48a3-82dd-119617062782\") " pod="openstack/openstackclient" Dec 06 09:00:32 crc kubenswrapper[4895]: I1206 09:00:32.452072 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c202be77-0d5f-48a3-82dd-119617062782-openstack-config\") pod \"openstackclient\" (UID: \"c202be77-0d5f-48a3-82dd-119617062782\") " pod="openstack/openstackclient" Dec 06 09:00:32 crc kubenswrapper[4895]: I1206 09:00:32.458701 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c202be77-0d5f-48a3-82dd-119617062782-openstack-config-secret\") pod \"openstackclient\" (UID: \"c202be77-0d5f-48a3-82dd-119617062782\") " pod="openstack/openstackclient" Dec 06 09:00:32 crc kubenswrapper[4895]: I1206 09:00:32.467102 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc4vb\" (UniqueName: \"kubernetes.io/projected/c202be77-0d5f-48a3-82dd-119617062782-kube-api-access-gc4vb\") pod \"openstackclient\" (UID: \"c202be77-0d5f-48a3-82dd-119617062782\") " pod="openstack/openstackclient" Dec 06 09:00:32 crc kubenswrapper[4895]: I1206 09:00:32.518155 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 06 09:00:32 crc kubenswrapper[4895]: I1206 09:00:32.978566 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 06 09:00:32 crc kubenswrapper[4895]: W1206 09:00:32.982548 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc202be77_0d5f_48a3_82dd_119617062782.slice/crio-2d1058d147fca341905a13512642f0e8ccec2251330464b5381bcd8488b99e86 WatchSource:0}: Error finding container 2d1058d147fca341905a13512642f0e8ccec2251330464b5381bcd8488b99e86: Status 404 returned error can't find the container with id 2d1058d147fca341905a13512642f0e8ccec2251330464b5381bcd8488b99e86 Dec 06 09:00:33 crc kubenswrapper[4895]: I1206 09:00:33.689627 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c202be77-0d5f-48a3-82dd-119617062782","Type":"ContainerStarted","Data":"2d1058d147fca341905a13512642f0e8ccec2251330464b5381bcd8488b99e86"} Dec 06 09:00:42 crc kubenswrapper[4895]: I1206 09:00:42.051037 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 09:00:42 crc kubenswrapper[4895]: E1206 09:00:42.051739 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:00:44 crc kubenswrapper[4895]: I1206 09:00:44.789852 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c202be77-0d5f-48a3-82dd-119617062782","Type":"ContainerStarted","Data":"279409f0fb76568c28cfc02adc9cd75122a69f1847a444b00a49e1bcadb31e66"} Dec 06 09:00:44 crc kubenswrapper[4895]: I1206 09:00:44.820252 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.051252365 podStartE2EDuration="12.820234233s" podCreationTimestamp="2025-12-06 09:00:32 +0000 UTC" firstStartedPulling="2025-12-06 09:00:32.985358592 +0000 UTC m=+7395.386747462" lastFinishedPulling="2025-12-06 09:00:43.75434046 +0000 UTC m=+7406.155729330" observedRunningTime="2025-12-06 09:00:44.809839843 +0000 UTC m=+7407.211228713" watchObservedRunningTime="2025-12-06 09:00:44.820234233 +0000 UTC m=+7407.221623093" Dec 06 09:00:56 crc kubenswrapper[4895]: I1206 09:00:56.052038 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 09:00:56 crc kubenswrapper[4895]: E1206 09:00:56.053065 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:01:00 crc kubenswrapper[4895]: I1206 09:01:00.167450 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29416861-gzlpw"] Dec 06 09:01:00 crc kubenswrapper[4895]: I1206 09:01:00.169981 4895 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29416861-gzlpw" Dec 06 09:01:00 crc kubenswrapper[4895]: I1206 09:01:00.187453 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29416861-gzlpw"] Dec 06 09:01:00 crc kubenswrapper[4895]: I1206 09:01:00.361747 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6bb414b1-e3dc-4687-b46c-b484c476743b-fernet-keys\") pod \"keystone-cron-29416861-gzlpw\" (UID: \"6bb414b1-e3dc-4687-b46c-b484c476743b\") " pod="openstack/keystone-cron-29416861-gzlpw" Dec 06 09:01:00 crc kubenswrapper[4895]: I1206 09:01:00.361859 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc7t2\" (UniqueName: \"kubernetes.io/projected/6bb414b1-e3dc-4687-b46c-b484c476743b-kube-api-access-qc7t2\") pod \"keystone-cron-29416861-gzlpw\" (UID: \"6bb414b1-e3dc-4687-b46c-b484c476743b\") " pod="openstack/keystone-cron-29416861-gzlpw" Dec 06 09:01:00 crc kubenswrapper[4895]: I1206 09:01:00.361918 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb414b1-e3dc-4687-b46c-b484c476743b-combined-ca-bundle\") pod \"keystone-cron-29416861-gzlpw\" (UID: \"6bb414b1-e3dc-4687-b46c-b484c476743b\") " pod="openstack/keystone-cron-29416861-gzlpw" Dec 06 09:01:00 crc kubenswrapper[4895]: I1206 09:01:00.361990 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb414b1-e3dc-4687-b46c-b484c476743b-config-data\") pod \"keystone-cron-29416861-gzlpw\" (UID: \"6bb414b1-e3dc-4687-b46c-b484c476743b\") " pod="openstack/keystone-cron-29416861-gzlpw" Dec 06 09:01:00 crc kubenswrapper[4895]: I1206 09:01:00.463977 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb414b1-e3dc-4687-b46c-b484c476743b-combined-ca-bundle\") pod \"keystone-cron-29416861-gzlpw\" (UID: \"6bb414b1-e3dc-4687-b46c-b484c476743b\") " pod="openstack/keystone-cron-29416861-gzlpw" Dec 06 09:01:00 crc kubenswrapper[4895]: I1206 09:01:00.464062 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb414b1-e3dc-4687-b46c-b484c476743b-config-data\") pod \"keystone-cron-29416861-gzlpw\" (UID: \"6bb414b1-e3dc-4687-b46c-b484c476743b\") " pod="openstack/keystone-cron-29416861-gzlpw" Dec 06 09:01:00 crc kubenswrapper[4895]: I1206 09:01:00.464158 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6bb414b1-e3dc-4687-b46c-b484c476743b-fernet-keys\") pod \"keystone-cron-29416861-gzlpw\" (UID: \"6bb414b1-e3dc-4687-b46c-b484c476743b\") " pod="openstack/keystone-cron-29416861-gzlpw" Dec 06 09:01:00 crc kubenswrapper[4895]: I1206 09:01:00.464215 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc7t2\" (UniqueName: \"kubernetes.io/projected/6bb414b1-e3dc-4687-b46c-b484c476743b-kube-api-access-qc7t2\") pod \"keystone-cron-29416861-gzlpw\" (UID: \"6bb414b1-e3dc-4687-b46c-b484c476743b\") " pod="openstack/keystone-cron-29416861-gzlpw" Dec 06 09:01:00 crc kubenswrapper[4895]: I1206 09:01:00.473981 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb414b1-e3dc-4687-b46c-b484c476743b-combined-ca-bundle\") pod \"keystone-cron-29416861-gzlpw\" (UID: \"6bb414b1-e3dc-4687-b46c-b484c476743b\") " pod="openstack/keystone-cron-29416861-gzlpw" Dec 06 09:01:00 crc kubenswrapper[4895]: I1206 09:01:00.478391 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6bb414b1-e3dc-4687-b46c-b484c476743b-fernet-keys\") pod \"keystone-cron-29416861-gzlpw\" (UID: \"6bb414b1-e3dc-4687-b46c-b484c476743b\") " pod="openstack/keystone-cron-29416861-gzlpw" Dec 06 09:01:00 crc kubenswrapper[4895]: I1206 09:01:00.482421 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc7t2\" (UniqueName: \"kubernetes.io/projected/6bb414b1-e3dc-4687-b46c-b484c476743b-kube-api-access-qc7t2\") pod \"keystone-cron-29416861-gzlpw\" (UID: \"6bb414b1-e3dc-4687-b46c-b484c476743b\") " pod="openstack/keystone-cron-29416861-gzlpw" Dec 06 09:01:00 crc kubenswrapper[4895]: I1206 09:01:00.483529 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb414b1-e3dc-4687-b46c-b484c476743b-config-data\") pod \"keystone-cron-29416861-gzlpw\" (UID: \"6bb414b1-e3dc-4687-b46c-b484c476743b\") " pod="openstack/keystone-cron-29416861-gzlpw" Dec 06 09:01:00 crc kubenswrapper[4895]: I1206 09:01:00.515200 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29416861-gzlpw" Dec 06 09:01:00 crc kubenswrapper[4895]: I1206 09:01:00.975262 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29416861-gzlpw"] Dec 06 09:01:01 crc kubenswrapper[4895]: I1206 09:01:01.984912 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416861-gzlpw" event={"ID":"6bb414b1-e3dc-4687-b46c-b484c476743b","Type":"ContainerStarted","Data":"0a48f000400d6a66b7e82481afd1aae10b028438af3c3ef7cab8f00d51f478fb"} Dec 06 09:01:01 crc kubenswrapper[4895]: I1206 09:01:01.985374 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416861-gzlpw" event={"ID":"6bb414b1-e3dc-4687-b46c-b484c476743b","Type":"ContainerStarted","Data":"f299706018e0624688abfb083a58e637651b7855665ec9edeb8dd681f531615d"} Dec 06 09:01:02 crc kubenswrapper[4895]: I1206 09:01:02.006737 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29416861-gzlpw" podStartSLOduration=2.006712544 podStartE2EDuration="2.006712544s" podCreationTimestamp="2025-12-06 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:01:01.999768557 +0000 UTC m=+7424.401157487" watchObservedRunningTime="2025-12-06 09:01:02.006712544 +0000 UTC m=+7424.408101414" Dec 06 09:01:04 crc kubenswrapper[4895]: I1206 09:01:04.011622 4895 generic.go:334] "Generic (PLEG): container finished" podID="6bb414b1-e3dc-4687-b46c-b484c476743b" containerID="0a48f000400d6a66b7e82481afd1aae10b028438af3c3ef7cab8f00d51f478fb" exitCode=0 Dec 06 09:01:04 crc kubenswrapper[4895]: I1206 09:01:04.011680 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416861-gzlpw" event={"ID":"6bb414b1-e3dc-4687-b46c-b484c476743b","Type":"ContainerDied","Data":"0a48f000400d6a66b7e82481afd1aae10b028438af3c3ef7cab8f00d51f478fb"} Dec 06 
09:01:05 crc kubenswrapper[4895]: I1206 09:01:05.401410 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29416861-gzlpw" Dec 06 09:01:05 crc kubenswrapper[4895]: I1206 09:01:05.552445 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6bb414b1-e3dc-4687-b46c-b484c476743b-fernet-keys\") pod \"6bb414b1-e3dc-4687-b46c-b484c476743b\" (UID: \"6bb414b1-e3dc-4687-b46c-b484c476743b\") " Dec 06 09:01:05 crc kubenswrapper[4895]: I1206 09:01:05.552580 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb414b1-e3dc-4687-b46c-b484c476743b-config-data\") pod \"6bb414b1-e3dc-4687-b46c-b484c476743b\" (UID: \"6bb414b1-e3dc-4687-b46c-b484c476743b\") " Dec 06 09:01:05 crc kubenswrapper[4895]: I1206 09:01:05.552652 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc7t2\" (UniqueName: \"kubernetes.io/projected/6bb414b1-e3dc-4687-b46c-b484c476743b-kube-api-access-qc7t2\") pod \"6bb414b1-e3dc-4687-b46c-b484c476743b\" (UID: \"6bb414b1-e3dc-4687-b46c-b484c476743b\") " Dec 06 09:01:05 crc kubenswrapper[4895]: I1206 09:01:05.552686 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb414b1-e3dc-4687-b46c-b484c476743b-combined-ca-bundle\") pod \"6bb414b1-e3dc-4687-b46c-b484c476743b\" (UID: \"6bb414b1-e3dc-4687-b46c-b484c476743b\") " Dec 06 09:01:05 crc kubenswrapper[4895]: I1206 09:01:05.563422 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb414b1-e3dc-4687-b46c-b484c476743b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6bb414b1-e3dc-4687-b46c-b484c476743b" (UID: "6bb414b1-e3dc-4687-b46c-b484c476743b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:01:05 crc kubenswrapper[4895]: I1206 09:01:05.572838 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bb414b1-e3dc-4687-b46c-b484c476743b-kube-api-access-qc7t2" (OuterVolumeSpecName: "kube-api-access-qc7t2") pod "6bb414b1-e3dc-4687-b46c-b484c476743b" (UID: "6bb414b1-e3dc-4687-b46c-b484c476743b"). InnerVolumeSpecName "kube-api-access-qc7t2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:01:05 crc kubenswrapper[4895]: I1206 09:01:05.577750 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb414b1-e3dc-4687-b46c-b484c476743b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6bb414b1-e3dc-4687-b46c-b484c476743b" (UID: "6bb414b1-e3dc-4687-b46c-b484c476743b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:01:05 crc kubenswrapper[4895]: I1206 09:01:05.616353 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb414b1-e3dc-4687-b46c-b484c476743b-config-data" (OuterVolumeSpecName: "config-data") pod "6bb414b1-e3dc-4687-b46c-b484c476743b" (UID: "6bb414b1-e3dc-4687-b46c-b484c476743b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:01:05 crc kubenswrapper[4895]: I1206 09:01:05.654667 4895 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6bb414b1-e3dc-4687-b46c-b484c476743b-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 06 09:01:05 crc kubenswrapper[4895]: I1206 09:01:05.654707 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb414b1-e3dc-4687-b46c-b484c476743b-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:01:05 crc kubenswrapper[4895]: I1206 09:01:05.654717 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc7t2\" (UniqueName: \"kubernetes.io/projected/6bb414b1-e3dc-4687-b46c-b484c476743b-kube-api-access-qc7t2\") on node \"crc\" DevicePath \"\"" Dec 06 09:01:05 crc kubenswrapper[4895]: I1206 09:01:05.654728 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb414b1-e3dc-4687-b46c-b484c476743b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:01:06 crc kubenswrapper[4895]: I1206 09:01:06.032863 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416861-gzlpw" event={"ID":"6bb414b1-e3dc-4687-b46c-b484c476743b","Type":"ContainerDied","Data":"f299706018e0624688abfb083a58e637651b7855665ec9edeb8dd681f531615d"} Dec 06 09:01:06 crc kubenswrapper[4895]: I1206 09:01:06.032925 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f299706018e0624688abfb083a58e637651b7855665ec9edeb8dd681f531615d" Dec 06 09:01:06 crc kubenswrapper[4895]: I1206 09:01:06.033002 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29416861-gzlpw" Dec 06 09:01:07 crc kubenswrapper[4895]: I1206 09:01:07.052758 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 09:01:07 crc kubenswrapper[4895]: E1206 09:01:07.053580 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:01:19 crc kubenswrapper[4895]: I1206 09:01:19.050412 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 09:01:19 crc kubenswrapper[4895]: E1206 09:01:19.051096 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:01:22 crc kubenswrapper[4895]: I1206 09:01:22.118746 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hvgvk"] Dec 06 09:01:22 crc kubenswrapper[4895]: E1206 09:01:22.119375 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb414b1-e3dc-4687-b46c-b484c476743b" 
containerName="keystone-cron" Dec 06 09:01:22 crc kubenswrapper[4895]: I1206 09:01:22.119390 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb414b1-e3dc-4687-b46c-b484c476743b" containerName="keystone-cron" Dec 06 09:01:22 crc kubenswrapper[4895]: I1206 09:01:22.119641 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bb414b1-e3dc-4687-b46c-b484c476743b" containerName="keystone-cron" Dec 06 09:01:22 crc kubenswrapper[4895]: I1206 09:01:22.120945 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hvgvk" Dec 06 09:01:22 crc kubenswrapper[4895]: I1206 09:01:22.147569 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hvgvk"] Dec 06 09:01:22 crc kubenswrapper[4895]: I1206 09:01:22.201384 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/504ba999-d88d-4a80-b843-ee72b562f313-utilities\") pod \"certified-operators-hvgvk\" (UID: \"504ba999-d88d-4a80-b843-ee72b562f313\") " pod="openshift-marketplace/certified-operators-hvgvk" Dec 06 09:01:22 crc kubenswrapper[4895]: I1206 09:01:22.201450 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdw5s\" (UniqueName: \"kubernetes.io/projected/504ba999-d88d-4a80-b843-ee72b562f313-kube-api-access-jdw5s\") pod \"certified-operators-hvgvk\" (UID: \"504ba999-d88d-4a80-b843-ee72b562f313\") " pod="openshift-marketplace/certified-operators-hvgvk" Dec 06 09:01:22 crc kubenswrapper[4895]: I1206 09:01:22.201654 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/504ba999-d88d-4a80-b843-ee72b562f313-catalog-content\") pod \"certified-operators-hvgvk\" (UID: \"504ba999-d88d-4a80-b843-ee72b562f313\") " pod="openshift-marketplace/certified-operators-hvgvk" Dec 06 09:01:22 crc kubenswrapper[4895]: I1206 09:01:22.303172 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/504ba999-d88d-4a80-b843-ee72b562f313-catalog-content\") pod \"certified-operators-hvgvk\" (UID: \"504ba999-d88d-4a80-b843-ee72b562f313\") " pod="openshift-marketplace/certified-operators-hvgvk" Dec 06 09:01:22 crc kubenswrapper[4895]: I1206 09:01:22.303451 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/504ba999-d88d-4a80-b843-ee72b562f313-utilities\") pod \"certified-operators-hvgvk\" (UID: \"504ba999-d88d-4a80-b843-ee72b562f313\") " pod="openshift-marketplace/certified-operators-hvgvk" Dec 06 09:01:22 crc kubenswrapper[4895]: I1206 09:01:22.303573 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdw5s\" (UniqueName: \"kubernetes.io/projected/504ba999-d88d-4a80-b843-ee72b562f313-kube-api-access-jdw5s\") pod \"certified-operators-hvgvk\" (UID: \"504ba999-d88d-4a80-b843-ee72b562f313\") " pod="openshift-marketplace/certified-operators-hvgvk" Dec 06 09:01:22 crc kubenswrapper[4895]: I1206 09:01:22.304053 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/504ba999-d88d-4a80-b843-ee72b562f313-catalog-content\") pod \"certified-operators-hvgvk\" (UID: 
\"504ba999-d88d-4a80-b843-ee72b562f313\") " pod="openshift-marketplace/certified-operators-hvgvk" Dec 06 09:01:22 crc kubenswrapper[4895]: I1206 09:01:22.304317 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/504ba999-d88d-4a80-b843-ee72b562f313-utilities\") pod \"certified-operators-hvgvk\" (UID: \"504ba999-d88d-4a80-b843-ee72b562f313\") " pod="openshift-marketplace/certified-operators-hvgvk" Dec 06 09:01:22 crc kubenswrapper[4895]: I1206 09:01:22.330383 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdw5s\" (UniqueName: \"kubernetes.io/projected/504ba999-d88d-4a80-b843-ee72b562f313-kube-api-access-jdw5s\") pod \"certified-operators-hvgvk\" (UID: \"504ba999-d88d-4a80-b843-ee72b562f313\") " pod="openshift-marketplace/certified-operators-hvgvk" Dec 06 09:01:22 crc kubenswrapper[4895]: I1206 09:01:22.452839 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hvgvk" Dec 06 09:01:22 crc kubenswrapper[4895]: I1206 09:01:22.752227 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hvgvk"] Dec 06 09:01:22 crc kubenswrapper[4895]: W1206 09:01:22.758328 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod504ba999_d88d_4a80_b843_ee72b562f313.slice/crio-e4aa7d2ac4ec756f9e35cd029ee3c93f3f092902ba69ab4e1ff0e57bfca934e8 WatchSource:0}: Error finding container e4aa7d2ac4ec756f9e35cd029ee3c93f3f092902ba69ab4e1ff0e57bfca934e8: Status 404 returned error can't find the container with id e4aa7d2ac4ec756f9e35cd029ee3c93f3f092902ba69ab4e1ff0e57bfca934e8 Dec 06 09:01:23 crc kubenswrapper[4895]: I1206 09:01:23.189759 4895 generic.go:334] "Generic (PLEG): container finished" podID="504ba999-d88d-4a80-b843-ee72b562f313" containerID="83e58d67a0292f2375e35d0b18ed0d6223d2c93e25a4ac8e83fbbb16f75f8907" exitCode=0 Dec 06 09:01:23 crc kubenswrapper[4895]: I1206 09:01:23.189809 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvgvk" event={"ID":"504ba999-d88d-4a80-b843-ee72b562f313","Type":"ContainerDied","Data":"83e58d67a0292f2375e35d0b18ed0d6223d2c93e25a4ac8e83fbbb16f75f8907"} Dec 06 09:01:23 crc kubenswrapper[4895]: I1206 09:01:23.189849 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvgvk" event={"ID":"504ba999-d88d-4a80-b843-ee72b562f313","Type":"ContainerStarted","Data":"e4aa7d2ac4ec756f9e35cd029ee3c93f3f092902ba69ab4e1ff0e57bfca934e8"} Dec 06 09:01:24 crc kubenswrapper[4895]: I1206 09:01:24.199589 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvgvk" event={"ID":"504ba999-d88d-4a80-b843-ee72b562f313","Type":"ContainerStarted","Data":"34c94c4582baeeb9b02144dd4723c3db7eaf1203265a25c694b5e5cd4b75b866"} Dec 06 09:01:25 crc kubenswrapper[4895]: I1206 09:01:25.209418 4895 generic.go:334] "Generic (PLEG): container finished" podID="504ba999-d88d-4a80-b843-ee72b562f313" containerID="34c94c4582baeeb9b02144dd4723c3db7eaf1203265a25c694b5e5cd4b75b866" exitCode=0 Dec 06 09:01:25 crc kubenswrapper[4895]: I1206 09:01:25.209506 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvgvk" 
event={"ID":"504ba999-d88d-4a80-b843-ee72b562f313","Type":"ContainerDied","Data":"34c94c4582baeeb9b02144dd4723c3db7eaf1203265a25c694b5e5cd4b75b866"} Dec 06 09:01:26 crc kubenswrapper[4895]: I1206 09:01:26.221462 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvgvk" event={"ID":"504ba999-d88d-4a80-b843-ee72b562f313","Type":"ContainerStarted","Data":"ceeb72da228c6b3c4d4b5a74de1ee77b44c0ca2bc6cd12600970a17cd9e4d6c4"} Dec 06 09:01:26 crc kubenswrapper[4895]: I1206 09:01:26.246124 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hvgvk" podStartSLOduration=1.8333018490000001 podStartE2EDuration="4.246104188s" podCreationTimestamp="2025-12-06 09:01:22 +0000 UTC" firstStartedPulling="2025-12-06 09:01:23.191809107 +0000 UTC m=+7445.593197987" lastFinishedPulling="2025-12-06 09:01:25.604611446 +0000 UTC m=+7448.006000326" observedRunningTime="2025-12-06 09:01:26.240628131 +0000 UTC m=+7448.642017011" watchObservedRunningTime="2025-12-06 09:01:26.246104188 +0000 UTC m=+7448.647493058" Dec 06 09:01:30 crc kubenswrapper[4895]: I1206 09:01:30.050780 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 09:01:30 crc kubenswrapper[4895]: E1206 09:01:30.051392 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:01:32 crc kubenswrapper[4895]: I1206 09:01:32.453520 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hvgvk" Dec 06 09:01:32 crc kubenswrapper[4895]: I1206 09:01:32.453882 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hvgvk" Dec 06 09:01:32 crc kubenswrapper[4895]: I1206 09:01:32.524710 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hvgvk" Dec 06 09:01:33 crc kubenswrapper[4895]: I1206 09:01:33.341959 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hvgvk" Dec 06 09:01:33 crc kubenswrapper[4895]: I1206 09:01:33.410824 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hvgvk"] Dec 06 09:01:35 crc kubenswrapper[4895]: I1206 09:01:35.307130 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hvgvk" podUID="504ba999-d88d-4a80-b843-ee72b562f313" containerName="registry-server" containerID="cri-o://ceeb72da228c6b3c4d4b5a74de1ee77b44c0ca2bc6cd12600970a17cd9e4d6c4" gracePeriod=2 Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.291090 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hvgvk" Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.332611 4895 generic.go:334] "Generic (PLEG): container finished" podID="504ba999-d88d-4a80-b843-ee72b562f313" containerID="ceeb72da228c6b3c4d4b5a74de1ee77b44c0ca2bc6cd12600970a17cd9e4d6c4" exitCode=0 Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.332693 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvgvk" event={"ID":"504ba999-d88d-4a80-b843-ee72b562f313","Type":"ContainerDied","Data":"ceeb72da228c6b3c4d4b5a74de1ee77b44c0ca2bc6cd12600970a17cd9e4d6c4"} Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.332753 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvgvk" event={"ID":"504ba999-d88d-4a80-b843-ee72b562f313","Type":"ContainerDied","Data":"e4aa7d2ac4ec756f9e35cd029ee3c93f3f092902ba69ab4e1ff0e57bfca934e8"} Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.332788 4895 scope.go:117] "RemoveContainer" containerID="ceeb72da228c6b3c4d4b5a74de1ee77b44c0ca2bc6cd12600970a17cd9e4d6c4" Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.333095 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hvgvk" Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.361153 4895 scope.go:117] "RemoveContainer" containerID="34c94c4582baeeb9b02144dd4723c3db7eaf1203265a25c694b5e5cd4b75b866" Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.389553 4895 scope.go:117] "RemoveContainer" containerID="83e58d67a0292f2375e35d0b18ed0d6223d2c93e25a4ac8e83fbbb16f75f8907" Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.419653 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/504ba999-d88d-4a80-b843-ee72b562f313-utilities\") pod \"504ba999-d88d-4a80-b843-ee72b562f313\" (UID: \"504ba999-d88d-4a80-b843-ee72b562f313\") " Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.419821 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdw5s\" (UniqueName: \"kubernetes.io/projected/504ba999-d88d-4a80-b843-ee72b562f313-kube-api-access-jdw5s\") pod \"504ba999-d88d-4a80-b843-ee72b562f313\" (UID: \"504ba999-d88d-4a80-b843-ee72b562f313\") " Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.419915 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/504ba999-d88d-4a80-b843-ee72b562f313-catalog-content\") pod \"504ba999-d88d-4a80-b843-ee72b562f313\" (UID: \"504ba999-d88d-4a80-b843-ee72b562f313\") " Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.430911 4895 scope.go:117] "RemoveContainer" containerID="ceeb72da228c6b3c4d4b5a74de1ee77b44c0ca2bc6cd12600970a17cd9e4d6c4" Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.433166 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/504ba999-d88d-4a80-b843-ee72b562f313-utilities" (OuterVolumeSpecName: "utilities") pod "504ba999-d88d-4a80-b843-ee72b562f313" (UID: "504ba999-d88d-4a80-b843-ee72b562f313"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:01:36 crc kubenswrapper[4895]: E1206 09:01:36.433307 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceeb72da228c6b3c4d4b5a74de1ee77b44c0ca2bc6cd12600970a17cd9e4d6c4\": container with ID starting with ceeb72da228c6b3c4d4b5a74de1ee77b44c0ca2bc6cd12600970a17cd9e4d6c4 not found: ID does not exist" containerID="ceeb72da228c6b3c4d4b5a74de1ee77b44c0ca2bc6cd12600970a17cd9e4d6c4" Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.433355 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceeb72da228c6b3c4d4b5a74de1ee77b44c0ca2bc6cd12600970a17cd9e4d6c4"} err="failed to get container status \"ceeb72da228c6b3c4d4b5a74de1ee77b44c0ca2bc6cd12600970a17cd9e4d6c4\": rpc error: code = NotFound desc = could not find container \"ceeb72da228c6b3c4d4b5a74de1ee77b44c0ca2bc6cd12600970a17cd9e4d6c4\": container with ID starting with ceeb72da228c6b3c4d4b5a74de1ee77b44c0ca2bc6cd12600970a17cd9e4d6c4 not found: ID does not exist" Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.433389 4895 scope.go:117] "RemoveContainer" containerID="34c94c4582baeeb9b02144dd4723c3db7eaf1203265a25c694b5e5cd4b75b866" Dec 06 09:01:36 crc kubenswrapper[4895]: E1206 09:01:36.434944 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34c94c4582baeeb9b02144dd4723c3db7eaf1203265a25c694b5e5cd4b75b866\": container with ID starting with 34c94c4582baeeb9b02144dd4723c3db7eaf1203265a25c694b5e5cd4b75b866 not found: ID does not exist" containerID="34c94c4582baeeb9b02144dd4723c3db7eaf1203265a25c694b5e5cd4b75b866" Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.435123 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34c94c4582baeeb9b02144dd4723c3db7eaf1203265a25c694b5e5cd4b75b866"} err="failed to get container status \"34c94c4582baeeb9b02144dd4723c3db7eaf1203265a25c694b5e5cd4b75b866\": rpc error: code = NotFound desc = could not find container \"34c94c4582baeeb9b02144dd4723c3db7eaf1203265a25c694b5e5cd4b75b866\": container with ID starting with 34c94c4582baeeb9b02144dd4723c3db7eaf1203265a25c694b5e5cd4b75b866 not found: ID does not exist" Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.435284 4895 scope.go:117] "RemoveContainer" containerID="83e58d67a0292f2375e35d0b18ed0d6223d2c93e25a4ac8e83fbbb16f75f8907" Dec 06 09:01:36 crc kubenswrapper[4895]: E1206 09:01:36.436638 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83e58d67a0292f2375e35d0b18ed0d6223d2c93e25a4ac8e83fbbb16f75f8907\": container with ID starting with 83e58d67a0292f2375e35d0b18ed0d6223d2c93e25a4ac8e83fbbb16f75f8907 not found: ID does not exist" containerID="83e58d67a0292f2375e35d0b18ed0d6223d2c93e25a4ac8e83fbbb16f75f8907" Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.436683 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83e58d67a0292f2375e35d0b18ed0d6223d2c93e25a4ac8e83fbbb16f75f8907"} err="failed to get container status \"83e58d67a0292f2375e35d0b18ed0d6223d2c93e25a4ac8e83fbbb16f75f8907\": rpc error: code = NotFound desc = could not find container \"83e58d67a0292f2375e35d0b18ed0d6223d2c93e25a4ac8e83fbbb16f75f8907\": container with ID starting with 
83e58d67a0292f2375e35d0b18ed0d6223d2c93e25a4ac8e83fbbb16f75f8907 not found: ID does not exist" Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.439172 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/504ba999-d88d-4a80-b843-ee72b562f313-kube-api-access-jdw5s" (OuterVolumeSpecName: "kube-api-access-jdw5s") pod "504ba999-d88d-4a80-b843-ee72b562f313" (UID: "504ba999-d88d-4a80-b843-ee72b562f313"). InnerVolumeSpecName "kube-api-access-jdw5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.483924 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/504ba999-d88d-4a80-b843-ee72b562f313-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "504ba999-d88d-4a80-b843-ee72b562f313" (UID: "504ba999-d88d-4a80-b843-ee72b562f313"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.521526 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdw5s\" (UniqueName: \"kubernetes.io/projected/504ba999-d88d-4a80-b843-ee72b562f313-kube-api-access-jdw5s\") on node \"crc\" DevicePath \"\"" Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.521555 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/504ba999-d88d-4a80-b843-ee72b562f313-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.521567 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/504ba999-d88d-4a80-b843-ee72b562f313-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.696764 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hvgvk"] Dec 06 09:01:36 crc kubenswrapper[4895]: I1206 09:01:36.704798 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hvgvk"] Dec 06 09:01:38 crc kubenswrapper[4895]: I1206 09:01:38.064852 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="504ba999-d88d-4a80-b843-ee72b562f313" path="/var/lib/kubelet/pods/504ba999-d88d-4a80-b843-ee72b562f313/volumes" Dec 06 09:01:43 crc kubenswrapper[4895]: I1206 09:01:43.050694 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 09:01:43 crc kubenswrapper[4895]: E1206 09:01:43.051370 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:01:56 crc kubenswrapper[4895]: I1206 09:01:56.051196 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 09:01:56 crc kubenswrapper[4895]: E1206 09:01:56.052089 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.297049 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-0bd3-account-create-update-cngw7"] Dec 06 09:02:08 crc kubenswrapper[4895]: E1206 09:02:08.298667 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="504ba999-d88d-4a80-b843-ee72b562f313" containerName="registry-server" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.298747 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="504ba999-d88d-4a80-b843-ee72b562f313" containerName="registry-server" Dec 06 09:02:08 crc kubenswrapper[4895]: E1206 09:02:08.298825 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="504ba999-d88d-4a80-b843-ee72b562f313" containerName="extract-content" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.298883 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="504ba999-d88d-4a80-b843-ee72b562f313" containerName="extract-content" Dec 06 09:02:08 crc kubenswrapper[4895]: E1206 09:02:08.298950 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="504ba999-d88d-4a80-b843-ee72b562f313" containerName="extract-utilities" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.299033 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="504ba999-d88d-4a80-b843-ee72b562f313" containerName="extract-utilities" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.299248 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="504ba999-d88d-4a80-b843-ee72b562f313" containerName="registry-server" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.299937 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0bd3-account-create-update-cngw7" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.302008 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.308121 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-sqzjf"] Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.309382 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-sqzjf" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.316555 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0bd3-account-create-update-cngw7"] Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.324738 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-sqzjf"] Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.359439 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eaadb1f-b708-4d09-8b99-d5ca878579ce-operator-scripts\") pod \"barbican-0bd3-account-create-update-cngw7\" (UID: \"9eaadb1f-b708-4d09-8b99-d5ca878579ce\") " pod="openstack/barbican-0bd3-account-create-update-cngw7" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.359715 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxzjf\" (UniqueName: \"kubernetes.io/projected/5060d9e6-9916-4fe2-a34d-63c9c4e21f9e-kube-api-access-vxzjf\") pod \"barbican-db-create-sqzjf\" (UID: \"5060d9e6-9916-4fe2-a34d-63c9c4e21f9e\") " pod="openstack/barbican-db-create-sqzjf" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.359755 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5060d9e6-9916-4fe2-a34d-63c9c4e21f9e-operator-scripts\") pod \"barbican-db-create-sqzjf\" (UID: \"5060d9e6-9916-4fe2-a34d-63c9c4e21f9e\") " pod="openstack/barbican-db-create-sqzjf" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.359786 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frk7n\" (UniqueName: \"kubernetes.io/projected/9eaadb1f-b708-4d09-8b99-d5ca878579ce-kube-api-access-frk7n\") pod \"barbican-0bd3-account-create-update-cngw7\" (UID: \"9eaadb1f-b708-4d09-8b99-d5ca878579ce\") " pod="openstack/barbican-0bd3-account-create-update-cngw7" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.461362 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxzjf\" (UniqueName: \"kubernetes.io/projected/5060d9e6-9916-4fe2-a34d-63c9c4e21f9e-kube-api-access-vxzjf\") pod \"barbican-db-create-sqzjf\" (UID: \"5060d9e6-9916-4fe2-a34d-63c9c4e21f9e\") " pod="openstack/barbican-db-create-sqzjf" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.461752 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5060d9e6-9916-4fe2-a34d-63c9c4e21f9e-operator-scripts\") pod \"barbican-db-create-sqzjf\" (UID: \"5060d9e6-9916-4fe2-a34d-63c9c4e21f9e\") " pod="openstack/barbican-db-create-sqzjf" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.462552 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frk7n\" (UniqueName: \"kubernetes.io/projected/9eaadb1f-b708-4d09-8b99-d5ca878579ce-kube-api-access-frk7n\") pod \"barbican-0bd3-account-create-update-cngw7\" (UID: \"9eaadb1f-b708-4d09-8b99-d5ca878579ce\") " pod="openstack/barbican-0bd3-account-create-update-cngw7" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.462935 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9eaadb1f-b708-4d09-8b99-d5ca878579ce-operator-scripts\") pod \"barbican-0bd3-account-create-update-cngw7\" (UID: \"9eaadb1f-b708-4d09-8b99-d5ca878579ce\") " pod="openstack/barbican-0bd3-account-create-update-cngw7" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.462502 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5060d9e6-9916-4fe2-a34d-63c9c4e21f9e-operator-scripts\") pod \"barbican-db-create-sqzjf\" (UID: \"5060d9e6-9916-4fe2-a34d-63c9c4e21f9e\") " pod="openstack/barbican-db-create-sqzjf" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.463594 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eaadb1f-b708-4d09-8b99-d5ca878579ce-operator-scripts\") pod \"barbican-0bd3-account-create-update-cngw7\" (UID: \"9eaadb1f-b708-4d09-8b99-d5ca878579ce\") " pod="openstack/barbican-0bd3-account-create-update-cngw7" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.480253 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxzjf\" (UniqueName: \"kubernetes.io/projected/5060d9e6-9916-4fe2-a34d-63c9c4e21f9e-kube-api-access-vxzjf\") pod \"barbican-db-create-sqzjf\" (UID: \"5060d9e6-9916-4fe2-a34d-63c9c4e21f9e\") " pod="openstack/barbican-db-create-sqzjf" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.480381 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frk7n\" (UniqueName: \"kubernetes.io/projected/9eaadb1f-b708-4d09-8b99-d5ca878579ce-kube-api-access-frk7n\") pod \"barbican-0bd3-account-create-update-cngw7\" (UID: \"9eaadb1f-b708-4d09-8b99-d5ca878579ce\") " pod="openstack/barbican-0bd3-account-create-update-cngw7" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.623060 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0bd3-account-create-update-cngw7" Dec 06 09:02:08 crc kubenswrapper[4895]: I1206 09:02:08.635649 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-sqzjf" Dec 06 09:02:09 crc kubenswrapper[4895]: I1206 09:02:09.050762 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 09:02:09 crc kubenswrapper[4895]: E1206 09:02:09.051536 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:02:09 crc kubenswrapper[4895]: I1206 09:02:09.139881 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-sqzjf"] Dec 06 09:02:09 crc kubenswrapper[4895]: I1206 09:02:09.156410 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0bd3-account-create-update-cngw7"] Dec 06 09:02:09 crc kubenswrapper[4895]: W1206 09:02:09.186889 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eaadb1f_b708_4d09_8b99_d5ca878579ce.slice/crio-0d15467619a6063875a175d94e32e5466d21254d5c79cac99578b727012ae083 WatchSource:0}: Error finding container 0d15467619a6063875a175d94e32e5466d21254d5c79cac99578b727012ae083: Status 404 returned error can't find the container with id 0d15467619a6063875a175d94e32e5466d21254d5c79cac99578b727012ae083 Dec 06 09:02:09 crc kubenswrapper[4895]: I1206 09:02:09.748130 4895 generic.go:334] "Generic (PLEG): container finished" podID="5060d9e6-9916-4fe2-a34d-63c9c4e21f9e" containerID="c0ddd84e7a74098af020d1c96a4557d49c9f216d84d3fdaa132edcef2494a512" exitCode=0 Dec 06 09:02:09 crc kubenswrapper[4895]: I1206 09:02:09.748249 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sqzjf" event={"ID":"5060d9e6-9916-4fe2-a34d-63c9c4e21f9e","Type":"ContainerDied","Data":"c0ddd84e7a74098af020d1c96a4557d49c9f216d84d3fdaa132edcef2494a512"} Dec 06 09:02:09 crc kubenswrapper[4895]: I1206 09:02:09.748326 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sqzjf" event={"ID":"5060d9e6-9916-4fe2-a34d-63c9c4e21f9e","Type":"ContainerStarted","Data":"d319e3f5ab42fec64fb7a751f9e51bd4644489ab7d2695b585f19850e10ac08a"} Dec 06 09:02:09 crc kubenswrapper[4895]: I1206 09:02:09.750845 4895 generic.go:334] "Generic (PLEG): container finished" podID="9eaadb1f-b708-4d09-8b99-d5ca878579ce" containerID="72b57c8d198d858ed107418aa3c02ac1fc85c8871c55781a74c0e7be87dab263" exitCode=0 Dec 06 09:02:09 crc kubenswrapper[4895]: I1206 09:02:09.750884 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0bd3-account-create-update-cngw7" event={"ID":"9eaadb1f-b708-4d09-8b99-d5ca878579ce","Type":"ContainerDied","Data":"72b57c8d198d858ed107418aa3c02ac1fc85c8871c55781a74c0e7be87dab263"} Dec 06 09:02:09 crc kubenswrapper[4895]: I1206 09:02:09.750914 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0bd3-account-create-update-cngw7" event={"ID":"9eaadb1f-b708-4d09-8b99-d5ca878579ce","Type":"ContainerStarted","Data":"0d15467619a6063875a175d94e32e5466d21254d5c79cac99578b727012ae083"} Dec 06 09:02:11 crc kubenswrapper[4895]: I1206 09:02:11.171609 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-sqzjf" Dec 06 09:02:11 crc kubenswrapper[4895]: I1206 09:02:11.178110 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0bd3-account-create-update-cngw7" Dec 06 09:02:11 crc kubenswrapper[4895]: I1206 09:02:11.215577 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frk7n\" (UniqueName: \"kubernetes.io/projected/9eaadb1f-b708-4d09-8b99-d5ca878579ce-kube-api-access-frk7n\") pod \"9eaadb1f-b708-4d09-8b99-d5ca878579ce\" (UID: \"9eaadb1f-b708-4d09-8b99-d5ca878579ce\") " Dec 06 09:02:11 crc kubenswrapper[4895]: I1206 09:02:11.215658 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5060d9e6-9916-4fe2-a34d-63c9c4e21f9e-operator-scripts\") pod \"5060d9e6-9916-4fe2-a34d-63c9c4e21f9e\" (UID: \"5060d9e6-9916-4fe2-a34d-63c9c4e21f9e\") " Dec 06 09:02:11 crc kubenswrapper[4895]: I1206 09:02:11.215978 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eaadb1f-b708-4d09-8b99-d5ca878579ce-operator-scripts\") pod \"9eaadb1f-b708-4d09-8b99-d5ca878579ce\" (UID: \"9eaadb1f-b708-4d09-8b99-d5ca878579ce\") " Dec 06 09:02:11 crc kubenswrapper[4895]: I1206 09:02:11.216072 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxzjf\" (UniqueName: \"kubernetes.io/projected/5060d9e6-9916-4fe2-a34d-63c9c4e21f9e-kube-api-access-vxzjf\") pod \"5060d9e6-9916-4fe2-a34d-63c9c4e21f9e\" (UID: \"5060d9e6-9916-4fe2-a34d-63c9c4e21f9e\") " Dec 06 09:02:11 crc kubenswrapper[4895]: I1206 09:02:11.216818 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5060d9e6-9916-4fe2-a34d-63c9c4e21f9e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5060d9e6-9916-4fe2-a34d-63c9c4e21f9e" (UID: "5060d9e6-9916-4fe2-a34d-63c9c4e21f9e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:02:11 crc kubenswrapper[4895]: I1206 09:02:11.217339 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eaadb1f-b708-4d09-8b99-d5ca878579ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9eaadb1f-b708-4d09-8b99-d5ca878579ce" (UID: "9eaadb1f-b708-4d09-8b99-d5ca878579ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:02:11 crc kubenswrapper[4895]: I1206 09:02:11.222793 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5060d9e6-9916-4fe2-a34d-63c9c4e21f9e-kube-api-access-vxzjf" (OuterVolumeSpecName: "kube-api-access-vxzjf") pod "5060d9e6-9916-4fe2-a34d-63c9c4e21f9e" (UID: "5060d9e6-9916-4fe2-a34d-63c9c4e21f9e"). InnerVolumeSpecName "kube-api-access-vxzjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:02:11 crc kubenswrapper[4895]: I1206 09:02:11.236359 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eaadb1f-b708-4d09-8b99-d5ca878579ce-kube-api-access-frk7n" (OuterVolumeSpecName: "kube-api-access-frk7n") pod "9eaadb1f-b708-4d09-8b99-d5ca878579ce" (UID: "9eaadb1f-b708-4d09-8b99-d5ca878579ce"). InnerVolumeSpecName "kube-api-access-frk7n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:02:11 crc kubenswrapper[4895]: I1206 09:02:11.317959 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eaadb1f-b708-4d09-8b99-d5ca878579ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:02:11 crc kubenswrapper[4895]: I1206 09:02:11.317994 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxzjf\" (UniqueName: \"kubernetes.io/projected/5060d9e6-9916-4fe2-a34d-63c9c4e21f9e-kube-api-access-vxzjf\") on node \"crc\" DevicePath \"\"" Dec 06 09:02:11 crc kubenswrapper[4895]: I1206 09:02:11.318008 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frk7n\" (UniqueName: \"kubernetes.io/projected/9eaadb1f-b708-4d09-8b99-d5ca878579ce-kube-api-access-frk7n\") on node \"crc\" DevicePath \"\"" Dec 06 09:02:11 crc kubenswrapper[4895]: I1206 09:02:11.318019 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5060d9e6-9916-4fe2-a34d-63c9c4e21f9e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:02:11 crc kubenswrapper[4895]: I1206 09:02:11.779157 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0bd3-account-create-update-cngw7" event={"ID":"9eaadb1f-b708-4d09-8b99-d5ca878579ce","Type":"ContainerDied","Data":"0d15467619a6063875a175d94e32e5466d21254d5c79cac99578b727012ae083"} Dec 06 09:02:11 crc kubenswrapper[4895]: I1206 09:02:11.779619 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d15467619a6063875a175d94e32e5466d21254d5c79cac99578b727012ae083" Dec 06 09:02:11 crc kubenswrapper[4895]: I1206 09:02:11.779220 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0bd3-account-create-update-cngw7" Dec 06 09:02:11 crc kubenswrapper[4895]: I1206 09:02:11.781703 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sqzjf" event={"ID":"5060d9e6-9916-4fe2-a34d-63c9c4e21f9e","Type":"ContainerDied","Data":"d319e3f5ab42fec64fb7a751f9e51bd4644489ab7d2695b585f19850e10ac08a"} Dec 06 09:02:11 crc kubenswrapper[4895]: I1206 09:02:11.781779 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d319e3f5ab42fec64fb7a751f9e51bd4644489ab7d2695b585f19850e10ac08a" Dec 06 09:02:11 crc kubenswrapper[4895]: I1206 09:02:11.781888 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-sqzjf" Dec 06 09:02:13 crc kubenswrapper[4895]: I1206 09:02:13.529586 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-h7rnz"] Dec 06 09:02:13 crc kubenswrapper[4895]: E1206 09:02:13.529968 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eaadb1f-b708-4d09-8b99-d5ca878579ce" containerName="mariadb-account-create-update" Dec 06 09:02:13 crc kubenswrapper[4895]: I1206 09:02:13.529985 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eaadb1f-b708-4d09-8b99-d5ca878579ce" containerName="mariadb-account-create-update" Dec 06 09:02:13 crc kubenswrapper[4895]: E1206 09:02:13.530011 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5060d9e6-9916-4fe2-a34d-63c9c4e21f9e" containerName="mariadb-database-create" Dec 06 09:02:13 crc kubenswrapper[4895]: I1206 09:02:13.530019 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5060d9e6-9916-4fe2-a34d-63c9c4e21f9e" containerName="mariadb-database-create" Dec 06 09:02:13 crc kubenswrapper[4895]: I1206 09:02:13.530261 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5060d9e6-9916-4fe2-a34d-63c9c4e21f9e" containerName="mariadb-database-create" Dec 06 09:02:13 crc kubenswrapper[4895]: I1206 09:02:13.530281 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eaadb1f-b708-4d09-8b99-d5ca878579ce" containerName="mariadb-account-create-update" Dec 06 09:02:13 crc kubenswrapper[4895]: I1206 09:02:13.531006 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h7rnz" Dec 06 09:02:13 crc kubenswrapper[4895]: I1206 09:02:13.532779 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-chk42" Dec 06 09:02:13 crc kubenswrapper[4895]: I1206 09:02:13.533260 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 06 09:02:13 crc kubenswrapper[4895]: I1206 09:02:13.553887 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h7rnz"] Dec 06 09:02:13 crc kubenswrapper[4895]: I1206 09:02:13.556838 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af-combined-ca-bundle\") pod \"barbican-db-sync-h7rnz\" (UID: \"3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af\") " pod="openstack/barbican-db-sync-h7rnz" Dec 06 09:02:13 crc kubenswrapper[4895]: I1206 09:02:13.556892 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af-db-sync-config-data\") pod \"barbican-db-sync-h7rnz\" (UID: \"3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af\") " pod="openstack/barbican-db-sync-h7rnz" Dec 06 09:02:13 crc kubenswrapper[4895]: I1206 09:02:13.557014 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8fwr\" (UniqueName: \"kubernetes.io/projected/3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af-kube-api-access-q8fwr\") pod \"barbican-db-sync-h7rnz\" (UID: \"3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af\") " pod="openstack/barbican-db-sync-h7rnz" Dec 06 09:02:13 crc kubenswrapper[4895]: I1206 09:02:13.658086 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af-combined-ca-bundle\") pod \"barbican-db-sync-h7rnz\" (UID: \"3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af\") " pod="openstack/barbican-db-sync-h7rnz" Dec 06 09:02:13 crc kubenswrapper[4895]: I1206 09:02:13.658127 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af-db-sync-config-data\") pod \"barbican-db-sync-h7rnz\" (UID: \"3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af\") " pod="openstack/barbican-db-sync-h7rnz" Dec 06 09:02:13 crc kubenswrapper[4895]: I1206 09:02:13.658184 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8fwr\" (UniqueName: \"kubernetes.io/projected/3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af-kube-api-access-q8fwr\") pod \"barbican-db-sync-h7rnz\" (UID: \"3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af\") " pod="openstack/barbican-db-sync-h7rnz" Dec 06 09:02:13 crc kubenswrapper[4895]: I1206 09:02:13.663089 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af-combined-ca-bundle\") pod \"barbican-db-sync-h7rnz\" (UID: \"3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af\") " pod="openstack/barbican-db-sync-h7rnz" Dec 06 09:02:13 crc kubenswrapper[4895]: I1206 09:02:13.663142 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af-db-sync-config-data\") pod \"barbican-db-sync-h7rnz\" (UID: \"3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af\") " pod="openstack/barbican-db-sync-h7rnz" Dec 06 09:02:13 crc kubenswrapper[4895]: I1206 09:02:13.676367 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8fwr\" (UniqueName: \"kubernetes.io/projected/3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af-kube-api-access-q8fwr\") pod \"barbican-db-sync-h7rnz\" (UID: \"3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af\") " pod="openstack/barbican-db-sync-h7rnz" Dec 06 09:02:13 crc kubenswrapper[4895]: I1206 09:02:13.848614 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-h7rnz" Dec 06 09:02:14 crc kubenswrapper[4895]: I1206 09:02:14.404803 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h7rnz"] Dec 06 09:02:14 crc kubenswrapper[4895]: I1206 09:02:14.810073 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h7rnz" event={"ID":"3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af","Type":"ContainerStarted","Data":"421168d374e57705fa3954af929b35e65e6a75f1fc6f95924c170fdd494a7f3e"} Dec 06 09:02:18 crc kubenswrapper[4895]: I1206 09:02:18.844456 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h7rnz" event={"ID":"3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af","Type":"ContainerStarted","Data":"fa6d3d4bffd3fbcaf2e48c5b44064182a30cc9d8321b9a4458ab05b317f356ed"} Dec 06 09:02:18 crc kubenswrapper[4895]: I1206 09:02:18.866708 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-h7rnz" podStartSLOduration=1.7153179490000001 podStartE2EDuration="5.866677177s" podCreationTimestamp="2025-12-06 09:02:13 +0000 UTC" firstStartedPulling="2025-12-06 09:02:14.408210914 +0000 UTC m=+7496.809599784" lastFinishedPulling="2025-12-06 09:02:18.559570122 +0000 UTC m=+7500.960959012" observedRunningTime="2025-12-06 09:02:18.863314636 +0000 UTC m=+7501.264703506" watchObservedRunningTime="2025-12-06 09:02:18.866677177 +0000 UTC m=+7501.268066057" Dec 06 09:02:20 crc kubenswrapper[4895]: I1206 09:02:20.861582 4895 generic.go:334] "Generic (PLEG): container finished" podID="3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af" containerID="fa6d3d4bffd3fbcaf2e48c5b44064182a30cc9d8321b9a4458ab05b317f356ed" exitCode=0 Dec 06 09:02:20 crc kubenswrapper[4895]: I1206 09:02:20.861663 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h7rnz" event={"ID":"3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af","Type":"ContainerDied","Data":"fa6d3d4bffd3fbcaf2e48c5b44064182a30cc9d8321b9a4458ab05b317f356ed"} Dec 06 09:02:21 crc kubenswrapper[4895]: I1206 09:02:21.051581 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 09:02:21 crc kubenswrapper[4895]: E1206 09:02:21.051844 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:02:22 crc kubenswrapper[4895]: I1206 09:02:22.218574 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-h7rnz" Dec 06 09:02:22 crc kubenswrapper[4895]: I1206 09:02:22.327625 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af-combined-ca-bundle\") pod \"3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af\" (UID: \"3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af\") " Dec 06 09:02:22 crc kubenswrapper[4895]: I1206 09:02:22.327862 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af-db-sync-config-data\") pod \"3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af\" (UID: \"3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af\") " Dec 06 09:02:22 crc kubenswrapper[4895]: I1206 09:02:22.327888 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8fwr\" (UniqueName: \"kubernetes.io/projected/3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af-kube-api-access-q8fwr\") pod \"3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af\" (UID: \"3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af\") " Dec 06 09:02:22 crc kubenswrapper[4895]: I1206 09:02:22.334550 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af" (UID: "3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:02:22 crc kubenswrapper[4895]: I1206 09:02:22.334735 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af-kube-api-access-q8fwr" (OuterVolumeSpecName: "kube-api-access-q8fwr") pod "3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af" (UID: "3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af"). InnerVolumeSpecName "kube-api-access-q8fwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:02:22 crc kubenswrapper[4895]: I1206 09:02:22.360349 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af" (UID: "3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:02:22 crc kubenswrapper[4895]: I1206 09:02:22.429436 4895 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:02:22 crc kubenswrapper[4895]: I1206 09:02:22.429495 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8fwr\" (UniqueName: \"kubernetes.io/projected/3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af-kube-api-access-q8fwr\") on node \"crc\" DevicePath \"\"" Dec 06 09:02:22 crc kubenswrapper[4895]: I1206 09:02:22.429507 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:02:22 crc kubenswrapper[4895]: I1206 09:02:22.877670 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h7rnz" event={"ID":"3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af","Type":"ContainerDied","Data":"421168d374e57705fa3954af929b35e65e6a75f1fc6f95924c170fdd494a7f3e"} Dec 06 09:02:22 crc kubenswrapper[4895]: I1206 09:02:22.877990 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="421168d374e57705fa3954af929b35e65e6a75f1fc6f95924c170fdd494a7f3e" Dec 06 09:02:22 crc kubenswrapper[4895]: I1206 09:02:22.877734 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h7rnz" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.142399 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-55f6676c94-q78bk"] Dec 06 09:02:23 crc kubenswrapper[4895]: E1206 09:02:23.142788 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af" containerName="barbican-db-sync" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.142810 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af" containerName="barbican-db-sync" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.143016 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af" containerName="barbican-db-sync" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.144089 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.148316 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.148639 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-chk42" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.149430 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.160688 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-55f6676c94-q78bk"] Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.194239 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5df995578f-vqnnt"] Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.195703 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5df995578f-vqnnt" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.199033 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.226023 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5df995578f-vqnnt"] Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.241810 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lxrr\" (UniqueName: \"kubernetes.io/projected/fbb03056-067e-4d12-9ea3-3133d9ac3220-kube-api-access-2lxrr\") pod \"barbican-keystone-listener-55f6676c94-q78bk\" (UID: \"fbb03056-067e-4d12-9ea3-3133d9ac3220\") " pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.241863 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbb03056-067e-4d12-9ea3-3133d9ac3220-config-data-custom\") pod \"barbican-keystone-listener-55f6676c94-q78bk\" (UID: \"fbb03056-067e-4d12-9ea3-3133d9ac3220\") " pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.241891 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb03056-067e-4d12-9ea3-3133d9ac3220-config-data\") pod \"barbican-keystone-listener-55f6676c94-q78bk\" (UID: \"fbb03056-067e-4d12-9ea3-3133d9ac3220\") " pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.241916 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb03056-067e-4d12-9ea3-3133d9ac3220-combined-ca-bundle\") pod \"barbican-keystone-listener-55f6676c94-q78bk\" (UID: \"fbb03056-067e-4d12-9ea3-3133d9ac3220\") " pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.241950 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbb03056-067e-4d12-9ea3-3133d9ac3220-logs\") pod \"barbican-keystone-listener-55f6676c94-q78bk\" (UID: \"fbb03056-067e-4d12-9ea3-3133d9ac3220\") " pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.299989 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84687cd4ff-2xn2c"] Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.315691 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.331042 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84687cd4ff-2xn2c"] Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.358595 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lxrr\" (UniqueName: \"kubernetes.io/projected/fbb03056-067e-4d12-9ea3-3133d9ac3220-kube-api-access-2lxrr\") pod \"barbican-keystone-listener-55f6676c94-q78bk\" (UID: \"fbb03056-067e-4d12-9ea3-3133d9ac3220\") " pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.358681 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbb03056-067e-4d12-9ea3-3133d9ac3220-config-data-custom\") pod \"barbican-keystone-listener-55f6676c94-q78bk\" (UID: \"fbb03056-067e-4d12-9ea3-3133d9ac3220\") " pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.358725 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-ovsdbserver-nb\") pod \"dnsmasq-dns-84687cd4ff-2xn2c\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.358759 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb03056-067e-4d12-9ea3-3133d9ac3220-config-data\") pod \"barbican-keystone-listener-55f6676c94-q78bk\" (UID: \"fbb03056-067e-4d12-9ea3-3133d9ac3220\") " pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.358805 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d9e34be-d2b3-4321-adc5-77ed0d2acfad-config-data\") pod \"barbican-worker-5df995578f-vqnnt\" (UID: \"8d9e34be-d2b3-4321-adc5-77ed0d2acfad\") " pod="openstack/barbican-worker-5df995578f-vqnnt" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.358832 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb03056-067e-4d12-9ea3-3133d9ac3220-combined-ca-bundle\") pod \"barbican-keystone-listener-55f6676c94-q78bk\" (UID: \"fbb03056-067e-4d12-9ea3-3133d9ac3220\") " pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.358857 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n72b\" (UniqueName: \"kubernetes.io/projected/5e1dd916-647a-488e-91e4-1cae7d9a12e8-kube-api-access-9n72b\") pod \"dnsmasq-dns-84687cd4ff-2xn2c\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.358877 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-ovsdbserver-sb\") pod \"dnsmasq-dns-84687cd4ff-2xn2c\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " 
pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.358906 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d9e34be-d2b3-4321-adc5-77ed0d2acfad-config-data-custom\") pod \"barbican-worker-5df995578f-vqnnt\" (UID: \"8d9e34be-d2b3-4321-adc5-77ed0d2acfad\") " pod="openstack/barbican-worker-5df995578f-vqnnt" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.358962 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbb03056-067e-4d12-9ea3-3133d9ac3220-logs\") pod \"barbican-keystone-listener-55f6676c94-q78bk\" (UID: \"fbb03056-067e-4d12-9ea3-3133d9ac3220\") " pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.359004 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44ghq\" (UniqueName: \"kubernetes.io/projected/8d9e34be-d2b3-4321-adc5-77ed0d2acfad-kube-api-access-44ghq\") pod \"barbican-worker-5df995578f-vqnnt\" (UID: \"8d9e34be-d2b3-4321-adc5-77ed0d2acfad\") " pod="openstack/barbican-worker-5df995578f-vqnnt" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.359043 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d9e34be-d2b3-4321-adc5-77ed0d2acfad-combined-ca-bundle\") pod \"barbican-worker-5df995578f-vqnnt\" (UID: \"8d9e34be-d2b3-4321-adc5-77ed0d2acfad\") " pod="openstack/barbican-worker-5df995578f-vqnnt" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.359099 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-config\") pod \"dnsmasq-dns-84687cd4ff-2xn2c\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.359138 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d9e34be-d2b3-4321-adc5-77ed0d2acfad-logs\") pod \"barbican-worker-5df995578f-vqnnt\" (UID: \"8d9e34be-d2b3-4321-adc5-77ed0d2acfad\") " pod="openstack/barbican-worker-5df995578f-vqnnt" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.359169 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-dns-svc\") pod \"dnsmasq-dns-84687cd4ff-2xn2c\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.364709 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbb03056-067e-4d12-9ea3-3133d9ac3220-logs\") pod \"barbican-keystone-listener-55f6676c94-q78bk\" (UID: \"fbb03056-067e-4d12-9ea3-3133d9ac3220\") " pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.365645 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb03056-067e-4d12-9ea3-3133d9ac3220-config-data\") pod 
\"barbican-keystone-listener-55f6676c94-q78bk\" (UID: \"fbb03056-067e-4d12-9ea3-3133d9ac3220\") " pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.368947 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbb03056-067e-4d12-9ea3-3133d9ac3220-config-data-custom\") pod \"barbican-keystone-listener-55f6676c94-q78bk\" (UID: \"fbb03056-067e-4d12-9ea3-3133d9ac3220\") " pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.375164 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lxrr\" (UniqueName: \"kubernetes.io/projected/fbb03056-067e-4d12-9ea3-3133d9ac3220-kube-api-access-2lxrr\") pod \"barbican-keystone-listener-55f6676c94-q78bk\" (UID: \"fbb03056-067e-4d12-9ea3-3133d9ac3220\") " pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.375292 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb03056-067e-4d12-9ea3-3133d9ac3220-combined-ca-bundle\") pod \"barbican-keystone-listener-55f6676c94-q78bk\" (UID: \"fbb03056-067e-4d12-9ea3-3133d9ac3220\") " pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.431645 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7c84565b98-jp5wp"] Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.433141 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7c84565b98-jp5wp" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.440584 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.454913 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c84565b98-jp5wp"] Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.461048 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-ovsdbserver-nb\") pod \"dnsmasq-dns-84687cd4ff-2xn2c\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.461115 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d9e34be-d2b3-4321-adc5-77ed0d2acfad-config-data\") pod \"barbican-worker-5df995578f-vqnnt\" (UID: \"8d9e34be-d2b3-4321-adc5-77ed0d2acfad\") " pod="openstack/barbican-worker-5df995578f-vqnnt" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.461152 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n72b\" (UniqueName: \"kubernetes.io/projected/5e1dd916-647a-488e-91e4-1cae7d9a12e8-kube-api-access-9n72b\") pod \"dnsmasq-dns-84687cd4ff-2xn2c\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.461173 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-ovsdbserver-sb\") pod 
\"dnsmasq-dns-84687cd4ff-2xn2c\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.461199 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d9e34be-d2b3-4321-adc5-77ed0d2acfad-config-data-custom\") pod \"barbican-worker-5df995578f-vqnnt\" (UID: \"8d9e34be-d2b3-4321-adc5-77ed0d2acfad\") " pod="openstack/barbican-worker-5df995578f-vqnnt" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.461250 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44ghq\" (UniqueName: \"kubernetes.io/projected/8d9e34be-d2b3-4321-adc5-77ed0d2acfad-kube-api-access-44ghq\") pod \"barbican-worker-5df995578f-vqnnt\" (UID: \"8d9e34be-d2b3-4321-adc5-77ed0d2acfad\") " pod="openstack/barbican-worker-5df995578f-vqnnt" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.461290 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d9e34be-d2b3-4321-adc5-77ed0d2acfad-combined-ca-bundle\") pod \"barbican-worker-5df995578f-vqnnt\" (UID: \"8d9e34be-d2b3-4321-adc5-77ed0d2acfad\") " pod="openstack/barbican-worker-5df995578f-vqnnt" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.461332 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-config\") pod \"dnsmasq-dns-84687cd4ff-2xn2c\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.461371 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d9e34be-d2b3-4321-adc5-77ed0d2acfad-logs\") pod \"barbican-worker-5df995578f-vqnnt\" (UID: \"8d9e34be-d2b3-4321-adc5-77ed0d2acfad\") " pod="openstack/barbican-worker-5df995578f-vqnnt" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.461401 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-dns-svc\") pod \"dnsmasq-dns-84687cd4ff-2xn2c\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.462120 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-ovsdbserver-nb\") pod \"dnsmasq-dns-84687cd4ff-2xn2c\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.462659 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-dns-svc\") pod \"dnsmasq-dns-84687cd4ff-2xn2c\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.463318 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-config\") pod \"dnsmasq-dns-84687cd4ff-2xn2c\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " 
pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.463648 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d9e34be-d2b3-4321-adc5-77ed0d2acfad-logs\") pod \"barbican-worker-5df995578f-vqnnt\" (UID: \"8d9e34be-d2b3-4321-adc5-77ed0d2acfad\") " pod="openstack/barbican-worker-5df995578f-vqnnt" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.466719 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d9e34be-d2b3-4321-adc5-77ed0d2acfad-config-data-custom\") pod \"barbican-worker-5df995578f-vqnnt\" (UID: \"8d9e34be-d2b3-4321-adc5-77ed0d2acfad\") " pod="openstack/barbican-worker-5df995578f-vqnnt" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.467368 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.469180 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-ovsdbserver-sb\") pod \"dnsmasq-dns-84687cd4ff-2xn2c\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.483707 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d9e34be-d2b3-4321-adc5-77ed0d2acfad-config-data\") pod \"barbican-worker-5df995578f-vqnnt\" (UID: \"8d9e34be-d2b3-4321-adc5-77ed0d2acfad\") " pod="openstack/barbican-worker-5df995578f-vqnnt" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.488440 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n72b\" (UniqueName: \"kubernetes.io/projected/5e1dd916-647a-488e-91e4-1cae7d9a12e8-kube-api-access-9n72b\") pod \"dnsmasq-dns-84687cd4ff-2xn2c\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.489250 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44ghq\" (UniqueName: \"kubernetes.io/projected/8d9e34be-d2b3-4321-adc5-77ed0d2acfad-kube-api-access-44ghq\") pod \"barbican-worker-5df995578f-vqnnt\" (UID: \"8d9e34be-d2b3-4321-adc5-77ed0d2acfad\") " pod="openstack/barbican-worker-5df995578f-vqnnt" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.492891 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d9e34be-d2b3-4321-adc5-77ed0d2acfad-combined-ca-bundle\") pod \"barbican-worker-5df995578f-vqnnt\" (UID: \"8d9e34be-d2b3-4321-adc5-77ed0d2acfad\") " pod="openstack/barbican-worker-5df995578f-vqnnt" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.516093 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5df995578f-vqnnt" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.568757 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71304fef-b0a5-465d-9a9b-8eb00d6c0f02-config-data\") pod \"barbican-api-7c84565b98-jp5wp\" (UID: \"71304fef-b0a5-465d-9a9b-8eb00d6c0f02\") " pod="openstack/barbican-api-7c84565b98-jp5wp" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.569092 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71304fef-b0a5-465d-9a9b-8eb00d6c0f02-config-data-custom\") pod \"barbican-api-7c84565b98-jp5wp\" (UID: \"71304fef-b0a5-465d-9a9b-8eb00d6c0f02\") " pod="openstack/barbican-api-7c84565b98-jp5wp" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.569211 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkqwc\" (UniqueName: \"kubernetes.io/projected/71304fef-b0a5-465d-9a9b-8eb00d6c0f02-kube-api-access-gkqwc\") pod \"barbican-api-7c84565b98-jp5wp\" (UID: \"71304fef-b0a5-465d-9a9b-8eb00d6c0f02\") " pod="openstack/barbican-api-7c84565b98-jp5wp" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.569356 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71304fef-b0a5-465d-9a9b-8eb00d6c0f02-logs\") pod \"barbican-api-7c84565b98-jp5wp\" (UID: \"71304fef-b0a5-465d-9a9b-8eb00d6c0f02\") " pod="openstack/barbican-api-7c84565b98-jp5wp" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.569530 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71304fef-b0a5-465d-9a9b-8eb00d6c0f02-combined-ca-bundle\") pod \"barbican-api-7c84565b98-jp5wp\" (UID: \"71304fef-b0a5-465d-9a9b-8eb00d6c0f02\") " pod="openstack/barbican-api-7c84565b98-jp5wp" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.670660 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71304fef-b0a5-465d-9a9b-8eb00d6c0f02-logs\") pod \"barbican-api-7c84565b98-jp5wp\" (UID: \"71304fef-b0a5-465d-9a9b-8eb00d6c0f02\") " pod="openstack/barbican-api-7c84565b98-jp5wp" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.671005 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71304fef-b0a5-465d-9a9b-8eb00d6c0f02-combined-ca-bundle\") pod \"barbican-api-7c84565b98-jp5wp\" (UID: \"71304fef-b0a5-465d-9a9b-8eb00d6c0f02\") " pod="openstack/barbican-api-7c84565b98-jp5wp" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.671070 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71304fef-b0a5-465d-9a9b-8eb00d6c0f02-config-data\") pod \"barbican-api-7c84565b98-jp5wp\" (UID: \"71304fef-b0a5-465d-9a9b-8eb00d6c0f02\") " pod="openstack/barbican-api-7c84565b98-jp5wp" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.671104 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71304fef-b0a5-465d-9a9b-8eb00d6c0f02-config-data-custom\") pod 
\"barbican-api-7c84565b98-jp5wp\" (UID: \"71304fef-b0a5-465d-9a9b-8eb00d6c0f02\") " pod="openstack/barbican-api-7c84565b98-jp5wp" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.671127 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkqwc\" (UniqueName: \"kubernetes.io/projected/71304fef-b0a5-465d-9a9b-8eb00d6c0f02-kube-api-access-gkqwc\") pod \"barbican-api-7c84565b98-jp5wp\" (UID: \"71304fef-b0a5-465d-9a9b-8eb00d6c0f02\") " pod="openstack/barbican-api-7c84565b98-jp5wp" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.671586 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71304fef-b0a5-465d-9a9b-8eb00d6c0f02-logs\") pod \"barbican-api-7c84565b98-jp5wp\" (UID: \"71304fef-b0a5-465d-9a9b-8eb00d6c0f02\") " pod="openstack/barbican-api-7c84565b98-jp5wp" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.675924 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71304fef-b0a5-465d-9a9b-8eb00d6c0f02-combined-ca-bundle\") pod \"barbican-api-7c84565b98-jp5wp\" (UID: \"71304fef-b0a5-465d-9a9b-8eb00d6c0f02\") " pod="openstack/barbican-api-7c84565b98-jp5wp" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.676445 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71304fef-b0a5-465d-9a9b-8eb00d6c0f02-config-data\") pod \"barbican-api-7c84565b98-jp5wp\" (UID: \"71304fef-b0a5-465d-9a9b-8eb00d6c0f02\") " pod="openstack/barbican-api-7c84565b98-jp5wp" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.678984 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71304fef-b0a5-465d-9a9b-8eb00d6c0f02-config-data-custom\") pod \"barbican-api-7c84565b98-jp5wp\" (UID: \"71304fef-b0a5-465d-9a9b-8eb00d6c0f02\") " pod="openstack/barbican-api-7c84565b98-jp5wp" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.692611 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkqwc\" (UniqueName: \"kubernetes.io/projected/71304fef-b0a5-465d-9a9b-8eb00d6c0f02-kube-api-access-gkqwc\") pod \"barbican-api-7c84565b98-jp5wp\" (UID: \"71304fef-b0a5-465d-9a9b-8eb00d6c0f02\") " pod="openstack/barbican-api-7c84565b98-jp5wp" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.741418 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" Dec 06 09:02:23 crc kubenswrapper[4895]: I1206 09:02:23.909353 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7c84565b98-jp5wp" Dec 06 09:02:24 crc kubenswrapper[4895]: I1206 09:02:24.014100 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-55f6676c94-q78bk"] Dec 06 09:02:24 crc kubenswrapper[4895]: I1206 09:02:24.049885 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 09:02:24 crc kubenswrapper[4895]: I1206 09:02:24.074589 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5df995578f-vqnnt"] Dec 06 09:02:24 crc kubenswrapper[4895]: W1206 09:02:24.082609 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d9e34be_d2b3_4321_adc5_77ed0d2acfad.slice/crio-9f488e5509ef0376a06ca0d3acf087e759e2b4f1b736a218056238fa17b5420b WatchSource:0}: Error finding container 9f488e5509ef0376a06ca0d3acf087e759e2b4f1b736a218056238fa17b5420b: Status 404 returned error can't find the container with id 9f488e5509ef0376a06ca0d3acf087e759e2b4f1b736a218056238fa17b5420b Dec 06 09:02:24 crc kubenswrapper[4895]: I1206 09:02:24.219012 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84687cd4ff-2xn2c"] Dec 06 09:02:24 crc kubenswrapper[4895]: I1206 09:02:24.478261 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c84565b98-jp5wp"] Dec 06 09:02:24 crc kubenswrapper[4895]: W1206 09:02:24.502692 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71304fef_b0a5_465d_9a9b_8eb00d6c0f02.slice/crio-b1b1ab64c0bdfab8258305ff5b717dbc3d51c2b4fe3082e38ce32004f2d8d925 WatchSource:0}: Error finding container b1b1ab64c0bdfab8258305ff5b717dbc3d51c2b4fe3082e38ce32004f2d8d925: Status 404 returned error can't find the container with id b1b1ab64c0bdfab8258305ff5b717dbc3d51c2b4fe3082e38ce32004f2d8d925 Dec 06 09:02:24 crc kubenswrapper[4895]: I1206 09:02:24.902264 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c84565b98-jp5wp" event={"ID":"71304fef-b0a5-465d-9a9b-8eb00d6c0f02","Type":"ContainerStarted","Data":"2f1cd1ef25c91cc5cbdb2892e97632a77c3965465e77dad1fdcddce315cbd66f"} Dec 06 09:02:24 crc kubenswrapper[4895]: I1206 09:02:24.902320 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c84565b98-jp5wp" event={"ID":"71304fef-b0a5-465d-9a9b-8eb00d6c0f02","Type":"ContainerStarted","Data":"b1b1ab64c0bdfab8258305ff5b717dbc3d51c2b4fe3082e38ce32004f2d8d925"} Dec 06 09:02:24 crc kubenswrapper[4895]: I1206 09:02:24.905045 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" event={"ID":"fbb03056-067e-4d12-9ea3-3133d9ac3220","Type":"ContainerStarted","Data":"df86c23141f5185501fb6ffde5ce1e88ab08b399104111b22d589285f09a682e"} Dec 06 09:02:24 crc kubenswrapper[4895]: I1206 09:02:24.907813 4895 generic.go:334] "Generic (PLEG): container finished" podID="5e1dd916-647a-488e-91e4-1cae7d9a12e8" containerID="21a762a97f4c48f9d9de37be99734d8a2cca79d2089a77057de321049cf1ba64" exitCode=0 Dec 06 09:02:24 crc kubenswrapper[4895]: I1206 09:02:24.907899 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" event={"ID":"5e1dd916-647a-488e-91e4-1cae7d9a12e8","Type":"ContainerDied","Data":"21a762a97f4c48f9d9de37be99734d8a2cca79d2089a77057de321049cf1ba64"} Dec 06 09:02:24 crc 
Dec 06 09:02:24 crc kubenswrapper[4895]: I1206 09:02:24.907963 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" event={"ID":"5e1dd916-647a-488e-91e4-1cae7d9a12e8","Type":"ContainerStarted","Data":"b694c93b077b07b62f86a597c9dc43c159e3ad30c2b63b42a098475629df3af8"}
Dec 06 09:02:24 crc kubenswrapper[4895]: I1206 09:02:24.911637 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5df995578f-vqnnt" event={"ID":"8d9e34be-d2b3-4321-adc5-77ed0d2acfad","Type":"ContainerStarted","Data":"9f488e5509ef0376a06ca0d3acf087e759e2b4f1b736a218056238fa17b5420b"}
Dec 06 09:02:25 crc kubenswrapper[4895]: I1206 09:02:25.923751 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" event={"ID":"5e1dd916-647a-488e-91e4-1cae7d9a12e8","Type":"ContainerStarted","Data":"8cfb19f9f40d09efa8bff5f801e8893e40f548f6fe3935e0ba288f65b9627738"}
Dec 06 09:02:25 crc kubenswrapper[4895]: I1206 09:02:25.924556 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c"
Dec 06 09:02:25 crc kubenswrapper[4895]: I1206 09:02:25.926037 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c84565b98-jp5wp" event={"ID":"71304fef-b0a5-465d-9a9b-8eb00d6c0f02","Type":"ContainerStarted","Data":"e64a9e64390ef4b26913fe0c1ca50fed222102577a6f8dabe0bf3c76efef1716"}
Dec 06 09:02:25 crc kubenswrapper[4895]: I1206 09:02:25.926186 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c84565b98-jp5wp"
Dec 06 09:02:25 crc kubenswrapper[4895]: I1206 09:02:25.964307 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" podStartSLOduration=2.964289273 podStartE2EDuration="2.964289273s" podCreationTimestamp="2025-12-06 09:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:02:25.941060079 +0000 UTC m=+7508.342448949" watchObservedRunningTime="2025-12-06 09:02:25.964289273 +0000 UTC m=+7508.365678143"
Dec 06 09:02:25 crc kubenswrapper[4895]: I1206 09:02:25.970281 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7c84565b98-jp5wp" podStartSLOduration=2.970263583 podStartE2EDuration="2.970263583s" podCreationTimestamp="2025-12-06 09:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:02:25.960775589 +0000 UTC m=+7508.362164469" watchObservedRunningTime="2025-12-06 09:02:25.970263583 +0000 UTC m=+7508.371652453"
Dec 06 09:02:26 crc kubenswrapper[4895]: I1206 09:02:26.941192 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5df995578f-vqnnt" event={"ID":"8d9e34be-d2b3-4321-adc5-77ed0d2acfad","Type":"ContainerStarted","Data":"26bec3189b0655f52b47648470bb1526b1425a9e0078e21c278e690249ad2675"}
Dec 06 09:02:26 crc kubenswrapper[4895]: I1206 09:02:26.941688 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5df995578f-vqnnt" event={"ID":"8d9e34be-d2b3-4321-adc5-77ed0d2acfad","Type":"ContainerStarted","Data":"5eb69718dd2fc724d62ea17e35cb40fe38d2fa44dd94a869d788286407bf4e10"}
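In the two "Observed pod startup duration" records above, the zero-value pull timestamps (0001-01-01) mean no image pull contributed, which is why podStartSLOduration and podStartE2EDuration coincide; each wall-clock field also carries klog's monotonic-clock suffix (m=+7508..., seconds since kubelet start). The E2E figure is simply watchObservedRunningTime minus podCreationTimestamp, which the following sketch re-derives from the dnsmasq-dns values quoted above (the layout string and suffix trimming are assumptions based on the Go time.String() form these fields use):

// slo_check.go (hypothetical): re-derive podStartE2EDuration from the
// timestamps printed by pod_startup_latency_tracker.go above.
package main

import (
	"fmt"
	"strings"
	"time"
)

// layout matches the Go time.String() format used by these log fields.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

// trimMono drops the " m=+..." monotonic-clock suffix that klog appends.
func trimMono(s string) string {
	if i := strings.Index(s, " m=+"); i >= 0 {
		return s[:i]
	}
	return s
}

func main() {
	// Values copied from the dnsmasq-dns-84687cd4ff-2xn2c record above.
	created, _ := time.Parse(layout, "2025-12-06 09:02:23 +0000 UTC")
	running, _ := time.Parse(layout, trimMono("2025-12-06 09:02:25.964289273 +0000 UTC m=+7508.365678143"))
	reported, _ := time.ParseDuration("2.964289273s") // podStartE2EDuration
	fmt.Println(running.Sub(created))                 // 2.964289273s
	fmt.Println(running.Sub(created) == reported)     // true
}

The barbican-worker record a little further down shows the other case: its podStartSLOduration (about 2.086s) is consistent with the 3.986s E2E duration minus the roughly 1.9s spent between firstStartedPulling and lastFinishedPulling.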
pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" event={"ID":"fbb03056-067e-4d12-9ea3-3133d9ac3220","Type":"ContainerStarted","Data":"2a455b4126d871a6bd48caf0b914a3d5092e775e855eaca6fe49d5710f2ee10a"} Dec 06 09:02:26 crc kubenswrapper[4895]: I1206 09:02:26.949199 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" event={"ID":"fbb03056-067e-4d12-9ea3-3133d9ac3220","Type":"ContainerStarted","Data":"ba3dbfd1d6dd636480e5735aa414ed13e766a36fdb5bc0bd919ceec53d71a74b"} Dec 06 09:02:26 crc kubenswrapper[4895]: I1206 09:02:26.949550 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c84565b98-jp5wp" Dec 06 09:02:26 crc kubenswrapper[4895]: I1206 09:02:26.986782 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5df995578f-vqnnt" podStartSLOduration=2.08613791 podStartE2EDuration="3.986751053s" podCreationTimestamp="2025-12-06 09:02:23 +0000 UTC" firstStartedPulling="2025-12-06 09:02:24.102272208 +0000 UTC m=+7506.503661078" lastFinishedPulling="2025-12-06 09:02:26.002885331 +0000 UTC m=+7508.404274221" observedRunningTime="2025-12-06 09:02:26.961233548 +0000 UTC m=+7509.362622478" watchObservedRunningTime="2025-12-06 09:02:26.986751053 +0000 UTC m=+7509.388139963" Dec 06 09:02:26 crc kubenswrapper[4895]: I1206 09:02:26.999766 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-55f6676c94-q78bk" podStartSLOduration=2.050119373 podStartE2EDuration="3.999735152s" podCreationTimestamp="2025-12-06 09:02:23 +0000 UTC" firstStartedPulling="2025-12-06 09:02:24.049608343 +0000 UTC m=+7506.450997213" lastFinishedPulling="2025-12-06 09:02:25.999224122 +0000 UTC m=+7508.400612992" observedRunningTime="2025-12-06 09:02:26.998317344 +0000 UTC m=+7509.399706214" watchObservedRunningTime="2025-12-06 09:02:26.999735152 +0000 UTC m=+7509.401124032" Dec 06 09:02:33 crc kubenswrapper[4895]: I1206 09:02:33.742638 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" Dec 06 09:02:33 crc kubenswrapper[4895]: I1206 09:02:33.825180 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77cd747fc9-878kn"] Dec 06 09:02:33 crc kubenswrapper[4895]: I1206 09:02:33.825870 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77cd747fc9-878kn" podUID="5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9" containerName="dnsmasq-dns" containerID="cri-o://5598e8c14406d5800dac949b7c951dc7501cfcad880a0093ba137042ec9b5dbc" gracePeriod=10 Dec 06 09:02:34 crc kubenswrapper[4895]: I1206 09:02:34.047952 4895 generic.go:334] "Generic (PLEG): container finished" podID="5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9" containerID="5598e8c14406d5800dac949b7c951dc7501cfcad880a0093ba137042ec9b5dbc" exitCode=0 Dec 06 09:02:34 crc kubenswrapper[4895]: I1206 09:02:34.048036 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cd747fc9-878kn" event={"ID":"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9","Type":"ContainerDied","Data":"5598e8c14406d5800dac949b7c951dc7501cfcad880a0093ba137042ec9b5dbc"} Dec 06 09:02:34 crc kubenswrapper[4895]: I1206 09:02:34.056766 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77cd747fc9-878kn" podUID="5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 
10.217.1.34:5353: connect: connection refused" Dec 06 09:02:34 crc kubenswrapper[4895]: I1206 09:02:34.377082 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77cd747fc9-878kn" Dec 06 09:02:34 crc kubenswrapper[4895]: I1206 09:02:34.466820 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-ovsdbserver-nb\") pod \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\" (UID: \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\") " Dec 06 09:02:34 crc kubenswrapper[4895]: I1206 09:02:34.466891 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2m5g\" (UniqueName: \"kubernetes.io/projected/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-kube-api-access-z2m5g\") pod \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\" (UID: \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\") " Dec 06 09:02:34 crc kubenswrapper[4895]: I1206 09:02:34.466911 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-dns-svc\") pod \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\" (UID: \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\") " Dec 06 09:02:34 crc kubenswrapper[4895]: I1206 09:02:34.467004 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-config\") pod \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\" (UID: \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\") " Dec 06 09:02:34 crc kubenswrapper[4895]: I1206 09:02:34.467117 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-ovsdbserver-sb\") pod \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\" (UID: \"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9\") " Dec 06 09:02:34 crc kubenswrapper[4895]: I1206 09:02:34.472329 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-kube-api-access-z2m5g" (OuterVolumeSpecName: "kube-api-access-z2m5g") pod "5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9" (UID: "5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9"). InnerVolumeSpecName "kube-api-access-z2m5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:02:34 crc kubenswrapper[4895]: I1206 09:02:34.516156 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-config" (OuterVolumeSpecName: "config") pod "5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9" (UID: "5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:02:34 crc kubenswrapper[4895]: I1206 09:02:34.518578 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9" (UID: "5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:02:34 crc kubenswrapper[4895]: I1206 09:02:34.542073 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9" (UID: "5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:02:34 crc kubenswrapper[4895]: I1206 09:02:34.568879 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 09:02:34 crc kubenswrapper[4895]: I1206 09:02:34.568909 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2m5g\" (UniqueName: \"kubernetes.io/projected/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-kube-api-access-z2m5g\") on node \"crc\" DevicePath \"\"" Dec 06 09:02:34 crc kubenswrapper[4895]: I1206 09:02:34.568921 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 09:02:34 crc kubenswrapper[4895]: I1206 09:02:34.568931 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-config\") on node \"crc\" DevicePath \"\"" Dec 06 09:02:34 crc kubenswrapper[4895]: I1206 09:02:34.568965 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9" (UID: "5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:02:34 crc kubenswrapper[4895]: I1206 09:02:34.670503 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 09:02:35 crc kubenswrapper[4895]: I1206 09:02:35.062576 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cd747fc9-878kn" event={"ID":"5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9","Type":"ContainerDied","Data":"abe0c0d54827ae87ad923ce331b6317dbf3890b8bb4ce838c1c9da9f0f6291ec"} Dec 06 09:02:35 crc kubenswrapper[4895]: I1206 09:02:35.062633 4895 scope.go:117] "RemoveContainer" containerID="5598e8c14406d5800dac949b7c951dc7501cfcad880a0093ba137042ec9b5dbc" Dec 06 09:02:35 crc kubenswrapper[4895]: I1206 09:02:35.062651 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77cd747fc9-878kn" Dec 06 09:02:35 crc kubenswrapper[4895]: I1206 09:02:35.109858 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77cd747fc9-878kn"] Dec 06 09:02:35 crc kubenswrapper[4895]: I1206 09:02:35.110300 4895 scope.go:117] "RemoveContainer" containerID="ed99bbdcffa8d7f2003a09cbd95eaa574a555e0dff76a4a5a8171ddb9c832b24" Dec 06 09:02:35 crc kubenswrapper[4895]: I1206 09:02:35.119118 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77cd747fc9-878kn"] Dec 06 09:02:35 crc kubenswrapper[4895]: I1206 09:02:35.481432 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c84565b98-jp5wp" Dec 06 09:02:35 crc kubenswrapper[4895]: I1206 09:02:35.538503 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c84565b98-jp5wp" Dec 06 09:02:36 crc kubenswrapper[4895]: I1206 09:02:36.051215 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 09:02:36 crc kubenswrapper[4895]: E1206 09:02:36.051826 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:02:36 crc kubenswrapper[4895]: I1206 09:02:36.062118 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9" path="/var/lib/kubelet/pods/5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9/volumes" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.250913 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-k9fbb"] Dec 06 09:02:42 crc kubenswrapper[4895]: E1206 09:02:42.251791 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9" containerName="init" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.251805 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9" containerName="init" Dec 06 09:02:42 crc kubenswrapper[4895]: E1206 09:02:42.251823 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9" containerName="dnsmasq-dns" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.251829 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9" containerName="dnsmasq-dns" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.251986 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a449e9f-1b28-4dfc-a8d1-71dc342c8ca9" containerName="dnsmasq-dns" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.252616 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-k9fbb" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.264893 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-k9fbb"] Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.347842 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2752-account-create-update-hcfbv"] Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.349433 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2752-account-create-update-hcfbv" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.353238 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.357695 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2752-account-create-update-hcfbv"] Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.417550 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c6a462-5b00-4390-a172-1cabe4d68a37-operator-scripts\") pod \"neutron-db-create-k9fbb\" (UID: \"66c6a462-5b00-4390-a172-1cabe4d68a37\") " pod="openstack/neutron-db-create-k9fbb" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.417633 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5896\" (UniqueName: \"kubernetes.io/projected/66c6a462-5b00-4390-a172-1cabe4d68a37-kube-api-access-s5896\") pod \"neutron-db-create-k9fbb\" (UID: \"66c6a462-5b00-4390-a172-1cabe4d68a37\") " pod="openstack/neutron-db-create-k9fbb" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.519589 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c6a462-5b00-4390-a172-1cabe4d68a37-operator-scripts\") pod \"neutron-db-create-k9fbb\" (UID: \"66c6a462-5b00-4390-a172-1cabe4d68a37\") " pod="openstack/neutron-db-create-k9fbb" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.519638 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk8mp\" (UniqueName: \"kubernetes.io/projected/630be3a5-6987-49ca-85a2-025e93a1ae43-kube-api-access-hk8mp\") pod \"neutron-2752-account-create-update-hcfbv\" (UID: \"630be3a5-6987-49ca-85a2-025e93a1ae43\") " pod="openstack/neutron-2752-account-create-update-hcfbv" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.519691 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5896\" (UniqueName: \"kubernetes.io/projected/66c6a462-5b00-4390-a172-1cabe4d68a37-kube-api-access-s5896\") pod \"neutron-db-create-k9fbb\" (UID: \"66c6a462-5b00-4390-a172-1cabe4d68a37\") " pod="openstack/neutron-db-create-k9fbb" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.519712 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/630be3a5-6987-49ca-85a2-025e93a1ae43-operator-scripts\") pod \"neutron-2752-account-create-update-hcfbv\" (UID: \"630be3a5-6987-49ca-85a2-025e93a1ae43\") " pod="openstack/neutron-2752-account-create-update-hcfbv" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.520366 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c6a462-5b00-4390-a172-1cabe4d68a37-operator-scripts\") pod \"neutron-db-create-k9fbb\" (UID: \"66c6a462-5b00-4390-a172-1cabe4d68a37\") " pod="openstack/neutron-db-create-k9fbb" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.554619 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5896\" (UniqueName: \"kubernetes.io/projected/66c6a462-5b00-4390-a172-1cabe4d68a37-kube-api-access-s5896\") pod \"neutron-db-create-k9fbb\" (UID: \"66c6a462-5b00-4390-a172-1cabe4d68a37\") " pod="openstack/neutron-db-create-k9fbb" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.572383 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-k9fbb" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.622565 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk8mp\" (UniqueName: \"kubernetes.io/projected/630be3a5-6987-49ca-85a2-025e93a1ae43-kube-api-access-hk8mp\") pod \"neutron-2752-account-create-update-hcfbv\" (UID: \"630be3a5-6987-49ca-85a2-025e93a1ae43\") " pod="openstack/neutron-2752-account-create-update-hcfbv" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.622644 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/630be3a5-6987-49ca-85a2-025e93a1ae43-operator-scripts\") pod \"neutron-2752-account-create-update-hcfbv\" (UID: \"630be3a5-6987-49ca-85a2-025e93a1ae43\") " pod="openstack/neutron-2752-account-create-update-hcfbv" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.623336 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/630be3a5-6987-49ca-85a2-025e93a1ae43-operator-scripts\") pod \"neutron-2752-account-create-update-hcfbv\" (UID: \"630be3a5-6987-49ca-85a2-025e93a1ae43\") " pod="openstack/neutron-2752-account-create-update-hcfbv" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.651462 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk8mp\" (UniqueName: \"kubernetes.io/projected/630be3a5-6987-49ca-85a2-025e93a1ae43-kube-api-access-hk8mp\") pod \"neutron-2752-account-create-update-hcfbv\" (UID: \"630be3a5-6987-49ca-85a2-025e93a1ae43\") " pod="openstack/neutron-2752-account-create-update-hcfbv" Dec 06 09:02:42 crc kubenswrapper[4895]: I1206 09:02:42.667935 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2752-account-create-update-hcfbv" Dec 06 09:02:43 crc kubenswrapper[4895]: W1206 09:02:43.095872 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66c6a462_5b00_4390_a172_1cabe4d68a37.slice/crio-6ccd735de38f3be00566637d71db931897ee9c08ef2ed7358ce3a86a0a601a5d WatchSource:0}: Error finding container 6ccd735de38f3be00566637d71db931897ee9c08ef2ed7358ce3a86a0a601a5d: Status 404 returned error can't find the container with id 6ccd735de38f3be00566637d71db931897ee9c08ef2ed7358ce3a86a0a601a5d Dec 06 09:02:43 crc kubenswrapper[4895]: I1206 09:02:43.101672 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-k9fbb"] Dec 06 09:02:43 crc kubenswrapper[4895]: I1206 09:02:43.128089 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-k9fbb" event={"ID":"66c6a462-5b00-4390-a172-1cabe4d68a37","Type":"ContainerStarted","Data":"6ccd735de38f3be00566637d71db931897ee9c08ef2ed7358ce3a86a0a601a5d"} Dec 06 09:02:43 crc kubenswrapper[4895]: I1206 09:02:43.193003 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2752-account-create-update-hcfbv"] Dec 06 09:02:43 crc kubenswrapper[4895]: W1206 09:02:43.198647 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod630be3a5_6987_49ca_85a2_025e93a1ae43.slice/crio-cd89d49fbacfbdeb72d2fc429fda8be86c42a0ae61c6df920a52053923da3439 WatchSource:0}: Error finding container cd89d49fbacfbdeb72d2fc429fda8be86c42a0ae61c6df920a52053923da3439: Status 404 returned error can't find the container with id cd89d49fbacfbdeb72d2fc429fda8be86c42a0ae61c6df920a52053923da3439 Dec 06 09:02:44 crc kubenswrapper[4895]: I1206 09:02:44.137746 4895 generic.go:334] "Generic (PLEG): container finished" podID="630be3a5-6987-49ca-85a2-025e93a1ae43" containerID="c1d1c0631c35156bf85b18747ad5cb8ee807314b7ab4e8eb4d5cf4a73b8fb981" exitCode=0 Dec 06 09:02:44 crc kubenswrapper[4895]: I1206 09:02:44.137843 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2752-account-create-update-hcfbv" event={"ID":"630be3a5-6987-49ca-85a2-025e93a1ae43","Type":"ContainerDied","Data":"c1d1c0631c35156bf85b18747ad5cb8ee807314b7ab4e8eb4d5cf4a73b8fb981"} Dec 06 09:02:44 crc kubenswrapper[4895]: I1206 09:02:44.138180 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2752-account-create-update-hcfbv" event={"ID":"630be3a5-6987-49ca-85a2-025e93a1ae43","Type":"ContainerStarted","Data":"cd89d49fbacfbdeb72d2fc429fda8be86c42a0ae61c6df920a52053923da3439"} Dec 06 09:02:44 crc kubenswrapper[4895]: I1206 09:02:44.139885 4895 generic.go:334] "Generic (PLEG): container finished" podID="66c6a462-5b00-4390-a172-1cabe4d68a37" containerID="701f968e6ac3b6b45211497296591fb20e81ff0da28d2d55a0296adf8e8dc0e4" exitCode=0 Dec 06 09:02:44 crc kubenswrapper[4895]: I1206 09:02:44.139938 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-k9fbb" event={"ID":"66c6a462-5b00-4390-a172-1cabe4d68a37","Type":"ContainerDied","Data":"701f968e6ac3b6b45211497296591fb20e81ff0da28d2d55a0296adf8e8dc0e4"} Dec 06 09:02:46 crc kubenswrapper[4895]: I1206 09:02:45.553694 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-k9fbb" Dec 06 09:02:46 crc kubenswrapper[4895]: I1206 09:02:45.560547 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2752-account-create-update-hcfbv" Dec 06 09:02:46 crc kubenswrapper[4895]: I1206 09:02:45.674361 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/630be3a5-6987-49ca-85a2-025e93a1ae43-operator-scripts\") pod \"630be3a5-6987-49ca-85a2-025e93a1ae43\" (UID: \"630be3a5-6987-49ca-85a2-025e93a1ae43\") " Dec 06 09:02:46 crc kubenswrapper[4895]: I1206 09:02:45.674513 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5896\" (UniqueName: \"kubernetes.io/projected/66c6a462-5b00-4390-a172-1cabe4d68a37-kube-api-access-s5896\") pod \"66c6a462-5b00-4390-a172-1cabe4d68a37\" (UID: \"66c6a462-5b00-4390-a172-1cabe4d68a37\") " Dec 06 09:02:46 crc kubenswrapper[4895]: I1206 09:02:45.674625 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk8mp\" (UniqueName: \"kubernetes.io/projected/630be3a5-6987-49ca-85a2-025e93a1ae43-kube-api-access-hk8mp\") pod \"630be3a5-6987-49ca-85a2-025e93a1ae43\" (UID: \"630be3a5-6987-49ca-85a2-025e93a1ae43\") " Dec 06 09:02:46 crc kubenswrapper[4895]: I1206 09:02:45.674699 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c6a462-5b00-4390-a172-1cabe4d68a37-operator-scripts\") pod \"66c6a462-5b00-4390-a172-1cabe4d68a37\" (UID: \"66c6a462-5b00-4390-a172-1cabe4d68a37\") " Dec 06 09:02:46 crc kubenswrapper[4895]: I1206 09:02:45.674811 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/630be3a5-6987-49ca-85a2-025e93a1ae43-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "630be3a5-6987-49ca-85a2-025e93a1ae43" (UID: "630be3a5-6987-49ca-85a2-025e93a1ae43"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:02:46 crc kubenswrapper[4895]: I1206 09:02:45.675090 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/630be3a5-6987-49ca-85a2-025e93a1ae43-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:02:46 crc kubenswrapper[4895]: I1206 09:02:45.675618 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66c6a462-5b00-4390-a172-1cabe4d68a37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66c6a462-5b00-4390-a172-1cabe4d68a37" (UID: "66c6a462-5b00-4390-a172-1cabe4d68a37"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:02:46 crc kubenswrapper[4895]: I1206 09:02:45.679770 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/630be3a5-6987-49ca-85a2-025e93a1ae43-kube-api-access-hk8mp" (OuterVolumeSpecName: "kube-api-access-hk8mp") pod "630be3a5-6987-49ca-85a2-025e93a1ae43" (UID: "630be3a5-6987-49ca-85a2-025e93a1ae43"). InnerVolumeSpecName "kube-api-access-hk8mp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:02:46 crc kubenswrapper[4895]: I1206 09:02:45.687806 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66c6a462-5b00-4390-a172-1cabe4d68a37-kube-api-access-s5896" (OuterVolumeSpecName: "kube-api-access-s5896") pod "66c6a462-5b00-4390-a172-1cabe4d68a37" (UID: "66c6a462-5b00-4390-a172-1cabe4d68a37"). InnerVolumeSpecName "kube-api-access-s5896". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:02:46 crc kubenswrapper[4895]: I1206 09:02:45.777501 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5896\" (UniqueName: \"kubernetes.io/projected/66c6a462-5b00-4390-a172-1cabe4d68a37-kube-api-access-s5896\") on node \"crc\" DevicePath \"\"" Dec 06 09:02:46 crc kubenswrapper[4895]: I1206 09:02:45.777539 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk8mp\" (UniqueName: \"kubernetes.io/projected/630be3a5-6987-49ca-85a2-025e93a1ae43-kube-api-access-hk8mp\") on node \"crc\" DevicePath \"\"" Dec 06 09:02:46 crc kubenswrapper[4895]: I1206 09:02:45.777553 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c6a462-5b00-4390-a172-1cabe4d68a37-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:02:46 crc kubenswrapper[4895]: I1206 09:02:46.158869 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2752-account-create-update-hcfbv" event={"ID":"630be3a5-6987-49ca-85a2-025e93a1ae43","Type":"ContainerDied","Data":"cd89d49fbacfbdeb72d2fc429fda8be86c42a0ae61c6df920a52053923da3439"} Dec 06 09:02:46 crc kubenswrapper[4895]: I1206 09:02:46.159323 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd89d49fbacfbdeb72d2fc429fda8be86c42a0ae61c6df920a52053923da3439" Dec 06 09:02:46 crc kubenswrapper[4895]: I1206 09:02:46.158912 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2752-account-create-update-hcfbv" Dec 06 09:02:46 crc kubenswrapper[4895]: I1206 09:02:46.160421 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-k9fbb" event={"ID":"66c6a462-5b00-4390-a172-1cabe4d68a37","Type":"ContainerDied","Data":"6ccd735de38f3be00566637d71db931897ee9c08ef2ed7358ce3a86a0a601a5d"} Dec 06 09:02:46 crc kubenswrapper[4895]: I1206 09:02:46.160462 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-k9fbb" Dec 06 09:02:46 crc kubenswrapper[4895]: I1206 09:02:46.160501 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ccd735de38f3be00566637d71db931897ee9c08ef2ed7358ce3a86a0a601a5d" Dec 06 09:02:47 crc kubenswrapper[4895]: I1206 09:02:47.548024 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-42mmj"] Dec 06 09:02:47 crc kubenswrapper[4895]: E1206 09:02:47.548870 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66c6a462-5b00-4390-a172-1cabe4d68a37" containerName="mariadb-database-create" Dec 06 09:02:47 crc kubenswrapper[4895]: I1206 09:02:47.548887 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="66c6a462-5b00-4390-a172-1cabe4d68a37" containerName="mariadb-database-create" Dec 06 09:02:47 crc kubenswrapper[4895]: E1206 09:02:47.548921 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="630be3a5-6987-49ca-85a2-025e93a1ae43" containerName="mariadb-account-create-update" Dec 06 09:02:47 crc kubenswrapper[4895]: I1206 09:02:47.548928 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="630be3a5-6987-49ca-85a2-025e93a1ae43" containerName="mariadb-account-create-update" Dec 06 09:02:47 crc kubenswrapper[4895]: I1206 09:02:47.549283 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="66c6a462-5b00-4390-a172-1cabe4d68a37" containerName="mariadb-database-create" Dec 06 09:02:47 crc kubenswrapper[4895]: I1206 09:02:47.549315 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="630be3a5-6987-49ca-85a2-025e93a1ae43" containerName="mariadb-account-create-update" Dec 06 09:02:47 crc kubenswrapper[4895]: I1206 09:02:47.550211 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-42mmj" Dec 06 09:02:47 crc kubenswrapper[4895]: I1206 09:02:47.553507 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 06 09:02:47 crc kubenswrapper[4895]: I1206 09:02:47.553850 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 06 09:02:47 crc kubenswrapper[4895]: I1206 09:02:47.558032 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xzplj" Dec 06 09:02:47 crc kubenswrapper[4895]: I1206 09:02:47.579622 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-42mmj"] Dec 06 09:02:47 crc kubenswrapper[4895]: I1206 09:02:47.609608 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrd69\" (UniqueName: \"kubernetes.io/projected/ba49232f-1681-4743-8d42-d264a39df476-kube-api-access-mrd69\") pod \"neutron-db-sync-42mmj\" (UID: \"ba49232f-1681-4743-8d42-d264a39df476\") " pod="openstack/neutron-db-sync-42mmj" Dec 06 09:02:47 crc kubenswrapper[4895]: I1206 09:02:47.609878 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba49232f-1681-4743-8d42-d264a39df476-combined-ca-bundle\") pod \"neutron-db-sync-42mmj\" (UID: \"ba49232f-1681-4743-8d42-d264a39df476\") " pod="openstack/neutron-db-sync-42mmj" Dec 06 09:02:47 crc kubenswrapper[4895]: I1206 09:02:47.609987 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba49232f-1681-4743-8d42-d264a39df476-config\") pod \"neutron-db-sync-42mmj\" (UID: \"ba49232f-1681-4743-8d42-d264a39df476\") " pod="openstack/neutron-db-sync-42mmj" Dec 06 09:02:47 crc kubenswrapper[4895]: I1206 09:02:47.712071 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba49232f-1681-4743-8d42-d264a39df476-combined-ca-bundle\") pod \"neutron-db-sync-42mmj\" (UID: \"ba49232f-1681-4743-8d42-d264a39df476\") " pod="openstack/neutron-db-sync-42mmj" Dec 06 09:02:47 crc kubenswrapper[4895]: I1206 09:02:47.712160 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba49232f-1681-4743-8d42-d264a39df476-config\") pod \"neutron-db-sync-42mmj\" (UID: \"ba49232f-1681-4743-8d42-d264a39df476\") " pod="openstack/neutron-db-sync-42mmj" Dec 06 09:02:47 crc kubenswrapper[4895]: I1206 09:02:47.712263 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrd69\" (UniqueName: \"kubernetes.io/projected/ba49232f-1681-4743-8d42-d264a39df476-kube-api-access-mrd69\") pod \"neutron-db-sync-42mmj\" (UID: \"ba49232f-1681-4743-8d42-d264a39df476\") " pod="openstack/neutron-db-sync-42mmj" Dec 06 09:02:47 crc kubenswrapper[4895]: I1206 09:02:47.719680 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba49232f-1681-4743-8d42-d264a39df476-combined-ca-bundle\") pod \"neutron-db-sync-42mmj\" (UID: \"ba49232f-1681-4743-8d42-d264a39df476\") " pod="openstack/neutron-db-sync-42mmj" Dec 06 09:02:47 crc kubenswrapper[4895]: I1206 09:02:47.720253 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/ba49232f-1681-4743-8d42-d264a39df476-config\") pod \"neutron-db-sync-42mmj\" (UID: \"ba49232f-1681-4743-8d42-d264a39df476\") " pod="openstack/neutron-db-sync-42mmj" Dec 06 09:02:47 crc kubenswrapper[4895]: I1206 09:02:47.732986 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrd69\" (UniqueName: \"kubernetes.io/projected/ba49232f-1681-4743-8d42-d264a39df476-kube-api-access-mrd69\") pod \"neutron-db-sync-42mmj\" (UID: \"ba49232f-1681-4743-8d42-d264a39df476\") " pod="openstack/neutron-db-sync-42mmj" Dec 06 09:02:47 crc kubenswrapper[4895]: I1206 09:02:47.916253 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-42mmj" Dec 06 09:02:48 crc kubenswrapper[4895]: I1206 09:02:48.060641 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 09:02:48 crc kubenswrapper[4895]: E1206 09:02:48.062946 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:02:48 crc kubenswrapper[4895]: I1206 09:02:48.363666 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-42mmj"] Dec 06 09:02:49 crc kubenswrapper[4895]: I1206 09:02:49.187249 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-42mmj" event={"ID":"ba49232f-1681-4743-8d42-d264a39df476","Type":"ContainerStarted","Data":"e959fcfefc47930b241d68d8923f20ec3c09b8e0d6f6805ca8e8988710941866"} Dec 06 09:02:49 crc kubenswrapper[4895]: I1206 09:02:49.187705 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-42mmj" event={"ID":"ba49232f-1681-4743-8d42-d264a39df476","Type":"ContainerStarted","Data":"03abd5a76d03af11174694e93e60d9c1466a54b5d2712d9ac793d37901a16437"} Dec 06 09:02:49 crc kubenswrapper[4895]: I1206 09:02:49.241889 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-42mmj" podStartSLOduration=2.241869137 podStartE2EDuration="2.241869137s" podCreationTimestamp="2025-12-06 09:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:02:49.239560266 +0000 UTC m=+7531.640949156" watchObservedRunningTime="2025-12-06 09:02:49.241869137 +0000 UTC m=+7531.643258007" Dec 06 09:02:53 crc kubenswrapper[4895]: I1206 09:02:53.224604 4895 generic.go:334] "Generic (PLEG): container finished" podID="ba49232f-1681-4743-8d42-d264a39df476" containerID="e959fcfefc47930b241d68d8923f20ec3c09b8e0d6f6805ca8e8988710941866" exitCode=0 Dec 06 09:02:53 crc kubenswrapper[4895]: I1206 09:02:53.224705 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-42mmj" event={"ID":"ba49232f-1681-4743-8d42-d264a39df476","Type":"ContainerDied","Data":"e959fcfefc47930b241d68d8923f20ec3c09b8e0d6f6805ca8e8988710941866"} Dec 06 09:02:54 crc kubenswrapper[4895]: I1206 09:02:54.562647 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-42mmj" Dec 06 09:02:54 crc kubenswrapper[4895]: I1206 09:02:54.631887 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba49232f-1681-4743-8d42-d264a39df476-combined-ca-bundle\") pod \"ba49232f-1681-4743-8d42-d264a39df476\" (UID: \"ba49232f-1681-4743-8d42-d264a39df476\") " Dec 06 09:02:54 crc kubenswrapper[4895]: I1206 09:02:54.631945 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrd69\" (UniqueName: \"kubernetes.io/projected/ba49232f-1681-4743-8d42-d264a39df476-kube-api-access-mrd69\") pod \"ba49232f-1681-4743-8d42-d264a39df476\" (UID: \"ba49232f-1681-4743-8d42-d264a39df476\") " Dec 06 09:02:54 crc kubenswrapper[4895]: I1206 09:02:54.632116 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba49232f-1681-4743-8d42-d264a39df476-config\") pod \"ba49232f-1681-4743-8d42-d264a39df476\" (UID: \"ba49232f-1681-4743-8d42-d264a39df476\") " Dec 06 09:02:54 crc kubenswrapper[4895]: I1206 09:02:54.637315 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba49232f-1681-4743-8d42-d264a39df476-kube-api-access-mrd69" (OuterVolumeSpecName: "kube-api-access-mrd69") pod "ba49232f-1681-4743-8d42-d264a39df476" (UID: "ba49232f-1681-4743-8d42-d264a39df476"). InnerVolumeSpecName "kube-api-access-mrd69". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:02:54 crc kubenswrapper[4895]: I1206 09:02:54.661265 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba49232f-1681-4743-8d42-d264a39df476-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba49232f-1681-4743-8d42-d264a39df476" (UID: "ba49232f-1681-4743-8d42-d264a39df476"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:02:54 crc kubenswrapper[4895]: I1206 09:02:54.662995 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba49232f-1681-4743-8d42-d264a39df476-config" (OuterVolumeSpecName: "config") pod "ba49232f-1681-4743-8d42-d264a39df476" (UID: "ba49232f-1681-4743-8d42-d264a39df476"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:02:54 crc kubenswrapper[4895]: I1206 09:02:54.734065 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba49232f-1681-4743-8d42-d264a39df476-config\") on node \"crc\" DevicePath \"\"" Dec 06 09:02:54 crc kubenswrapper[4895]: I1206 09:02:54.734368 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba49232f-1681-4743-8d42-d264a39df476-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:02:54 crc kubenswrapper[4895]: I1206 09:02:54.734458 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrd69\" (UniqueName: \"kubernetes.io/projected/ba49232f-1681-4743-8d42-d264a39df476-kube-api-access-mrd69\") on node \"crc\" DevicePath \"\"" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.246277 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-42mmj" event={"ID":"ba49232f-1681-4743-8d42-d264a39df476","Type":"ContainerDied","Data":"03abd5a76d03af11174694e93e60d9c1466a54b5d2712d9ac793d37901a16437"} Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.246587 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-42mmj" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.246618 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03abd5a76d03af11174694e93e60d9c1466a54b5d2712d9ac793d37901a16437" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.510712 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c55bc4bd7-m5ng5"] Dec 06 09:02:55 crc kubenswrapper[4895]: E1206 09:02:55.511360 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba49232f-1681-4743-8d42-d264a39df476" containerName="neutron-db-sync" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.511381 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba49232f-1681-4743-8d42-d264a39df476" containerName="neutron-db-sync" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.511587 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba49232f-1681-4743-8d42-d264a39df476" containerName="neutron-db-sync" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.512541 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.523745 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c55bc4bd7-m5ng5"] Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.550349 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-dns-svc\") pod \"dnsmasq-dns-5c55bc4bd7-m5ng5\" (UID: \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\") " pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.550510 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-ovsdbserver-sb\") pod \"dnsmasq-dns-5c55bc4bd7-m5ng5\" (UID: \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\") " pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.550687 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-ovsdbserver-nb\") pod \"dnsmasq-dns-5c55bc4bd7-m5ng5\" (UID: \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\") " pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.550754 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr48j\" (UniqueName: \"kubernetes.io/projected/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-kube-api-access-lr48j\") pod \"dnsmasq-dns-5c55bc4bd7-m5ng5\" (UID: \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\") " pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.550855 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-config\") pod \"dnsmasq-dns-5c55bc4bd7-m5ng5\" (UID: \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\") " pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.573099 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-74b4d66fbf-fvwqh"] Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.575578 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74b4d66fbf-fvwqh" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.578901 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.578973 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xzplj" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.581061 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.595252 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74b4d66fbf-fvwqh"] Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.652561 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-config\") pod \"dnsmasq-dns-5c55bc4bd7-m5ng5\" (UID: \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\") " pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.652633 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-dns-svc\") pod \"dnsmasq-dns-5c55bc4bd7-m5ng5\" (UID: \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\") " pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.652664 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/89893123-c233-4bec-9663-74645c53e8a8-httpd-config\") pod \"neutron-74b4d66fbf-fvwqh\" (UID: \"89893123-c233-4bec-9663-74645c53e8a8\") " pod="openstack/neutron-74b4d66fbf-fvwqh" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.652685 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/89893123-c233-4bec-9663-74645c53e8a8-config\") pod \"neutron-74b4d66fbf-fvwqh\" (UID: \"89893123-c233-4bec-9663-74645c53e8a8\") " pod="openstack/neutron-74b4d66fbf-fvwqh" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.652716 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkt2q\" (UniqueName: \"kubernetes.io/projected/89893123-c233-4bec-9663-74645c53e8a8-kube-api-access-nkt2q\") pod \"neutron-74b4d66fbf-fvwqh\" (UID: \"89893123-c233-4bec-9663-74645c53e8a8\") " pod="openstack/neutron-74b4d66fbf-fvwqh" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.652739 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-ovsdbserver-sb\") pod \"dnsmasq-dns-5c55bc4bd7-m5ng5\" (UID: \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\") " pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.652788 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89893123-c233-4bec-9663-74645c53e8a8-combined-ca-bundle\") pod \"neutron-74b4d66fbf-fvwqh\" (UID: \"89893123-c233-4bec-9663-74645c53e8a8\") " pod="openstack/neutron-74b4d66fbf-fvwqh" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.652811 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-ovsdbserver-nb\") pod \"dnsmasq-dns-5c55bc4bd7-m5ng5\" (UID: \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\") " pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.652828 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr48j\" (UniqueName: \"kubernetes.io/projected/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-kube-api-access-lr48j\") pod \"dnsmasq-dns-5c55bc4bd7-m5ng5\" (UID: \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\") " pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.653593 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-config\") pod \"dnsmasq-dns-5c55bc4bd7-m5ng5\" (UID: \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\") " pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.653826 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-dns-svc\") pod \"dnsmasq-dns-5c55bc4bd7-m5ng5\" (UID: \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\") " pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.654272 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-ovsdbserver-sb\") pod \"dnsmasq-dns-5c55bc4bd7-m5ng5\" (UID: \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\") " pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.654446 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-ovsdbserver-nb\") pod \"dnsmasq-dns-5c55bc4bd7-m5ng5\" (UID: \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\") " pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.672584 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr48j\" (UniqueName: \"kubernetes.io/projected/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-kube-api-access-lr48j\") pod \"dnsmasq-dns-5c55bc4bd7-m5ng5\" (UID: \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\") " pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.754523 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/89893123-c233-4bec-9663-74645c53e8a8-httpd-config\") pod \"neutron-74b4d66fbf-fvwqh\" (UID: \"89893123-c233-4bec-9663-74645c53e8a8\") " pod="openstack/neutron-74b4d66fbf-fvwqh" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.754587 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/89893123-c233-4bec-9663-74645c53e8a8-config\") pod \"neutron-74b4d66fbf-fvwqh\" (UID: \"89893123-c233-4bec-9663-74645c53e8a8\") " pod="openstack/neutron-74b4d66fbf-fvwqh" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.754625 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkt2q\" (UniqueName: 
\"kubernetes.io/projected/89893123-c233-4bec-9663-74645c53e8a8-kube-api-access-nkt2q\") pod \"neutron-74b4d66fbf-fvwqh\" (UID: \"89893123-c233-4bec-9663-74645c53e8a8\") " pod="openstack/neutron-74b4d66fbf-fvwqh" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.754692 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89893123-c233-4bec-9663-74645c53e8a8-combined-ca-bundle\") pod \"neutron-74b4d66fbf-fvwqh\" (UID: \"89893123-c233-4bec-9663-74645c53e8a8\") " pod="openstack/neutron-74b4d66fbf-fvwqh" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.757908 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/89893123-c233-4bec-9663-74645c53e8a8-httpd-config\") pod \"neutron-74b4d66fbf-fvwqh\" (UID: \"89893123-c233-4bec-9663-74645c53e8a8\") " pod="openstack/neutron-74b4d66fbf-fvwqh" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.758518 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89893123-c233-4bec-9663-74645c53e8a8-combined-ca-bundle\") pod \"neutron-74b4d66fbf-fvwqh\" (UID: \"89893123-c233-4bec-9663-74645c53e8a8\") " pod="openstack/neutron-74b4d66fbf-fvwqh" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.760816 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/89893123-c233-4bec-9663-74645c53e8a8-config\") pod \"neutron-74b4d66fbf-fvwqh\" (UID: \"89893123-c233-4bec-9663-74645c53e8a8\") " pod="openstack/neutron-74b4d66fbf-fvwqh" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.770024 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkt2q\" (UniqueName: \"kubernetes.io/projected/89893123-c233-4bec-9663-74645c53e8a8-kube-api-access-nkt2q\") pod \"neutron-74b4d66fbf-fvwqh\" (UID: \"89893123-c233-4bec-9663-74645c53e8a8\") " pod="openstack/neutron-74b4d66fbf-fvwqh" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.832651 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:02:55 crc kubenswrapper[4895]: I1206 09:02:55.897415 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74b4d66fbf-fvwqh" Dec 06 09:02:56 crc kubenswrapper[4895]: I1206 09:02:56.301956 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c55bc4bd7-m5ng5"] Dec 06 09:02:56 crc kubenswrapper[4895]: I1206 09:02:56.501050 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74b4d66fbf-fvwqh"] Dec 06 09:02:57 crc kubenswrapper[4895]: I1206 09:02:57.273066 4895 generic.go:334] "Generic (PLEG): container finished" podID="3fece91d-36f5-46c4-a2d1-17820a9e9bd6" containerID="fc3276daef2f3fc664efc25e2c37bc01e63af61d9c184ee72e08a9653326434d" exitCode=0 Dec 06 09:02:57 crc kubenswrapper[4895]: I1206 09:02:57.273134 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" event={"ID":"3fece91d-36f5-46c4-a2d1-17820a9e9bd6","Type":"ContainerDied","Data":"fc3276daef2f3fc664efc25e2c37bc01e63af61d9c184ee72e08a9653326434d"} Dec 06 09:02:57 crc kubenswrapper[4895]: I1206 09:02:57.273602 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" event={"ID":"3fece91d-36f5-46c4-a2d1-17820a9e9bd6","Type":"ContainerStarted","Data":"83a390d9a67e9a1fc2d18f0428328585c1da0566f0e5968e14464458d5ffce05"} Dec 06 09:02:57 crc kubenswrapper[4895]: I1206 09:02:57.276379 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74b4d66fbf-fvwqh" event={"ID":"89893123-c233-4bec-9663-74645c53e8a8","Type":"ContainerStarted","Data":"9d5b3ce349cc23134bc647c627df5b849052e5802141b3d078e68ff23721de5f"} Dec 06 09:02:57 crc kubenswrapper[4895]: I1206 09:02:57.276440 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74b4d66fbf-fvwqh" event={"ID":"89893123-c233-4bec-9663-74645c53e8a8","Type":"ContainerStarted","Data":"542bf4ce2e9c3eb8ea4acd9ccb8e9601a7c44f37fde9b5b72eb83de1feb060a1"} Dec 06 09:02:57 crc kubenswrapper[4895]: I1206 09:02:57.276456 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74b4d66fbf-fvwqh" event={"ID":"89893123-c233-4bec-9663-74645c53e8a8","Type":"ContainerStarted","Data":"c2c7d25c459d0e5296131fea21158c7c8c9cfeeb255eac26d0b36b2474becb5a"} Dec 06 09:02:57 crc kubenswrapper[4895]: I1206 09:02:57.276587 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-74b4d66fbf-fvwqh" Dec 06 09:02:58 crc kubenswrapper[4895]: I1206 09:02:58.287396 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" event={"ID":"3fece91d-36f5-46c4-a2d1-17820a9e9bd6","Type":"ContainerStarted","Data":"0ec7025e532f227a5a66bef763bd7e8122c8bcb11cc3cba81eecbda21db1f2cd"} Dec 06 09:02:58 crc kubenswrapper[4895]: I1206 09:02:58.288436 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:02:58 crc kubenswrapper[4895]: I1206 09:02:58.306518 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-74b4d66fbf-fvwqh" podStartSLOduration=3.306500191 podStartE2EDuration="3.306500191s" podCreationTimestamp="2025-12-06 09:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:02:57.333099669 +0000 UTC m=+7539.734488529" watchObservedRunningTime="2025-12-06 09:02:58.306500191 +0000 UTC m=+7540.707889061" Dec 06 09:02:58 crc kubenswrapper[4895]: I1206 09:02:58.309556 4895 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" podStartSLOduration=3.309546952 podStartE2EDuration="3.309546952s" podCreationTimestamp="2025-12-06 09:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:02:58.300816058 +0000 UTC m=+7540.702204938" watchObservedRunningTime="2025-12-06 09:02:58.309546952 +0000 UTC m=+7540.710935822" Dec 06 09:02:59 crc kubenswrapper[4895]: I1206 09:02:59.051206 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 09:02:59 crc kubenswrapper[4895]: E1206 09:02:59.051720 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:03:05 crc kubenswrapper[4895]: I1206 09:03:05.834821 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:03:05 crc kubenswrapper[4895]: I1206 09:03:05.918706 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84687cd4ff-2xn2c"] Dec 06 09:03:05 crc kubenswrapper[4895]: I1206 09:03:05.919053 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" podUID="5e1dd916-647a-488e-91e4-1cae7d9a12e8" containerName="dnsmasq-dns" containerID="cri-o://8cfb19f9f40d09efa8bff5f801e8893e40f548f6fe3935e0ba288f65b9627738" gracePeriod=10 Dec 06 09:03:06 crc kubenswrapper[4895]: I1206 09:03:06.371306 4895 generic.go:334] "Generic (PLEG): container finished" podID="5e1dd916-647a-488e-91e4-1cae7d9a12e8" containerID="8cfb19f9f40d09efa8bff5f801e8893e40f548f6fe3935e0ba288f65b9627738" exitCode=0 Dec 06 09:03:06 crc kubenswrapper[4895]: I1206 09:03:06.371405 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" event={"ID":"5e1dd916-647a-488e-91e4-1cae7d9a12e8","Type":"ContainerDied","Data":"8cfb19f9f40d09efa8bff5f801e8893e40f548f6fe3935e0ba288f65b9627738"} Dec 06 09:03:06 crc kubenswrapper[4895]: I1206 09:03:06.454933 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" Dec 06 09:03:06 crc kubenswrapper[4895]: I1206 09:03:06.575313 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-dns-svc\") pod \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " Dec 06 09:03:06 crc kubenswrapper[4895]: I1206 09:03:06.575505 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-ovsdbserver-nb\") pod \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " Dec 06 09:03:06 crc kubenswrapper[4895]: I1206 09:03:06.575545 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n72b\" (UniqueName: \"kubernetes.io/projected/5e1dd916-647a-488e-91e4-1cae7d9a12e8-kube-api-access-9n72b\") pod \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " Dec 06 09:03:06 crc kubenswrapper[4895]: I1206 09:03:06.575577 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-config\") pod \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " Dec 06 09:03:06 crc kubenswrapper[4895]: I1206 09:03:06.575673 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-ovsdbserver-sb\") pod \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " Dec 06 09:03:06 crc kubenswrapper[4895]: I1206 09:03:06.586336 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e1dd916-647a-488e-91e4-1cae7d9a12e8-kube-api-access-9n72b" (OuterVolumeSpecName: "kube-api-access-9n72b") pod "5e1dd916-647a-488e-91e4-1cae7d9a12e8" (UID: "5e1dd916-647a-488e-91e4-1cae7d9a12e8"). InnerVolumeSpecName "kube-api-access-9n72b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:03:06 crc kubenswrapper[4895]: I1206 09:03:06.616253 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5e1dd916-647a-488e-91e4-1cae7d9a12e8" (UID: "5e1dd916-647a-488e-91e4-1cae7d9a12e8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:03:06 crc kubenswrapper[4895]: I1206 09:03:06.629321 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5e1dd916-647a-488e-91e4-1cae7d9a12e8" (UID: "5e1dd916-647a-488e-91e4-1cae7d9a12e8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:03:06 crc kubenswrapper[4895]: I1206 09:03:06.677931 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 09:03:06 crc kubenswrapper[4895]: I1206 09:03:06.677977 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n72b\" (UniqueName: \"kubernetes.io/projected/5e1dd916-647a-488e-91e4-1cae7d9a12e8-kube-api-access-9n72b\") on node \"crc\" DevicePath \"\"" Dec 06 09:03:06 crc kubenswrapper[4895]: I1206 09:03:06.678012 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 09:03:07 crc kubenswrapper[4895]: I1206 09:03:07.385136 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" event={"ID":"5e1dd916-647a-488e-91e4-1cae7d9a12e8","Type":"ContainerDied","Data":"b694c93b077b07b62f86a597c9dc43c159e3ad30c2b63b42a098475629df3af8"} Dec 06 09:03:07 crc kubenswrapper[4895]: I1206 09:03:07.385184 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84687cd4ff-2xn2c" Dec 06 09:03:07 crc kubenswrapper[4895]: I1206 09:03:07.385200 4895 scope.go:117] "RemoveContainer" containerID="8cfb19f9f40d09efa8bff5f801e8893e40f548f6fe3935e0ba288f65b9627738" Dec 06 09:03:07 crc kubenswrapper[4895]: I1206 09:03:07.416255 4895 scope.go:117] "RemoveContainer" containerID="21a762a97f4c48f9d9de37be99734d8a2cca79d2089a77057de321049cf1ba64" Dec 06 09:03:07 crc kubenswrapper[4895]: I1206 09:03:07.794126 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="1b3959bd-4eca-4e06-b552-7217aa74f883" containerName="galera" probeResult="failure" output="command timed out" Dec 06 09:03:07 crc kubenswrapper[4895]: I1206 09:03:07.794888 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="1b3959bd-4eca-4e06-b552-7217aa74f883" containerName="galera" probeResult="failure" output="command timed out" Dec 06 09:03:07 crc kubenswrapper[4895]: I1206 09:03:07.965729 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qk4xg" podUID="5417e33f-dead-459e-933b-58ad3ae2da48" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 09:03:07 crc kubenswrapper[4895]: I1206 09:03:07.965831 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qk4xg" podUID="5417e33f-dead-459e-933b-58ad3ae2da48" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 09:03:08 crc kubenswrapper[4895]: E1206 09:03:08.051817 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-config podName:5e1dd916-647a-488e-91e4-1cae7d9a12e8 nodeName:}" failed. No retries permitted until 2025-12-06 09:03:08.55178153 +0000 UTC m=+7550.953170420 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-config") pod "5e1dd916-647a-488e-91e4-1cae7d9a12e8" (UID: "5e1dd916-647a-488e-91e4-1cae7d9a12e8") : error deleting /var/lib/kubelet/pods/5e1dd916-647a-488e-91e4-1cae7d9a12e8/volume-subpaths: remove /var/lib/kubelet/pods/5e1dd916-647a-488e-91e4-1cae7d9a12e8/volume-subpaths: no such file or directory Dec 06 09:03:08 crc kubenswrapper[4895]: I1206 09:03:08.052567 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5e1dd916-647a-488e-91e4-1cae7d9a12e8" (UID: "5e1dd916-647a-488e-91e4-1cae7d9a12e8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:03:08 crc kubenswrapper[4895]: I1206 09:03:08.103046 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 09:03:08 crc kubenswrapper[4895]: I1206 09:03:08.610566 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-config\") pod \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\" (UID: \"5e1dd916-647a-488e-91e4-1cae7d9a12e8\") " Dec 06 09:03:08 crc kubenswrapper[4895]: I1206 09:03:08.611019 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-config" (OuterVolumeSpecName: "config") pod "5e1dd916-647a-488e-91e4-1cae7d9a12e8" (UID: "5e1dd916-647a-488e-91e4-1cae7d9a12e8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:03:08 crc kubenswrapper[4895]: I1206 09:03:08.611435 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e1dd916-647a-488e-91e4-1cae7d9a12e8-config\") on node \"crc\" DevicePath \"\"" Dec 06 09:03:08 crc kubenswrapper[4895]: I1206 09:03:08.920633 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84687cd4ff-2xn2c"] Dec 06 09:03:08 crc kubenswrapper[4895]: I1206 09:03:08.929458 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84687cd4ff-2xn2c"] Dec 06 09:03:10 crc kubenswrapper[4895]: I1206 09:03:10.063604 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e1dd916-647a-488e-91e4-1cae7d9a12e8" path="/var/lib/kubelet/pods/5e1dd916-647a-488e-91e4-1cae7d9a12e8/volumes" Dec 06 09:03:13 crc kubenswrapper[4895]: I1206 09:03:13.050339 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 09:03:13 crc kubenswrapper[4895]: E1206 09:03:13.050955 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:03:25 crc kubenswrapper[4895]: I1206 09:03:25.050398 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 09:03:25 crc kubenswrapper[4895]: E1206 09:03:25.051344 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:03:25 crc kubenswrapper[4895]: I1206 09:03:25.918066 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-74b4d66fbf-fvwqh" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.093101 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-8mbl9"] Dec 06 09:03:33 crc kubenswrapper[4895]: E1206 09:03:33.093687 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e1dd916-647a-488e-91e4-1cae7d9a12e8" containerName="dnsmasq-dns" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.093699 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e1dd916-647a-488e-91e4-1cae7d9a12e8" containerName="dnsmasq-dns" Dec 06 09:03:33 crc kubenswrapper[4895]: E1206 09:03:33.093717 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e1dd916-647a-488e-91e4-1cae7d9a12e8" containerName="init" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.093723 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e1dd916-647a-488e-91e4-1cae7d9a12e8" containerName="init" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.093909 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e1dd916-647a-488e-91e4-1cae7d9a12e8" containerName="dnsmasq-dns" Dec 06 09:03:33 crc 
kubenswrapper[4895]: I1206 09:03:33.094490 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8mbl9" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.107545 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8mbl9"] Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.183018 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-75f1-account-create-update-zz5tm"] Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.184557 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-75f1-account-create-update-zz5tm" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.186280 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.192856 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-75f1-account-create-update-zz5tm"] Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.243810 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52454b44-71f8-41a4-ac75-90812c004863-operator-scripts\") pod \"glance-db-create-8mbl9\" (UID: \"52454b44-71f8-41a4-ac75-90812c004863\") " pod="openstack/glance-db-create-8mbl9" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.244018 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwjk2\" (UniqueName: \"kubernetes.io/projected/52454b44-71f8-41a4-ac75-90812c004863-kube-api-access-pwjk2\") pod \"glance-db-create-8mbl9\" (UID: \"52454b44-71f8-41a4-ac75-90812c004863\") " pod="openstack/glance-db-create-8mbl9" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.344981 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52454b44-71f8-41a4-ac75-90812c004863-operator-scripts\") pod \"glance-db-create-8mbl9\" (UID: \"52454b44-71f8-41a4-ac75-90812c004863\") " pod="openstack/glance-db-create-8mbl9" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.345070 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq9ts\" (UniqueName: \"kubernetes.io/projected/5be403b2-d441-469e-9b7c-3180922cf7df-kube-api-access-qq9ts\") pod \"glance-75f1-account-create-update-zz5tm\" (UID: \"5be403b2-d441-469e-9b7c-3180922cf7df\") " pod="openstack/glance-75f1-account-create-update-zz5tm" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.345115 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwjk2\" (UniqueName: \"kubernetes.io/projected/52454b44-71f8-41a4-ac75-90812c004863-kube-api-access-pwjk2\") pod \"glance-db-create-8mbl9\" (UID: \"52454b44-71f8-41a4-ac75-90812c004863\") " pod="openstack/glance-db-create-8mbl9" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.345134 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5be403b2-d441-469e-9b7c-3180922cf7df-operator-scripts\") pod \"glance-75f1-account-create-update-zz5tm\" (UID: \"5be403b2-d441-469e-9b7c-3180922cf7df\") " pod="openstack/glance-75f1-account-create-update-zz5tm" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.345779 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52454b44-71f8-41a4-ac75-90812c004863-operator-scripts\") pod \"glance-db-create-8mbl9\" (UID: \"52454b44-71f8-41a4-ac75-90812c004863\") " pod="openstack/glance-db-create-8mbl9" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.372932 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwjk2\" (UniqueName: \"kubernetes.io/projected/52454b44-71f8-41a4-ac75-90812c004863-kube-api-access-pwjk2\") pod \"glance-db-create-8mbl9\" (UID: \"52454b44-71f8-41a4-ac75-90812c004863\") " pod="openstack/glance-db-create-8mbl9" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.411179 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8mbl9" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.446574 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq9ts\" (UniqueName: \"kubernetes.io/projected/5be403b2-d441-469e-9b7c-3180922cf7df-kube-api-access-qq9ts\") pod \"glance-75f1-account-create-update-zz5tm\" (UID: \"5be403b2-d441-469e-9b7c-3180922cf7df\") " pod="openstack/glance-75f1-account-create-update-zz5tm" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.446639 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5be403b2-d441-469e-9b7c-3180922cf7df-operator-scripts\") pod \"glance-75f1-account-create-update-zz5tm\" (UID: \"5be403b2-d441-469e-9b7c-3180922cf7df\") " pod="openstack/glance-75f1-account-create-update-zz5tm" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.447363 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5be403b2-d441-469e-9b7c-3180922cf7df-operator-scripts\") pod \"glance-75f1-account-create-update-zz5tm\" (UID: \"5be403b2-d441-469e-9b7c-3180922cf7df\") " pod="openstack/glance-75f1-account-create-update-zz5tm" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.466462 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq9ts\" (UniqueName: \"kubernetes.io/projected/5be403b2-d441-469e-9b7c-3180922cf7df-kube-api-access-qq9ts\") pod \"glance-75f1-account-create-update-zz5tm\" (UID: \"5be403b2-d441-469e-9b7c-3180922cf7df\") " pod="openstack/glance-75f1-account-create-update-zz5tm" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.507982 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-75f1-account-create-update-zz5tm" Dec 06 09:03:33 crc kubenswrapper[4895]: I1206 09:03:33.881660 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8mbl9"] Dec 06 09:03:34 crc kubenswrapper[4895]: I1206 09:03:34.063772 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-75f1-account-create-update-zz5tm"] Dec 06 09:03:34 crc kubenswrapper[4895]: W1206 09:03:34.074243 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5be403b2_d441_469e_9b7c_3180922cf7df.slice/crio-ff181de2e5db38ec89924ddf90ba36004ee9dc9ce83bc3d25632aad7b8832832 WatchSource:0}: Error finding container ff181de2e5db38ec89924ddf90ba36004ee9dc9ce83bc3d25632aad7b8832832: Status 404 returned error can't find the container with id ff181de2e5db38ec89924ddf90ba36004ee9dc9ce83bc3d25632aad7b8832832 Dec 06 09:03:34 crc kubenswrapper[4895]: I1206 09:03:34.640670 4895 generic.go:334] "Generic (PLEG): container finished" podID="5be403b2-d441-469e-9b7c-3180922cf7df" containerID="efd4a6f1ede2cf836e3b4205d1ae6124b3c4bf7a6e49d7d2c407a1a4b7e35da4" exitCode=0 Dec 06 09:03:34 crc kubenswrapper[4895]: I1206 09:03:34.641129 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-75f1-account-create-update-zz5tm" event={"ID":"5be403b2-d441-469e-9b7c-3180922cf7df","Type":"ContainerDied","Data":"efd4a6f1ede2cf836e3b4205d1ae6124b3c4bf7a6e49d7d2c407a1a4b7e35da4"} Dec 06 09:03:34 crc kubenswrapper[4895]: I1206 09:03:34.641161 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-75f1-account-create-update-zz5tm" event={"ID":"5be403b2-d441-469e-9b7c-3180922cf7df","Type":"ContainerStarted","Data":"ff181de2e5db38ec89924ddf90ba36004ee9dc9ce83bc3d25632aad7b8832832"} Dec 06 09:03:34 crc kubenswrapper[4895]: I1206 09:03:34.644638 4895 generic.go:334] "Generic (PLEG): container finished" podID="52454b44-71f8-41a4-ac75-90812c004863" containerID="9b23613cd560f5a8925dc5a93974914a500cff1de24176756529f87b7f6401d3" exitCode=0 Dec 06 09:03:34 crc kubenswrapper[4895]: I1206 09:03:34.644668 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8mbl9" event={"ID":"52454b44-71f8-41a4-ac75-90812c004863","Type":"ContainerDied","Data":"9b23613cd560f5a8925dc5a93974914a500cff1de24176756529f87b7f6401d3"} Dec 06 09:03:34 crc kubenswrapper[4895]: I1206 09:03:34.644686 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8mbl9" event={"ID":"52454b44-71f8-41a4-ac75-90812c004863","Type":"ContainerStarted","Data":"503ff774d3bcc7640755108ec62ea439065cd390ce3c7dc81e22ad4c648d3453"} Dec 06 09:03:36 crc kubenswrapper[4895]: I1206 09:03:36.021047 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-75f1-account-create-update-zz5tm" Dec 06 09:03:36 crc kubenswrapper[4895]: I1206 09:03:36.026210 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-8mbl9" Dec 06 09:03:36 crc kubenswrapper[4895]: I1206 09:03:36.190240 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq9ts\" (UniqueName: \"kubernetes.io/projected/5be403b2-d441-469e-9b7c-3180922cf7df-kube-api-access-qq9ts\") pod \"5be403b2-d441-469e-9b7c-3180922cf7df\" (UID: \"5be403b2-d441-469e-9b7c-3180922cf7df\") " Dec 06 09:03:36 crc kubenswrapper[4895]: I1206 09:03:36.190326 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwjk2\" (UniqueName: \"kubernetes.io/projected/52454b44-71f8-41a4-ac75-90812c004863-kube-api-access-pwjk2\") pod \"52454b44-71f8-41a4-ac75-90812c004863\" (UID: \"52454b44-71f8-41a4-ac75-90812c004863\") " Dec 06 09:03:36 crc kubenswrapper[4895]: I1206 09:03:36.190526 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52454b44-71f8-41a4-ac75-90812c004863-operator-scripts\") pod \"52454b44-71f8-41a4-ac75-90812c004863\" (UID: \"52454b44-71f8-41a4-ac75-90812c004863\") " Dec 06 09:03:36 crc kubenswrapper[4895]: I1206 09:03:36.190547 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5be403b2-d441-469e-9b7c-3180922cf7df-operator-scripts\") pod \"5be403b2-d441-469e-9b7c-3180922cf7df\" (UID: \"5be403b2-d441-469e-9b7c-3180922cf7df\") " Dec 06 09:03:36 crc kubenswrapper[4895]: I1206 09:03:36.191796 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52454b44-71f8-41a4-ac75-90812c004863-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52454b44-71f8-41a4-ac75-90812c004863" (UID: "52454b44-71f8-41a4-ac75-90812c004863"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:03:36 crc kubenswrapper[4895]: I1206 09:03:36.192212 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5be403b2-d441-469e-9b7c-3180922cf7df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5be403b2-d441-469e-9b7c-3180922cf7df" (UID: "5be403b2-d441-469e-9b7c-3180922cf7df"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:03:36 crc kubenswrapper[4895]: I1206 09:03:36.197009 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5be403b2-d441-469e-9b7c-3180922cf7df-kube-api-access-qq9ts" (OuterVolumeSpecName: "kube-api-access-qq9ts") pod "5be403b2-d441-469e-9b7c-3180922cf7df" (UID: "5be403b2-d441-469e-9b7c-3180922cf7df"). InnerVolumeSpecName "kube-api-access-qq9ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:03:36 crc kubenswrapper[4895]: I1206 09:03:36.197527 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52454b44-71f8-41a4-ac75-90812c004863-kube-api-access-pwjk2" (OuterVolumeSpecName: "kube-api-access-pwjk2") pod "52454b44-71f8-41a4-ac75-90812c004863" (UID: "52454b44-71f8-41a4-ac75-90812c004863"). InnerVolumeSpecName "kube-api-access-pwjk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:03:36 crc kubenswrapper[4895]: I1206 09:03:36.292225 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52454b44-71f8-41a4-ac75-90812c004863-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:03:36 crc kubenswrapper[4895]: I1206 09:03:36.292263 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5be403b2-d441-469e-9b7c-3180922cf7df-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:03:36 crc kubenswrapper[4895]: I1206 09:03:36.292272 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq9ts\" (UniqueName: \"kubernetes.io/projected/5be403b2-d441-469e-9b7c-3180922cf7df-kube-api-access-qq9ts\") on node \"crc\" DevicePath \"\"" Dec 06 09:03:36 crc kubenswrapper[4895]: I1206 09:03:36.292282 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwjk2\" (UniqueName: \"kubernetes.io/projected/52454b44-71f8-41a4-ac75-90812c004863-kube-api-access-pwjk2\") on node \"crc\" DevicePath \"\"" Dec 06 09:03:36 crc kubenswrapper[4895]: I1206 09:03:36.667332 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8mbl9" event={"ID":"52454b44-71f8-41a4-ac75-90812c004863","Type":"ContainerDied","Data":"503ff774d3bcc7640755108ec62ea439065cd390ce3c7dc81e22ad4c648d3453"} Dec 06 09:03:36 crc kubenswrapper[4895]: I1206 09:03:36.667413 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="503ff774d3bcc7640755108ec62ea439065cd390ce3c7dc81e22ad4c648d3453" Dec 06 09:03:36 crc kubenswrapper[4895]: I1206 09:03:36.667357 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8mbl9" Dec 06 09:03:36 crc kubenswrapper[4895]: I1206 09:03:36.668592 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-75f1-account-create-update-zz5tm" event={"ID":"5be403b2-d441-469e-9b7c-3180922cf7df","Type":"ContainerDied","Data":"ff181de2e5db38ec89924ddf90ba36004ee9dc9ce83bc3d25632aad7b8832832"} Dec 06 09:03:36 crc kubenswrapper[4895]: I1206 09:03:36.668631 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff181de2e5db38ec89924ddf90ba36004ee9dc9ce83bc3d25632aad7b8832832" Dec 06 09:03:36 crc kubenswrapper[4895]: I1206 09:03:36.668638 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-75f1-account-create-update-zz5tm" Dec 06 09:03:37 crc kubenswrapper[4895]: I1206 09:03:37.050867 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 09:03:37 crc kubenswrapper[4895]: I1206 09:03:37.680548 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"6a08bae6a77367b44b45e288465887eadb17c31ec1d0ea904bd0118b481a1ef4"} Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.346820 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-wtfgv"] Dec 06 09:03:38 crc kubenswrapper[4895]: E1206 09:03:38.363032 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be403b2-d441-469e-9b7c-3180922cf7df" containerName="mariadb-account-create-update" Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.363072 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be403b2-d441-469e-9b7c-3180922cf7df" containerName="mariadb-account-create-update" Dec 06 09:03:38 crc kubenswrapper[4895]: E1206 09:03:38.363102 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52454b44-71f8-41a4-ac75-90812c004863" containerName="mariadb-database-create" Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.363111 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="52454b44-71f8-41a4-ac75-90812c004863" containerName="mariadb-database-create" Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.363366 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="52454b44-71f8-41a4-ac75-90812c004863" containerName="mariadb-database-create" Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.363403 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be403b2-d441-469e-9b7c-3180922cf7df" containerName="mariadb-account-create-update" Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.364069 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wtfgv"] Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.364164 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-wtfgv" Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.367965 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8t2wf" Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.368282 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.537494 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-combined-ca-bundle\") pod \"glance-db-sync-wtfgv\" (UID: \"e9d9d645-1f6b-4f06-b0a3-7226d05a0199\") " pod="openstack/glance-db-sync-wtfgv" Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.537542 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqcg7\" (UniqueName: \"kubernetes.io/projected/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-kube-api-access-bqcg7\") pod \"glance-db-sync-wtfgv\" (UID: \"e9d9d645-1f6b-4f06-b0a3-7226d05a0199\") " pod="openstack/glance-db-sync-wtfgv" Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.537704 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-config-data\") pod \"glance-db-sync-wtfgv\" (UID: \"e9d9d645-1f6b-4f06-b0a3-7226d05a0199\") " pod="openstack/glance-db-sync-wtfgv" Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.537757 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-db-sync-config-data\") pod \"glance-db-sync-wtfgv\" (UID: \"e9d9d645-1f6b-4f06-b0a3-7226d05a0199\") " pod="openstack/glance-db-sync-wtfgv" Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.638980 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-combined-ca-bundle\") pod \"glance-db-sync-wtfgv\" (UID: \"e9d9d645-1f6b-4f06-b0a3-7226d05a0199\") " pod="openstack/glance-db-sync-wtfgv" Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.639038 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqcg7\" (UniqueName: \"kubernetes.io/projected/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-kube-api-access-bqcg7\") pod \"glance-db-sync-wtfgv\" (UID: \"e9d9d645-1f6b-4f06-b0a3-7226d05a0199\") " pod="openstack/glance-db-sync-wtfgv" Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.639137 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-config-data\") pod \"glance-db-sync-wtfgv\" (UID: \"e9d9d645-1f6b-4f06-b0a3-7226d05a0199\") " pod="openstack/glance-db-sync-wtfgv" Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.639217 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-db-sync-config-data\") pod \"glance-db-sync-wtfgv\" (UID: \"e9d9d645-1f6b-4f06-b0a3-7226d05a0199\") " pod="openstack/glance-db-sync-wtfgv" Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 
09:03:38.646197 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-combined-ca-bundle\") pod \"glance-db-sync-wtfgv\" (UID: \"e9d9d645-1f6b-4f06-b0a3-7226d05a0199\") " pod="openstack/glance-db-sync-wtfgv" Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.646272 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-config-data\") pod \"glance-db-sync-wtfgv\" (UID: \"e9d9d645-1f6b-4f06-b0a3-7226d05a0199\") " pod="openstack/glance-db-sync-wtfgv" Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.650606 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-db-sync-config-data\") pod \"glance-db-sync-wtfgv\" (UID: \"e9d9d645-1f6b-4f06-b0a3-7226d05a0199\") " pod="openstack/glance-db-sync-wtfgv" Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.667823 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqcg7\" (UniqueName: \"kubernetes.io/projected/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-kube-api-access-bqcg7\") pod \"glance-db-sync-wtfgv\" (UID: \"e9d9d645-1f6b-4f06-b0a3-7226d05a0199\") " pod="openstack/glance-db-sync-wtfgv" Dec 06 09:03:38 crc kubenswrapper[4895]: I1206 09:03:38.689899 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wtfgv" Dec 06 09:03:39 crc kubenswrapper[4895]: I1206 09:03:39.250768 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wtfgv"] Dec 06 09:03:39 crc kubenswrapper[4895]: W1206 09:03:39.252591 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9d9d645_1f6b_4f06_b0a3_7226d05a0199.slice/crio-bed4fa113a3cfee0ef3e3747ec7d54fbaecc6b06132521548bc4fce1a92f7c1f WatchSource:0}: Error finding container bed4fa113a3cfee0ef3e3747ec7d54fbaecc6b06132521548bc4fce1a92f7c1f: Status 404 returned error can't find the container with id bed4fa113a3cfee0ef3e3747ec7d54fbaecc6b06132521548bc4fce1a92f7c1f Dec 06 09:03:39 crc kubenswrapper[4895]: I1206 09:03:39.698960 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wtfgv" event={"ID":"e9d9d645-1f6b-4f06-b0a3-7226d05a0199","Type":"ContainerStarted","Data":"bed4fa113a3cfee0ef3e3747ec7d54fbaecc6b06132521548bc4fce1a92f7c1f"} Dec 06 09:03:56 crc kubenswrapper[4895]: I1206 09:03:56.868577 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wtfgv" event={"ID":"e9d9d645-1f6b-4f06-b0a3-7226d05a0199","Type":"ContainerStarted","Data":"9a1115e734e1a5f186f349de8872cb6bee7f02841eb75120d5e8f3c3797cc168"} Dec 06 09:03:56 crc kubenswrapper[4895]: I1206 09:03:56.894422 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-wtfgv" podStartSLOduration=2.174373397 podStartE2EDuration="18.894400133s" podCreationTimestamp="2025-12-06 09:03:38 +0000 UTC" firstStartedPulling="2025-12-06 09:03:39.254938786 +0000 UTC m=+7581.656327656" lastFinishedPulling="2025-12-06 09:03:55.974965522 +0000 UTC m=+7598.376354392" observedRunningTime="2025-12-06 09:03:56.883800069 +0000 UTC m=+7599.285188939" watchObservedRunningTime="2025-12-06 09:03:56.894400133 +0000 UTC m=+7599.295789013" Dec 06 09:03:59 
crc kubenswrapper[4895]: I1206 09:03:59.898335 4895 generic.go:334] "Generic (PLEG): container finished" podID="e9d9d645-1f6b-4f06-b0a3-7226d05a0199" containerID="9a1115e734e1a5f186f349de8872cb6bee7f02841eb75120d5e8f3c3797cc168" exitCode=0 Dec 06 09:03:59 crc kubenswrapper[4895]: I1206 09:03:59.898421 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wtfgv" event={"ID":"e9d9d645-1f6b-4f06-b0a3-7226d05a0199","Type":"ContainerDied","Data":"9a1115e734e1a5f186f349de8872cb6bee7f02841eb75120d5e8f3c3797cc168"} Dec 06 09:04:01 crc kubenswrapper[4895]: I1206 09:04:01.491971 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wtfgv" Dec 06 09:04:01 crc kubenswrapper[4895]: I1206 09:04:01.675991 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-db-sync-config-data\") pod \"e9d9d645-1f6b-4f06-b0a3-7226d05a0199\" (UID: \"e9d9d645-1f6b-4f06-b0a3-7226d05a0199\") " Dec 06 09:04:01 crc kubenswrapper[4895]: I1206 09:04:01.676048 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-config-data\") pod \"e9d9d645-1f6b-4f06-b0a3-7226d05a0199\" (UID: \"e9d9d645-1f6b-4f06-b0a3-7226d05a0199\") " Dec 06 09:04:01 crc kubenswrapper[4895]: I1206 09:04:01.676139 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqcg7\" (UniqueName: \"kubernetes.io/projected/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-kube-api-access-bqcg7\") pod \"e9d9d645-1f6b-4f06-b0a3-7226d05a0199\" (UID: \"e9d9d645-1f6b-4f06-b0a3-7226d05a0199\") " Dec 06 09:04:01 crc kubenswrapper[4895]: I1206 09:04:01.676166 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-combined-ca-bundle\") pod \"e9d9d645-1f6b-4f06-b0a3-7226d05a0199\" (UID: \"e9d9d645-1f6b-4f06-b0a3-7226d05a0199\") " Dec 06 09:04:01 crc kubenswrapper[4895]: I1206 09:04:01.682911 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e9d9d645-1f6b-4f06-b0a3-7226d05a0199" (UID: "e9d9d645-1f6b-4f06-b0a3-7226d05a0199"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:04:01 crc kubenswrapper[4895]: I1206 09:04:01.683000 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-kube-api-access-bqcg7" (OuterVolumeSpecName: "kube-api-access-bqcg7") pod "e9d9d645-1f6b-4f06-b0a3-7226d05a0199" (UID: "e9d9d645-1f6b-4f06-b0a3-7226d05a0199"). InnerVolumeSpecName "kube-api-access-bqcg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:04:01 crc kubenswrapper[4895]: I1206 09:04:01.698635 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9d9d645-1f6b-4f06-b0a3-7226d05a0199" (UID: "e9d9d645-1f6b-4f06-b0a3-7226d05a0199"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:04:01 crc kubenswrapper[4895]: I1206 09:04:01.715145 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-config-data" (OuterVolumeSpecName: "config-data") pod "e9d9d645-1f6b-4f06-b0a3-7226d05a0199" (UID: "e9d9d645-1f6b-4f06-b0a3-7226d05a0199"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:04:01 crc kubenswrapper[4895]: I1206 09:04:01.778274 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqcg7\" (UniqueName: \"kubernetes.io/projected/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-kube-api-access-bqcg7\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:01 crc kubenswrapper[4895]: I1206 09:04:01.778307 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:01 crc kubenswrapper[4895]: I1206 09:04:01.778319 4895 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:01 crc kubenswrapper[4895]: I1206 09:04:01.778332 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d9d645-1f6b-4f06-b0a3-7226d05a0199-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:01 crc kubenswrapper[4895]: I1206 09:04:01.916656 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wtfgv" event={"ID":"e9d9d645-1f6b-4f06-b0a3-7226d05a0199","Type":"ContainerDied","Data":"bed4fa113a3cfee0ef3e3747ec7d54fbaecc6b06132521548bc4fce1a92f7c1f"} Dec 06 09:04:01 crc kubenswrapper[4895]: I1206 09:04:01.916703 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bed4fa113a3cfee0ef3e3747ec7d54fbaecc6b06132521548bc4fce1a92f7c1f" Dec 06 09:04:01 crc kubenswrapper[4895]: I1206 09:04:01.916772 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wtfgv" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.329574 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d89bd9655-95z9z"] Dec 06 09:04:02 crc kubenswrapper[4895]: E1206 09:04:02.330115 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d9d645-1f6b-4f06-b0a3-7226d05a0199" containerName="glance-db-sync" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.330140 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d9d645-1f6b-4f06-b0a3-7226d05a0199" containerName="glance-db-sync" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.330376 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d9d645-1f6b-4f06-b0a3-7226d05a0199" containerName="glance-db-sync" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.331645 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.336836 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.338606 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.349971 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.350036 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.350173 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.350870 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8t2wf" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.351824 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d89bd9655-95z9z"] Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.362223 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.388944 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-ovsdbserver-nb\") pod \"dnsmasq-dns-6d89bd9655-95z9z\" (UID: \"fb21cecb-580b-4a38-92d7-b7b940b68258\") " pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.389002 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a4cc6c-68a1-46bb-9b34-df092dff29c0-config-data\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.389050 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75a4cc6c-68a1-46bb-9b34-df092dff29c0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.389073 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-dns-svc\") pod \"dnsmasq-dns-6d89bd9655-95z9z\" (UID: \"fb21cecb-580b-4a38-92d7-b7b940b68258\") " pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.389103 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a4cc6c-68a1-46bb-9b34-df092dff29c0-logs\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.389132 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a4cc6c-68a1-46bb-9b34-df092dff29c0-scripts\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.389316 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/75a4cc6c-68a1-46bb-9b34-df092dff29c0-ceph\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.389428 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bw9m\" (UniqueName: \"kubernetes.io/projected/fb21cecb-580b-4a38-92d7-b7b940b68258-kube-api-access-2bw9m\") pod \"dnsmasq-dns-6d89bd9655-95z9z\" (UID: \"fb21cecb-580b-4a38-92d7-b7b940b68258\") " pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.389693 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-ovsdbserver-sb\") pod \"dnsmasq-dns-6d89bd9655-95z9z\" (UID: \"fb21cecb-580b-4a38-92d7-b7b940b68258\") " pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.389792 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-config\") pod \"dnsmasq-dns-6d89bd9655-95z9z\" (UID: \"fb21cecb-580b-4a38-92d7-b7b940b68258\") " pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.389860 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a4cc6c-68a1-46bb-9b34-df092dff29c0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.389908 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng8gp\" (UniqueName: \"kubernetes.io/projected/75a4cc6c-68a1-46bb-9b34-df092dff29c0-kube-api-access-ng8gp\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.436002 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.437517 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.440216 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.446655 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.491304 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a4cc6c-68a1-46bb-9b34-df092dff29c0-scripts\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.491366 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebc4456a-9ab9-4cc0-b4de-91a60f242353-logs\") pod \"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.491400 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ebc4456a-9ab9-4cc0-b4de-91a60f242353-ceph\") pod \"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.491428 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/75a4cc6c-68a1-46bb-9b34-df092dff29c0-ceph\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.491485 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebc4456a-9ab9-4cc0-b4de-91a60f242353-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.491532 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bw9m\" (UniqueName: \"kubernetes.io/projected/fb21cecb-580b-4a38-92d7-b7b940b68258-kube-api-access-2bw9m\") pod \"dnsmasq-dns-6d89bd9655-95z9z\" (UID: \"fb21cecb-580b-4a38-92d7-b7b940b68258\") " pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.491581 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-ovsdbserver-sb\") pod \"dnsmasq-dns-6d89bd9655-95z9z\" (UID: \"fb21cecb-580b-4a38-92d7-b7b940b68258\") " pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.491622 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-config\") pod \"dnsmasq-dns-6d89bd9655-95z9z\" (UID: \"fb21cecb-580b-4a38-92d7-b7b940b68258\") " pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:02 crc 
kubenswrapper[4895]: I1206 09:04:02.491652 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a4cc6c-68a1-46bb-9b34-df092dff29c0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.491682 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md6zw\" (UniqueName: \"kubernetes.io/projected/ebc4456a-9ab9-4cc0-b4de-91a60f242353-kube-api-access-md6zw\") pod \"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.491710 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng8gp\" (UniqueName: \"kubernetes.io/projected/75a4cc6c-68a1-46bb-9b34-df092dff29c0-kube-api-access-ng8gp\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.491742 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebc4456a-9ab9-4cc0-b4de-91a60f242353-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.491768 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-ovsdbserver-nb\") pod \"dnsmasq-dns-6d89bd9655-95z9z\" (UID: \"fb21cecb-580b-4a38-92d7-b7b940b68258\") " pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.491796 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a4cc6c-68a1-46bb-9b34-df092dff29c0-config-data\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.491843 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ebc4456a-9ab9-4cc0-b4de-91a60f242353-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.491873 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75a4cc6c-68a1-46bb-9b34-df092dff29c0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.491897 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-dns-svc\") pod \"dnsmasq-dns-6d89bd9655-95z9z\" (UID: \"fb21cecb-580b-4a38-92d7-b7b940b68258\") " pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:02 crc 
kubenswrapper[4895]: I1206 09:04:02.491929 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a4cc6c-68a1-46bb-9b34-df092dff29c0-logs\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.491956 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebc4456a-9ab9-4cc0-b4de-91a60f242353-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.492645 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75a4cc6c-68a1-46bb-9b34-df092dff29c0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.493019 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-ovsdbserver-nb\") pod \"dnsmasq-dns-6d89bd9655-95z9z\" (UID: \"fb21cecb-580b-4a38-92d7-b7b940b68258\") " pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.493039 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-ovsdbserver-sb\") pod \"dnsmasq-dns-6d89bd9655-95z9z\" (UID: \"fb21cecb-580b-4a38-92d7-b7b940b68258\") " pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.493376 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-dns-svc\") pod \"dnsmasq-dns-6d89bd9655-95z9z\" (UID: \"fb21cecb-580b-4a38-92d7-b7b940b68258\") " pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.493558 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a4cc6c-68a1-46bb-9b34-df092dff29c0-logs\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.493938 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-config\") pod \"dnsmasq-dns-6d89bd9655-95z9z\" (UID: \"fb21cecb-580b-4a38-92d7-b7b940b68258\") " pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.498854 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a4cc6c-68a1-46bb-9b34-df092dff29c0-config-data\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.511134 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/75a4cc6c-68a1-46bb-9b34-df092dff29c0-scripts\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.511258 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/75a4cc6c-68a1-46bb-9b34-df092dff29c0-ceph\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.511508 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a4cc6c-68a1-46bb-9b34-df092dff29c0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.513651 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng8gp\" (UniqueName: \"kubernetes.io/projected/75a4cc6c-68a1-46bb-9b34-df092dff29c0-kube-api-access-ng8gp\") pod \"glance-default-external-api-0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.514815 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bw9m\" (UniqueName: \"kubernetes.io/projected/fb21cecb-580b-4a38-92d7-b7b940b68258-kube-api-access-2bw9m\") pod \"dnsmasq-dns-6d89bd9655-95z9z\" (UID: \"fb21cecb-580b-4a38-92d7-b7b940b68258\") " pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.592426 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md6zw\" (UniqueName: \"kubernetes.io/projected/ebc4456a-9ab9-4cc0-b4de-91a60f242353-kube-api-access-md6zw\") pod \"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.592491 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebc4456a-9ab9-4cc0-b4de-91a60f242353-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.592530 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ebc4456a-9ab9-4cc0-b4de-91a60f242353-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.592557 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebc4456a-9ab9-4cc0-b4de-91a60f242353-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.592580 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebc4456a-9ab9-4cc0-b4de-91a60f242353-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.592600 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ebc4456a-9ab9-4cc0-b4de-91a60f242353-ceph\") pod \"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.592629 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebc4456a-9ab9-4cc0-b4de-91a60f242353-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.593230 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebc4456a-9ab9-4cc0-b4de-91a60f242353-logs\") pod \"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.593268 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ebc4456a-9ab9-4cc0-b4de-91a60f242353-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.597141 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebc4456a-9ab9-4cc0-b4de-91a60f242353-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.597803 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebc4456a-9ab9-4cc0-b4de-91a60f242353-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.599969 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ebc4456a-9ab9-4cc0-b4de-91a60f242353-ceph\") pod \"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.600316 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebc4456a-9ab9-4cc0-b4de-91a60f242353-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.609700 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md6zw\" (UniqueName: \"kubernetes.io/projected/ebc4456a-9ab9-4cc0-b4de-91a60f242353-kube-api-access-md6zw\") pod \"glance-default-internal-api-0\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:02 crc 
Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.648631 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.658533 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 09:04:02 crc kubenswrapper[4895]: I1206 09:04:02.757195 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 09:04:03 crc kubenswrapper[4895]: I1206 09:04:03.214974 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d89bd9655-95z9z"] Dec 06 09:04:03 crc kubenswrapper[4895]: W1206 09:04:03.220996 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb21cecb_580b_4a38_92d7_b7b940b68258.slice/crio-32ea40efb97e2ff0d95997c474c4108c11787895c417e07756eda816e31bd418 WatchSource:0}: Error finding container 32ea40efb97e2ff0d95997c474c4108c11787895c417e07756eda816e31bd418: Status 404 returned error can't find the container with id 32ea40efb97e2ff0d95997c474c4108c11787895c417e07756eda816e31bd418 Dec 06 09:04:03 crc kubenswrapper[4895]: I1206 09:04:03.428217 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 09:04:03 crc kubenswrapper[4895]: W1206 09:04:03.493550 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75a4cc6c_68a1_46bb_9b34_df092dff29c0.slice/crio-92a33703bb9668b8f01d62d32921024bbc52ceb9e73ca46a35e40f5749b90d5c WatchSource:0}: Error finding container 92a33703bb9668b8f01d62d32921024bbc52ceb9e73ca46a35e40f5749b90d5c: Status 404 returned error can't find the container with id 92a33703bb9668b8f01d62d32921024bbc52ceb9e73ca46a35e40f5749b90d5c Dec 06 09:04:03 crc kubenswrapper[4895]: I1206 09:04:03.525930 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 09:04:03 crc kubenswrapper[4895]: W1206 09:04:03.536572 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebc4456a_9ab9_4cc0_b4de_91a60f242353.slice/crio-c663e94546ffc7375cfb069b51c307cef747ba488cad3ede89a12c7178ca8a18 WatchSource:0}: Error finding container c663e94546ffc7375cfb069b51c307cef747ba488cad3ede89a12c7178ca8a18: Status 404 returned error can't find the container with id c663e94546ffc7375cfb069b51c307cef747ba488cad3ede89a12c7178ca8a18 Dec 06 09:04:03 crc kubenswrapper[4895]: I1206 09:04:03.704805 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 09:04:03 crc kubenswrapper[4895]: I1206 09:04:03.964213 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75a4cc6c-68a1-46bb-9b34-df092dff29c0","Type":"ContainerStarted","Data":"92a33703bb9668b8f01d62d32921024bbc52ceb9e73ca46a35e40f5749b90d5c"} Dec 06 09:04:03 crc kubenswrapper[4895]: I1206 09:04:03.969534 4895 generic.go:334] "Generic (PLEG): container finished" podID="fb21cecb-580b-4a38-92d7-b7b940b68258" containerID="fd092103d557082f3dd91e1d99beb421ce74beb268c5847eaa7759361255a09f" exitCode=0 Dec 06 09:04:03 crc kubenswrapper[4895]: I1206 09:04:03.969598 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d89bd9655-95z9z"
event={"ID":"fb21cecb-580b-4a38-92d7-b7b940b68258","Type":"ContainerDied","Data":"fd092103d557082f3dd91e1d99beb421ce74beb268c5847eaa7759361255a09f"} Dec 06 09:04:03 crc kubenswrapper[4895]: I1206 09:04:03.969633 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" event={"ID":"fb21cecb-580b-4a38-92d7-b7b940b68258","Type":"ContainerStarted","Data":"32ea40efb97e2ff0d95997c474c4108c11787895c417e07756eda816e31bd418"} Dec 06 09:04:03 crc kubenswrapper[4895]: I1206 09:04:03.973351 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ebc4456a-9ab9-4cc0-b4de-91a60f242353","Type":"ContainerStarted","Data":"c663e94546ffc7375cfb069b51c307cef747ba488cad3ede89a12c7178ca8a18"} Dec 06 09:04:04 crc kubenswrapper[4895]: I1206 09:04:04.984222 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75a4cc6c-68a1-46bb-9b34-df092dff29c0","Type":"ContainerStarted","Data":"e014e88baf96cc59183baa9acb2e254e449d034d3297cb7abe21477ee1d84b02"} Dec 06 09:04:04 crc kubenswrapper[4895]: I1206 09:04:04.990025 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:04 crc kubenswrapper[4895]: I1206 09:04:04.984350 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="75a4cc6c-68a1-46bb-9b34-df092dff29c0" containerName="glance-log" containerID="cri-o://766e22e7049e829aed84c17f8b5edc562373430b416b396805f7fcdab1e1217e" gracePeriod=30 Dec 06 09:04:04 crc kubenswrapper[4895]: I1206 09:04:04.984443 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="75a4cc6c-68a1-46bb-9b34-df092dff29c0" containerName="glance-httpd" containerID="cri-o://e014e88baf96cc59183baa9acb2e254e449d034d3297cb7abe21477ee1d84b02" gracePeriod=30 Dec 06 09:04:04 crc kubenswrapper[4895]: I1206 09:04:04.990051 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75a4cc6c-68a1-46bb-9b34-df092dff29c0","Type":"ContainerStarted","Data":"766e22e7049e829aed84c17f8b5edc562373430b416b396805f7fcdab1e1217e"} Dec 06 09:04:04 crc kubenswrapper[4895]: I1206 09:04:04.990168 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" event={"ID":"fb21cecb-580b-4a38-92d7-b7b940b68258","Type":"ContainerStarted","Data":"4dd5e4292e1c9bf940385b746b13afa322bcf1778f5f3189f5a31ce8297abcb3"} Dec 06 09:04:04 crc kubenswrapper[4895]: I1206 09:04:04.990199 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ebc4456a-9ab9-4cc0-b4de-91a60f242353","Type":"ContainerStarted","Data":"182687da2b19d9d14e32fd7ee53e1c50fcaacb24a0808583e67f8e0498493592"} Dec 06 09:04:04 crc kubenswrapper[4895]: I1206 09:04:04.990233 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ebc4456a-9ab9-4cc0-b4de-91a60f242353","Type":"ContainerStarted","Data":"1a0530b98be03675733e2c0f4cc68561a965b942a06e6745e051b7b6950a06bb"} Dec 06 09:04:05 crc kubenswrapper[4895]: I1206 09:04:05.007753 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.00773167 podStartE2EDuration="3.00773167s" podCreationTimestamp="2025-12-06 09:04:02 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:04:05.002366615 +0000 UTC m=+7607.403755485" watchObservedRunningTime="2025-12-06 09:04:05.00773167 +0000 UTC m=+7607.409120540" Dec 06 09:04:05 crc kubenswrapper[4895]: I1206 09:04:05.054441 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.054395033 podStartE2EDuration="3.054395033s" podCreationTimestamp="2025-12-06 09:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:04:05.051590538 +0000 UTC m=+7607.452979438" watchObservedRunningTime="2025-12-06 09:04:05.054395033 +0000 UTC m=+7607.455783903" Dec 06 09:04:05 crc kubenswrapper[4895]: I1206 09:04:05.061457 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" podStartSLOduration=3.061436892 podStartE2EDuration="3.061436892s" podCreationTimestamp="2025-12-06 09:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:04:05.032939847 +0000 UTC m=+7607.434328717" watchObservedRunningTime="2025-12-06 09:04:05.061436892 +0000 UTC m=+7607.462825762" Dec 06 09:04:05 crc kubenswrapper[4895]: I1206 09:04:05.431474 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 09:04:05 crc kubenswrapper[4895]: I1206 09:04:05.744726 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 09:04:05 crc kubenswrapper[4895]: I1206 09:04:05.851833 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75a4cc6c-68a1-46bb-9b34-df092dff29c0-httpd-run\") pod \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " Dec 06 09:04:05 crc kubenswrapper[4895]: I1206 09:04:05.851895 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng8gp\" (UniqueName: \"kubernetes.io/projected/75a4cc6c-68a1-46bb-9b34-df092dff29c0-kube-api-access-ng8gp\") pod \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " Dec 06 09:04:05 crc kubenswrapper[4895]: I1206 09:04:05.851979 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a4cc6c-68a1-46bb-9b34-df092dff29c0-config-data\") pod \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " Dec 06 09:04:05 crc kubenswrapper[4895]: I1206 09:04:05.852005 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a4cc6c-68a1-46bb-9b34-df092dff29c0-scripts\") pod \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " Dec 06 09:04:05 crc kubenswrapper[4895]: I1206 09:04:05.852035 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a4cc6c-68a1-46bb-9b34-df092dff29c0-logs\") pod \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " Dec 06 09:04:05 crc kubenswrapper[4895]: I1206 
Dec 06 09:04:05 crc kubenswrapper[4895]: I1206 09:04:05.852098 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a4cc6c-68a1-46bb-9b34-df092dff29c0-combined-ca-bundle\") pod \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " Dec 06 09:04:05 crc kubenswrapper[4895]: I1206 09:04:05.852134 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/75a4cc6c-68a1-46bb-9b34-df092dff29c0-ceph\") pod \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\" (UID: \"75a4cc6c-68a1-46bb-9b34-df092dff29c0\") " Dec 06 09:04:05 crc kubenswrapper[4895]: I1206 09:04:05.852461 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75a4cc6c-68a1-46bb-9b34-df092dff29c0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "75a4cc6c-68a1-46bb-9b34-df092dff29c0" (UID: "75a4cc6c-68a1-46bb-9b34-df092dff29c0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:04:05 crc kubenswrapper[4895]: I1206 09:04:05.852635 4895 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75a4cc6c-68a1-46bb-9b34-df092dff29c0-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:05 crc kubenswrapper[4895]: I1206 09:04:05.852909 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75a4cc6c-68a1-46bb-9b34-df092dff29c0-logs" (OuterVolumeSpecName: "logs") pod "75a4cc6c-68a1-46bb-9b34-df092dff29c0" (UID: "75a4cc6c-68a1-46bb-9b34-df092dff29c0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:04:05 crc kubenswrapper[4895]: I1206 09:04:05.857556 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a4cc6c-68a1-46bb-9b34-df092dff29c0-ceph" (OuterVolumeSpecName: "ceph") pod "75a4cc6c-68a1-46bb-9b34-df092dff29c0" (UID: "75a4cc6c-68a1-46bb-9b34-df092dff29c0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:04:05 crc kubenswrapper[4895]: I1206 09:04:05.857895 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a4cc6c-68a1-46bb-9b34-df092dff29c0-scripts" (OuterVolumeSpecName: "scripts") pod "75a4cc6c-68a1-46bb-9b34-df092dff29c0" (UID: "75a4cc6c-68a1-46bb-9b34-df092dff29c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:04:05 crc kubenswrapper[4895]: I1206 09:04:05.857928 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a4cc6c-68a1-46bb-9b34-df092dff29c0-kube-api-access-ng8gp" (OuterVolumeSpecName: "kube-api-access-ng8gp") pod "75a4cc6c-68a1-46bb-9b34-df092dff29c0" (UID: "75a4cc6c-68a1-46bb-9b34-df092dff29c0"). InnerVolumeSpecName "kube-api-access-ng8gp".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.578803 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng8gp\" (UniqueName: \"kubernetes.io/projected/75a4cc6c-68a1-46bb-9b34-df092dff29c0-kube-api-access-ng8gp\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.579088 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a4cc6c-68a1-46bb-9b34-df092dff29c0-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.579101 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a4cc6c-68a1-46bb-9b34-df092dff29c0-logs\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.579112 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/75a4cc6c-68a1-46bb-9b34-df092dff29c0-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.604882 4895 generic.go:334] "Generic (PLEG): container finished" podID="75a4cc6c-68a1-46bb-9b34-df092dff29c0" containerID="e014e88baf96cc59183baa9acb2e254e449d034d3297cb7abe21477ee1d84b02" exitCode=0 Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.604921 4895 generic.go:334] "Generic (PLEG): container finished" podID="75a4cc6c-68a1-46bb-9b34-df092dff29c0" containerID="766e22e7049e829aed84c17f8b5edc562373430b416b396805f7fcdab1e1217e" exitCode=143 Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.605275 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.608877 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a4cc6c-68a1-46bb-9b34-df092dff29c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75a4cc6c-68a1-46bb-9b34-df092dff29c0" (UID: "75a4cc6c-68a1-46bb-9b34-df092dff29c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.649715 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a4cc6c-68a1-46bb-9b34-df092dff29c0-config-data" (OuterVolumeSpecName: "config-data") pod "75a4cc6c-68a1-46bb-9b34-df092dff29c0" (UID: "75a4cc6c-68a1-46bb-9b34-df092dff29c0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.680554 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a4cc6c-68a1-46bb-9b34-df092dff29c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.680819 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a4cc6c-68a1-46bb-9b34-df092dff29c0-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.723431 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75a4cc6c-68a1-46bb-9b34-df092dff29c0","Type":"ContainerDied","Data":"e014e88baf96cc59183baa9acb2e254e449d034d3297cb7abe21477ee1d84b02"} Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.723503 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75a4cc6c-68a1-46bb-9b34-df092dff29c0","Type":"ContainerDied","Data":"766e22e7049e829aed84c17f8b5edc562373430b416b396805f7fcdab1e1217e"} Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.723516 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75a4cc6c-68a1-46bb-9b34-df092dff29c0","Type":"ContainerDied","Data":"92a33703bb9668b8f01d62d32921024bbc52ceb9e73ca46a35e40f5749b90d5c"} Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.723534 4895 scope.go:117] "RemoveContainer" containerID="e014e88baf96cc59183baa9acb2e254e449d034d3297cb7abe21477ee1d84b02" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.743828 4895 scope.go:117] "RemoveContainer" containerID="766e22e7049e829aed84c17f8b5edc562373430b416b396805f7fcdab1e1217e" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.765956 4895 scope.go:117] "RemoveContainer" containerID="e014e88baf96cc59183baa9acb2e254e449d034d3297cb7abe21477ee1d84b02" Dec 06 09:04:06 crc kubenswrapper[4895]: E1206 09:04:06.766619 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e014e88baf96cc59183baa9acb2e254e449d034d3297cb7abe21477ee1d84b02\": container with ID starting with e014e88baf96cc59183baa9acb2e254e449d034d3297cb7abe21477ee1d84b02 not found: ID does not exist" containerID="e014e88baf96cc59183baa9acb2e254e449d034d3297cb7abe21477ee1d84b02" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.766694 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e014e88baf96cc59183baa9acb2e254e449d034d3297cb7abe21477ee1d84b02"} err="failed to get container status \"e014e88baf96cc59183baa9acb2e254e449d034d3297cb7abe21477ee1d84b02\": rpc error: code = NotFound desc = could not find container \"e014e88baf96cc59183baa9acb2e254e449d034d3297cb7abe21477ee1d84b02\": container with ID starting with e014e88baf96cc59183baa9acb2e254e449d034d3297cb7abe21477ee1d84b02 not found: ID does not exist" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.766736 4895 scope.go:117] "RemoveContainer" containerID="766e22e7049e829aed84c17f8b5edc562373430b416b396805f7fcdab1e1217e" Dec 06 09:04:06 crc kubenswrapper[4895]: E1206 09:04:06.767034 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"766e22e7049e829aed84c17f8b5edc562373430b416b396805f7fcdab1e1217e\": container with ID starting with 766e22e7049e829aed84c17f8b5edc562373430b416b396805f7fcdab1e1217e not found: ID does not exist" containerID="766e22e7049e829aed84c17f8b5edc562373430b416b396805f7fcdab1e1217e" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.767064 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"766e22e7049e829aed84c17f8b5edc562373430b416b396805f7fcdab1e1217e"} err="failed to get container status \"766e22e7049e829aed84c17f8b5edc562373430b416b396805f7fcdab1e1217e\": rpc error: code = NotFound desc = could not find container \"766e22e7049e829aed84c17f8b5edc562373430b416b396805f7fcdab1e1217e\": container with ID starting with 766e22e7049e829aed84c17f8b5edc562373430b416b396805f7fcdab1e1217e not found: ID does not exist" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.767086 4895 scope.go:117] "RemoveContainer" containerID="e014e88baf96cc59183baa9acb2e254e449d034d3297cb7abe21477ee1d84b02" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.767270 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e014e88baf96cc59183baa9acb2e254e449d034d3297cb7abe21477ee1d84b02"} err="failed to get container status \"e014e88baf96cc59183baa9acb2e254e449d034d3297cb7abe21477ee1d84b02\": rpc error: code = NotFound desc = could not find container \"e014e88baf96cc59183baa9acb2e254e449d034d3297cb7abe21477ee1d84b02\": container with ID starting with e014e88baf96cc59183baa9acb2e254e449d034d3297cb7abe21477ee1d84b02 not found: ID does not exist" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.767298 4895 scope.go:117] "RemoveContainer" containerID="766e22e7049e829aed84c17f8b5edc562373430b416b396805f7fcdab1e1217e" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.767827 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"766e22e7049e829aed84c17f8b5edc562373430b416b396805f7fcdab1e1217e"} err="failed to get container status \"766e22e7049e829aed84c17f8b5edc562373430b416b396805f7fcdab1e1217e\": rpc error: code = NotFound desc = could not find container \"766e22e7049e829aed84c17f8b5edc562373430b416b396805f7fcdab1e1217e\": container with ID starting with 766e22e7049e829aed84c17f8b5edc562373430b416b396805f7fcdab1e1217e not found: ID does not exist" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.962189 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.980683 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.990928 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 09:04:06 crc kubenswrapper[4895]: E1206 09:04:06.991254 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a4cc6c-68a1-46bb-9b34-df092dff29c0" containerName="glance-log" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.991271 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a4cc6c-68a1-46bb-9b34-df092dff29c0" containerName="glance-log" Dec 06 09:04:06 crc kubenswrapper[4895]: E1206 09:04:06.991288 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a4cc6c-68a1-46bb-9b34-df092dff29c0" containerName="glance-httpd" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 
Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.991294 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a4cc6c-68a1-46bb-9b34-df092dff29c0" containerName="glance-httpd" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.991508 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a4cc6c-68a1-46bb-9b34-df092dff29c0" containerName="glance-httpd" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.991525 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a4cc6c-68a1-46bb-9b34-df092dff29c0" containerName="glance-log" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.992460 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 09:04:06 crc kubenswrapper[4895]: I1206 09:04:06.994349 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.014569 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.090803 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.090891 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-logs\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.090954 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.090983 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvptq\" (UniqueName: \"kubernetes.io/projected/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-kube-api-access-mvptq\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.091115 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-ceph\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.091192 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-config-data\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.091301
4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-scripts\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.193734 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-config-data\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.193860 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-scripts\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.193927 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.193993 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-logs\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.194058 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.194097 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvptq\" (UniqueName: \"kubernetes.io/projected/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-kube-api-access-mvptq\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.194164 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-ceph\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.194847 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-logs\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.194957 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.199328 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-ceph\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.199428 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.199868 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-scripts\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.201075 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-config-data\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.210820 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvptq\" (UniqueName: \"kubernetes.io/projected/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-kube-api-access-mvptq\") pod \"glance-default-external-api-0\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.314967 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.616398 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ebc4456a-9ab9-4cc0-b4de-91a60f242353" containerName="glance-log" containerID="cri-o://1a0530b98be03675733e2c0f4cc68561a965b942a06e6745e051b7b6950a06bb" gracePeriod=30 Dec 06 09:04:07 crc kubenswrapper[4895]: I1206 09:04:07.618058 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ebc4456a-9ab9-4cc0-b4de-91a60f242353" containerName="glance-httpd" containerID="cri-o://182687da2b19d9d14e32fd7ee53e1c50fcaacb24a0808583e67f8e0498493592" gracePeriod=30 Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.064963 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75a4cc6c-68a1-46bb-9b34-df092dff29c0" path="/var/lib/kubelet/pods/75a4cc6c-68a1-46bb-9b34-df092dff29c0/volumes" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.340645 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.517427 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ebc4456a-9ab9-4cc0-b4de-91a60f242353-httpd-run\") pod \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.517627 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md6zw\" (UniqueName: \"kubernetes.io/projected/ebc4456a-9ab9-4cc0-b4de-91a60f242353-kube-api-access-md6zw\") pod \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.517712 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebc4456a-9ab9-4cc0-b4de-91a60f242353-scripts\") pod \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.517785 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebc4456a-9ab9-4cc0-b4de-91a60f242353-combined-ca-bundle\") pod \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.517881 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ebc4456a-9ab9-4cc0-b4de-91a60f242353-ceph\") pod \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.517907 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebc4456a-9ab9-4cc0-b4de-91a60f242353-config-data\") pod \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.517952 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebc4456a-9ab9-4cc0-b4de-91a60f242353-logs\") pod \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\" (UID: \"ebc4456a-9ab9-4cc0-b4de-91a60f242353\") " Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.518539 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebc4456a-9ab9-4cc0-b4de-91a60f242353-logs" (OuterVolumeSpecName: "logs") pod "ebc4456a-9ab9-4cc0-b4de-91a60f242353" (UID: "ebc4456a-9ab9-4cc0-b4de-91a60f242353"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.518633 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebc4456a-9ab9-4cc0-b4de-91a60f242353-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ebc4456a-9ab9-4cc0-b4de-91a60f242353" (UID: "ebc4456a-9ab9-4cc0-b4de-91a60f242353"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.524715 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebc4456a-9ab9-4cc0-b4de-91a60f242353-ceph" (OuterVolumeSpecName: "ceph") pod "ebc4456a-9ab9-4cc0-b4de-91a60f242353" (UID: "ebc4456a-9ab9-4cc0-b4de-91a60f242353"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.525374 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebc4456a-9ab9-4cc0-b4de-91a60f242353-kube-api-access-md6zw" (OuterVolumeSpecName: "kube-api-access-md6zw") pod "ebc4456a-9ab9-4cc0-b4de-91a60f242353" (UID: "ebc4456a-9ab9-4cc0-b4de-91a60f242353"). InnerVolumeSpecName "kube-api-access-md6zw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.536712 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebc4456a-9ab9-4cc0-b4de-91a60f242353-scripts" (OuterVolumeSpecName: "scripts") pod "ebc4456a-9ab9-4cc0-b4de-91a60f242353" (UID: "ebc4456a-9ab9-4cc0-b4de-91a60f242353"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.553519 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebc4456a-9ab9-4cc0-b4de-91a60f242353-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebc4456a-9ab9-4cc0-b4de-91a60f242353" (UID: "ebc4456a-9ab9-4cc0-b4de-91a60f242353"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.584450 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebc4456a-9ab9-4cc0-b4de-91a60f242353-config-data" (OuterVolumeSpecName: "config-data") pod "ebc4456a-9ab9-4cc0-b4de-91a60f242353" (UID: "ebc4456a-9ab9-4cc0-b4de-91a60f242353"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.619395 4895 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ebc4456a-9ab9-4cc0-b4de-91a60f242353-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.620383 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md6zw\" (UniqueName: \"kubernetes.io/projected/ebc4456a-9ab9-4cc0-b4de-91a60f242353-kube-api-access-md6zw\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.620450 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebc4456a-9ab9-4cc0-b4de-91a60f242353-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.620521 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebc4456a-9ab9-4cc0-b4de-91a60f242353-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.620659 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ebc4456a-9ab9-4cc0-b4de-91a60f242353-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.620720 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebc4456a-9ab9-4cc0-b4de-91a60f242353-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.620770 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebc4456a-9ab9-4cc0-b4de-91a60f242353-logs\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.626043 4895 generic.go:334] "Generic (PLEG): container finished" podID="ebc4456a-9ab9-4cc0-b4de-91a60f242353" containerID="182687da2b19d9d14e32fd7ee53e1c50fcaacb24a0808583e67f8e0498493592" exitCode=0 Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.626078 4895 generic.go:334] "Generic (PLEG): container finished" podID="ebc4456a-9ab9-4cc0-b4de-91a60f242353" containerID="1a0530b98be03675733e2c0f4cc68561a965b942a06e6745e051b7b6950a06bb" exitCode=143 Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.626102 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ebc4456a-9ab9-4cc0-b4de-91a60f242353","Type":"ContainerDied","Data":"182687da2b19d9d14e32fd7ee53e1c50fcaacb24a0808583e67f8e0498493592"} Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.626134 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ebc4456a-9ab9-4cc0-b4de-91a60f242353","Type":"ContainerDied","Data":"1a0530b98be03675733e2c0f4cc68561a965b942a06e6745e051b7b6950a06bb"} Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.626150 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ebc4456a-9ab9-4cc0-b4de-91a60f242353","Type":"ContainerDied","Data":"c663e94546ffc7375cfb069b51c307cef747ba488cad3ede89a12c7178ca8a18"} Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.626170 4895 scope.go:117] "RemoveContainer" containerID="182687da2b19d9d14e32fd7ee53e1c50fcaacb24a0808583e67f8e0498493592" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 
09:04:08.626308 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.648966 4895 scope.go:117] "RemoveContainer" containerID="1a0530b98be03675733e2c0f4cc68561a965b942a06e6745e051b7b6950a06bb" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.678000 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.692867 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.702981 4895 scope.go:117] "RemoveContainer" containerID="182687da2b19d9d14e32fd7ee53e1c50fcaacb24a0808583e67f8e0498493592" Dec 06 09:04:08 crc kubenswrapper[4895]: E1206 09:04:08.703986 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"182687da2b19d9d14e32fd7ee53e1c50fcaacb24a0808583e67f8e0498493592\": container with ID starting with 182687da2b19d9d14e32fd7ee53e1c50fcaacb24a0808583e67f8e0498493592 not found: ID does not exist" containerID="182687da2b19d9d14e32fd7ee53e1c50fcaacb24a0808583e67f8e0498493592" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.704034 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"182687da2b19d9d14e32fd7ee53e1c50fcaacb24a0808583e67f8e0498493592"} err="failed to get container status \"182687da2b19d9d14e32fd7ee53e1c50fcaacb24a0808583e67f8e0498493592\": rpc error: code = NotFound desc = could not find container \"182687da2b19d9d14e32fd7ee53e1c50fcaacb24a0808583e67f8e0498493592\": container with ID starting with 182687da2b19d9d14e32fd7ee53e1c50fcaacb24a0808583e67f8e0498493592 not found: ID does not exist" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.704064 4895 scope.go:117] "RemoveContainer" containerID="1a0530b98be03675733e2c0f4cc68561a965b942a06e6745e051b7b6950a06bb" Dec 06 09:04:08 crc kubenswrapper[4895]: E1206 09:04:08.704340 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a0530b98be03675733e2c0f4cc68561a965b942a06e6745e051b7b6950a06bb\": container with ID starting with 1a0530b98be03675733e2c0f4cc68561a965b942a06e6745e051b7b6950a06bb not found: ID does not exist" containerID="1a0530b98be03675733e2c0f4cc68561a965b942a06e6745e051b7b6950a06bb" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.704364 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a0530b98be03675733e2c0f4cc68561a965b942a06e6745e051b7b6950a06bb"} err="failed to get container status \"1a0530b98be03675733e2c0f4cc68561a965b942a06e6745e051b7b6950a06bb\": rpc error: code = NotFound desc = could not find container \"1a0530b98be03675733e2c0f4cc68561a965b942a06e6745e051b7b6950a06bb\": container with ID starting with 1a0530b98be03675733e2c0f4cc68561a965b942a06e6745e051b7b6950a06bb not found: ID does not exist" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.704378 4895 scope.go:117] "RemoveContainer" containerID="182687da2b19d9d14e32fd7ee53e1c50fcaacb24a0808583e67f8e0498493592" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.704978 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"182687da2b19d9d14e32fd7ee53e1c50fcaacb24a0808583e67f8e0498493592"} err="failed to 
get container status \"182687da2b19d9d14e32fd7ee53e1c50fcaacb24a0808583e67f8e0498493592\": rpc error: code = NotFound desc = could not find container \"182687da2b19d9d14e32fd7ee53e1c50fcaacb24a0808583e67f8e0498493592\": container with ID starting with 182687da2b19d9d14e32fd7ee53e1c50fcaacb24a0808583e67f8e0498493592 not found: ID does not exist" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.705000 4895 scope.go:117] "RemoveContainer" containerID="1a0530b98be03675733e2c0f4cc68561a965b942a06e6745e051b7b6950a06bb" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.705428 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a0530b98be03675733e2c0f4cc68561a965b942a06e6745e051b7b6950a06bb"} err="failed to get container status \"1a0530b98be03675733e2c0f4cc68561a965b942a06e6745e051b7b6950a06bb\": rpc error: code = NotFound desc = could not find container \"1a0530b98be03675733e2c0f4cc68561a965b942a06e6745e051b7b6950a06bb\": container with ID starting with 1a0530b98be03675733e2c0f4cc68561a965b942a06e6745e051b7b6950a06bb not found: ID does not exist" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.707545 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 09:04:08 crc kubenswrapper[4895]: E1206 09:04:08.708008 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc4456a-9ab9-4cc0-b4de-91a60f242353" containerName="glance-log" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.708074 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc4456a-9ab9-4cc0-b4de-91a60f242353" containerName="glance-log" Dec 06 09:04:08 crc kubenswrapper[4895]: E1206 09:04:08.708127 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc4456a-9ab9-4cc0-b4de-91a60f242353" containerName="glance-httpd" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.708186 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc4456a-9ab9-4cc0-b4de-91a60f242353" containerName="glance-httpd" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.708412 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebc4456a-9ab9-4cc0-b4de-91a60f242353" containerName="glance-httpd" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.708518 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebc4456a-9ab9-4cc0-b4de-91a60f242353" containerName="glance-log"
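
The NotFound errors above are benign: RemoveContainer is retried after the pod object is already gone, CRI-O has by then deleted both containers, so the status lookup fails and the deletor simply logs it and moves on; removal is treated as idempotent. The RemoveStaleState lines show the CPU and memory managers purging per-container accounting for the deleted pod before its replacement (a81b9697-...) is admitted. A sketch of NotFound-tolerant removal (hypothetical interface, not the CRI API):

```go
// Illustrative sketch: a delete that treats "not found" as already done,
// matching the DeleteContainer/NotFound pairs logged above.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("container not found")

type containerStore struct{ containers map[string]bool }

func (s *containerStore) remove(id string) error {
	if !s.containers[id] {
		return errNotFound
	}
	delete(s.containers, id)
	return nil
}

// removeContainer is idempotent: NotFound means the work is already done,
// so it is logged and swallowed rather than surfaced as a failure.
func removeContainer(s *containerStore, id string) {
	if err := s.remove(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("DeleteContainer returned error for %s...: %v (already gone, ignoring)\n", id[:8], err)
			return
		}
		fmt.Printf("unexpected error: %v\n", err)
	}
}

func main() {
	s := &containerStore{containers: map[string]bool{}}
	removeContainer(s, "182687da2b19d9d14e32fd7ee53e1c50fcaacb24a0808583e67f8e0498493592")
}
```

Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.709668 4895 util.go:30] "No sandbox for pod can be found. 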
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.716860 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.721263 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a81b9697-f347-4805-99b1-eb21de602965-ceph\") pod \"glance-default-internal-api-0\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.721321 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a81b9697-f347-4805-99b1-eb21de602965-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.721359 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8zx5\" (UniqueName: \"kubernetes.io/projected/a81b9697-f347-4805-99b1-eb21de602965-kube-api-access-d8zx5\") pod \"glance-default-internal-api-0\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.721400 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a81b9697-f347-4805-99b1-eb21de602965-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.721436 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a81b9697-f347-4805-99b1-eb21de602965-logs\") pod \"glance-default-internal-api-0\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.721483 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81b9697-f347-4805-99b1-eb21de602965-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.721656 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.721709 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81b9697-f347-4805-99b1-eb21de602965-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.823149 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81b9697-f347-4805-99b1-eb21de602965-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.823225 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a81b9697-f347-4805-99b1-eb21de602965-ceph\") pod \"glance-default-internal-api-0\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.823253 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a81b9697-f347-4805-99b1-eb21de602965-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.823282 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8zx5\" (UniqueName: \"kubernetes.io/projected/a81b9697-f347-4805-99b1-eb21de602965-kube-api-access-d8zx5\") pod \"glance-default-internal-api-0\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.823313 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a81b9697-f347-4805-99b1-eb21de602965-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.823337 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a81b9697-f347-4805-99b1-eb21de602965-logs\") pod \"glance-default-internal-api-0\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.823361 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81b9697-f347-4805-99b1-eb21de602965-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.824144 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a81b9697-f347-4805-99b1-eb21de602965-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.824184 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a81b9697-f347-4805-99b1-eb21de602965-logs\") pod \"glance-default-internal-api-0\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.827363 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a81b9697-f347-4805-99b1-eb21de602965-ceph\") pod \"glance-default-internal-api-0\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.827487 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81b9697-f347-4805-99b1-eb21de602965-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.827562 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81b9697-f347-4805-99b1-eb21de602965-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.827744 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a81b9697-f347-4805-99b1-eb21de602965-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.839081 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8zx5\" (UniqueName: \"kubernetes.io/projected/a81b9697-f347-4805-99b1-eb21de602965-kube-api-access-d8zx5\") pod \"glance-default-internal-api-0\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:04:08 crc kubenswrapper[4895]: I1206 09:04:08.903024 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 09:04:09 crc kubenswrapper[4895]: I1206 09:04:09.029525 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 09:04:09 crc kubenswrapper[4895]: I1206 09:04:09.645106 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09a717e6-54eb-4b96-903d-7f4c4eb4bb96","Type":"ContainerStarted","Data":"dd6fe7684aef1ce8a38dbb7087ff6e70616cc01823cab4032a10f68987b10e32"} Dec 06 09:04:09 crc kubenswrapper[4895]: I1206 09:04:09.645599 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09a717e6-54eb-4b96-903d-7f4c4eb4bb96","Type":"ContainerStarted","Data":"585abe516277927de89a39ae8c3fc3226d1a491751e77a18a48333d6d4a5a84b"} Dec 06 09:04:09 crc kubenswrapper[4895]: I1206 09:04:09.674348 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 09:04:10 crc kubenswrapper[4895]: I1206 09:04:10.064779 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebc4456a-9ab9-4cc0-b4de-91a60f242353" path="/var/lib/kubelet/pods/ebc4456a-9ab9-4cc0-b4de-91a60f242353/volumes" Dec 06 09:04:10 crc kubenswrapper[4895]: I1206 09:04:10.657531 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09a717e6-54eb-4b96-903d-7f4c4eb4bb96","Type":"ContainerStarted","Data":"6e338e8680df6ac273ad153cfb2ccd95cb7476e8eada9e81b9c30ec7cefa067d"} Dec 06 09:04:10 crc kubenswrapper[4895]: I1206 09:04:10.659053 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a81b9697-f347-4805-99b1-eb21de602965","Type":"ContainerStarted","Data":"590d89714fa8b5dce42072c30d38b26490195aec7dfa7937421207e7768ab369"} Dec 06 09:04:10 crc kubenswrapper[4895]: I1206 09:04:10.659076 4895 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a81b9697-f347-4805-99b1-eb21de602965","Type":"ContainerStarted","Data":"6d96b32cd64f7ba9e6e6c588c6fa2d51c057d65cc9f672ff33826ddc8a7efda4"} Dec 06 09:04:10 crc kubenswrapper[4895]: I1206 09:04:10.676876 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.676856789 podStartE2EDuration="4.676856789s" podCreationTimestamp="2025-12-06 09:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:04:10.67580331 +0000 UTC m=+7613.077192200" watchObservedRunningTime="2025-12-06 09:04:10.676856789 +0000 UTC m=+7613.078245659" Dec 06 09:04:11 crc kubenswrapper[4895]: I1206 09:04:11.675543 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a81b9697-f347-4805-99b1-eb21de602965","Type":"ContainerStarted","Data":"8b11a91c8849de0b070469b77d03f2e11b48f2b390c63035ca9b74652ce89683"} Dec 06 09:04:11 crc kubenswrapper[4895]: I1206 09:04:11.706205 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.706180793 podStartE2EDuration="3.706180793s" podCreationTimestamp="2025-12-06 09:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:04:11.696178654 +0000 UTC m=+7614.097567564" watchObservedRunningTime="2025-12-06 09:04:11.706180793 +0000 UTC m=+7614.107569663" Dec 06 09:04:12 crc kubenswrapper[4895]: I1206 09:04:12.651664 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:12 crc kubenswrapper[4895]: I1206 09:04:12.718423 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c55bc4bd7-m5ng5"] Dec 06 09:04:12 crc kubenswrapper[4895]: I1206 09:04:12.718699 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" podUID="3fece91d-36f5-46c4-a2d1-17820a9e9bd6" containerName="dnsmasq-dns" containerID="cri-o://0ec7025e532f227a5a66bef763bd7e8122c8bcb11cc3cba81eecbda21db1f2cd" gracePeriod=10 Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.378113 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.511974 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-ovsdbserver-sb\") pod \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\" (UID: \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\") " Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.512035 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr48j\" (UniqueName: \"kubernetes.io/projected/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-kube-api-access-lr48j\") pod \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\" (UID: \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\") " Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.512094 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-ovsdbserver-nb\") pod \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\" (UID: \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\") " Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.512214 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-dns-svc\") pod \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\" (UID: \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\") " Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.512266 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-config\") pod \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\" (UID: \"3fece91d-36f5-46c4-a2d1-17820a9e9bd6\") " Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.518369 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-kube-api-access-lr48j" (OuterVolumeSpecName: "kube-api-access-lr48j") pod "3fece91d-36f5-46c4-a2d1-17820a9e9bd6" (UID: "3fece91d-36f5-46c4-a2d1-17820a9e9bd6"). InnerVolumeSpecName "kube-api-access-lr48j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.557967 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-config" (OuterVolumeSpecName: "config") pod "3fece91d-36f5-46c4-a2d1-17820a9e9bd6" (UID: "3fece91d-36f5-46c4-a2d1-17820a9e9bd6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.557989 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3fece91d-36f5-46c4-a2d1-17820a9e9bd6" (UID: "3fece91d-36f5-46c4-a2d1-17820a9e9bd6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.570625 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3fece91d-36f5-46c4-a2d1-17820a9e9bd6" (UID: "3fece91d-36f5-46c4-a2d1-17820a9e9bd6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.578076 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3fece91d-36f5-46c4-a2d1-17820a9e9bd6" (UID: "3fece91d-36f5-46c4-a2d1-17820a9e9bd6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.614434 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.614467 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-config\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.614497 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.614509 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr48j\" (UniqueName: \"kubernetes.io/projected/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-kube-api-access-lr48j\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.614517 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fece91d-36f5-46c4-a2d1-17820a9e9bd6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.698651 4895 generic.go:334] "Generic (PLEG): container finished" podID="3fece91d-36f5-46c4-a2d1-17820a9e9bd6" containerID="0ec7025e532f227a5a66bef763bd7e8122c8bcb11cc3cba81eecbda21db1f2cd" exitCode=0 Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.698685 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.698703 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" event={"ID":"3fece91d-36f5-46c4-a2d1-17820a9e9bd6","Type":"ContainerDied","Data":"0ec7025e532f227a5a66bef763bd7e8122c8bcb11cc3cba81eecbda21db1f2cd"} Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.698753 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c55bc4bd7-m5ng5" event={"ID":"3fece91d-36f5-46c4-a2d1-17820a9e9bd6","Type":"ContainerDied","Data":"83a390d9a67e9a1fc2d18f0428328585c1da0566f0e5968e14464458d5ffce05"} Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.698773 4895 scope.go:117] "RemoveContainer" containerID="0ec7025e532f227a5a66bef763bd7e8122c8bcb11cc3cba81eecbda21db1f2cd" Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.728459 4895 scope.go:117] "RemoveContainer" containerID="fc3276daef2f3fc664efc25e2c37bc01e63af61d9c184ee72e08a9653326434d" Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.746055 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c55bc4bd7-m5ng5"] Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.755580 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c55bc4bd7-m5ng5"] Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.769206 4895 scope.go:117] "RemoveContainer" containerID="0ec7025e532f227a5a66bef763bd7e8122c8bcb11cc3cba81eecbda21db1f2cd" Dec 06 09:04:13 crc kubenswrapper[4895]: E1206 09:04:13.769896 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ec7025e532f227a5a66bef763bd7e8122c8bcb11cc3cba81eecbda21db1f2cd\": container with ID starting with 0ec7025e532f227a5a66bef763bd7e8122c8bcb11cc3cba81eecbda21db1f2cd not found: ID does not exist" containerID="0ec7025e532f227a5a66bef763bd7e8122c8bcb11cc3cba81eecbda21db1f2cd" Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.769952 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ec7025e532f227a5a66bef763bd7e8122c8bcb11cc3cba81eecbda21db1f2cd"} err="failed to get container status \"0ec7025e532f227a5a66bef763bd7e8122c8bcb11cc3cba81eecbda21db1f2cd\": rpc error: code = NotFound desc = could not find container \"0ec7025e532f227a5a66bef763bd7e8122c8bcb11cc3cba81eecbda21db1f2cd\": container with ID starting with 0ec7025e532f227a5a66bef763bd7e8122c8bcb11cc3cba81eecbda21db1f2cd not found: ID does not exist" Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.769991 4895 scope.go:117] "RemoveContainer" containerID="fc3276daef2f3fc664efc25e2c37bc01e63af61d9c184ee72e08a9653326434d" Dec 06 09:04:13 crc kubenswrapper[4895]: E1206 09:04:13.770393 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc3276daef2f3fc664efc25e2c37bc01e63af61d9c184ee72e08a9653326434d\": container with ID starting with fc3276daef2f3fc664efc25e2c37bc01e63af61d9c184ee72e08a9653326434d not found: ID does not exist" containerID="fc3276daef2f3fc664efc25e2c37bc01e63af61d9c184ee72e08a9653326434d" Dec 06 09:04:13 crc kubenswrapper[4895]: I1206 09:04:13.770446 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc3276daef2f3fc664efc25e2c37bc01e63af61d9c184ee72e08a9653326434d"} err="failed to get container status 
\"fc3276daef2f3fc664efc25e2c37bc01e63af61d9c184ee72e08a9653326434d\": rpc error: code = NotFound desc = could not find container \"fc3276daef2f3fc664efc25e2c37bc01e63af61d9c184ee72e08a9653326434d\": container with ID starting with fc3276daef2f3fc664efc25e2c37bc01e63af61d9c184ee72e08a9653326434d not found: ID does not exist" Dec 06 09:04:14 crc kubenswrapper[4895]: I1206 09:04:14.063692 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fece91d-36f5-46c4-a2d1-17820a9e9bd6" path="/var/lib/kubelet/pods/3fece91d-36f5-46c4-a2d1-17820a9e9bd6/volumes" Dec 06 09:04:15 crc kubenswrapper[4895]: I1206 09:04:15.208570 4895 scope.go:117] "RemoveContainer" containerID="bcdda53a7eca4ece6e3543e7d0f7d5207bae867f44dacc5c45852c87b77ea3bc" Dec 06 09:04:15 crc kubenswrapper[4895]: I1206 09:04:15.241648 4895 scope.go:117] "RemoveContainer" containerID="7ca47d46dbe4c29b8fb7df307baab144c7f965d5fe5dd043866fcedf0e4a52fe" Dec 06 09:04:15 crc kubenswrapper[4895]: I1206 09:04:15.273620 4895 scope.go:117] "RemoveContainer" containerID="8db875b5971018daf9845ab2b533639e69faa3152e086985be893c5f50c44d27" Dec 06 09:04:17 crc kubenswrapper[4895]: I1206 09:04:17.315856 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 06 09:04:17 crc kubenswrapper[4895]: I1206 09:04:17.316243 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 06 09:04:17 crc kubenswrapper[4895]: I1206 09:04:17.366417 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 06 09:04:17 crc kubenswrapper[4895]: I1206 09:04:17.388576 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 06 09:04:17 crc kubenswrapper[4895]: I1206 09:04:17.742624 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 06 09:04:17 crc kubenswrapper[4895]: I1206 09:04:17.742707 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 06 09:04:19 crc kubenswrapper[4895]: I1206 09:04:19.030498 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 06 09:04:19 crc kubenswrapper[4895]: I1206 09:04:19.030823 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 06 09:04:19 crc kubenswrapper[4895]: I1206 09:04:19.066169 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 06 09:04:19 crc kubenswrapper[4895]: I1206 09:04:19.100714 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 06 09:04:19 crc kubenswrapper[4895]: I1206 09:04:19.733456 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 06 09:04:19 crc kubenswrapper[4895]: I1206 09:04:19.747788 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 06 09:04:19 crc kubenswrapper[4895]: I1206 09:04:19.771835 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 06 09:04:19 crc kubenswrapper[4895]: I1206 09:04:19.771931 4895 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 06 09:04:21 crc kubenswrapper[4895]: I1206 09:04:21.946756 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 06 09:04:21 crc kubenswrapper[4895]: I1206 09:04:21.947629 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 09:04:21 crc kubenswrapper[4895]: I1206 09:04:21.953370 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.431614 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-5f9g5"] Dec 06 09:04:27 crc kubenswrapper[4895]: E1206 09:04:27.433039 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fece91d-36f5-46c4-a2d1-17820a9e9bd6" containerName="dnsmasq-dns" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.433061 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fece91d-36f5-46c4-a2d1-17820a9e9bd6" containerName="dnsmasq-dns" Dec 06 09:04:27 crc kubenswrapper[4895]: E1206 09:04:27.433094 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fece91d-36f5-46c4-a2d1-17820a9e9bd6" containerName="init" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.433107 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fece91d-36f5-46c4-a2d1-17820a9e9bd6" containerName="init" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.433675 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fece91d-36f5-46c4-a2d1-17820a9e9bd6" containerName="dnsmasq-dns" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.434935 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5f9g5" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.511553 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5f9g5"] Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.545501 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4976cbe1-e91d-49d0-9901-d998b029337c-operator-scripts\") pod \"placement-db-create-5f9g5\" (UID: \"4976cbe1-e91d-49d0-9901-d998b029337c\") " pod="openstack/placement-db-create-5f9g5" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.545667 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcjwt\" (UniqueName: \"kubernetes.io/projected/4976cbe1-e91d-49d0-9901-d998b029337c-kube-api-access-xcjwt\") pod \"placement-db-create-5f9g5\" (UID: \"4976cbe1-e91d-49d0-9901-d998b029337c\") " pod="openstack/placement-db-create-5f9g5" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.546511 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9c44-account-create-update-jkmz4"] Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.548092 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9c44-account-create-update-jkmz4" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.549820 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.556856 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9c44-account-create-update-jkmz4"] Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.646391 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92e82a30-45be-425a-ab4e-19491702a3d3-operator-scripts\") pod \"placement-9c44-account-create-update-jkmz4\" (UID: \"92e82a30-45be-425a-ab4e-19491702a3d3\") " pod="openstack/placement-9c44-account-create-update-jkmz4" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.646486 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4976cbe1-e91d-49d0-9901-d998b029337c-operator-scripts\") pod \"placement-db-create-5f9g5\" (UID: \"4976cbe1-e91d-49d0-9901-d998b029337c\") " pod="openstack/placement-db-create-5f9g5" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.646569 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcjwt\" (UniqueName: \"kubernetes.io/projected/4976cbe1-e91d-49d0-9901-d998b029337c-kube-api-access-xcjwt\") pod \"placement-db-create-5f9g5\" (UID: \"4976cbe1-e91d-49d0-9901-d998b029337c\") " pod="openstack/placement-db-create-5f9g5" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.646758 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x659s\" (UniqueName: \"kubernetes.io/projected/92e82a30-45be-425a-ab4e-19491702a3d3-kube-api-access-x659s\") pod \"placement-9c44-account-create-update-jkmz4\" (UID: \"92e82a30-45be-425a-ab4e-19491702a3d3\") " pod="openstack/placement-9c44-account-create-update-jkmz4" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.647197 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4976cbe1-e91d-49d0-9901-d998b029337c-operator-scripts\") pod \"placement-db-create-5f9g5\" (UID: \"4976cbe1-e91d-49d0-9901-d998b029337c\") " pod="openstack/placement-db-create-5f9g5" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.666193 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcjwt\" (UniqueName: \"kubernetes.io/projected/4976cbe1-e91d-49d0-9901-d998b029337c-kube-api-access-xcjwt\") pod \"placement-db-create-5f9g5\" (UID: \"4976cbe1-e91d-49d0-9901-d998b029337c\") " pod="openstack/placement-db-create-5f9g5" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.747778 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x659s\" (UniqueName: \"kubernetes.io/projected/92e82a30-45be-425a-ab4e-19491702a3d3-kube-api-access-x659s\") pod \"placement-9c44-account-create-update-jkmz4\" (UID: \"92e82a30-45be-425a-ab4e-19491702a3d3\") " pod="openstack/placement-9c44-account-create-update-jkmz4" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.748019 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/92e82a30-45be-425a-ab4e-19491702a3d3-operator-scripts\") pod \"placement-9c44-account-create-update-jkmz4\" (UID: \"92e82a30-45be-425a-ab4e-19491702a3d3\") " pod="openstack/placement-9c44-account-create-update-jkmz4" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.748740 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92e82a30-45be-425a-ab4e-19491702a3d3-operator-scripts\") pod \"placement-9c44-account-create-update-jkmz4\" (UID: \"92e82a30-45be-425a-ab4e-19491702a3d3\") " pod="openstack/placement-9c44-account-create-update-jkmz4" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.764074 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x659s\" (UniqueName: \"kubernetes.io/projected/92e82a30-45be-425a-ab4e-19491702a3d3-kube-api-access-x659s\") pod \"placement-9c44-account-create-update-jkmz4\" (UID: \"92e82a30-45be-425a-ab4e-19491702a3d3\") " pod="openstack/placement-9c44-account-create-update-jkmz4" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.812975 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5f9g5" Dec 06 09:04:27 crc kubenswrapper[4895]: I1206 09:04:27.874455 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9c44-account-create-update-jkmz4" Dec 06 09:04:28 crc kubenswrapper[4895]: W1206 09:04:28.290539 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4976cbe1_e91d_49d0_9901_d998b029337c.slice/crio-f30b3570dd266cf60d0cea815c6b3787acfd17f98fe4c4a24cddf7962ceafb3b WatchSource:0}: Error finding container f30b3570dd266cf60d0cea815c6b3787acfd17f98fe4c4a24cddf7962ceafb3b: Status 404 returned error can't find the container with id f30b3570dd266cf60d0cea815c6b3787acfd17f98fe4c4a24cddf7962ceafb3b Dec 06 09:04:28 crc kubenswrapper[4895]: I1206 09:04:28.291289 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5f9g5"] Dec 06 09:04:28 crc kubenswrapper[4895]: I1206 09:04:28.405710 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9c44-account-create-update-jkmz4"] Dec 06 09:04:28 crc kubenswrapper[4895]: W1206 09:04:28.412183 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92e82a30_45be_425a_ab4e_19491702a3d3.slice/crio-a17a861016a7b95f56c1bef709a4c4d4e1276a5eb14c3388a5e9d1d878177000 WatchSource:0}: Error finding container a17a861016a7b95f56c1bef709a4c4d4e1276a5eb14c3388a5e9d1d878177000: Status 404 returned error can't find the container with id a17a861016a7b95f56c1bef709a4c4d4e1276a5eb14c3388a5e9d1d878177000 Dec 06 09:04:28 crc kubenswrapper[4895]: I1206 09:04:28.944808 4895 generic.go:334] "Generic (PLEG): container finished" podID="4976cbe1-e91d-49d0-9901-d998b029337c" containerID="23181971eca138bef2408c3aada42b7a2d1e37ace7b6682a55e1b6280fbf7394" exitCode=0 Dec 06 09:04:28 crc kubenswrapper[4895]: I1206 09:04:28.944945 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5f9g5" event={"ID":"4976cbe1-e91d-49d0-9901-d998b029337c","Type":"ContainerDied","Data":"23181971eca138bef2408c3aada42b7a2d1e37ace7b6682a55e1b6280fbf7394"} Dec 06 09:04:28 crc kubenswrapper[4895]: I1206 09:04:28.945557 4895 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/placement-db-create-5f9g5" event={"ID":"4976cbe1-e91d-49d0-9901-d998b029337c","Type":"ContainerStarted","Data":"f30b3570dd266cf60d0cea815c6b3787acfd17f98fe4c4a24cddf7962ceafb3b"} Dec 06 09:04:28 crc kubenswrapper[4895]: I1206 09:04:28.948182 4895 generic.go:334] "Generic (PLEG): container finished" podID="92e82a30-45be-425a-ab4e-19491702a3d3" containerID="1a5542954132af197a82ce5da3bd2449108a679c54c1eb449bdf81901571b593" exitCode=0 Dec 06 09:04:28 crc kubenswrapper[4895]: I1206 09:04:28.948319 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9c44-account-create-update-jkmz4" event={"ID":"92e82a30-45be-425a-ab4e-19491702a3d3","Type":"ContainerDied","Data":"1a5542954132af197a82ce5da3bd2449108a679c54c1eb449bdf81901571b593"} Dec 06 09:04:28 crc kubenswrapper[4895]: I1206 09:04:28.948536 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9c44-account-create-update-jkmz4" event={"ID":"92e82a30-45be-425a-ab4e-19491702a3d3","Type":"ContainerStarted","Data":"a17a861016a7b95f56c1bef709a4c4d4e1276a5eb14c3388a5e9d1d878177000"} Dec 06 09:04:30 crc kubenswrapper[4895]: I1206 09:04:30.449415 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5f9g5" Dec 06 09:04:30 crc kubenswrapper[4895]: I1206 09:04:30.460846 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9c44-account-create-update-jkmz4" Dec 06 09:04:30 crc kubenswrapper[4895]: I1206 09:04:30.512638 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92e82a30-45be-425a-ab4e-19491702a3d3-operator-scripts\") pod \"92e82a30-45be-425a-ab4e-19491702a3d3\" (UID: \"92e82a30-45be-425a-ab4e-19491702a3d3\") " Dec 06 09:04:30 crc kubenswrapper[4895]: I1206 09:04:30.512904 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x659s\" (UniqueName: \"kubernetes.io/projected/92e82a30-45be-425a-ab4e-19491702a3d3-kube-api-access-x659s\") pod \"92e82a30-45be-425a-ab4e-19491702a3d3\" (UID: \"92e82a30-45be-425a-ab4e-19491702a3d3\") " Dec 06 09:04:30 crc kubenswrapper[4895]: I1206 09:04:30.513061 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcjwt\" (UniqueName: \"kubernetes.io/projected/4976cbe1-e91d-49d0-9901-d998b029337c-kube-api-access-xcjwt\") pod \"4976cbe1-e91d-49d0-9901-d998b029337c\" (UID: \"4976cbe1-e91d-49d0-9901-d998b029337c\") " Dec 06 09:04:30 crc kubenswrapper[4895]: I1206 09:04:30.513097 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4976cbe1-e91d-49d0-9901-d998b029337c-operator-scripts\") pod \"4976cbe1-e91d-49d0-9901-d998b029337c\" (UID: \"4976cbe1-e91d-49d0-9901-d998b029337c\") " Dec 06 09:04:30 crc kubenswrapper[4895]: I1206 09:04:30.514780 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92e82a30-45be-425a-ab4e-19491702a3d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92e82a30-45be-425a-ab4e-19491702a3d3" (UID: "92e82a30-45be-425a-ab4e-19491702a3d3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:04:30 crc kubenswrapper[4895]: I1206 09:04:30.515489 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4976cbe1-e91d-49d0-9901-d998b029337c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4976cbe1-e91d-49d0-9901-d998b029337c" (UID: "4976cbe1-e91d-49d0-9901-d998b029337c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:04:30 crc kubenswrapper[4895]: I1206 09:04:30.519969 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92e82a30-45be-425a-ab4e-19491702a3d3-kube-api-access-x659s" (OuterVolumeSpecName: "kube-api-access-x659s") pod "92e82a30-45be-425a-ab4e-19491702a3d3" (UID: "92e82a30-45be-425a-ab4e-19491702a3d3"). InnerVolumeSpecName "kube-api-access-x659s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:04:30 crc kubenswrapper[4895]: I1206 09:04:30.521617 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4976cbe1-e91d-49d0-9901-d998b029337c-kube-api-access-xcjwt" (OuterVolumeSpecName: "kube-api-access-xcjwt") pod "4976cbe1-e91d-49d0-9901-d998b029337c" (UID: "4976cbe1-e91d-49d0-9901-d998b029337c"). InnerVolumeSpecName "kube-api-access-xcjwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:04:30 crc kubenswrapper[4895]: I1206 09:04:30.616572 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92e82a30-45be-425a-ab4e-19491702a3d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:30 crc kubenswrapper[4895]: I1206 09:04:30.616614 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x659s\" (UniqueName: \"kubernetes.io/projected/92e82a30-45be-425a-ab4e-19491702a3d3-kube-api-access-x659s\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:30 crc kubenswrapper[4895]: I1206 09:04:30.616625 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcjwt\" (UniqueName: \"kubernetes.io/projected/4976cbe1-e91d-49d0-9901-d998b029337c-kube-api-access-xcjwt\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:30 crc kubenswrapper[4895]: I1206 09:04:30.616633 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4976cbe1-e91d-49d0-9901-d998b029337c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:30 crc kubenswrapper[4895]: I1206 09:04:30.974186 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5f9g5" event={"ID":"4976cbe1-e91d-49d0-9901-d998b029337c","Type":"ContainerDied","Data":"f30b3570dd266cf60d0cea815c6b3787acfd17f98fe4c4a24cddf7962ceafb3b"} Dec 06 09:04:30 crc kubenswrapper[4895]: I1206 09:04:30.974266 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f30b3570dd266cf60d0cea815c6b3787acfd17f98fe4c4a24cddf7962ceafb3b" Dec 06 09:04:30 crc kubenswrapper[4895]: I1206 09:04:30.974227 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-5f9g5" Dec 06 09:04:30 crc kubenswrapper[4895]: I1206 09:04:30.975673 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9c44-account-create-update-jkmz4" event={"ID":"92e82a30-45be-425a-ab4e-19491702a3d3","Type":"ContainerDied","Data":"a17a861016a7b95f56c1bef709a4c4d4e1276a5eb14c3388a5e9d1d878177000"} Dec 06 09:04:30 crc kubenswrapper[4895]: I1206 09:04:30.975712 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a17a861016a7b95f56c1bef709a4c4d4e1276a5eb14c3388a5e9d1d878177000" Dec 06 09:04:30 crc kubenswrapper[4895]: I1206 09:04:30.975726 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9c44-account-create-update-jkmz4" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.782959 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76ff5d4687-cbsc6"] Dec 06 09:04:32 crc kubenswrapper[4895]: E1206 09:04:32.783747 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4976cbe1-e91d-49d0-9901-d998b029337c" containerName="mariadb-database-create" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.783767 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4976cbe1-e91d-49d0-9901-d998b029337c" containerName="mariadb-database-create" Dec 06 09:04:32 crc kubenswrapper[4895]: E1206 09:04:32.783777 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92e82a30-45be-425a-ab4e-19491702a3d3" containerName="mariadb-account-create-update" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.783784 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="92e82a30-45be-425a-ab4e-19491702a3d3" containerName="mariadb-account-create-update" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.783993 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4976cbe1-e91d-49d0-9901-d998b029337c" containerName="mariadb-database-create" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.784014 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="92e82a30-45be-425a-ab4e-19491702a3d3" containerName="mariadb-account-create-update" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.785233 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.810684 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-g69zh"] Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.811962 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-g69zh" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.814801 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.814899 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-t6vj7" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.821839 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76ff5d4687-cbsc6"] Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.826042 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.858590 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-ovsdbserver-sb\") pod \"dnsmasq-dns-76ff5d4687-cbsc6\" (UID: \"e41a06e0-4b8a-473a-8eb9-5681761909f2\") " pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.858670 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-dns-svc\") pod \"dnsmasq-dns-76ff5d4687-cbsc6\" (UID: \"e41a06e0-4b8a-473a-8eb9-5681761909f2\") " pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.858723 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75653761-0f58-43f8-a412-b84731fcb7d6-combined-ca-bundle\") pod \"placement-db-sync-g69zh\" (UID: \"75653761-0f58-43f8-a412-b84731fcb7d6\") " pod="openstack/placement-db-sync-g69zh" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.858751 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-config\") pod \"dnsmasq-dns-76ff5d4687-cbsc6\" (UID: \"e41a06e0-4b8a-473a-8eb9-5681761909f2\") " pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.858803 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-ovsdbserver-nb\") pod \"dnsmasq-dns-76ff5d4687-cbsc6\" (UID: \"e41a06e0-4b8a-473a-8eb9-5681761909f2\") " pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.858846 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75653761-0f58-43f8-a412-b84731fcb7d6-config-data\") pod \"placement-db-sync-g69zh\" (UID: \"75653761-0f58-43f8-a412-b84731fcb7d6\") " pod="openstack/placement-db-sync-g69zh" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.858871 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt6lt\" (UniqueName: \"kubernetes.io/projected/75653761-0f58-43f8-a412-b84731fcb7d6-kube-api-access-bt6lt\") pod \"placement-db-sync-g69zh\" (UID: \"75653761-0f58-43f8-a412-b84731fcb7d6\") " pod="openstack/placement-db-sync-g69zh" Dec 06 
09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.858981 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75653761-0f58-43f8-a412-b84731fcb7d6-logs\") pod \"placement-db-sync-g69zh\" (UID: \"75653761-0f58-43f8-a412-b84731fcb7d6\") " pod="openstack/placement-db-sync-g69zh" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.859015 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75653761-0f58-43f8-a412-b84731fcb7d6-scripts\") pod \"placement-db-sync-g69zh\" (UID: \"75653761-0f58-43f8-a412-b84731fcb7d6\") " pod="openstack/placement-db-sync-g69zh" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.859054 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmq2s\" (UniqueName: \"kubernetes.io/projected/e41a06e0-4b8a-473a-8eb9-5681761909f2-kube-api-access-lmq2s\") pod \"dnsmasq-dns-76ff5d4687-cbsc6\" (UID: \"e41a06e0-4b8a-473a-8eb9-5681761909f2\") " pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.868163 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-g69zh"] Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.960102 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-ovsdbserver-sb\") pod \"dnsmasq-dns-76ff5d4687-cbsc6\" (UID: \"e41a06e0-4b8a-473a-8eb9-5681761909f2\") " pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.960153 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-dns-svc\") pod \"dnsmasq-dns-76ff5d4687-cbsc6\" (UID: \"e41a06e0-4b8a-473a-8eb9-5681761909f2\") " pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.960184 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75653761-0f58-43f8-a412-b84731fcb7d6-combined-ca-bundle\") pod \"placement-db-sync-g69zh\" (UID: \"75653761-0f58-43f8-a412-b84731fcb7d6\") " pod="openstack/placement-db-sync-g69zh" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.960206 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-config\") pod \"dnsmasq-dns-76ff5d4687-cbsc6\" (UID: \"e41a06e0-4b8a-473a-8eb9-5681761909f2\") " pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.960222 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-ovsdbserver-nb\") pod \"dnsmasq-dns-76ff5d4687-cbsc6\" (UID: \"e41a06e0-4b8a-473a-8eb9-5681761909f2\") " pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.960247 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75653761-0f58-43f8-a412-b84731fcb7d6-config-data\") pod \"placement-db-sync-g69zh\" (UID: 
\"75653761-0f58-43f8-a412-b84731fcb7d6\") " pod="openstack/placement-db-sync-g69zh" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.960268 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt6lt\" (UniqueName: \"kubernetes.io/projected/75653761-0f58-43f8-a412-b84731fcb7d6-kube-api-access-bt6lt\") pod \"placement-db-sync-g69zh\" (UID: \"75653761-0f58-43f8-a412-b84731fcb7d6\") " pod="openstack/placement-db-sync-g69zh" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.960333 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75653761-0f58-43f8-a412-b84731fcb7d6-logs\") pod \"placement-db-sync-g69zh\" (UID: \"75653761-0f58-43f8-a412-b84731fcb7d6\") " pod="openstack/placement-db-sync-g69zh" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.960364 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75653761-0f58-43f8-a412-b84731fcb7d6-scripts\") pod \"placement-db-sync-g69zh\" (UID: \"75653761-0f58-43f8-a412-b84731fcb7d6\") " pod="openstack/placement-db-sync-g69zh" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.960398 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmq2s\" (UniqueName: \"kubernetes.io/projected/e41a06e0-4b8a-473a-8eb9-5681761909f2-kube-api-access-lmq2s\") pod \"dnsmasq-dns-76ff5d4687-cbsc6\" (UID: \"e41a06e0-4b8a-473a-8eb9-5681761909f2\") " pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.960914 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75653761-0f58-43f8-a412-b84731fcb7d6-logs\") pod \"placement-db-sync-g69zh\" (UID: \"75653761-0f58-43f8-a412-b84731fcb7d6\") " pod="openstack/placement-db-sync-g69zh" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.961376 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-ovsdbserver-nb\") pod \"dnsmasq-dns-76ff5d4687-cbsc6\" (UID: \"e41a06e0-4b8a-473a-8eb9-5681761909f2\") " pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.961379 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-ovsdbserver-sb\") pod \"dnsmasq-dns-76ff5d4687-cbsc6\" (UID: \"e41a06e0-4b8a-473a-8eb9-5681761909f2\") " pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.961381 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-config\") pod \"dnsmasq-dns-76ff5d4687-cbsc6\" (UID: \"e41a06e0-4b8a-473a-8eb9-5681761909f2\") " pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.962127 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-dns-svc\") pod \"dnsmasq-dns-76ff5d4687-cbsc6\" (UID: \"e41a06e0-4b8a-473a-8eb9-5681761909f2\") " pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.964999 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75653761-0f58-43f8-a412-b84731fcb7d6-scripts\") pod \"placement-db-sync-g69zh\" (UID: \"75653761-0f58-43f8-a412-b84731fcb7d6\") " pod="openstack/placement-db-sync-g69zh" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.965499 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75653761-0f58-43f8-a412-b84731fcb7d6-combined-ca-bundle\") pod \"placement-db-sync-g69zh\" (UID: \"75653761-0f58-43f8-a412-b84731fcb7d6\") " pod="openstack/placement-db-sync-g69zh" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.967345 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75653761-0f58-43f8-a412-b84731fcb7d6-config-data\") pod \"placement-db-sync-g69zh\" (UID: \"75653761-0f58-43f8-a412-b84731fcb7d6\") " pod="openstack/placement-db-sync-g69zh" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.978375 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmq2s\" (UniqueName: \"kubernetes.io/projected/e41a06e0-4b8a-473a-8eb9-5681761909f2-kube-api-access-lmq2s\") pod \"dnsmasq-dns-76ff5d4687-cbsc6\" (UID: \"e41a06e0-4b8a-473a-8eb9-5681761909f2\") " pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:04:32 crc kubenswrapper[4895]: I1206 09:04:32.980826 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt6lt\" (UniqueName: \"kubernetes.io/projected/75653761-0f58-43f8-a412-b84731fcb7d6-kube-api-access-bt6lt\") pod \"placement-db-sync-g69zh\" (UID: \"75653761-0f58-43f8-a412-b84731fcb7d6\") " pod="openstack/placement-db-sync-g69zh" Dec 06 09:04:33 crc kubenswrapper[4895]: I1206 09:04:33.102859 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:04:33 crc kubenswrapper[4895]: I1206 09:04:33.129538 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-g69zh" Dec 06 09:04:33 crc kubenswrapper[4895]: I1206 09:04:33.556527 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76ff5d4687-cbsc6"] Dec 06 09:04:33 crc kubenswrapper[4895]: W1206 09:04:33.663534 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75653761_0f58_43f8_a412_b84731fcb7d6.slice/crio-f423dc07af8bf1df32c2b32ce8913104c276d1aad11ae960b65ded53d7e9392d WatchSource:0}: Error finding container f423dc07af8bf1df32c2b32ce8913104c276d1aad11ae960b65ded53d7e9392d: Status 404 returned error can't find the container with id f423dc07af8bf1df32c2b32ce8913104c276d1aad11ae960b65ded53d7e9392d Dec 06 09:04:33 crc kubenswrapper[4895]: I1206 09:04:33.671799 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-g69zh"] Dec 06 09:04:34 crc kubenswrapper[4895]: I1206 09:04:34.020034 4895 generic.go:334] "Generic (PLEG): container finished" podID="e41a06e0-4b8a-473a-8eb9-5681761909f2" containerID="5649b679f0f020742fce6be012928a71cd0d285569df520e791ad16884c73e65" exitCode=0 Dec 06 09:04:34 crc kubenswrapper[4895]: I1206 09:04:34.020117 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" event={"ID":"e41a06e0-4b8a-473a-8eb9-5681761909f2","Type":"ContainerDied","Data":"5649b679f0f020742fce6be012928a71cd0d285569df520e791ad16884c73e65"} Dec 06 09:04:34 crc kubenswrapper[4895]: I1206 09:04:34.020150 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" event={"ID":"e41a06e0-4b8a-473a-8eb9-5681761909f2","Type":"ContainerStarted","Data":"ca75a68b98be4334755c0e35396d598b66772dba85205c39f6dda7c4d59ec8f0"} Dec 06 09:04:34 crc kubenswrapper[4895]: I1206 09:04:34.023667 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-g69zh" event={"ID":"75653761-0f58-43f8-a412-b84731fcb7d6","Type":"ContainerStarted","Data":"f423dc07af8bf1df32c2b32ce8913104c276d1aad11ae960b65ded53d7e9392d"} Dec 06 09:04:35 crc kubenswrapper[4895]: I1206 09:04:35.035998 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" event={"ID":"e41a06e0-4b8a-473a-8eb9-5681761909f2","Type":"ContainerStarted","Data":"3df3af1f372e58295549dffc35932253860465b0a91f62995c3e4a5b8b9eb924"} Dec 06 09:04:35 crc kubenswrapper[4895]: I1206 09:04:35.036662 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:04:35 crc kubenswrapper[4895]: I1206 09:04:35.053707 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" podStartSLOduration=3.053684284 podStartE2EDuration="3.053684284s" podCreationTimestamp="2025-12-06 09:04:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:04:35.053485029 +0000 UTC m=+7637.454873899" watchObservedRunningTime="2025-12-06 09:04:35.053684284 +0000 UTC m=+7637.455073154" Dec 06 09:04:38 crc kubenswrapper[4895]: I1206 09:04:38.075707 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-g69zh" event={"ID":"75653761-0f58-43f8-a412-b84731fcb7d6","Type":"ContainerStarted","Data":"3d790c8dce46674c9701bfa1c5117af3b38a03df3df8c56bc5ef1bc84e94d6a5"} Dec 06 09:04:38 crc kubenswrapper[4895]: I1206 
09:04:38.108987 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-g69zh" podStartSLOduration=2.742648028 podStartE2EDuration="6.108967518s" podCreationTimestamp="2025-12-06 09:04:32 +0000 UTC" firstStartedPulling="2025-12-06 09:04:33.668790567 +0000 UTC m=+7636.070179437" lastFinishedPulling="2025-12-06 09:04:37.035110057 +0000 UTC m=+7639.436498927" observedRunningTime="2025-12-06 09:04:38.105136386 +0000 UTC m=+7640.506525256" watchObservedRunningTime="2025-12-06 09:04:38.108967518 +0000 UTC m=+7640.510356378" Dec 06 09:04:39 crc kubenswrapper[4895]: I1206 09:04:39.084561 4895 generic.go:334] "Generic (PLEG): container finished" podID="75653761-0f58-43f8-a412-b84731fcb7d6" containerID="3d790c8dce46674c9701bfa1c5117af3b38a03df3df8c56bc5ef1bc84e94d6a5" exitCode=0 Dec 06 09:04:39 crc kubenswrapper[4895]: I1206 09:04:39.084647 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-g69zh" event={"ID":"75653761-0f58-43f8-a412-b84731fcb7d6","Type":"ContainerDied","Data":"3d790c8dce46674c9701bfa1c5117af3b38a03df3df8c56bc5ef1bc84e94d6a5"} Dec 06 09:04:40 crc kubenswrapper[4895]: I1206 09:04:40.532009 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-g69zh" Dec 06 09:04:40 crc kubenswrapper[4895]: I1206 09:04:40.665108 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt6lt\" (UniqueName: \"kubernetes.io/projected/75653761-0f58-43f8-a412-b84731fcb7d6-kube-api-access-bt6lt\") pod \"75653761-0f58-43f8-a412-b84731fcb7d6\" (UID: \"75653761-0f58-43f8-a412-b84731fcb7d6\") " Dec 06 09:04:40 crc kubenswrapper[4895]: I1206 09:04:40.665184 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75653761-0f58-43f8-a412-b84731fcb7d6-combined-ca-bundle\") pod \"75653761-0f58-43f8-a412-b84731fcb7d6\" (UID: \"75653761-0f58-43f8-a412-b84731fcb7d6\") " Dec 06 09:04:40 crc kubenswrapper[4895]: I1206 09:04:40.665249 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75653761-0f58-43f8-a412-b84731fcb7d6-logs\") pod \"75653761-0f58-43f8-a412-b84731fcb7d6\" (UID: \"75653761-0f58-43f8-a412-b84731fcb7d6\") " Dec 06 09:04:40 crc kubenswrapper[4895]: I1206 09:04:40.665462 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75653761-0f58-43f8-a412-b84731fcb7d6-scripts\") pod \"75653761-0f58-43f8-a412-b84731fcb7d6\" (UID: \"75653761-0f58-43f8-a412-b84731fcb7d6\") " Dec 06 09:04:40 crc kubenswrapper[4895]: I1206 09:04:40.665581 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75653761-0f58-43f8-a412-b84731fcb7d6-config-data\") pod \"75653761-0f58-43f8-a412-b84731fcb7d6\" (UID: \"75653761-0f58-43f8-a412-b84731fcb7d6\") " Dec 06 09:04:40 crc kubenswrapper[4895]: I1206 09:04:40.666076 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75653761-0f58-43f8-a412-b84731fcb7d6-logs" (OuterVolumeSpecName: "logs") pod "75653761-0f58-43f8-a412-b84731fcb7d6" (UID: "75653761-0f58-43f8-a412-b84731fcb7d6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:04:40 crc kubenswrapper[4895]: I1206 09:04:40.666395 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75653761-0f58-43f8-a412-b84731fcb7d6-logs\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:40 crc kubenswrapper[4895]: I1206 09:04:40.673684 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75653761-0f58-43f8-a412-b84731fcb7d6-scripts" (OuterVolumeSpecName: "scripts") pod "75653761-0f58-43f8-a412-b84731fcb7d6" (UID: "75653761-0f58-43f8-a412-b84731fcb7d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:04:40 crc kubenswrapper[4895]: I1206 09:04:40.680302 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75653761-0f58-43f8-a412-b84731fcb7d6-kube-api-access-bt6lt" (OuterVolumeSpecName: "kube-api-access-bt6lt") pod "75653761-0f58-43f8-a412-b84731fcb7d6" (UID: "75653761-0f58-43f8-a412-b84731fcb7d6"). InnerVolumeSpecName "kube-api-access-bt6lt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:04:40 crc kubenswrapper[4895]: I1206 09:04:40.696298 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75653761-0f58-43f8-a412-b84731fcb7d6-config-data" (OuterVolumeSpecName: "config-data") pod "75653761-0f58-43f8-a412-b84731fcb7d6" (UID: "75653761-0f58-43f8-a412-b84731fcb7d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:04:40 crc kubenswrapper[4895]: I1206 09:04:40.724412 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75653761-0f58-43f8-a412-b84731fcb7d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75653761-0f58-43f8-a412-b84731fcb7d6" (UID: "75653761-0f58-43f8-a412-b84731fcb7d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:04:40 crc kubenswrapper[4895]: I1206 09:04:40.768366 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75653761-0f58-43f8-a412-b84731fcb7d6-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:40 crc kubenswrapper[4895]: I1206 09:04:40.768402 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75653761-0f58-43f8-a412-b84731fcb7d6-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:40 crc kubenswrapper[4895]: I1206 09:04:40.768416 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt6lt\" (UniqueName: \"kubernetes.io/projected/75653761-0f58-43f8-a412-b84731fcb7d6-kube-api-access-bt6lt\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:40 crc kubenswrapper[4895]: I1206 09:04:40.768429 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75653761-0f58-43f8-a412-b84731fcb7d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.106120 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-g69zh" event={"ID":"75653761-0f58-43f8-a412-b84731fcb7d6","Type":"ContainerDied","Data":"f423dc07af8bf1df32c2b32ce8913104c276d1aad11ae960b65ded53d7e9392d"} Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.106176 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f423dc07af8bf1df32c2b32ce8913104c276d1aad11ae960b65ded53d7e9392d" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.106204 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-g69zh" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.309760 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-77b5c8f5cb-5gctx"] Dec 06 09:04:41 crc kubenswrapper[4895]: E1206 09:04:41.310680 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75653761-0f58-43f8-a412-b84731fcb7d6" containerName="placement-db-sync" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.310722 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="75653761-0f58-43f8-a412-b84731fcb7d6" containerName="placement-db-sync" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.311244 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="75653761-0f58-43f8-a412-b84731fcb7d6" containerName="placement-db-sync" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.313632 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.316162 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.316453 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-t6vj7" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.317608 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.325294 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-77b5c8f5cb-5gctx"] Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.381223 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e-combined-ca-bundle\") pod \"placement-77b5c8f5cb-5gctx\" (UID: \"58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e\") " pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.381289 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fnxx\" (UniqueName: \"kubernetes.io/projected/58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e-kube-api-access-8fnxx\") pod \"placement-77b5c8f5cb-5gctx\" (UID: \"58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e\") " pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.381331 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e-config-data\") pod \"placement-77b5c8f5cb-5gctx\" (UID: \"58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e\") " pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.381529 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e-logs\") pod \"placement-77b5c8f5cb-5gctx\" (UID: \"58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e\") " pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.381834 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e-scripts\") pod \"placement-77b5c8f5cb-5gctx\" (UID: \"58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e\") " pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.484055 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e-logs\") pod \"placement-77b5c8f5cb-5gctx\" (UID: \"58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e\") " pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.484241 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e-scripts\") pod \"placement-77b5c8f5cb-5gctx\" (UID: \"58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e\") " pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.484314 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e-combined-ca-bundle\") pod \"placement-77b5c8f5cb-5gctx\" (UID: \"58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e\") " pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.484341 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fnxx\" (UniqueName: \"kubernetes.io/projected/58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e-kube-api-access-8fnxx\") pod \"placement-77b5c8f5cb-5gctx\" (UID: \"58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e\") " pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.484365 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e-config-data\") pod \"placement-77b5c8f5cb-5gctx\" (UID: \"58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e\") " pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.484608 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e-logs\") pod \"placement-77b5c8f5cb-5gctx\" (UID: \"58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e\") " pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.487971 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e-scripts\") pod \"placement-77b5c8f5cb-5gctx\" (UID: \"58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e\") " pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.488421 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e-combined-ca-bundle\") pod \"placement-77b5c8f5cb-5gctx\" (UID: \"58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e\") " pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.499438 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e-config-data\") pod \"placement-77b5c8f5cb-5gctx\" (UID: \"58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e\") " pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.501675 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fnxx\" (UniqueName: \"kubernetes.io/projected/58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e-kube-api-access-8fnxx\") pod \"placement-77b5c8f5cb-5gctx\" (UID: \"58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e\") " pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:04:41 crc kubenswrapper[4895]: I1206 09:04:41.644056 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:04:42 crc kubenswrapper[4895]: I1206 09:04:42.136870 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-77b5c8f5cb-5gctx"] Dec 06 09:04:42 crc kubenswrapper[4895]: W1206 09:04:42.137416 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58c16e8e_bcc4_4467_a3a1_1a3e8131ba8e.slice/crio-42235a3b9589e4e8589132951e130eaa6b0a461080781a9e48de4a1b733e4bd1 WatchSource:0}: Error finding container 42235a3b9589e4e8589132951e130eaa6b0a461080781a9e48de4a1b733e4bd1: Status 404 returned error can't find the container with id 42235a3b9589e4e8589132951e130eaa6b0a461080781a9e48de4a1b733e4bd1 Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.105786 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.158846 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77b5c8f5cb-5gctx" event={"ID":"58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e","Type":"ContainerStarted","Data":"2017092a0bf2f0f9e0d410cd43476eaafeb52e912effca73ec4738b02cacd4e8"} Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.159406 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77b5c8f5cb-5gctx" event={"ID":"58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e","Type":"ContainerStarted","Data":"4da9f4639a27bf8ff1bae48ee094b839853cf9f445cc72847a8f117739e34086"} Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.159561 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.159595 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77b5c8f5cb-5gctx" event={"ID":"58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e","Type":"ContainerStarted","Data":"42235a3b9589e4e8589132951e130eaa6b0a461080781a9e48de4a1b733e4bd1"} Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.159628 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.199334 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d89bd9655-95z9z"] Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.199677 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" podUID="fb21cecb-580b-4a38-92d7-b7b940b68258" containerName="dnsmasq-dns" containerID="cri-o://4dd5e4292e1c9bf940385b746b13afa322bcf1778f5f3189f5a31ce8297abcb3" gracePeriod=10 Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.208105 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-77b5c8f5cb-5gctx" podStartSLOduration=2.208083244 podStartE2EDuration="2.208083244s" podCreationTimestamp="2025-12-06 09:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:04:43.184347296 +0000 UTC m=+7645.585736216" watchObservedRunningTime="2025-12-06 09:04:43.208083244 +0000 UTC m=+7645.609472124" Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.684696 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.838467 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-ovsdbserver-nb\") pod \"fb21cecb-580b-4a38-92d7-b7b940b68258\" (UID: \"fb21cecb-580b-4a38-92d7-b7b940b68258\") " Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.838569 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-ovsdbserver-sb\") pod \"fb21cecb-580b-4a38-92d7-b7b940b68258\" (UID: \"fb21cecb-580b-4a38-92d7-b7b940b68258\") " Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.838671 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bw9m\" (UniqueName: \"kubernetes.io/projected/fb21cecb-580b-4a38-92d7-b7b940b68258-kube-api-access-2bw9m\") pod \"fb21cecb-580b-4a38-92d7-b7b940b68258\" (UID: \"fb21cecb-580b-4a38-92d7-b7b940b68258\") " Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.838753 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-dns-svc\") pod \"fb21cecb-580b-4a38-92d7-b7b940b68258\" (UID: \"fb21cecb-580b-4a38-92d7-b7b940b68258\") " Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.838865 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-config\") pod \"fb21cecb-580b-4a38-92d7-b7b940b68258\" (UID: \"fb21cecb-580b-4a38-92d7-b7b940b68258\") " Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.843936 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb21cecb-580b-4a38-92d7-b7b940b68258-kube-api-access-2bw9m" (OuterVolumeSpecName: "kube-api-access-2bw9m") pod "fb21cecb-580b-4a38-92d7-b7b940b68258" (UID: "fb21cecb-580b-4a38-92d7-b7b940b68258"). InnerVolumeSpecName "kube-api-access-2bw9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.882877 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-config" (OuterVolumeSpecName: "config") pod "fb21cecb-580b-4a38-92d7-b7b940b68258" (UID: "fb21cecb-580b-4a38-92d7-b7b940b68258"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.888465 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fb21cecb-580b-4a38-92d7-b7b940b68258" (UID: "fb21cecb-580b-4a38-92d7-b7b940b68258"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.889996 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fb21cecb-580b-4a38-92d7-b7b940b68258" (UID: "fb21cecb-580b-4a38-92d7-b7b940b68258"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.894425 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fb21cecb-580b-4a38-92d7-b7b940b68258" (UID: "fb21cecb-580b-4a38-92d7-b7b940b68258"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.941181 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-config\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.941218 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.941233 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.941248 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bw9m\" (UniqueName: \"kubernetes.io/projected/fb21cecb-580b-4a38-92d7-b7b940b68258-kube-api-access-2bw9m\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:43 crc kubenswrapper[4895]: I1206 09:04:43.941260 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb21cecb-580b-4a38-92d7-b7b940b68258-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 09:04:44 crc kubenswrapper[4895]: I1206 09:04:44.171232 4895 generic.go:334] "Generic (PLEG): container finished" podID="fb21cecb-580b-4a38-92d7-b7b940b68258" containerID="4dd5e4292e1c9bf940385b746b13afa322bcf1778f5f3189f5a31ce8297abcb3" exitCode=0 Dec 06 09:04:44 crc kubenswrapper[4895]: I1206 09:04:44.171301 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" event={"ID":"fb21cecb-580b-4a38-92d7-b7b940b68258","Type":"ContainerDied","Data":"4dd5e4292e1c9bf940385b746b13afa322bcf1778f5f3189f5a31ce8297abcb3"} Dec 06 09:04:44 crc kubenswrapper[4895]: I1206 09:04:44.171363 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" event={"ID":"fb21cecb-580b-4a38-92d7-b7b940b68258","Type":"ContainerDied","Data":"32ea40efb97e2ff0d95997c474c4108c11787895c417e07756eda816e31bd418"} Dec 06 09:04:44 crc kubenswrapper[4895]: I1206 09:04:44.171392 4895 scope.go:117] "RemoveContainer" containerID="4dd5e4292e1c9bf940385b746b13afa322bcf1778f5f3189f5a31ce8297abcb3" Dec 06 09:04:44 crc kubenswrapper[4895]: I1206 09:04:44.171646 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d89bd9655-95z9z" Dec 06 09:04:44 crc kubenswrapper[4895]: I1206 09:04:44.198977 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d89bd9655-95z9z"] Dec 06 09:04:44 crc kubenswrapper[4895]: I1206 09:04:44.199046 4895 scope.go:117] "RemoveContainer" containerID="fd092103d557082f3dd91e1d99beb421ce74beb268c5847eaa7759361255a09f" Dec 06 09:04:44 crc kubenswrapper[4895]: I1206 09:04:44.209059 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d89bd9655-95z9z"] Dec 06 09:04:44 crc kubenswrapper[4895]: I1206 09:04:44.223061 4895 scope.go:117] "RemoveContainer" containerID="4dd5e4292e1c9bf940385b746b13afa322bcf1778f5f3189f5a31ce8297abcb3" Dec 06 09:04:44 crc kubenswrapper[4895]: E1206 09:04:44.223609 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dd5e4292e1c9bf940385b746b13afa322bcf1778f5f3189f5a31ce8297abcb3\": container with ID starting with 4dd5e4292e1c9bf940385b746b13afa322bcf1778f5f3189f5a31ce8297abcb3 not found: ID does not exist" containerID="4dd5e4292e1c9bf940385b746b13afa322bcf1778f5f3189f5a31ce8297abcb3" Dec 06 09:04:44 crc kubenswrapper[4895]: I1206 09:04:44.223672 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dd5e4292e1c9bf940385b746b13afa322bcf1778f5f3189f5a31ce8297abcb3"} err="failed to get container status \"4dd5e4292e1c9bf940385b746b13afa322bcf1778f5f3189f5a31ce8297abcb3\": rpc error: code = NotFound desc = could not find container \"4dd5e4292e1c9bf940385b746b13afa322bcf1778f5f3189f5a31ce8297abcb3\": container with ID starting with 4dd5e4292e1c9bf940385b746b13afa322bcf1778f5f3189f5a31ce8297abcb3 not found: ID does not exist" Dec 06 09:04:44 crc kubenswrapper[4895]: I1206 09:04:44.223719 4895 scope.go:117] "RemoveContainer" containerID="fd092103d557082f3dd91e1d99beb421ce74beb268c5847eaa7759361255a09f" Dec 06 09:04:44 crc kubenswrapper[4895]: E1206 09:04:44.224176 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd092103d557082f3dd91e1d99beb421ce74beb268c5847eaa7759361255a09f\": container with ID starting with fd092103d557082f3dd91e1d99beb421ce74beb268c5847eaa7759361255a09f not found: ID does not exist" containerID="fd092103d557082f3dd91e1d99beb421ce74beb268c5847eaa7759361255a09f" Dec 06 09:04:44 crc kubenswrapper[4895]: I1206 09:04:44.224205 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd092103d557082f3dd91e1d99beb421ce74beb268c5847eaa7759361255a09f"} err="failed to get container status \"fd092103d557082f3dd91e1d99beb421ce74beb268c5847eaa7759361255a09f\": rpc error: code = NotFound desc = could not find container \"fd092103d557082f3dd91e1d99beb421ce74beb268c5847eaa7759361255a09f\": container with ID starting with fd092103d557082f3dd91e1d99beb421ce74beb268c5847eaa7759361255a09f not found: ID does not exist" Dec 06 09:04:46 crc kubenswrapper[4895]: I1206 09:04:46.063305 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb21cecb-580b-4a38-92d7-b7b940b68258" path="/var/lib/kubelet/pods/fb21cecb-580b-4a38-92d7-b7b940b68258/volumes" Dec 06 09:05:12 crc kubenswrapper[4895]: I1206 09:05:12.850543 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:05:12 crc kubenswrapper[4895]: I1206 09:05:12.851877 4895 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-77b5c8f5cb-5gctx" Dec 06 09:05:15 crc kubenswrapper[4895]: I1206 09:05:15.371621 4895 scope.go:117] "RemoveContainer" containerID="3aae9c80283bb3dbc544bbb1942cb08dc71bf73e99ee8c198ff066f30c1eb0be" Dec 06 09:05:36 crc kubenswrapper[4895]: I1206 09:05:36.853029 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-2pdp2"] Dec 06 09:05:36 crc kubenswrapper[4895]: E1206 09:05:36.854021 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb21cecb-580b-4a38-92d7-b7b940b68258" containerName="dnsmasq-dns" Dec 06 09:05:36 crc kubenswrapper[4895]: I1206 09:05:36.854038 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb21cecb-580b-4a38-92d7-b7b940b68258" containerName="dnsmasq-dns" Dec 06 09:05:36 crc kubenswrapper[4895]: E1206 09:05:36.854077 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb21cecb-580b-4a38-92d7-b7b940b68258" containerName="init" Dec 06 09:05:36 crc kubenswrapper[4895]: I1206 09:05:36.854083 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb21cecb-580b-4a38-92d7-b7b940b68258" containerName="init" Dec 06 09:05:36 crc kubenswrapper[4895]: I1206 09:05:36.854263 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb21cecb-580b-4a38-92d7-b7b940b68258" containerName="dnsmasq-dns" Dec 06 09:05:36 crc kubenswrapper[4895]: I1206 09:05:36.854925 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2pdp2" Dec 06 09:05:36 crc kubenswrapper[4895]: I1206 09:05:36.871624 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2pdp2"] Dec 06 09:05:36 crc kubenswrapper[4895]: I1206 09:05:36.928695 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhgdx\" (UniqueName: \"kubernetes.io/projected/6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b-kube-api-access-qhgdx\") pod \"nova-api-db-create-2pdp2\" (UID: \"6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b\") " pod="openstack/nova-api-db-create-2pdp2" Dec 06 09:05:36 crc kubenswrapper[4895]: I1206 09:05:36.928767 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b-operator-scripts\") pod \"nova-api-db-create-2pdp2\" (UID: \"6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b\") " pod="openstack/nova-api-db-create-2pdp2" Dec 06 09:05:36 crc kubenswrapper[4895]: I1206 09:05:36.933582 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-l49xr"] Dec 06 09:05:36 crc kubenswrapper[4895]: I1206 09:05:36.938821 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-l49xr" Dec 06 09:05:36 crc kubenswrapper[4895]: I1206 09:05:36.947457 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-l49xr"] Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.030086 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhgdx\" (UniqueName: \"kubernetes.io/projected/6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b-kube-api-access-qhgdx\") pod \"nova-api-db-create-2pdp2\" (UID: \"6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b\") " pod="openstack/nova-api-db-create-2pdp2" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.030142 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b-operator-scripts\") pod \"nova-api-db-create-2pdp2\" (UID: \"6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b\") " pod="openstack/nova-api-db-create-2pdp2" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.030548 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abdba00f-a4bf-4106-ace1-cc6b0dba6b42-operator-scripts\") pod \"nova-cell0-db-create-l49xr\" (UID: \"abdba00f-a4bf-4106-ace1-cc6b0dba6b42\") " pod="openstack/nova-cell0-db-create-l49xr" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.030733 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sslsz\" (UniqueName: \"kubernetes.io/projected/abdba00f-a4bf-4106-ace1-cc6b0dba6b42-kube-api-access-sslsz\") pod \"nova-cell0-db-create-l49xr\" (UID: \"abdba00f-a4bf-4106-ace1-cc6b0dba6b42\") " pod="openstack/nova-cell0-db-create-l49xr" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.030866 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b-operator-scripts\") pod \"nova-api-db-create-2pdp2\" (UID: \"6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b\") " pod="openstack/nova-api-db-create-2pdp2" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.039539 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-373a-account-create-update-bfl5s"] Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.040817 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-373a-account-create-update-bfl5s" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.045814 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.050875 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-373a-account-create-update-bfl5s"] Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.081576 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhgdx\" (UniqueName: \"kubernetes.io/projected/6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b-kube-api-access-qhgdx\") pod \"nova-api-db-create-2pdp2\" (UID: \"6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b\") " pod="openstack/nova-api-db-create-2pdp2" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.133356 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34817151-f795-43b8-9cb9-649f029b2a3b-operator-scripts\") pod \"nova-api-373a-account-create-update-bfl5s\" (UID: \"34817151-f795-43b8-9cb9-649f029b2a3b\") " pod="openstack/nova-api-373a-account-create-update-bfl5s" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.133418 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abdba00f-a4bf-4106-ace1-cc6b0dba6b42-operator-scripts\") pod \"nova-cell0-db-create-l49xr\" (UID: \"abdba00f-a4bf-4106-ace1-cc6b0dba6b42\") " pod="openstack/nova-cell0-db-create-l49xr" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.133497 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sslsz\" (UniqueName: \"kubernetes.io/projected/abdba00f-a4bf-4106-ace1-cc6b0dba6b42-kube-api-access-sslsz\") pod \"nova-cell0-db-create-l49xr\" (UID: \"abdba00f-a4bf-4106-ace1-cc6b0dba6b42\") " pod="openstack/nova-cell0-db-create-l49xr" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.133574 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh4dj\" (UniqueName: \"kubernetes.io/projected/34817151-f795-43b8-9cb9-649f029b2a3b-kube-api-access-rh4dj\") pod \"nova-api-373a-account-create-update-bfl5s\" (UID: \"34817151-f795-43b8-9cb9-649f029b2a3b\") " pod="openstack/nova-api-373a-account-create-update-bfl5s" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.134525 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abdba00f-a4bf-4106-ace1-cc6b0dba6b42-operator-scripts\") pod \"nova-cell0-db-create-l49xr\" (UID: \"abdba00f-a4bf-4106-ace1-cc6b0dba6b42\") " pod="openstack/nova-cell0-db-create-l49xr" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.137356 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-n48s9"] Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.138524 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-n48s9" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.147975 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-n48s9"] Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.164294 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sslsz\" (UniqueName: \"kubernetes.io/projected/abdba00f-a4bf-4106-ace1-cc6b0dba6b42-kube-api-access-sslsz\") pod \"nova-cell0-db-create-l49xr\" (UID: \"abdba00f-a4bf-4106-ace1-cc6b0dba6b42\") " pod="openstack/nova-cell0-db-create-l49xr" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.172581 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2pdp2" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.235937 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34817151-f795-43b8-9cb9-649f029b2a3b-operator-scripts\") pod \"nova-api-373a-account-create-update-bfl5s\" (UID: \"34817151-f795-43b8-9cb9-649f029b2a3b\") " pod="openstack/nova-api-373a-account-create-update-bfl5s" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.236019 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh4dj\" (UniqueName: \"kubernetes.io/projected/34817151-f795-43b8-9cb9-649f029b2a3b-kube-api-access-rh4dj\") pod \"nova-api-373a-account-create-update-bfl5s\" (UID: \"34817151-f795-43b8-9cb9-649f029b2a3b\") " pod="openstack/nova-api-373a-account-create-update-bfl5s" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.236768 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34817151-f795-43b8-9cb9-649f029b2a3b-operator-scripts\") pod \"nova-api-373a-account-create-update-bfl5s\" (UID: \"34817151-f795-43b8-9cb9-649f029b2a3b\") " pod="openstack/nova-api-373a-account-create-update-bfl5s" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.254961 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1aad-account-create-update-ghbcg"] Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.257095 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1aad-account-create-update-ghbcg" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.257611 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh4dj\" (UniqueName: \"kubernetes.io/projected/34817151-f795-43b8-9cb9-649f029b2a3b-kube-api-access-rh4dj\") pod \"nova-api-373a-account-create-update-bfl5s\" (UID: \"34817151-f795-43b8-9cb9-649f029b2a3b\") " pod="openstack/nova-api-373a-account-create-update-bfl5s" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.257723 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-l49xr" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.260698 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.264640 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1aad-account-create-update-ghbcg"] Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.338138 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9hpx\" (UniqueName: \"kubernetes.io/projected/ab750e80-a339-4c38-89e3-a3595ecd1d09-kube-api-access-c9hpx\") pod \"nova-cell1-db-create-n48s9\" (UID: \"ab750e80-a339-4c38-89e3-a3595ecd1d09\") " pod="openstack/nova-cell1-db-create-n48s9" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.338184 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab750e80-a339-4c38-89e3-a3595ecd1d09-operator-scripts\") pod \"nova-cell1-db-create-n48s9\" (UID: \"ab750e80-a339-4c38-89e3-a3595ecd1d09\") " pod="openstack/nova-cell1-db-create-n48s9" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.354516 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-373a-account-create-update-bfl5s" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.364271 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-09cb-account-create-update-fnzlp"] Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.365439 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-09cb-account-create-update-fnzlp" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.368426 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.373560 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-09cb-account-create-update-fnzlp"] Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.440399 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9hpx\" (UniqueName: \"kubernetes.io/projected/ab750e80-a339-4c38-89e3-a3595ecd1d09-kube-api-access-c9hpx\") pod \"nova-cell1-db-create-n48s9\" (UID: \"ab750e80-a339-4c38-89e3-a3595ecd1d09\") " pod="openstack/nova-cell1-db-create-n48s9" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.440462 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab750e80-a339-4c38-89e3-a3595ecd1d09-operator-scripts\") pod \"nova-cell1-db-create-n48s9\" (UID: \"ab750e80-a339-4c38-89e3-a3595ecd1d09\") " pod="openstack/nova-cell1-db-create-n48s9" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.440525 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4123333a-291c-44d8-9cdd-15c599ffadd6-operator-scripts\") pod \"nova-cell0-1aad-account-create-update-ghbcg\" (UID: \"4123333a-291c-44d8-9cdd-15c599ffadd6\") " pod="openstack/nova-cell0-1aad-account-create-update-ghbcg" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.440550 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vscvb\" (UniqueName: \"kubernetes.io/projected/4123333a-291c-44d8-9cdd-15c599ffadd6-kube-api-access-vscvb\") pod \"nova-cell0-1aad-account-create-update-ghbcg\" (UID: \"4123333a-291c-44d8-9cdd-15c599ffadd6\") " pod="openstack/nova-cell0-1aad-account-create-update-ghbcg" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.441236 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab750e80-a339-4c38-89e3-a3595ecd1d09-operator-scripts\") pod \"nova-cell1-db-create-n48s9\" (UID: \"ab750e80-a339-4c38-89e3-a3595ecd1d09\") " pod="openstack/nova-cell1-db-create-n48s9" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.470367 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9hpx\" (UniqueName: \"kubernetes.io/projected/ab750e80-a339-4c38-89e3-a3595ecd1d09-kube-api-access-c9hpx\") pod \"nova-cell1-db-create-n48s9\" (UID: \"ab750e80-a339-4c38-89e3-a3595ecd1d09\") " pod="openstack/nova-cell1-db-create-n48s9" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.492888 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-n48s9" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.542384 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6caea0b7-69b6-4654-bc0e-a97dda98981d-operator-scripts\") pod \"nova-cell1-09cb-account-create-update-fnzlp\" (UID: \"6caea0b7-69b6-4654-bc0e-a97dda98981d\") " pod="openstack/nova-cell1-09cb-account-create-update-fnzlp" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.542434 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4123333a-291c-44d8-9cdd-15c599ffadd6-operator-scripts\") pod \"nova-cell0-1aad-account-create-update-ghbcg\" (UID: \"4123333a-291c-44d8-9cdd-15c599ffadd6\") " pod="openstack/nova-cell0-1aad-account-create-update-ghbcg" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.542457 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vscvb\" (UniqueName: \"kubernetes.io/projected/4123333a-291c-44d8-9cdd-15c599ffadd6-kube-api-access-vscvb\") pod \"nova-cell0-1aad-account-create-update-ghbcg\" (UID: \"4123333a-291c-44d8-9cdd-15c599ffadd6\") " pod="openstack/nova-cell0-1aad-account-create-update-ghbcg" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.542585 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twpgq\" (UniqueName: \"kubernetes.io/projected/6caea0b7-69b6-4654-bc0e-a97dda98981d-kube-api-access-twpgq\") pod \"nova-cell1-09cb-account-create-update-fnzlp\" (UID: \"6caea0b7-69b6-4654-bc0e-a97dda98981d\") " pod="openstack/nova-cell1-09cb-account-create-update-fnzlp" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.543272 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4123333a-291c-44d8-9cdd-15c599ffadd6-operator-scripts\") pod \"nova-cell0-1aad-account-create-update-ghbcg\" (UID: \"4123333a-291c-44d8-9cdd-15c599ffadd6\") " pod="openstack/nova-cell0-1aad-account-create-update-ghbcg" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.562124 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vscvb\" (UniqueName: \"kubernetes.io/projected/4123333a-291c-44d8-9cdd-15c599ffadd6-kube-api-access-vscvb\") pod \"nova-cell0-1aad-account-create-update-ghbcg\" (UID: \"4123333a-291c-44d8-9cdd-15c599ffadd6\") " pod="openstack/nova-cell0-1aad-account-create-update-ghbcg" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.643803 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6caea0b7-69b6-4654-bc0e-a97dda98981d-operator-scripts\") pod \"nova-cell1-09cb-account-create-update-fnzlp\" (UID: \"6caea0b7-69b6-4654-bc0e-a97dda98981d\") " pod="openstack/nova-cell1-09cb-account-create-update-fnzlp" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.643955 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twpgq\" (UniqueName: \"kubernetes.io/projected/6caea0b7-69b6-4654-bc0e-a97dda98981d-kube-api-access-twpgq\") pod \"nova-cell1-09cb-account-create-update-fnzlp\" (UID: \"6caea0b7-69b6-4654-bc0e-a97dda98981d\") " pod="openstack/nova-cell1-09cb-account-create-update-fnzlp" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.644830 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6caea0b7-69b6-4654-bc0e-a97dda98981d-operator-scripts\") pod \"nova-cell1-09cb-account-create-update-fnzlp\" (UID: \"6caea0b7-69b6-4654-bc0e-a97dda98981d\") " pod="openstack/nova-cell1-09cb-account-create-update-fnzlp" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.660448 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twpgq\" (UniqueName: \"kubernetes.io/projected/6caea0b7-69b6-4654-bc0e-a97dda98981d-kube-api-access-twpgq\") pod \"nova-cell1-09cb-account-create-update-fnzlp\" (UID: \"6caea0b7-69b6-4654-bc0e-a97dda98981d\") " pod="openstack/nova-cell1-09cb-account-create-update-fnzlp" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.669321 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1aad-account-create-update-ghbcg" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.686522 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-09cb-account-create-update-fnzlp" Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.706332 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2pdp2"] Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.856253 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-l49xr"] Dec 06 09:05:37 crc kubenswrapper[4895]: I1206 09:05:37.871979 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-373a-account-create-update-bfl5s"] Dec 06 09:05:38 crc kubenswrapper[4895]: I1206 09:05:38.047300 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-n48s9"] Dec 06 09:05:38 crc kubenswrapper[4895]: I1206 09:05:38.202744 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1aad-account-create-update-ghbcg"] Dec 06 09:05:38 crc kubenswrapper[4895]: W1206 09:05:38.264657 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4123333a_291c_44d8_9cdd_15c599ffadd6.slice/crio-550ddbf80df1beebef41353111db7c598ad142a6a3ced3592594c2d9086ce99f WatchSource:0}: Error finding container 550ddbf80df1beebef41353111db7c598ad142a6a3ced3592594c2d9086ce99f: Status 404 returned error can't find the container with id 550ddbf80df1beebef41353111db7c598ad142a6a3ced3592594c2d9086ce99f Dec 06 09:05:38 crc kubenswrapper[4895]: I1206 09:05:38.343390 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-09cb-account-create-update-fnzlp"] Dec 06 09:05:38 crc kubenswrapper[4895]: W1206 09:05:38.368784 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6caea0b7_69b6_4654_bc0e_a97dda98981d.slice/crio-2cd1baf3fb70190f98d507cbd67fb1a3dea5d4b3c0742970900755eb4534c4ee WatchSource:0}: Error finding container 2cd1baf3fb70190f98d507cbd67fb1a3dea5d4b3c0742970900755eb4534c4ee: Status 404 returned error can't find the container with id 2cd1baf3fb70190f98d507cbd67fb1a3dea5d4b3c0742970900755eb4534c4ee Dec 06 09:05:38 crc kubenswrapper[4895]: I1206 09:05:38.716522 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-09cb-account-create-update-fnzlp" event={"ID":"6caea0b7-69b6-4654-bc0e-a97dda98981d","Type":"ContainerStarted","Data":"4b85317d9cd87b7c550451ad40f3db5b296dc62ebe5071eae958fca013602a35"} Dec 06 09:05:38 crc kubenswrapper[4895]: I1206 09:05:38.716568 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-09cb-account-create-update-fnzlp" event={"ID":"6caea0b7-69b6-4654-bc0e-a97dda98981d","Type":"ContainerStarted","Data":"2cd1baf3fb70190f98d507cbd67fb1a3dea5d4b3c0742970900755eb4534c4ee"} Dec 06 09:05:38 crc kubenswrapper[4895]: I1206 09:05:38.718938 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1aad-account-create-update-ghbcg" event={"ID":"4123333a-291c-44d8-9cdd-15c599ffadd6","Type":"ContainerStarted","Data":"e5723c7657c3008289be5672fa31c24d62b9455a03113581c116048fe40c2f4c"} Dec 06 09:05:38 crc kubenswrapper[4895]: I1206 09:05:38.718968 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1aad-account-create-update-ghbcg" event={"ID":"4123333a-291c-44d8-9cdd-15c599ffadd6","Type":"ContainerStarted","Data":"550ddbf80df1beebef41353111db7c598ad142a6a3ced3592594c2d9086ce99f"} Dec 06 09:05:38 crc kubenswrapper[4895]: I1206 
09:05:38.726245 4895 generic.go:334] "Generic (PLEG): container finished" podID="6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b" containerID="714a7ca166d4072815eb288e46f512ddc94e2a7a3b34758973cd84dcab073eb0" exitCode=0 Dec 06 09:05:38 crc kubenswrapper[4895]: I1206 09:05:38.726306 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2pdp2" event={"ID":"6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b","Type":"ContainerDied","Data":"714a7ca166d4072815eb288e46f512ddc94e2a7a3b34758973cd84dcab073eb0"} Dec 06 09:05:38 crc kubenswrapper[4895]: I1206 09:05:38.726336 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2pdp2" event={"ID":"6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b","Type":"ContainerStarted","Data":"2c23943d42eb43ca5311fc4f2652cffd0a1e2f7d260ba33939cd0005190e9522"} Dec 06 09:05:38 crc kubenswrapper[4895]: I1206 09:05:38.730131 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-n48s9" event={"ID":"ab750e80-a339-4c38-89e3-a3595ecd1d09","Type":"ContainerStarted","Data":"5ed21c0b6445c91be94ac09f40e6f7c3248b494cbfe247c3730c1d96363ed040"} Dec 06 09:05:38 crc kubenswrapper[4895]: I1206 09:05:38.730186 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-n48s9" event={"ID":"ab750e80-a339-4c38-89e3-a3595ecd1d09","Type":"ContainerStarted","Data":"a488053385ec8fc59871ee7016989fd7747c40b2fe6a41de8461934ce2486353"} Dec 06 09:05:38 crc kubenswrapper[4895]: I1206 09:05:38.739122 4895 generic.go:334] "Generic (PLEG): container finished" podID="abdba00f-a4bf-4106-ace1-cc6b0dba6b42" containerID="90fc01dbdea20cc6705f742250a58cd8fef3ba87d88a3ea51fb37a64f6765854" exitCode=0 Dec 06 09:05:38 crc kubenswrapper[4895]: I1206 09:05:38.739188 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-l49xr" event={"ID":"abdba00f-a4bf-4106-ace1-cc6b0dba6b42","Type":"ContainerDied","Data":"90fc01dbdea20cc6705f742250a58cd8fef3ba87d88a3ea51fb37a64f6765854"} Dec 06 09:05:38 crc kubenswrapper[4895]: I1206 09:05:38.739214 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-l49xr" event={"ID":"abdba00f-a4bf-4106-ace1-cc6b0dba6b42","Type":"ContainerStarted","Data":"bd4701069fc33453a0f78d80eda7688d23499ad202f3369dc55b56c60158bd70"} Dec 06 09:05:38 crc kubenswrapper[4895]: I1206 09:05:38.741160 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-09cb-account-create-update-fnzlp" podStartSLOduration=1.741118664 podStartE2EDuration="1.741118664s" podCreationTimestamp="2025-12-06 09:05:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:05:38.733305074 +0000 UTC m=+7701.134693944" watchObservedRunningTime="2025-12-06 09:05:38.741118664 +0000 UTC m=+7701.142507534" Dec 06 09:05:38 crc kubenswrapper[4895]: I1206 09:05:38.741690 4895 generic.go:334] "Generic (PLEG): container finished" podID="34817151-f795-43b8-9cb9-649f029b2a3b" containerID="fafa6883662a6b3edb0e2c43c2b4b75c411533d75c27c1988946ebd3311440e9" exitCode=0 Dec 06 09:05:38 crc kubenswrapper[4895]: I1206 09:05:38.741727 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-373a-account-create-update-bfl5s" event={"ID":"34817151-f795-43b8-9cb9-649f029b2a3b","Type":"ContainerDied","Data":"fafa6883662a6b3edb0e2c43c2b4b75c411533d75c27c1988946ebd3311440e9"} Dec 06 09:05:38 crc kubenswrapper[4895]: I1206 
09:05:38.741744 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-373a-account-create-update-bfl5s" event={"ID":"34817151-f795-43b8-9cb9-649f029b2a3b","Type":"ContainerStarted","Data":"b7d3ee54e1715b4eacdb9fb3128fbb2ffffdb796482574e5f433c95aa1a1c002"} Dec 06 09:05:38 crc kubenswrapper[4895]: I1206 09:05:38.756516 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-1aad-account-create-update-ghbcg" podStartSLOduration=1.756497527 podStartE2EDuration="1.756497527s" podCreationTimestamp="2025-12-06 09:05:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:05:38.747002382 +0000 UTC m=+7701.148391252" watchObservedRunningTime="2025-12-06 09:05:38.756497527 +0000 UTC m=+7701.157886397" Dec 06 09:05:39 crc kubenswrapper[4895]: I1206 09:05:39.758666 4895 generic.go:334] "Generic (PLEG): container finished" podID="ab750e80-a339-4c38-89e3-a3595ecd1d09" containerID="5ed21c0b6445c91be94ac09f40e6f7c3248b494cbfe247c3730c1d96363ed040" exitCode=0 Dec 06 09:05:39 crc kubenswrapper[4895]: I1206 09:05:39.758715 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-n48s9" event={"ID":"ab750e80-a339-4c38-89e3-a3595ecd1d09","Type":"ContainerDied","Data":"5ed21c0b6445c91be94ac09f40e6f7c3248b494cbfe247c3730c1d96363ed040"} Dec 06 09:05:39 crc kubenswrapper[4895]: I1206 09:05:39.769358 4895 generic.go:334] "Generic (PLEG): container finished" podID="6caea0b7-69b6-4654-bc0e-a97dda98981d" containerID="4b85317d9cd87b7c550451ad40f3db5b296dc62ebe5071eae958fca013602a35" exitCode=0 Dec 06 09:05:39 crc kubenswrapper[4895]: I1206 09:05:39.769420 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-09cb-account-create-update-fnzlp" event={"ID":"6caea0b7-69b6-4654-bc0e-a97dda98981d","Type":"ContainerDied","Data":"4b85317d9cd87b7c550451ad40f3db5b296dc62ebe5071eae958fca013602a35"} Dec 06 09:05:39 crc kubenswrapper[4895]: I1206 09:05:39.771559 4895 generic.go:334] "Generic (PLEG): container finished" podID="4123333a-291c-44d8-9cdd-15c599ffadd6" containerID="e5723c7657c3008289be5672fa31c24d62b9455a03113581c116048fe40c2f4c" exitCode=0 Dec 06 09:05:39 crc kubenswrapper[4895]: I1206 09:05:39.771781 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1aad-account-create-update-ghbcg" event={"ID":"4123333a-291c-44d8-9cdd-15c599ffadd6","Type":"ContainerDied","Data":"e5723c7657c3008289be5672fa31c24d62b9455a03113581c116048fe40c2f4c"} Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.183392 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-n48s9" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.307563 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9hpx\" (UniqueName: \"kubernetes.io/projected/ab750e80-a339-4c38-89e3-a3595ecd1d09-kube-api-access-c9hpx\") pod \"ab750e80-a339-4c38-89e3-a3595ecd1d09\" (UID: \"ab750e80-a339-4c38-89e3-a3595ecd1d09\") " Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.307677 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab750e80-a339-4c38-89e3-a3595ecd1d09-operator-scripts\") pod \"ab750e80-a339-4c38-89e3-a3595ecd1d09\" (UID: \"ab750e80-a339-4c38-89e3-a3595ecd1d09\") " Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.308753 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab750e80-a339-4c38-89e3-a3595ecd1d09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab750e80-a339-4c38-89e3-a3595ecd1d09" (UID: "ab750e80-a339-4c38-89e3-a3595ecd1d09"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.313032 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab750e80-a339-4c38-89e3-a3595ecd1d09-kube-api-access-c9hpx" (OuterVolumeSpecName: "kube-api-access-c9hpx") pod "ab750e80-a339-4c38-89e3-a3595ecd1d09" (UID: "ab750e80-a339-4c38-89e3-a3595ecd1d09"). InnerVolumeSpecName "kube-api-access-c9hpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.353593 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2pdp2" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.357503 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-l49xr" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.361928 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-373a-account-create-update-bfl5s" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.410375 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9hpx\" (UniqueName: \"kubernetes.io/projected/ab750e80-a339-4c38-89e3-a3595ecd1d09-kube-api-access-c9hpx\") on node \"crc\" DevicePath \"\"" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.410411 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab750e80-a339-4c38-89e3-a3595ecd1d09-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.511988 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b-operator-scripts\") pod \"6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b\" (UID: \"6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b\") " Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.512055 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sslsz\" (UniqueName: \"kubernetes.io/projected/abdba00f-a4bf-4106-ace1-cc6b0dba6b42-kube-api-access-sslsz\") pod \"abdba00f-a4bf-4106-ace1-cc6b0dba6b42\" (UID: \"abdba00f-a4bf-4106-ace1-cc6b0dba6b42\") " Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.512386 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh4dj\" (UniqueName: \"kubernetes.io/projected/34817151-f795-43b8-9cb9-649f029b2a3b-kube-api-access-rh4dj\") pod \"34817151-f795-43b8-9cb9-649f029b2a3b\" (UID: \"34817151-f795-43b8-9cb9-649f029b2a3b\") " Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.512447 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhgdx\" (UniqueName: \"kubernetes.io/projected/6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b-kube-api-access-qhgdx\") pod \"6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b\" (UID: \"6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b\") " Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.512605 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abdba00f-a4bf-4106-ace1-cc6b0dba6b42-operator-scripts\") pod \"abdba00f-a4bf-4106-ace1-cc6b0dba6b42\" (UID: \"abdba00f-a4bf-4106-ace1-cc6b0dba6b42\") " Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.512659 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34817151-f795-43b8-9cb9-649f029b2a3b-operator-scripts\") pod \"34817151-f795-43b8-9cb9-649f029b2a3b\" (UID: \"34817151-f795-43b8-9cb9-649f029b2a3b\") " Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.512740 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b" (UID: "6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.513009 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.513274 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abdba00f-a4bf-4106-ace1-cc6b0dba6b42-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "abdba00f-a4bf-4106-ace1-cc6b0dba6b42" (UID: "abdba00f-a4bf-4106-ace1-cc6b0dba6b42"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.513324 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34817151-f795-43b8-9cb9-649f029b2a3b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34817151-f795-43b8-9cb9-649f029b2a3b" (UID: "34817151-f795-43b8-9cb9-649f029b2a3b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.516088 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34817151-f795-43b8-9cb9-649f029b2a3b-kube-api-access-rh4dj" (OuterVolumeSpecName: "kube-api-access-rh4dj") pod "34817151-f795-43b8-9cb9-649f029b2a3b" (UID: "34817151-f795-43b8-9cb9-649f029b2a3b"). InnerVolumeSpecName "kube-api-access-rh4dj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.516314 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b-kube-api-access-qhgdx" (OuterVolumeSpecName: "kube-api-access-qhgdx") pod "6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b" (UID: "6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b"). InnerVolumeSpecName "kube-api-access-qhgdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.516566 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abdba00f-a4bf-4106-ace1-cc6b0dba6b42-kube-api-access-sslsz" (OuterVolumeSpecName: "kube-api-access-sslsz") pod "abdba00f-a4bf-4106-ace1-cc6b0dba6b42" (UID: "abdba00f-a4bf-4106-ace1-cc6b0dba6b42"). InnerVolumeSpecName "kube-api-access-sslsz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.615202 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abdba00f-a4bf-4106-ace1-cc6b0dba6b42-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.615236 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34817151-f795-43b8-9cb9-649f029b2a3b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.615247 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sslsz\" (UniqueName: \"kubernetes.io/projected/abdba00f-a4bf-4106-ace1-cc6b0dba6b42-kube-api-access-sslsz\") on node \"crc\" DevicePath \"\"" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.615256 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh4dj\" (UniqueName: \"kubernetes.io/projected/34817151-f795-43b8-9cb9-649f029b2a3b-kube-api-access-rh4dj\") on node \"crc\" DevicePath \"\"" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.615265 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhgdx\" (UniqueName: \"kubernetes.io/projected/6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b-kube-api-access-qhgdx\") on node \"crc\" DevicePath \"\"" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.784766 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2pdp2" event={"ID":"6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b","Type":"ContainerDied","Data":"2c23943d42eb43ca5311fc4f2652cffd0a1e2f7d260ba33939cd0005190e9522"} Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.784821 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c23943d42eb43ca5311fc4f2652cffd0a1e2f7d260ba33939cd0005190e9522" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.785788 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2pdp2" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.786530 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-n48s9" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.786516 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-n48s9" event={"ID":"ab750e80-a339-4c38-89e3-a3595ecd1d09","Type":"ContainerDied","Data":"a488053385ec8fc59871ee7016989fd7747c40b2fe6a41de8461934ce2486353"} Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.786687 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a488053385ec8fc59871ee7016989fd7747c40b2fe6a41de8461934ce2486353" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.788846 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-l49xr" event={"ID":"abdba00f-a4bf-4106-ace1-cc6b0dba6b42","Type":"ContainerDied","Data":"bd4701069fc33453a0f78d80eda7688d23499ad202f3369dc55b56c60158bd70"} Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.788876 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd4701069fc33453a0f78d80eda7688d23499ad202f3369dc55b56c60158bd70" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.788948 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-l49xr" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.790506 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-373a-account-create-update-bfl5s" Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.793797 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-373a-account-create-update-bfl5s" event={"ID":"34817151-f795-43b8-9cb9-649f029b2a3b","Type":"ContainerDied","Data":"b7d3ee54e1715b4eacdb9fb3128fbb2ffffdb796482574e5f433c95aa1a1c002"} Dec 06 09:05:40 crc kubenswrapper[4895]: I1206 09:05:40.793858 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7d3ee54e1715b4eacdb9fb3128fbb2ffffdb796482574e5f433c95aa1a1c002" Dec 06 09:05:41 crc kubenswrapper[4895]: I1206 09:05:41.284990 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1aad-account-create-update-ghbcg" Dec 06 09:05:41 crc kubenswrapper[4895]: I1206 09:05:41.291359 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-09cb-account-create-update-fnzlp" Dec 06 09:05:41 crc kubenswrapper[4895]: I1206 09:05:41.428202 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twpgq\" (UniqueName: \"kubernetes.io/projected/6caea0b7-69b6-4654-bc0e-a97dda98981d-kube-api-access-twpgq\") pod \"6caea0b7-69b6-4654-bc0e-a97dda98981d\" (UID: \"6caea0b7-69b6-4654-bc0e-a97dda98981d\") " Dec 06 09:05:41 crc kubenswrapper[4895]: I1206 09:05:41.428267 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6caea0b7-69b6-4654-bc0e-a97dda98981d-operator-scripts\") pod \"6caea0b7-69b6-4654-bc0e-a97dda98981d\" (UID: \"6caea0b7-69b6-4654-bc0e-a97dda98981d\") " Dec 06 09:05:41 crc kubenswrapper[4895]: I1206 09:05:41.428436 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vscvb\" (UniqueName: \"kubernetes.io/projected/4123333a-291c-44d8-9cdd-15c599ffadd6-kube-api-access-vscvb\") pod \"4123333a-291c-44d8-9cdd-15c599ffadd6\" (UID: \"4123333a-291c-44d8-9cdd-15c599ffadd6\") " Dec 06 09:05:41 crc kubenswrapper[4895]: I1206 09:05:41.428507 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4123333a-291c-44d8-9cdd-15c599ffadd6-operator-scripts\") pod \"4123333a-291c-44d8-9cdd-15c599ffadd6\" (UID: \"4123333a-291c-44d8-9cdd-15c599ffadd6\") " Dec 06 09:05:41 crc kubenswrapper[4895]: I1206 09:05:41.429229 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4123333a-291c-44d8-9cdd-15c599ffadd6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4123333a-291c-44d8-9cdd-15c599ffadd6" (UID: "4123333a-291c-44d8-9cdd-15c599ffadd6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:05:41 crc kubenswrapper[4895]: I1206 09:05:41.429975 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6caea0b7-69b6-4654-bc0e-a97dda98981d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6caea0b7-69b6-4654-bc0e-a97dda98981d" (UID: "6caea0b7-69b6-4654-bc0e-a97dda98981d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:05:41 crc kubenswrapper[4895]: I1206 09:05:41.432803 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4123333a-291c-44d8-9cdd-15c599ffadd6-kube-api-access-vscvb" (OuterVolumeSpecName: "kube-api-access-vscvb") pod "4123333a-291c-44d8-9cdd-15c599ffadd6" (UID: "4123333a-291c-44d8-9cdd-15c599ffadd6"). InnerVolumeSpecName "kube-api-access-vscvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:05:41 crc kubenswrapper[4895]: I1206 09:05:41.437776 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6caea0b7-69b6-4654-bc0e-a97dda98981d-kube-api-access-twpgq" (OuterVolumeSpecName: "kube-api-access-twpgq") pod "6caea0b7-69b6-4654-bc0e-a97dda98981d" (UID: "6caea0b7-69b6-4654-bc0e-a97dda98981d"). InnerVolumeSpecName "kube-api-access-twpgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:05:41 crc kubenswrapper[4895]: I1206 09:05:41.530970 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twpgq\" (UniqueName: \"kubernetes.io/projected/6caea0b7-69b6-4654-bc0e-a97dda98981d-kube-api-access-twpgq\") on node \"crc\" DevicePath \"\"" Dec 06 09:05:41 crc kubenswrapper[4895]: I1206 09:05:41.531008 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6caea0b7-69b6-4654-bc0e-a97dda98981d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:05:41 crc kubenswrapper[4895]: I1206 09:05:41.531018 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vscvb\" (UniqueName: \"kubernetes.io/projected/4123333a-291c-44d8-9cdd-15c599ffadd6-kube-api-access-vscvb\") on node \"crc\" DevicePath \"\"" Dec 06 09:05:41 crc kubenswrapper[4895]: I1206 09:05:41.531026 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4123333a-291c-44d8-9cdd-15c599ffadd6-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:05:41 crc kubenswrapper[4895]: I1206 09:05:41.802115 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1aad-account-create-update-ghbcg" event={"ID":"4123333a-291c-44d8-9cdd-15c599ffadd6","Type":"ContainerDied","Data":"550ddbf80df1beebef41353111db7c598ad142a6a3ced3592594c2d9086ce99f"} Dec 06 09:05:41 crc kubenswrapper[4895]: I1206 09:05:41.802160 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="550ddbf80df1beebef41353111db7c598ad142a6a3ced3592594c2d9086ce99f" Dec 06 09:05:41 crc kubenswrapper[4895]: I1206 09:05:41.802233 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1aad-account-create-update-ghbcg" Dec 06 09:05:41 crc kubenswrapper[4895]: I1206 09:05:41.812085 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-09cb-account-create-update-fnzlp" event={"ID":"6caea0b7-69b6-4654-bc0e-a97dda98981d","Type":"ContainerDied","Data":"2cd1baf3fb70190f98d507cbd67fb1a3dea5d4b3c0742970900755eb4534c4ee"} Dec 06 09:05:41 crc kubenswrapper[4895]: I1206 09:05:41.812131 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-09cb-account-create-update-fnzlp" Dec 06 09:05:41 crc kubenswrapper[4895]: I1206 09:05:41.812147 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cd1baf3fb70190f98d507cbd67fb1a3dea5d4b3c0742970900755eb4534c4ee" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.469535 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bhwhq"] Dec 06 09:05:42 crc kubenswrapper[4895]: E1206 09:05:42.470155 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6caea0b7-69b6-4654-bc0e-a97dda98981d" containerName="mariadb-account-create-update" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.470169 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6caea0b7-69b6-4654-bc0e-a97dda98981d" containerName="mariadb-account-create-update" Dec 06 09:05:42 crc kubenswrapper[4895]: E1206 09:05:42.470186 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b" containerName="mariadb-database-create" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.470191 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b" containerName="mariadb-database-create" Dec 06 09:05:42 crc kubenswrapper[4895]: E1206 09:05:42.470201 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab750e80-a339-4c38-89e3-a3595ecd1d09" containerName="mariadb-database-create" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.470207 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab750e80-a339-4c38-89e3-a3595ecd1d09" containerName="mariadb-database-create" Dec 06 09:05:42 crc kubenswrapper[4895]: E1206 09:05:42.470231 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34817151-f795-43b8-9cb9-649f029b2a3b" containerName="mariadb-account-create-update" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.470237 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="34817151-f795-43b8-9cb9-649f029b2a3b" containerName="mariadb-account-create-update" Dec 06 09:05:42 crc kubenswrapper[4895]: E1206 09:05:42.470245 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abdba00f-a4bf-4106-ace1-cc6b0dba6b42" containerName="mariadb-database-create" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.470252 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="abdba00f-a4bf-4106-ace1-cc6b0dba6b42" containerName="mariadb-database-create" Dec 06 09:05:42 crc kubenswrapper[4895]: E1206 09:05:42.470272 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4123333a-291c-44d8-9cdd-15c599ffadd6" containerName="mariadb-account-create-update" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.470278 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4123333a-291c-44d8-9cdd-15c599ffadd6" containerName="mariadb-account-create-update" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.470431 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b" containerName="mariadb-database-create" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.470441 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4123333a-291c-44d8-9cdd-15c599ffadd6" containerName="mariadb-account-create-update" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.470450 4895 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6caea0b7-69b6-4654-bc0e-a97dda98981d" containerName="mariadb-account-create-update" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.470464 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="34817151-f795-43b8-9cb9-649f029b2a3b" containerName="mariadb-account-create-update" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.470487 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="abdba00f-a4bf-4106-ace1-cc6b0dba6b42" containerName="mariadb-database-create" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.470498 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab750e80-a339-4c38-89e3-a3595ecd1d09" containerName="mariadb-database-create" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.471091 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bhwhq" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.479036 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.479831 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.485281 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-kx5th" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.496178 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bhwhq"] Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.651705 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7d34db9-637c-4d97-a22b-0853b943a309-scripts\") pod \"nova-cell0-conductor-db-sync-bhwhq\" (UID: \"f7d34db9-637c-4d97-a22b-0853b943a309\") " pod="openstack/nova-cell0-conductor-db-sync-bhwhq" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.651793 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdjcn\" (UniqueName: \"kubernetes.io/projected/f7d34db9-637c-4d97-a22b-0853b943a309-kube-api-access-vdjcn\") pod \"nova-cell0-conductor-db-sync-bhwhq\" (UID: \"f7d34db9-637c-4d97-a22b-0853b943a309\") " pod="openstack/nova-cell0-conductor-db-sync-bhwhq" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.651821 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d34db9-637c-4d97-a22b-0853b943a309-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bhwhq\" (UID: \"f7d34db9-637c-4d97-a22b-0853b943a309\") " pod="openstack/nova-cell0-conductor-db-sync-bhwhq" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.651850 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d34db9-637c-4d97-a22b-0853b943a309-config-data\") pod \"nova-cell0-conductor-db-sync-bhwhq\" (UID: \"f7d34db9-637c-4d97-a22b-0853b943a309\") " pod="openstack/nova-cell0-conductor-db-sync-bhwhq" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.753796 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdjcn\" (UniqueName: \"kubernetes.io/projected/f7d34db9-637c-4d97-a22b-0853b943a309-kube-api-access-vdjcn\") 
pod \"nova-cell0-conductor-db-sync-bhwhq\" (UID: \"f7d34db9-637c-4d97-a22b-0853b943a309\") " pod="openstack/nova-cell0-conductor-db-sync-bhwhq" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.753847 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d34db9-637c-4d97-a22b-0853b943a309-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bhwhq\" (UID: \"f7d34db9-637c-4d97-a22b-0853b943a309\") " pod="openstack/nova-cell0-conductor-db-sync-bhwhq" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.753880 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d34db9-637c-4d97-a22b-0853b943a309-config-data\") pod \"nova-cell0-conductor-db-sync-bhwhq\" (UID: \"f7d34db9-637c-4d97-a22b-0853b943a309\") " pod="openstack/nova-cell0-conductor-db-sync-bhwhq" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.753960 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7d34db9-637c-4d97-a22b-0853b943a309-scripts\") pod \"nova-cell0-conductor-db-sync-bhwhq\" (UID: \"f7d34db9-637c-4d97-a22b-0853b943a309\") " pod="openstack/nova-cell0-conductor-db-sync-bhwhq" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.762705 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7d34db9-637c-4d97-a22b-0853b943a309-scripts\") pod \"nova-cell0-conductor-db-sync-bhwhq\" (UID: \"f7d34db9-637c-4d97-a22b-0853b943a309\") " pod="openstack/nova-cell0-conductor-db-sync-bhwhq" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.762933 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d34db9-637c-4d97-a22b-0853b943a309-config-data\") pod \"nova-cell0-conductor-db-sync-bhwhq\" (UID: \"f7d34db9-637c-4d97-a22b-0853b943a309\") " pod="openstack/nova-cell0-conductor-db-sync-bhwhq" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.763222 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d34db9-637c-4d97-a22b-0853b943a309-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bhwhq\" (UID: \"f7d34db9-637c-4d97-a22b-0853b943a309\") " pod="openstack/nova-cell0-conductor-db-sync-bhwhq" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.771828 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdjcn\" (UniqueName: \"kubernetes.io/projected/f7d34db9-637c-4d97-a22b-0853b943a309-kube-api-access-vdjcn\") pod \"nova-cell0-conductor-db-sync-bhwhq\" (UID: \"f7d34db9-637c-4d97-a22b-0853b943a309\") " pod="openstack/nova-cell0-conductor-db-sync-bhwhq" Dec 06 09:05:42 crc kubenswrapper[4895]: I1206 09:05:42.788233 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bhwhq" Dec 06 09:05:43 crc kubenswrapper[4895]: I1206 09:05:43.228633 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bhwhq"] Dec 06 09:05:43 crc kubenswrapper[4895]: W1206 09:05:43.234925 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7d34db9_637c_4d97_a22b_0853b943a309.slice/crio-c8b25afdfe36ff263ed37ecb27fe513abab6359b581b07a1268a14cb3dd96e41 WatchSource:0}: Error finding container c8b25afdfe36ff263ed37ecb27fe513abab6359b581b07a1268a14cb3dd96e41: Status 404 returned error can't find the container with id c8b25afdfe36ff263ed37ecb27fe513abab6359b581b07a1268a14cb3dd96e41 Dec 06 09:05:43 crc kubenswrapper[4895]: I1206 09:05:43.839699 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bhwhq" event={"ID":"f7d34db9-637c-4d97-a22b-0853b943a309","Type":"ContainerStarted","Data":"c8b25afdfe36ff263ed37ecb27fe513abab6359b581b07a1268a14cb3dd96e41"} Dec 06 09:05:53 crc kubenswrapper[4895]: I1206 09:05:53.947212 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bhwhq" event={"ID":"f7d34db9-637c-4d97-a22b-0853b943a309","Type":"ContainerStarted","Data":"9b7462671cfcd74edba7ca2390b3db2e7054326163f59bc53573ccbc61bf0ba5"} Dec 06 09:05:53 crc kubenswrapper[4895]: I1206 09:05:53.977931 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-bhwhq" podStartSLOduration=2.424688501 podStartE2EDuration="11.977910251s" podCreationTimestamp="2025-12-06 09:05:42 +0000 UTC" firstStartedPulling="2025-12-06 09:05:43.238810581 +0000 UTC m=+7705.640199461" lastFinishedPulling="2025-12-06 09:05:52.792032331 +0000 UTC m=+7715.193421211" observedRunningTime="2025-12-06 09:05:53.966899025 +0000 UTC m=+7716.368287905" watchObservedRunningTime="2025-12-06 09:05:53.977910251 +0000 UTC m=+7716.379299131" Dec 06 09:05:59 crc kubenswrapper[4895]: I1206 09:05:59.005553 4895 generic.go:334] "Generic (PLEG): container finished" podID="f7d34db9-637c-4d97-a22b-0853b943a309" containerID="9b7462671cfcd74edba7ca2390b3db2e7054326163f59bc53573ccbc61bf0ba5" exitCode=0 Dec 06 09:05:59 crc kubenswrapper[4895]: I1206 09:05:59.005706 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bhwhq" event={"ID":"f7d34db9-637c-4d97-a22b-0853b943a309","Type":"ContainerDied","Data":"9b7462671cfcd74edba7ca2390b3db2e7054326163f59bc53573ccbc61bf0ba5"} Dec 06 09:05:59 crc kubenswrapper[4895]: I1206 09:05:59.696333 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:05:59 crc kubenswrapper[4895]: I1206 09:05:59.696743 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:06:00 crc kubenswrapper[4895]: I1206 09:06:00.349643 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bhwhq" Dec 06 09:06:00 crc kubenswrapper[4895]: I1206 09:06:00.485335 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d34db9-637c-4d97-a22b-0853b943a309-combined-ca-bundle\") pod \"f7d34db9-637c-4d97-a22b-0853b943a309\" (UID: \"f7d34db9-637c-4d97-a22b-0853b943a309\") " Dec 06 09:06:00 crc kubenswrapper[4895]: I1206 09:06:00.485533 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7d34db9-637c-4d97-a22b-0853b943a309-scripts\") pod \"f7d34db9-637c-4d97-a22b-0853b943a309\" (UID: \"f7d34db9-637c-4d97-a22b-0853b943a309\") " Dec 06 09:06:00 crc kubenswrapper[4895]: I1206 09:06:00.485706 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d34db9-637c-4d97-a22b-0853b943a309-config-data\") pod \"f7d34db9-637c-4d97-a22b-0853b943a309\" (UID: \"f7d34db9-637c-4d97-a22b-0853b943a309\") " Dec 06 09:06:00 crc kubenswrapper[4895]: I1206 09:06:00.485770 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdjcn\" (UniqueName: \"kubernetes.io/projected/f7d34db9-637c-4d97-a22b-0853b943a309-kube-api-access-vdjcn\") pod \"f7d34db9-637c-4d97-a22b-0853b943a309\" (UID: \"f7d34db9-637c-4d97-a22b-0853b943a309\") " Dec 06 09:06:00 crc kubenswrapper[4895]: I1206 09:06:00.490810 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7d34db9-637c-4d97-a22b-0853b943a309-scripts" (OuterVolumeSpecName: "scripts") pod "f7d34db9-637c-4d97-a22b-0853b943a309" (UID: "f7d34db9-637c-4d97-a22b-0853b943a309"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:00 crc kubenswrapper[4895]: I1206 09:06:00.491215 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7d34db9-637c-4d97-a22b-0853b943a309-kube-api-access-vdjcn" (OuterVolumeSpecName: "kube-api-access-vdjcn") pod "f7d34db9-637c-4d97-a22b-0853b943a309" (UID: "f7d34db9-637c-4d97-a22b-0853b943a309"). InnerVolumeSpecName "kube-api-access-vdjcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:06:00 crc kubenswrapper[4895]: I1206 09:06:00.509901 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7d34db9-637c-4d97-a22b-0853b943a309-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7d34db9-637c-4d97-a22b-0853b943a309" (UID: "f7d34db9-637c-4d97-a22b-0853b943a309"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:00 crc kubenswrapper[4895]: I1206 09:06:00.520601 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7d34db9-637c-4d97-a22b-0853b943a309-config-data" (OuterVolumeSpecName: "config-data") pod "f7d34db9-637c-4d97-a22b-0853b943a309" (UID: "f7d34db9-637c-4d97-a22b-0853b943a309"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:00 crc kubenswrapper[4895]: I1206 09:06:00.588371 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7d34db9-637c-4d97-a22b-0853b943a309-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:00 crc kubenswrapper[4895]: I1206 09:06:00.588408 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d34db9-637c-4d97-a22b-0853b943a309-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:00 crc kubenswrapper[4895]: I1206 09:06:00.588418 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdjcn\" (UniqueName: \"kubernetes.io/projected/f7d34db9-637c-4d97-a22b-0853b943a309-kube-api-access-vdjcn\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:00 crc kubenswrapper[4895]: I1206 09:06:00.588429 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d34db9-637c-4d97-a22b-0853b943a309-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.029300 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bhwhq" event={"ID":"f7d34db9-637c-4d97-a22b-0853b943a309","Type":"ContainerDied","Data":"c8b25afdfe36ff263ed37ecb27fe513abab6359b581b07a1268a14cb3dd96e41"} Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.029343 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8b25afdfe36ff263ed37ecb27fe513abab6359b581b07a1268a14cb3dd96e41" Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.029429 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bhwhq" Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.142073 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 09:06:01 crc kubenswrapper[4895]: E1206 09:06:01.142533 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d34db9-637c-4d97-a22b-0853b943a309" containerName="nova-cell0-conductor-db-sync" Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.142557 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d34db9-637c-4d97-a22b-0853b943a309" containerName="nova-cell0-conductor-db-sync" Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.142811 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7d34db9-637c-4d97-a22b-0853b943a309" containerName="nova-cell0-conductor-db-sync" Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.143596 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.151016 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-kx5th" Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.153883 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.155002 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.198678 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12645969-044a-4bbf-945f-076d512123df-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"12645969-044a-4bbf-945f-076d512123df\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.198769 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sntw4\" (UniqueName: \"kubernetes.io/projected/12645969-044a-4bbf-945f-076d512123df-kube-api-access-sntw4\") pod \"nova-cell0-conductor-0\" (UID: \"12645969-044a-4bbf-945f-076d512123df\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.198909 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12645969-044a-4bbf-945f-076d512123df-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"12645969-044a-4bbf-945f-076d512123df\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.300757 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12645969-044a-4bbf-945f-076d512123df-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"12645969-044a-4bbf-945f-076d512123df\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.301062 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sntw4\" (UniqueName: \"kubernetes.io/projected/12645969-044a-4bbf-945f-076d512123df-kube-api-access-sntw4\") pod \"nova-cell0-conductor-0\" (UID: \"12645969-044a-4bbf-945f-076d512123df\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.301262 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12645969-044a-4bbf-945f-076d512123df-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"12645969-044a-4bbf-945f-076d512123df\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.306574 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12645969-044a-4bbf-945f-076d512123df-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"12645969-044a-4bbf-945f-076d512123df\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.310878 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12645969-044a-4bbf-945f-076d512123df-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"12645969-044a-4bbf-945f-076d512123df\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.322189 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sntw4\" (UniqueName: \"kubernetes.io/projected/12645969-044a-4bbf-945f-076d512123df-kube-api-access-sntw4\") pod \"nova-cell0-conductor-0\" (UID: \"12645969-044a-4bbf-945f-076d512123df\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.475989 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 06 09:06:01 crc kubenswrapper[4895]: I1206 09:06:01.959071 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 09:06:01 crc kubenswrapper[4895]: W1206 09:06:01.973843 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12645969_044a_4bbf_945f_076d512123df.slice/crio-cd7f2e5f1efae9a0fe28aa928f673cbb8b1a53bb992a9925fc9587a23f74f6ae WatchSource:0}: Error finding container cd7f2e5f1efae9a0fe28aa928f673cbb8b1a53bb992a9925fc9587a23f74f6ae: Status 404 returned error can't find the container with id cd7f2e5f1efae9a0fe28aa928f673cbb8b1a53bb992a9925fc9587a23f74f6ae Dec 06 09:06:02 crc kubenswrapper[4895]: I1206 09:06:02.038775 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"12645969-044a-4bbf-945f-076d512123df","Type":"ContainerStarted","Data":"cd7f2e5f1efae9a0fe28aa928f673cbb8b1a53bb992a9925fc9587a23f74f6ae"} Dec 06 09:06:03 crc kubenswrapper[4895]: I1206 09:06:03.050962 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"12645969-044a-4bbf-945f-076d512123df","Type":"ContainerStarted","Data":"a5d70ddb3bb899638c07d3492c14b70ae689b6b0aec041570ee438d537105800"} Dec 06 09:06:03 crc kubenswrapper[4895]: I1206 09:06:03.051406 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 06 09:06:03 crc kubenswrapper[4895]: I1206 09:06:03.092799 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.092779135 podStartE2EDuration="2.092779135s" podCreationTimestamp="2025-12-06 09:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:06:03.074074612 +0000 UTC m=+7725.475463482" watchObservedRunningTime="2025-12-06 09:06:03.092779135 +0000 UTC m=+7725.494168015" Dec 06 09:06:11 crc kubenswrapper[4895]: I1206 09:06:11.509110 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 06 09:06:11 crc kubenswrapper[4895]: I1206 09:06:11.962165 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-5m5mq"] Dec 06 09:06:11 crc kubenswrapper[4895]: I1206 09:06:11.963804 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5m5mq" Dec 06 09:06:11 crc kubenswrapper[4895]: I1206 09:06:11.966319 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 06 09:06:11 crc kubenswrapper[4895]: I1206 09:06:11.968313 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 06 09:06:11 crc kubenswrapper[4895]: I1206 09:06:11.979303 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5m5mq"] Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.134489 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6edbfe07-8c96-4516-9eb4-e499e9a060f2-scripts\") pod \"nova-cell0-cell-mapping-5m5mq\" (UID: \"6edbfe07-8c96-4516-9eb4-e499e9a060f2\") " pod="openstack/nova-cell0-cell-mapping-5m5mq" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.134620 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6edbfe07-8c96-4516-9eb4-e499e9a060f2-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5m5mq\" (UID: \"6edbfe07-8c96-4516-9eb4-e499e9a060f2\") " pod="openstack/nova-cell0-cell-mapping-5m5mq" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.134641 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7l55\" (UniqueName: \"kubernetes.io/projected/6edbfe07-8c96-4516-9eb4-e499e9a060f2-kube-api-access-b7l55\") pod \"nova-cell0-cell-mapping-5m5mq\" (UID: \"6edbfe07-8c96-4516-9eb4-e499e9a060f2\") " pod="openstack/nova-cell0-cell-mapping-5m5mq" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.134711 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6edbfe07-8c96-4516-9eb4-e499e9a060f2-config-data\") pod \"nova-cell0-cell-mapping-5m5mq\" (UID: \"6edbfe07-8c96-4516-9eb4-e499e9a060f2\") " pod="openstack/nova-cell0-cell-mapping-5m5mq" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.143029 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.144639 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.148755 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.169296 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.201532 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.203172 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.208036 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.229465 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.237779 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcmgk\" (UniqueName: \"kubernetes.io/projected/b0d674af-bf62-42f8-8f23-107ce53459e2-kube-api-access-gcmgk\") pod \"nova-api-0\" (UID: \"b0d674af-bf62-42f8-8f23-107ce53459e2\") " pod="openstack/nova-api-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.238006 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6edbfe07-8c96-4516-9eb4-e499e9a060f2-scripts\") pod \"nova-cell0-cell-mapping-5m5mq\" (UID: \"6edbfe07-8c96-4516-9eb4-e499e9a060f2\") " pod="openstack/nova-cell0-cell-mapping-5m5mq" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.238120 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0d674af-bf62-42f8-8f23-107ce53459e2-logs\") pod \"nova-api-0\" (UID: \"b0d674af-bf62-42f8-8f23-107ce53459e2\") " pod="openstack/nova-api-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.238220 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6edbfe07-8c96-4516-9eb4-e499e9a060f2-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5m5mq\" (UID: \"6edbfe07-8c96-4516-9eb4-e499e9a060f2\") " pod="openstack/nova-cell0-cell-mapping-5m5mq" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.238291 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7l55\" (UniqueName: \"kubernetes.io/projected/6edbfe07-8c96-4516-9eb4-e499e9a060f2-kube-api-access-b7l55\") pod \"nova-cell0-cell-mapping-5m5mq\" (UID: \"6edbfe07-8c96-4516-9eb4-e499e9a060f2\") " pod="openstack/nova-cell0-cell-mapping-5m5mq" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.238416 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d674af-bf62-42f8-8f23-107ce53459e2-config-data\") pod \"nova-api-0\" (UID: \"b0d674af-bf62-42f8-8f23-107ce53459e2\") " pod="openstack/nova-api-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.238669 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6edbfe07-8c96-4516-9eb4-e499e9a060f2-config-data\") pod \"nova-cell0-cell-mapping-5m5mq\" (UID: \"6edbfe07-8c96-4516-9eb4-e499e9a060f2\") " pod="openstack/nova-cell0-cell-mapping-5m5mq" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.238764 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d674af-bf62-42f8-8f23-107ce53459e2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b0d674af-bf62-42f8-8f23-107ce53459e2\") " pod="openstack/nova-api-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.248012 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6edbfe07-8c96-4516-9eb4-e499e9a060f2-config-data\") pod \"nova-cell0-cell-mapping-5m5mq\" (UID: \"6edbfe07-8c96-4516-9eb4-e499e9a060f2\") " pod="openstack/nova-cell0-cell-mapping-5m5mq" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.259968 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6edbfe07-8c96-4516-9eb4-e499e9a060f2-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5m5mq\" (UID: \"6edbfe07-8c96-4516-9eb4-e499e9a060f2\") " pod="openstack/nova-cell0-cell-mapping-5m5mq" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.273995 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6edbfe07-8c96-4516-9eb4-e499e9a060f2-scripts\") pod \"nova-cell0-cell-mapping-5m5mq\" (UID: \"6edbfe07-8c96-4516-9eb4-e499e9a060f2\") " pod="openstack/nova-cell0-cell-mapping-5m5mq" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.312054 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7l55\" (UniqueName: \"kubernetes.io/projected/6edbfe07-8c96-4516-9eb4-e499e9a060f2-kube-api-access-b7l55\") pod \"nova-cell0-cell-mapping-5m5mq\" (UID: \"6edbfe07-8c96-4516-9eb4-e499e9a060f2\") " pod="openstack/nova-cell0-cell-mapping-5m5mq" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.337052 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.340110 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0d674af-bf62-42f8-8f23-107ce53459e2-logs\") pod \"nova-api-0\" (UID: \"b0d674af-bf62-42f8-8f23-107ce53459e2\") " pod="openstack/nova-api-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.340322 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/715d61f1-f150-4bf6-af77-d09bd2d956e2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"715d61f1-f150-4bf6-af77-d09bd2d956e2\") " pod="openstack/nova-metadata-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.340428 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d674af-bf62-42f8-8f23-107ce53459e2-config-data\") pod \"nova-api-0\" (UID: \"b0d674af-bf62-42f8-8f23-107ce53459e2\") " pod="openstack/nova-api-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.340539 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d674af-bf62-42f8-8f23-107ce53459e2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b0d674af-bf62-42f8-8f23-107ce53459e2\") " pod="openstack/nova-api-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.340833 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcmgk\" (UniqueName: \"kubernetes.io/projected/b0d674af-bf62-42f8-8f23-107ce53459e2-kube-api-access-gcmgk\") pod \"nova-api-0\" (UID: \"b0d674af-bf62-42f8-8f23-107ce53459e2\") " pod="openstack/nova-api-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.340906 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/715d61f1-f150-4bf6-af77-d09bd2d956e2-config-data\") pod \"nova-metadata-0\" (UID: \"715d61f1-f150-4bf6-af77-d09bd2d956e2\") " pod="openstack/nova-metadata-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.341001 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/715d61f1-f150-4bf6-af77-d09bd2d956e2-logs\") pod \"nova-metadata-0\" (UID: \"715d61f1-f150-4bf6-af77-d09bd2d956e2\") " pod="openstack/nova-metadata-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.341087 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl7fd\" (UniqueName: \"kubernetes.io/projected/715d61f1-f150-4bf6-af77-d09bd2d956e2-kube-api-access-xl7fd\") pod \"nova-metadata-0\" (UID: \"715d61f1-f150-4bf6-af77-d09bd2d956e2\") " pod="openstack/nova-metadata-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.343089 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0d674af-bf62-42f8-8f23-107ce53459e2-logs\") pod \"nova-api-0\" (UID: \"b0d674af-bf62-42f8-8f23-107ce53459e2\") " pod="openstack/nova-api-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.341045 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.350050 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.353203 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d674af-bf62-42f8-8f23-107ce53459e2-config-data\") pod \"nova-api-0\" (UID: \"b0d674af-bf62-42f8-8f23-107ce53459e2\") " pod="openstack/nova-api-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.354135 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d674af-bf62-42f8-8f23-107ce53459e2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b0d674af-bf62-42f8-8f23-107ce53459e2\") " pod="openstack/nova-api-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.403919 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.420017 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcmgk\" (UniqueName: \"kubernetes.io/projected/b0d674af-bf62-42f8-8f23-107ce53459e2-kube-api-access-gcmgk\") pod \"nova-api-0\" (UID: \"b0d674af-bf62-42f8-8f23-107ce53459e2\") " pod="openstack/nova-api-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.444374 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chrjp\" (UniqueName: \"kubernetes.io/projected/5e4b588d-3ad0-49bd-8f84-aef03fc71bfc-kube-api-access-chrjp\") pod \"nova-scheduler-0\" (UID: \"5e4b588d-3ad0-49bd-8f84-aef03fc71bfc\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.444453 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/715d61f1-f150-4bf6-af77-d09bd2d956e2-config-data\") pod \"nova-metadata-0\" (UID: \"715d61f1-f150-4bf6-af77-d09bd2d956e2\") " 
pod="openstack/nova-metadata-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.444492 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e4b588d-3ad0-49bd-8f84-aef03fc71bfc-config-data\") pod \"nova-scheduler-0\" (UID: \"5e4b588d-3ad0-49bd-8f84-aef03fc71bfc\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.444720 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/715d61f1-f150-4bf6-af77-d09bd2d956e2-logs\") pod \"nova-metadata-0\" (UID: \"715d61f1-f150-4bf6-af77-d09bd2d956e2\") " pod="openstack/nova-metadata-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.444799 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl7fd\" (UniqueName: \"kubernetes.io/projected/715d61f1-f150-4bf6-af77-d09bd2d956e2-kube-api-access-xl7fd\") pod \"nova-metadata-0\" (UID: \"715d61f1-f150-4bf6-af77-d09bd2d956e2\") " pod="openstack/nova-metadata-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.444985 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/715d61f1-f150-4bf6-af77-d09bd2d956e2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"715d61f1-f150-4bf6-af77-d09bd2d956e2\") " pod="openstack/nova-metadata-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.445088 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4b588d-3ad0-49bd-8f84-aef03fc71bfc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5e4b588d-3ad0-49bd-8f84-aef03fc71bfc\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.446087 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/715d61f1-f150-4bf6-af77-d09bd2d956e2-logs\") pod \"nova-metadata-0\" (UID: \"715d61f1-f150-4bf6-af77-d09bd2d956e2\") " pod="openstack/nova-metadata-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.454181 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/715d61f1-f150-4bf6-af77-d09bd2d956e2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"715d61f1-f150-4bf6-af77-d09bd2d956e2\") " pod="openstack/nova-metadata-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.462996 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/715d61f1-f150-4bf6-af77-d09bd2d956e2-config-data\") pod \"nova-metadata-0\" (UID: \"715d61f1-f150-4bf6-af77-d09bd2d956e2\") " pod="openstack/nova-metadata-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.485532 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.486728 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.487090 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl7fd\" (UniqueName: \"kubernetes.io/projected/715d61f1-f150-4bf6-af77-d09bd2d956e2-kube-api-access-xl7fd\") pod \"nova-metadata-0\" (UID: \"715d61f1-f150-4bf6-af77-d09bd2d956e2\") " pod="openstack/nova-metadata-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.494860 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.498258 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.503960 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fd8bf4d65-wnvnc"] Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.505446 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.519224 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.524901 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.547102 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4b588d-3ad0-49bd-8f84-aef03fc71bfc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5e4b588d-3ad0-49bd-8f84-aef03fc71bfc\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.547358 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chrjp\" (UniqueName: \"kubernetes.io/projected/5e4b588d-3ad0-49bd-8f84-aef03fc71bfc-kube-api-access-chrjp\") pod \"nova-scheduler-0\" (UID: \"5e4b588d-3ad0-49bd-8f84-aef03fc71bfc\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.547465 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e4b588d-3ad0-49bd-8f84-aef03fc71bfc-config-data\") pod \"nova-scheduler-0\" (UID: \"5e4b588d-3ad0-49bd-8f84-aef03fc71bfc\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.554794 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e4b588d-3ad0-49bd-8f84-aef03fc71bfc-config-data\") pod \"nova-scheduler-0\" (UID: \"5e4b588d-3ad0-49bd-8f84-aef03fc71bfc\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.557389 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fd8bf4d65-wnvnc"] Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.560134 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4b588d-3ad0-49bd-8f84-aef03fc71bfc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5e4b588d-3ad0-49bd-8f84-aef03fc71bfc\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.599251 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5m5mq" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.634092 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chrjp\" (UniqueName: \"kubernetes.io/projected/5e4b588d-3ad0-49bd-8f84-aef03fc71bfc-kube-api-access-chrjp\") pod \"nova-scheduler-0\" (UID: \"5e4b588d-3ad0-49bd-8f84-aef03fc71bfc\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.649752 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6kxr\" (UniqueName: \"kubernetes.io/projected/fad45921-7278-4840-a086-3e463498662e-kube-api-access-v6kxr\") pod \"nova-cell1-novncproxy-0\" (UID: \"fad45921-7278-4840-a086-3e463498662e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.649830 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-dns-svc\") pod \"dnsmasq-dns-6fd8bf4d65-wnvnc\" (UID: \"ea340439-debf-49d8-aec1-002d6299334c\") " pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.649878 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnhvb\" (UniqueName: \"kubernetes.io/projected/ea340439-debf-49d8-aec1-002d6299334c-kube-api-access-qnhvb\") pod \"dnsmasq-dns-6fd8bf4d65-wnvnc\" (UID: \"ea340439-debf-49d8-aec1-002d6299334c\") " pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.649918 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad45921-7278-4840-a086-3e463498662e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fad45921-7278-4840-a086-3e463498662e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.649991 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-config\") pod \"dnsmasq-dns-6fd8bf4d65-wnvnc\" (UID: \"ea340439-debf-49d8-aec1-002d6299334c\") " pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.650078 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad45921-7278-4840-a086-3e463498662e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fad45921-7278-4840-a086-3e463498662e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.650116 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd8bf4d65-wnvnc\" (UID: \"ea340439-debf-49d8-aec1-002d6299334c\") " pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.650214 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6fd8bf4d65-wnvnc\" (UID: \"ea340439-debf-49d8-aec1-002d6299334c\") " pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.712088 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.755221 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad45921-7278-4840-a086-3e463498662e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fad45921-7278-4840-a086-3e463498662e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.755283 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd8bf4d65-wnvnc\" (UID: \"ea340439-debf-49d8-aec1-002d6299334c\") " pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.755367 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd8bf4d65-wnvnc\" (UID: \"ea340439-debf-49d8-aec1-002d6299334c\") " pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.755424 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6kxr\" (UniqueName: \"kubernetes.io/projected/fad45921-7278-4840-a086-3e463498662e-kube-api-access-v6kxr\") pod \"nova-cell1-novncproxy-0\" (UID: \"fad45921-7278-4840-a086-3e463498662e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.755457 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-dns-svc\") pod \"dnsmasq-dns-6fd8bf4d65-wnvnc\" (UID: \"ea340439-debf-49d8-aec1-002d6299334c\") " pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.755506 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnhvb\" (UniqueName: \"kubernetes.io/projected/ea340439-debf-49d8-aec1-002d6299334c-kube-api-access-qnhvb\") pod \"dnsmasq-dns-6fd8bf4d65-wnvnc\" (UID: \"ea340439-debf-49d8-aec1-002d6299334c\") " pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.755539 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad45921-7278-4840-a086-3e463498662e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fad45921-7278-4840-a086-3e463498662e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.755571 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-config\") pod \"dnsmasq-dns-6fd8bf4d65-wnvnc\" (UID: \"ea340439-debf-49d8-aec1-002d6299334c\") " pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.757989 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd8bf4d65-wnvnc\" (UID: \"ea340439-debf-49d8-aec1-002d6299334c\") " pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.762933 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd8bf4d65-wnvnc\" (UID: \"ea340439-debf-49d8-aec1-002d6299334c\") " pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.763014 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-config\") pod \"dnsmasq-dns-6fd8bf4d65-wnvnc\" (UID: \"ea340439-debf-49d8-aec1-002d6299334c\") " pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.766554 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-dns-svc\") pod \"dnsmasq-dns-6fd8bf4d65-wnvnc\" (UID: \"ea340439-debf-49d8-aec1-002d6299334c\") " pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.766799 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad45921-7278-4840-a086-3e463498662e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fad45921-7278-4840-a086-3e463498662e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.775340 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad45921-7278-4840-a086-3e463498662e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fad45921-7278-4840-a086-3e463498662e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.775576 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnhvb\" (UniqueName: \"kubernetes.io/projected/ea340439-debf-49d8-aec1-002d6299334c-kube-api-access-qnhvb\") pod \"dnsmasq-dns-6fd8bf4d65-wnvnc\" (UID: \"ea340439-debf-49d8-aec1-002d6299334c\") " pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.781315 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6kxr\" (UniqueName: \"kubernetes.io/projected/fad45921-7278-4840-a086-3e463498662e-kube-api-access-v6kxr\") pod \"nova-cell1-novncproxy-0\" (UID: \"fad45921-7278-4840-a086-3e463498662e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.819999 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:06:12 crc kubenswrapper[4895]: I1206 09:06:12.854614 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.157484 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:06:13 crc kubenswrapper[4895]: W1206 09:06:13.161776 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0d674af_bf62_42f8_8f23_107ce53459e2.slice/crio-574637e2a26f3cad531a5304f429e40fee55c62e1104329af88f496e8a5ef873 WatchSource:0}: Error finding container 574637e2a26f3cad531a5304f429e40fee55c62e1104329af88f496e8a5ef873: Status 404 returned error can't find the container with id 574637e2a26f3cad531a5304f429e40fee55c62e1104329af88f496e8a5ef873 Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.186995 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0d674af-bf62-42f8-8f23-107ce53459e2","Type":"ContainerStarted","Data":"574637e2a26f3cad531a5304f429e40fee55c62e1104329af88f496e8a5ef873"} Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.240876 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mrqc6"] Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.245590 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mrqc6" Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.255062 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.255641 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.265535 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mrqc6"] Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.296243 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5m5mq"] Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.337601 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:06:13 crc kubenswrapper[4895]: W1206 09:06:13.365704 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e4b588d_3ad0_49bd_8f84_aef03fc71bfc.slice/crio-5a3c6391b5971e118b8b6c14b7074190550b26d45ab249886265f0e87220f472 WatchSource:0}: Error finding container 5a3c6391b5971e118b8b6c14b7074190550b26d45ab249886265f0e87220f472: Status 404 returned error can't find the container with id 5a3c6391b5971e118b8b6c14b7074190550b26d45ab249886265f0e87220f472 Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.372060 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.380425 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hxmk\" (UniqueName: \"kubernetes.io/projected/58a0d047-940d-4594-aff1-4a8e67fe8fdc-kube-api-access-5hxmk\") pod \"nova-cell1-conductor-db-sync-mrqc6\" (UID: \"58a0d047-940d-4594-aff1-4a8e67fe8fdc\") " pod="openstack/nova-cell1-conductor-db-sync-mrqc6" Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.380495 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/58a0d047-940d-4594-aff1-4a8e67fe8fdc-config-data\") pod \"nova-cell1-conductor-db-sync-mrqc6\" (UID: \"58a0d047-940d-4594-aff1-4a8e67fe8fdc\") " pod="openstack/nova-cell1-conductor-db-sync-mrqc6" Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.380529 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a0d047-940d-4594-aff1-4a8e67fe8fdc-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mrqc6\" (UID: \"58a0d047-940d-4594-aff1-4a8e67fe8fdc\") " pod="openstack/nova-cell1-conductor-db-sync-mrqc6" Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.380655 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58a0d047-940d-4594-aff1-4a8e67fe8fdc-scripts\") pod \"nova-cell1-conductor-db-sync-mrqc6\" (UID: \"58a0d047-940d-4594-aff1-4a8e67fe8fdc\") " pod="openstack/nova-cell1-conductor-db-sync-mrqc6" Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.484771 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hxmk\" (UniqueName: \"kubernetes.io/projected/58a0d047-940d-4594-aff1-4a8e67fe8fdc-kube-api-access-5hxmk\") pod \"nova-cell1-conductor-db-sync-mrqc6\" (UID: \"58a0d047-940d-4594-aff1-4a8e67fe8fdc\") " pod="openstack/nova-cell1-conductor-db-sync-mrqc6" Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.484851 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58a0d047-940d-4594-aff1-4a8e67fe8fdc-config-data\") pod \"nova-cell1-conductor-db-sync-mrqc6\" (UID: \"58a0d047-940d-4594-aff1-4a8e67fe8fdc\") " pod="openstack/nova-cell1-conductor-db-sync-mrqc6" Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.484882 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a0d047-940d-4594-aff1-4a8e67fe8fdc-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mrqc6\" (UID: \"58a0d047-940d-4594-aff1-4a8e67fe8fdc\") " pod="openstack/nova-cell1-conductor-db-sync-mrqc6" Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.485087 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58a0d047-940d-4594-aff1-4a8e67fe8fdc-scripts\") pod \"nova-cell1-conductor-db-sync-mrqc6\" (UID: \"58a0d047-940d-4594-aff1-4a8e67fe8fdc\") " pod="openstack/nova-cell1-conductor-db-sync-mrqc6" Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.498153 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58a0d047-940d-4594-aff1-4a8e67fe8fdc-config-data\") pod \"nova-cell1-conductor-db-sync-mrqc6\" (UID: \"58a0d047-940d-4594-aff1-4a8e67fe8fdc\") " pod="openstack/nova-cell1-conductor-db-sync-mrqc6" Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.498562 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58a0d047-940d-4594-aff1-4a8e67fe8fdc-scripts\") pod \"nova-cell1-conductor-db-sync-mrqc6\" (UID: \"58a0d047-940d-4594-aff1-4a8e67fe8fdc\") " pod="openstack/nova-cell1-conductor-db-sync-mrqc6" Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.504752 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a0d047-940d-4594-aff1-4a8e67fe8fdc-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mrqc6\" (UID: \"58a0d047-940d-4594-aff1-4a8e67fe8fdc\") " pod="openstack/nova-cell1-conductor-db-sync-mrqc6" Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.509693 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 09:06:13 crc kubenswrapper[4895]: W1206 09:06:13.512058 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfad45921_7278_4840_a086_3e463498662e.slice/crio-7c3af5de7f4b903486d03955e8fe7e22d77f99ae5dd4bbb6acc335dc0c4c5eec WatchSource:0}: Error finding container 7c3af5de7f4b903486d03955e8fe7e22d77f99ae5dd4bbb6acc335dc0c4c5eec: Status 404 returned error can't find the container with id 7c3af5de7f4b903486d03955e8fe7e22d77f99ae5dd4bbb6acc335dc0c4c5eec Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.514031 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hxmk\" (UniqueName: \"kubernetes.io/projected/58a0d047-940d-4594-aff1-4a8e67fe8fdc-kube-api-access-5hxmk\") pod \"nova-cell1-conductor-db-sync-mrqc6\" (UID: \"58a0d047-940d-4594-aff1-4a8e67fe8fdc\") " pod="openstack/nova-cell1-conductor-db-sync-mrqc6" Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.519654 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fd8bf4d65-wnvnc"] Dec 06 09:06:13 crc kubenswrapper[4895]: I1206 09:06:13.741184 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mrqc6" Dec 06 09:06:14 crc kubenswrapper[4895]: I1206 09:06:14.203253 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fad45921-7278-4840-a086-3e463498662e","Type":"ContainerStarted","Data":"7c3af5de7f4b903486d03955e8fe7e22d77f99ae5dd4bbb6acc335dc0c4c5eec"} Dec 06 09:06:14 crc kubenswrapper[4895]: I1206 09:06:14.205521 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5e4b588d-3ad0-49bd-8f84-aef03fc71bfc","Type":"ContainerStarted","Data":"5a3c6391b5971e118b8b6c14b7074190550b26d45ab249886265f0e87220f472"} Dec 06 09:06:14 crc kubenswrapper[4895]: I1206 09:06:14.206728 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"715d61f1-f150-4bf6-af77-d09bd2d956e2","Type":"ContainerStarted","Data":"b341ae17495929d4f99ee75097ee02cda924e57ef2b846584a3d5a769ed13877"} Dec 06 09:06:14 crc kubenswrapper[4895]: I1206 09:06:14.209583 4895 generic.go:334] "Generic (PLEG): container finished" podID="ea340439-debf-49d8-aec1-002d6299334c" containerID="4d7d71eb7bdd226b39e25e4d2f213c727938957b8b4e99120105b3c76e0e7f2f" exitCode=0 Dec 06 09:06:14 crc kubenswrapper[4895]: I1206 09:06:14.209662 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" event={"ID":"ea340439-debf-49d8-aec1-002d6299334c","Type":"ContainerDied","Data":"4d7d71eb7bdd226b39e25e4d2f213c727938957b8b4e99120105b3c76e0e7f2f"} Dec 06 09:06:14 crc kubenswrapper[4895]: I1206 09:06:14.209682 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" event={"ID":"ea340439-debf-49d8-aec1-002d6299334c","Type":"ContainerStarted","Data":"27627d4d699fbf446c6bbdf0ffc1bce8333e1fb66256fffc0154994c7efe74bc"} Dec 06 09:06:14 crc 
Dec 06 09:06:14 crc kubenswrapper[4895]: I1206 09:06:14.220794 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5m5mq" event={"ID":"6edbfe07-8c96-4516-9eb4-e499e9a060f2","Type":"ContainerStarted","Data":"03170f751ebc5a56f7e888d174a47cbb75edd0936a2e70c214be74c650bbb650"}
Dec 06 09:06:14 crc kubenswrapper[4895]: I1206 09:06:14.220854 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5m5mq" event={"ID":"6edbfe07-8c96-4516-9eb4-e499e9a060f2","Type":"ContainerStarted","Data":"f1385a067751dc131fb0fe2ceec0a77c0abccc66f3ca0a7a87654af73cdc6d35"}
Dec 06 09:06:14 crc kubenswrapper[4895]: I1206 09:06:14.258677 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-5m5mq" podStartSLOduration=3.258658732 podStartE2EDuration="3.258658732s" podCreationTimestamp="2025-12-06 09:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:06:14.253829112 +0000 UTC m=+7736.655217982" watchObservedRunningTime="2025-12-06 09:06:14.258658732 +0000 UTC m=+7736.660047612"
Dec 06 09:06:14 crc kubenswrapper[4895]: I1206 09:06:14.284179 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mrqc6"]
Dec 06 09:06:15 crc kubenswrapper[4895]: I1206 09:06:15.442819 4895 scope.go:117] "RemoveContainer" containerID="5ee58780191a1eda3550d6ab2e7eb27e262674d8151ce6a2268f411d6930f311"
Dec 06 09:06:15 crc kubenswrapper[4895]: W1206 09:06:15.844640 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58a0d047_940d_4594_aff1_4a8e67fe8fdc.slice/crio-1d75dd0decf913c58e33e07df1eccde24cb727b5fd4be3a21b7fa5b1d33bb496 WatchSource:0}: Error finding container 1d75dd0decf913c58e33e07df1eccde24cb727b5fd4be3a21b7fa5b1d33bb496: Status 404 returned error can't find the container with id 1d75dd0decf913c58e33e07df1eccde24cb727b5fd4be3a21b7fa5b1d33bb496
Dec 06 09:06:16 crc kubenswrapper[4895]: I1206 09:06:16.253135 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mrqc6" event={"ID":"58a0d047-940d-4594-aff1-4a8e67fe8fdc","Type":"ContainerStarted","Data":"1d75dd0decf913c58e33e07df1eccde24cb727b5fd4be3a21b7fa5b1d33bb496"}
Dec 06 09:06:17 crc kubenswrapper[4895]: I1206 09:06:17.265882 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0d674af-bf62-42f8-8f23-107ce53459e2","Type":"ContainerStarted","Data":"f01781a5c70b9b06aa1198d5471a1983692a2b8e588ffcae2e03721a81298e83"}
Dec 06 09:06:17 crc kubenswrapper[4895]: I1206 09:06:17.266209 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0d674af-bf62-42f8-8f23-107ce53459e2","Type":"ContainerStarted","Data":"4837cdb6df15a693795cb95a108b6f5f3d0107ce66d420b690837c98e295acc9"}
Dec 06 09:06:17 crc kubenswrapper[4895]: I1206 09:06:17.267902 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mrqc6" event={"ID":"58a0d047-940d-4594-aff1-4a8e67fe8fdc","Type":"ContainerStarted","Data":"bd19d9c1e03905df237163a2c9568fef61091a12b6e2f8ab72035c50018f8279"}
Dec 06 09:06:17 crc kubenswrapper[4895]: I1206 09:06:17.270076 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fad45921-7278-4840-a086-3e463498662e","Type":"ContainerStarted","Data":"132f052994eaaba5fac58183547476a1efb02158f0c5553e9127fcd8d665714b"}
Dec 06 09:06:17 crc kubenswrapper[4895]: I1206 09:06:17.273457 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5e4b588d-3ad0-49bd-8f84-aef03fc71bfc","Type":"ContainerStarted","Data":"6c62373cc910fcc8a1bee2f95196bc9c8e38621de4778be3a23a93bad4675bfd"}
Dec 06 09:06:17 crc kubenswrapper[4895]: I1206 09:06:17.275703 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"715d61f1-f150-4bf6-af77-d09bd2d956e2","Type":"ContainerStarted","Data":"0cbe7e74878f8e675e043c762e7018988a95bef160da2770881841154c5cc1fa"}
Dec 06 09:06:17 crc kubenswrapper[4895]: I1206 09:06:17.275738 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"715d61f1-f150-4bf6-af77-d09bd2d956e2","Type":"ContainerStarted","Data":"c3af79aa598714371745ed2362c1d4035dfd35453ee27a81fd7a3211cd26f8b1"}
Dec 06 09:06:17 crc kubenswrapper[4895]: I1206 09:06:17.278828 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" event={"ID":"ea340439-debf-49d8-aec1-002d6299334c","Type":"ContainerStarted","Data":"44e7a66b43ca519e7b7dbe47399fdbeea08ae232a1aac81419dde5d48e60c6f5"}
Dec 06 09:06:17 crc kubenswrapper[4895]: I1206 09:06:17.278951 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc"
Dec 06 09:06:17 crc kubenswrapper[4895]: I1206 09:06:17.293602 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.914647188 podStartE2EDuration="5.293585299s" podCreationTimestamp="2025-12-06 09:06:12 +0000 UTC" firstStartedPulling="2025-12-06 09:06:13.164900566 +0000 UTC m=+7735.566289436" lastFinishedPulling="2025-12-06 09:06:16.543838677 +0000 UTC m=+7738.945227547" observedRunningTime="2025-12-06 09:06:17.285229364 +0000 UTC m=+7739.686618255" watchObservedRunningTime="2025-12-06 09:06:17.293585299 +0000 UTC m=+7739.694974169"
Dec 06 09:06:17 crc kubenswrapper[4895]: I1206 09:06:17.315966 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-mrqc6" podStartSLOduration=4.31594403 podStartE2EDuration="4.31594403s" podCreationTimestamp="2025-12-06 09:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:06:17.308639204 +0000 UTC m=+7739.710028074" watchObservedRunningTime="2025-12-06 09:06:17.31594403 +0000 UTC m=+7739.717332890"
Dec 06 09:06:17 crc kubenswrapper[4895]: I1206 09:06:17.327284 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.091889633 podStartE2EDuration="5.327265444s" podCreationTimestamp="2025-12-06 09:06:12 +0000 UTC" firstStartedPulling="2025-12-06 09:06:13.295458885 +0000 UTC m=+7735.696847755" lastFinishedPulling="2025-12-06 09:06:16.530834706 +0000 UTC m=+7738.932223566" observedRunningTime="2025-12-06 09:06:17.321348415 +0000 UTC m=+7739.722737285" watchObservedRunningTime="2025-12-06 09:06:17.327265444 +0000 UTC m=+7739.728654314"
Dec 06 09:06:17 crc kubenswrapper[4895]: I1206 09:06:17.354532 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.191660452 podStartE2EDuration="5.354514056s" podCreationTimestamp="2025-12-06 09:06:12 +0000 UTC" firstStartedPulling="2025-12-06 09:06:13.368288341 +0000 UTC m=+7735.769677221" lastFinishedPulling="2025-12-06 09:06:16.531141955 +0000 UTC m=+7738.932530825" observedRunningTime="2025-12-06 09:06:17.346754787 +0000 UTC m=+7739.748143657" watchObservedRunningTime="2025-12-06 09:06:17.354514056 +0000 UTC m=+7739.755902926"
podStartE2EDuration="5.354514056s" podCreationTimestamp="2025-12-06 09:06:12 +0000 UTC" firstStartedPulling="2025-12-06 09:06:13.368288341 +0000 UTC m=+7735.769677221" lastFinishedPulling="2025-12-06 09:06:16.531141955 +0000 UTC m=+7738.932530825" observedRunningTime="2025-12-06 09:06:17.346754787 +0000 UTC m=+7739.748143657" watchObservedRunningTime="2025-12-06 09:06:17.354514056 +0000 UTC m=+7739.755902926" Dec 06 09:06:17 crc kubenswrapper[4895]: I1206 09:06:17.363705 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" podStartSLOduration=5.363688732 podStartE2EDuration="5.363688732s" podCreationTimestamp="2025-12-06 09:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:06:17.361682839 +0000 UTC m=+7739.763071709" watchObservedRunningTime="2025-12-06 09:06:17.363688732 +0000 UTC m=+7739.765077602" Dec 06 09:06:17 crc kubenswrapper[4895]: I1206 09:06:17.526088 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 09:06:17 crc kubenswrapper[4895]: I1206 09:06:17.526158 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 09:06:17 crc kubenswrapper[4895]: I1206 09:06:17.713017 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 06 09:06:17 crc kubenswrapper[4895]: I1206 09:06:17.821416 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:06:19 crc kubenswrapper[4895]: I1206 09:06:19.304078 4895 generic.go:334] "Generic (PLEG): container finished" podID="6edbfe07-8c96-4516-9eb4-e499e9a060f2" containerID="03170f751ebc5a56f7e888d174a47cbb75edd0936a2e70c214be74c650bbb650" exitCode=0 Dec 06 09:06:19 crc kubenswrapper[4895]: I1206 09:06:19.304366 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5m5mq" event={"ID":"6edbfe07-8c96-4516-9eb4-e499e9a060f2","Type":"ContainerDied","Data":"03170f751ebc5a56f7e888d174a47cbb75edd0936a2e70c214be74c650bbb650"} Dec 06 09:06:19 crc kubenswrapper[4895]: I1206 09:06:19.339847 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.324679798 podStartE2EDuration="7.339825275s" podCreationTimestamp="2025-12-06 09:06:12 +0000 UTC" firstStartedPulling="2025-12-06 09:06:13.515769423 +0000 UTC m=+7735.917158283" lastFinishedPulling="2025-12-06 09:06:16.53091489 +0000 UTC m=+7738.932303760" observedRunningTime="2025-12-06 09:06:17.384785419 +0000 UTC m=+7739.786174289" watchObservedRunningTime="2025-12-06 09:06:19.339825275 +0000 UTC m=+7741.741214155" Dec 06 09:06:20 crc kubenswrapper[4895]: I1206 09:06:20.318319 4895 generic.go:334] "Generic (PLEG): container finished" podID="58a0d047-940d-4594-aff1-4a8e67fe8fdc" containerID="bd19d9c1e03905df237163a2c9568fef61091a12b6e2f8ab72035c50018f8279" exitCode=0 Dec 06 09:06:20 crc kubenswrapper[4895]: I1206 09:06:20.318423 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mrqc6" event={"ID":"58a0d047-940d-4594-aff1-4a8e67fe8fdc","Type":"ContainerDied","Data":"bd19d9c1e03905df237163a2c9568fef61091a12b6e2f8ab72035c50018f8279"} Dec 06 09:06:20 crc kubenswrapper[4895]: I1206 09:06:20.668198 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5m5mq" Dec 06 09:06:20 crc kubenswrapper[4895]: I1206 09:06:20.766052 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6edbfe07-8c96-4516-9eb4-e499e9a060f2-scripts\") pod \"6edbfe07-8c96-4516-9eb4-e499e9a060f2\" (UID: \"6edbfe07-8c96-4516-9eb4-e499e9a060f2\") " Dec 06 09:06:20 crc kubenswrapper[4895]: I1206 09:06:20.766110 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6edbfe07-8c96-4516-9eb4-e499e9a060f2-combined-ca-bundle\") pod \"6edbfe07-8c96-4516-9eb4-e499e9a060f2\" (UID: \"6edbfe07-8c96-4516-9eb4-e499e9a060f2\") " Dec 06 09:06:20 crc kubenswrapper[4895]: I1206 09:06:20.766205 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7l55\" (UniqueName: \"kubernetes.io/projected/6edbfe07-8c96-4516-9eb4-e499e9a060f2-kube-api-access-b7l55\") pod \"6edbfe07-8c96-4516-9eb4-e499e9a060f2\" (UID: \"6edbfe07-8c96-4516-9eb4-e499e9a060f2\") " Dec 06 09:06:20 crc kubenswrapper[4895]: I1206 09:06:20.766260 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6edbfe07-8c96-4516-9eb4-e499e9a060f2-config-data\") pod \"6edbfe07-8c96-4516-9eb4-e499e9a060f2\" (UID: \"6edbfe07-8c96-4516-9eb4-e499e9a060f2\") " Dec 06 09:06:20 crc kubenswrapper[4895]: I1206 09:06:20.771196 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edbfe07-8c96-4516-9eb4-e499e9a060f2-scripts" (OuterVolumeSpecName: "scripts") pod "6edbfe07-8c96-4516-9eb4-e499e9a060f2" (UID: "6edbfe07-8c96-4516-9eb4-e499e9a060f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:20 crc kubenswrapper[4895]: I1206 09:06:20.774682 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edbfe07-8c96-4516-9eb4-e499e9a060f2-kube-api-access-b7l55" (OuterVolumeSpecName: "kube-api-access-b7l55") pod "6edbfe07-8c96-4516-9eb4-e499e9a060f2" (UID: "6edbfe07-8c96-4516-9eb4-e499e9a060f2"). InnerVolumeSpecName "kube-api-access-b7l55". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:06:20 crc kubenswrapper[4895]: I1206 09:06:20.798176 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edbfe07-8c96-4516-9eb4-e499e9a060f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6edbfe07-8c96-4516-9eb4-e499e9a060f2" (UID: "6edbfe07-8c96-4516-9eb4-e499e9a060f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:20 crc kubenswrapper[4895]: I1206 09:06:20.799356 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edbfe07-8c96-4516-9eb4-e499e9a060f2-config-data" (OuterVolumeSpecName: "config-data") pod "6edbfe07-8c96-4516-9eb4-e499e9a060f2" (UID: "6edbfe07-8c96-4516-9eb4-e499e9a060f2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:20 crc kubenswrapper[4895]: I1206 09:06:20.868862 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6edbfe07-8c96-4516-9eb4-e499e9a060f2-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:20 crc kubenswrapper[4895]: I1206 09:06:20.868908 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6edbfe07-8c96-4516-9eb4-e499e9a060f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:20 crc kubenswrapper[4895]: I1206 09:06:20.868932 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7l55\" (UniqueName: \"kubernetes.io/projected/6edbfe07-8c96-4516-9eb4-e499e9a060f2-kube-api-access-b7l55\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:20 crc kubenswrapper[4895]: I1206 09:06:20.868948 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6edbfe07-8c96-4516-9eb4-e499e9a060f2-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:21 crc kubenswrapper[4895]: I1206 09:06:21.334831 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5m5mq" Dec 06 09:06:21 crc kubenswrapper[4895]: I1206 09:06:21.334818 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5m5mq" event={"ID":"6edbfe07-8c96-4516-9eb4-e499e9a060f2","Type":"ContainerDied","Data":"f1385a067751dc131fb0fe2ceec0a77c0abccc66f3ca0a7a87654af73cdc6d35"} Dec 06 09:06:21 crc kubenswrapper[4895]: I1206 09:06:21.334991 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1385a067751dc131fb0fe2ceec0a77c0abccc66f3ca0a7a87654af73cdc6d35" Dec 06 09:06:21 crc kubenswrapper[4895]: E1206 09:06:21.419332 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6edbfe07_8c96_4516_9eb4_e499e9a060f2.slice\": RecentStats: unable to find data in memory cache]" Dec 06 09:06:21 crc kubenswrapper[4895]: I1206 09:06:21.535336 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:06:21 crc kubenswrapper[4895]: I1206 09:06:21.535931 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b0d674af-bf62-42f8-8f23-107ce53459e2" containerName="nova-api-log" containerID="cri-o://4837cdb6df15a693795cb95a108b6f5f3d0107ce66d420b690837c98e295acc9" gracePeriod=30 Dec 06 09:06:21 crc kubenswrapper[4895]: I1206 09:06:21.536010 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b0d674af-bf62-42f8-8f23-107ce53459e2" containerName="nova-api-api" containerID="cri-o://f01781a5c70b9b06aa1198d5471a1983692a2b8e588ffcae2e03721a81298e83" gracePeriod=30 Dec 06 09:06:21 crc kubenswrapper[4895]: I1206 09:06:21.554510 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:06:21 crc kubenswrapper[4895]: I1206 09:06:21.554721 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="5e4b588d-3ad0-49bd-8f84-aef03fc71bfc" containerName="nova-scheduler-scheduler" containerID="cri-o://6c62373cc910fcc8a1bee2f95196bc9c8e38621de4778be3a23a93bad4675bfd" gracePeriod=30 Dec 06 09:06:21 crc 
kubenswrapper[4895]: I1206 09:06:21.561788 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:06:21 crc kubenswrapper[4895]: I1206 09:06:21.561983 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="715d61f1-f150-4bf6-af77-d09bd2d956e2" containerName="nova-metadata-log" containerID="cri-o://c3af79aa598714371745ed2362c1d4035dfd35453ee27a81fd7a3211cd26f8b1" gracePeriod=30 Dec 06 09:06:21 crc kubenswrapper[4895]: I1206 09:06:21.562434 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="715d61f1-f150-4bf6-af77-d09bd2d956e2" containerName="nova-metadata-metadata" containerID="cri-o://0cbe7e74878f8e675e043c762e7018988a95bef160da2770881841154c5cc1fa" gracePeriod=30 Dec 06 09:06:21 crc kubenswrapper[4895]: I1206 09:06:21.975782 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mrqc6" Dec 06 09:06:21 crc kubenswrapper[4895]: I1206 09:06:21.993796 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58a0d047-940d-4594-aff1-4a8e67fe8fdc-scripts\") pod \"58a0d047-940d-4594-aff1-4a8e67fe8fdc\" (UID: \"58a0d047-940d-4594-aff1-4a8e67fe8fdc\") " Dec 06 09:06:21 crc kubenswrapper[4895]: I1206 09:06:21.993884 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hxmk\" (UniqueName: \"kubernetes.io/projected/58a0d047-940d-4594-aff1-4a8e67fe8fdc-kube-api-access-5hxmk\") pod \"58a0d047-940d-4594-aff1-4a8e67fe8fdc\" (UID: \"58a0d047-940d-4594-aff1-4a8e67fe8fdc\") " Dec 06 09:06:21 crc kubenswrapper[4895]: I1206 09:06:21.993991 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a0d047-940d-4594-aff1-4a8e67fe8fdc-combined-ca-bundle\") pod \"58a0d047-940d-4594-aff1-4a8e67fe8fdc\" (UID: \"58a0d047-940d-4594-aff1-4a8e67fe8fdc\") " Dec 06 09:06:21 crc kubenswrapper[4895]: I1206 09:06:21.994014 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58a0d047-940d-4594-aff1-4a8e67fe8fdc-config-data\") pod \"58a0d047-940d-4594-aff1-4a8e67fe8fdc\" (UID: \"58a0d047-940d-4594-aff1-4a8e67fe8fdc\") " Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.015426 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58a0d047-940d-4594-aff1-4a8e67fe8fdc-kube-api-access-5hxmk" (OuterVolumeSpecName: "kube-api-access-5hxmk") pod "58a0d047-940d-4594-aff1-4a8e67fe8fdc" (UID: "58a0d047-940d-4594-aff1-4a8e67fe8fdc"). InnerVolumeSpecName "kube-api-access-5hxmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.022697 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58a0d047-940d-4594-aff1-4a8e67fe8fdc-scripts" (OuterVolumeSpecName: "scripts") pod "58a0d047-940d-4594-aff1-4a8e67fe8fdc" (UID: "58a0d047-940d-4594-aff1-4a8e67fe8fdc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.047257 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58a0d047-940d-4594-aff1-4a8e67fe8fdc-config-data" (OuterVolumeSpecName: "config-data") pod "58a0d047-940d-4594-aff1-4a8e67fe8fdc" (UID: "58a0d047-940d-4594-aff1-4a8e67fe8fdc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.068812 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58a0d047-940d-4594-aff1-4a8e67fe8fdc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58a0d047-940d-4594-aff1-4a8e67fe8fdc" (UID: "58a0d047-940d-4594-aff1-4a8e67fe8fdc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.095632 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a0d047-940d-4594-aff1-4a8e67fe8fdc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.095661 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58a0d047-940d-4594-aff1-4a8e67fe8fdc-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.095669 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58a0d047-940d-4594-aff1-4a8e67fe8fdc-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.095678 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hxmk\" (UniqueName: \"kubernetes.io/projected/58a0d047-940d-4594-aff1-4a8e67fe8fdc-kube-api-access-5hxmk\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.155364 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.170677 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.299044 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl7fd\" (UniqueName: \"kubernetes.io/projected/715d61f1-f150-4bf6-af77-d09bd2d956e2-kube-api-access-xl7fd\") pod \"715d61f1-f150-4bf6-af77-d09bd2d956e2\" (UID: \"715d61f1-f150-4bf6-af77-d09bd2d956e2\") " Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.299100 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/715d61f1-f150-4bf6-af77-d09bd2d956e2-combined-ca-bundle\") pod \"715d61f1-f150-4bf6-af77-d09bd2d956e2\" (UID: \"715d61f1-f150-4bf6-af77-d09bd2d956e2\") " Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.299170 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d674af-bf62-42f8-8f23-107ce53459e2-config-data\") pod \"b0d674af-bf62-42f8-8f23-107ce53459e2\" (UID: \"b0d674af-bf62-42f8-8f23-107ce53459e2\") " Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.299327 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcmgk\" (UniqueName: \"kubernetes.io/projected/b0d674af-bf62-42f8-8f23-107ce53459e2-kube-api-access-gcmgk\") pod \"b0d674af-bf62-42f8-8f23-107ce53459e2\" (UID: \"b0d674af-bf62-42f8-8f23-107ce53459e2\") " Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.299452 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d674af-bf62-42f8-8f23-107ce53459e2-combined-ca-bundle\") pod \"b0d674af-bf62-42f8-8f23-107ce53459e2\" (UID: \"b0d674af-bf62-42f8-8f23-107ce53459e2\") " Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.299525 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0d674af-bf62-42f8-8f23-107ce53459e2-logs\") pod \"b0d674af-bf62-42f8-8f23-107ce53459e2\" (UID: \"b0d674af-bf62-42f8-8f23-107ce53459e2\") " Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.299583 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/715d61f1-f150-4bf6-af77-d09bd2d956e2-config-data\") pod \"715d61f1-f150-4bf6-af77-d09bd2d956e2\" (UID: \"715d61f1-f150-4bf6-af77-d09bd2d956e2\") " Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.299705 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/715d61f1-f150-4bf6-af77-d09bd2d956e2-logs\") pod \"715d61f1-f150-4bf6-af77-d09bd2d956e2\" (UID: \"715d61f1-f150-4bf6-af77-d09bd2d956e2\") " Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.300369 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/715d61f1-f150-4bf6-af77-d09bd2d956e2-logs" (OuterVolumeSpecName: "logs") pod "715d61f1-f150-4bf6-af77-d09bd2d956e2" (UID: "715d61f1-f150-4bf6-af77-d09bd2d956e2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.300548 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0d674af-bf62-42f8-8f23-107ce53459e2-logs" (OuterVolumeSpecName: "logs") pod "b0d674af-bf62-42f8-8f23-107ce53459e2" (UID: "b0d674af-bf62-42f8-8f23-107ce53459e2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.303281 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/715d61f1-f150-4bf6-af77-d09bd2d956e2-kube-api-access-xl7fd" (OuterVolumeSpecName: "kube-api-access-xl7fd") pod "715d61f1-f150-4bf6-af77-d09bd2d956e2" (UID: "715d61f1-f150-4bf6-af77-d09bd2d956e2"). InnerVolumeSpecName "kube-api-access-xl7fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.308226 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d674af-bf62-42f8-8f23-107ce53459e2-kube-api-access-gcmgk" (OuterVolumeSpecName: "kube-api-access-gcmgk") pod "b0d674af-bf62-42f8-8f23-107ce53459e2" (UID: "b0d674af-bf62-42f8-8f23-107ce53459e2"). InnerVolumeSpecName "kube-api-access-gcmgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.323300 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/715d61f1-f150-4bf6-af77-d09bd2d956e2-config-data" (OuterVolumeSpecName: "config-data") pod "715d61f1-f150-4bf6-af77-d09bd2d956e2" (UID: "715d61f1-f150-4bf6-af77-d09bd2d956e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.324313 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d674af-bf62-42f8-8f23-107ce53459e2-config-data" (OuterVolumeSpecName: "config-data") pod "b0d674af-bf62-42f8-8f23-107ce53459e2" (UID: "b0d674af-bf62-42f8-8f23-107ce53459e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.330303 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d674af-bf62-42f8-8f23-107ce53459e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0d674af-bf62-42f8-8f23-107ce53459e2" (UID: "b0d674af-bf62-42f8-8f23-107ce53459e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.330602 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/715d61f1-f150-4bf6-af77-d09bd2d956e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "715d61f1-f150-4bf6-af77-d09bd2d956e2" (UID: "715d61f1-f150-4bf6-af77-d09bd2d956e2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.346128 4895 generic.go:334] "Generic (PLEG): container finished" podID="b0d674af-bf62-42f8-8f23-107ce53459e2" containerID="f01781a5c70b9b06aa1198d5471a1983692a2b8e588ffcae2e03721a81298e83" exitCode=0 Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.346179 4895 generic.go:334] "Generic (PLEG): container finished" podID="b0d674af-bf62-42f8-8f23-107ce53459e2" containerID="4837cdb6df15a693795cb95a108b6f5f3d0107ce66d420b690837c98e295acc9" exitCode=143 Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.346189 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.346200 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0d674af-bf62-42f8-8f23-107ce53459e2","Type":"ContainerDied","Data":"f01781a5c70b9b06aa1198d5471a1983692a2b8e588ffcae2e03721a81298e83"} Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.346239 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0d674af-bf62-42f8-8f23-107ce53459e2","Type":"ContainerDied","Data":"4837cdb6df15a693795cb95a108b6f5f3d0107ce66d420b690837c98e295acc9"} Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.346251 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0d674af-bf62-42f8-8f23-107ce53459e2","Type":"ContainerDied","Data":"574637e2a26f3cad531a5304f429e40fee55c62e1104329af88f496e8a5ef873"} Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.346261 4895 scope.go:117] "RemoveContainer" containerID="f01781a5c70b9b06aa1198d5471a1983692a2b8e588ffcae2e03721a81298e83" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.353254 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mrqc6" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.353251 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mrqc6" event={"ID":"58a0d047-940d-4594-aff1-4a8e67fe8fdc","Type":"ContainerDied","Data":"1d75dd0decf913c58e33e07df1eccde24cb727b5fd4be3a21b7fa5b1d33bb496"} Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.358756 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d75dd0decf913c58e33e07df1eccde24cb727b5fd4be3a21b7fa5b1d33bb496" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.364079 4895 generic.go:334] "Generic (PLEG): container finished" podID="5e4b588d-3ad0-49bd-8f84-aef03fc71bfc" containerID="6c62373cc910fcc8a1bee2f95196bc9c8e38621de4778be3a23a93bad4675bfd" exitCode=0 Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.364182 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5e4b588d-3ad0-49bd-8f84-aef03fc71bfc","Type":"ContainerDied","Data":"6c62373cc910fcc8a1bee2f95196bc9c8e38621de4778be3a23a93bad4675bfd"} Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.373623 4895 generic.go:334] "Generic (PLEG): container finished" podID="715d61f1-f150-4bf6-af77-d09bd2d956e2" containerID="0cbe7e74878f8e675e043c762e7018988a95bef160da2770881841154c5cc1fa" exitCode=0 Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.373679 4895 generic.go:334] "Generic (PLEG): container finished" podID="715d61f1-f150-4bf6-af77-d09bd2d956e2" containerID="c3af79aa598714371745ed2362c1d4035dfd35453ee27a81fd7a3211cd26f8b1" exitCode=143 Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.373717 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"715d61f1-f150-4bf6-af77-d09bd2d956e2","Type":"ContainerDied","Data":"0cbe7e74878f8e675e043c762e7018988a95bef160da2770881841154c5cc1fa"} Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.373763 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"715d61f1-f150-4bf6-af77-d09bd2d956e2","Type":"ContainerDied","Data":"c3af79aa598714371745ed2362c1d4035dfd35453ee27a81fd7a3211cd26f8b1"} Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.373776 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"715d61f1-f150-4bf6-af77-d09bd2d956e2","Type":"ContainerDied","Data":"b341ae17495929d4f99ee75097ee02cda924e57ef2b846584a3d5a769ed13877"} Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.375848 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.396548 4895 scope.go:117] "RemoveContainer" containerID="4837cdb6df15a693795cb95a108b6f5f3d0107ce66d420b690837c98e295acc9" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.403156 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/715d61f1-f150-4bf6-af77-d09bd2d956e2-logs\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.403200 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl7fd\" (UniqueName: \"kubernetes.io/projected/715d61f1-f150-4bf6-af77-d09bd2d956e2-kube-api-access-xl7fd\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.403223 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/715d61f1-f150-4bf6-af77-d09bd2d956e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.403240 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d674af-bf62-42f8-8f23-107ce53459e2-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.403252 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcmgk\" (UniqueName: \"kubernetes.io/projected/b0d674af-bf62-42f8-8f23-107ce53459e2-kube-api-access-gcmgk\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.403261 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d674af-bf62-42f8-8f23-107ce53459e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.403275 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0d674af-bf62-42f8-8f23-107ce53459e2-logs\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.403286 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/715d61f1-f150-4bf6-af77-d09bd2d956e2-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.445488 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.459173 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.459589 4895 scope.go:117] "RemoveContainer" containerID="f01781a5c70b9b06aa1198d5471a1983692a2b8e588ffcae2e03721a81298e83" Dec 06 09:06:22 crc kubenswrapper[4895]: E1206 09:06:22.460134 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f01781a5c70b9b06aa1198d5471a1983692a2b8e588ffcae2e03721a81298e83\": container with ID starting with f01781a5c70b9b06aa1198d5471a1983692a2b8e588ffcae2e03721a81298e83 not found: ID does not exist" containerID="f01781a5c70b9b06aa1198d5471a1983692a2b8e588ffcae2e03721a81298e83" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.460196 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f01781a5c70b9b06aa1198d5471a1983692a2b8e588ffcae2e03721a81298e83"} err="failed to get container status \"f01781a5c70b9b06aa1198d5471a1983692a2b8e588ffcae2e03721a81298e83\": rpc error: code = NotFound desc = could not find container \"f01781a5c70b9b06aa1198d5471a1983692a2b8e588ffcae2e03721a81298e83\": container with ID starting with f01781a5c70b9b06aa1198d5471a1983692a2b8e588ffcae2e03721a81298e83 not found: ID does not exist" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.460224 4895 scope.go:117] "RemoveContainer" containerID="4837cdb6df15a693795cb95a108b6f5f3d0107ce66d420b690837c98e295acc9" Dec 06 09:06:22 crc kubenswrapper[4895]: E1206 09:06:22.460793 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4837cdb6df15a693795cb95a108b6f5f3d0107ce66d420b690837c98e295acc9\": container with ID starting with 4837cdb6df15a693795cb95a108b6f5f3d0107ce66d420b690837c98e295acc9 not found: ID does not exist" containerID="4837cdb6df15a693795cb95a108b6f5f3d0107ce66d420b690837c98e295acc9" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.460826 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4837cdb6df15a693795cb95a108b6f5f3d0107ce66d420b690837c98e295acc9"} err="failed to get container status \"4837cdb6df15a693795cb95a108b6f5f3d0107ce66d420b690837c98e295acc9\": rpc error: code = NotFound desc = could not find container \"4837cdb6df15a693795cb95a108b6f5f3d0107ce66d420b690837c98e295acc9\": container with ID starting with 4837cdb6df15a693795cb95a108b6f5f3d0107ce66d420b690837c98e295acc9 not found: ID does not exist" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.460840 4895 scope.go:117] "RemoveContainer" containerID="f01781a5c70b9b06aa1198d5471a1983692a2b8e588ffcae2e03721a81298e83" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.472543 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f01781a5c70b9b06aa1198d5471a1983692a2b8e588ffcae2e03721a81298e83"} err="failed to get container status \"f01781a5c70b9b06aa1198d5471a1983692a2b8e588ffcae2e03721a81298e83\": rpc error: code = NotFound desc = could not find container \"f01781a5c70b9b06aa1198d5471a1983692a2b8e588ffcae2e03721a81298e83\": container with ID starting with f01781a5c70b9b06aa1198d5471a1983692a2b8e588ffcae2e03721a81298e83 not found: ID does not exist" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.472624 4895 scope.go:117] "RemoveContainer" 
containerID="4837cdb6df15a693795cb95a108b6f5f3d0107ce66d420b690837c98e295acc9" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.473025 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4837cdb6df15a693795cb95a108b6f5f3d0107ce66d420b690837c98e295acc9"} err="failed to get container status \"4837cdb6df15a693795cb95a108b6f5f3d0107ce66d420b690837c98e295acc9\": rpc error: code = NotFound desc = could not find container \"4837cdb6df15a693795cb95a108b6f5f3d0107ce66d420b690837c98e295acc9\": container with ID starting with 4837cdb6df15a693795cb95a108b6f5f3d0107ce66d420b690837c98e295acc9 not found: ID does not exist" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.473040 4895 scope.go:117] "RemoveContainer" containerID="0cbe7e74878f8e675e043c762e7018988a95bef160da2770881841154c5cc1fa" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.494235 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.504841 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.509380 4895 scope.go:117] "RemoveContainer" containerID="c3af79aa598714371745ed2362c1d4035dfd35453ee27a81fd7a3211cd26f8b1" Dec 06 09:06:22 crc kubenswrapper[4895]: E1206 09:06:22.510817 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="715d61f1-f150-4bf6-af77-d09bd2d956e2" containerName="nova-metadata-log" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.510840 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="715d61f1-f150-4bf6-af77-d09bd2d956e2" containerName="nova-metadata-log" Dec 06 09:06:22 crc kubenswrapper[4895]: E1206 09:06:22.510860 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d674af-bf62-42f8-8f23-107ce53459e2" containerName="nova-api-api" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.510866 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d674af-bf62-42f8-8f23-107ce53459e2" containerName="nova-api-api" Dec 06 09:06:22 crc kubenswrapper[4895]: E1206 09:06:22.510879 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d674af-bf62-42f8-8f23-107ce53459e2" containerName="nova-api-log" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.510885 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d674af-bf62-42f8-8f23-107ce53459e2" containerName="nova-api-log" Dec 06 09:06:22 crc kubenswrapper[4895]: E1206 09:06:22.510901 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58a0d047-940d-4594-aff1-4a8e67fe8fdc" containerName="nova-cell1-conductor-db-sync" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.510907 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="58a0d047-940d-4594-aff1-4a8e67fe8fdc" containerName="nova-cell1-conductor-db-sync" Dec 06 09:06:22 crc kubenswrapper[4895]: E1206 09:06:22.510920 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6edbfe07-8c96-4516-9eb4-e499e9a060f2" containerName="nova-manage" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.510925 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6edbfe07-8c96-4516-9eb4-e499e9a060f2" containerName="nova-manage" Dec 06 09:06:22 crc kubenswrapper[4895]: E1206 09:06:22.510938 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e4b588d-3ad0-49bd-8f84-aef03fc71bfc" containerName="nova-scheduler-scheduler" Dec 06 09:06:22 crc 
kubenswrapper[4895]: I1206 09:06:22.510946 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4b588d-3ad0-49bd-8f84-aef03fc71bfc" containerName="nova-scheduler-scheduler" Dec 06 09:06:22 crc kubenswrapper[4895]: E1206 09:06:22.510962 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="715d61f1-f150-4bf6-af77-d09bd2d956e2" containerName="nova-metadata-metadata" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.510968 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="715d61f1-f150-4bf6-af77-d09bd2d956e2" containerName="nova-metadata-metadata" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.511226 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d674af-bf62-42f8-8f23-107ce53459e2" containerName="nova-api-log" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.511244 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d674af-bf62-42f8-8f23-107ce53459e2" containerName="nova-api-api" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.511252 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6edbfe07-8c96-4516-9eb4-e499e9a060f2" containerName="nova-manage" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.511260 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e4b588d-3ad0-49bd-8f84-aef03fc71bfc" containerName="nova-scheduler-scheduler" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.511269 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="715d61f1-f150-4bf6-af77-d09bd2d956e2" containerName="nova-metadata-metadata" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.511277 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="715d61f1-f150-4bf6-af77-d09bd2d956e2" containerName="nova-metadata-log" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.511286 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="58a0d047-940d-4594-aff1-4a8e67fe8fdc" containerName="nova-cell1-conductor-db-sync" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.517796 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.521766 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.527671 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.529711 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.535972 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.552369 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.557942 4895 scope.go:117] "RemoveContainer" containerID="0cbe7e74878f8e675e043c762e7018988a95bef160da2770881841154c5cc1fa" Dec 06 09:06:22 crc kubenswrapper[4895]: E1206 09:06:22.558413 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cbe7e74878f8e675e043c762e7018988a95bef160da2770881841154c5cc1fa\": container with ID starting with 0cbe7e74878f8e675e043c762e7018988a95bef160da2770881841154c5cc1fa not found: ID does not exist" containerID="0cbe7e74878f8e675e043c762e7018988a95bef160da2770881841154c5cc1fa" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.558443 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cbe7e74878f8e675e043c762e7018988a95bef160da2770881841154c5cc1fa"} err="failed to get container status \"0cbe7e74878f8e675e043c762e7018988a95bef160da2770881841154c5cc1fa\": rpc error: code = NotFound desc = could not find container \"0cbe7e74878f8e675e043c762e7018988a95bef160da2770881841154c5cc1fa\": container with ID starting with 0cbe7e74878f8e675e043c762e7018988a95bef160da2770881841154c5cc1fa not found: ID does not exist" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.558469 4895 scope.go:117] "RemoveContainer" containerID="c3af79aa598714371745ed2362c1d4035dfd35453ee27a81fd7a3211cd26f8b1" Dec 06 09:06:22 crc kubenswrapper[4895]: E1206 09:06:22.558828 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3af79aa598714371745ed2362c1d4035dfd35453ee27a81fd7a3211cd26f8b1\": container with ID starting with c3af79aa598714371745ed2362c1d4035dfd35453ee27a81fd7a3211cd26f8b1 not found: ID does not exist" containerID="c3af79aa598714371745ed2362c1d4035dfd35453ee27a81fd7a3211cd26f8b1" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.558893 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3af79aa598714371745ed2362c1d4035dfd35453ee27a81fd7a3211cd26f8b1"} err="failed to get container status \"c3af79aa598714371745ed2362c1d4035dfd35453ee27a81fd7a3211cd26f8b1\": rpc error: code = NotFound desc = could not find container \"c3af79aa598714371745ed2362c1d4035dfd35453ee27a81fd7a3211cd26f8b1\": container with ID starting with c3af79aa598714371745ed2362c1d4035dfd35453ee27a81fd7a3211cd26f8b1 not found: ID does not exist" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.558917 4895 scope.go:117] "RemoveContainer" containerID="0cbe7e74878f8e675e043c762e7018988a95bef160da2770881841154c5cc1fa" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.562510 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cbe7e74878f8e675e043c762e7018988a95bef160da2770881841154c5cc1fa"} err="failed to get container status \"0cbe7e74878f8e675e043c762e7018988a95bef160da2770881841154c5cc1fa\": rpc error: code = NotFound desc = could not find container \"0cbe7e74878f8e675e043c762e7018988a95bef160da2770881841154c5cc1fa\": container with ID starting with 
0cbe7e74878f8e675e043c762e7018988a95bef160da2770881841154c5cc1fa not found: ID does not exist" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.562553 4895 scope.go:117] "RemoveContainer" containerID="c3af79aa598714371745ed2362c1d4035dfd35453ee27a81fd7a3211cd26f8b1" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.563062 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3af79aa598714371745ed2362c1d4035dfd35453ee27a81fd7a3211cd26f8b1"} err="failed to get container status \"c3af79aa598714371745ed2362c1d4035dfd35453ee27a81fd7a3211cd26f8b1\": rpc error: code = NotFound desc = could not find container \"c3af79aa598714371745ed2362c1d4035dfd35453ee27a81fd7a3211cd26f8b1\": container with ID starting with c3af79aa598714371745ed2362c1d4035dfd35453ee27a81fd7a3211cd26f8b1 not found: ID does not exist" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.571059 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.578619 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.586300 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.592713 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.594522 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.596342 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.606017 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chrjp\" (UniqueName: \"kubernetes.io/projected/5e4b588d-3ad0-49bd-8f84-aef03fc71bfc-kube-api-access-chrjp\") pod \"5e4b588d-3ad0-49bd-8f84-aef03fc71bfc\" (UID: \"5e4b588d-3ad0-49bd-8f84-aef03fc71bfc\") " Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.606115 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4b588d-3ad0-49bd-8f84-aef03fc71bfc-combined-ca-bundle\") pod \"5e4b588d-3ad0-49bd-8f84-aef03fc71bfc\" (UID: \"5e4b588d-3ad0-49bd-8f84-aef03fc71bfc\") " Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.606172 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e4b588d-3ad0-49bd-8f84-aef03fc71bfc-config-data\") pod \"5e4b588d-3ad0-49bd-8f84-aef03fc71bfc\" (UID: \"5e4b588d-3ad0-49bd-8f84-aef03fc71bfc\") " Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.610843 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e4b588d-3ad0-49bd-8f84-aef03fc71bfc-kube-api-access-chrjp" (OuterVolumeSpecName: "kube-api-access-chrjp") pod "5e4b588d-3ad0-49bd-8f84-aef03fc71bfc" (UID: "5e4b588d-3ad0-49bd-8f84-aef03fc71bfc"). InnerVolumeSpecName "kube-api-access-chrjp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.611153 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.630182 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e4b588d-3ad0-49bd-8f84-aef03fc71bfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e4b588d-3ad0-49bd-8f84-aef03fc71bfc" (UID: "5e4b588d-3ad0-49bd-8f84-aef03fc71bfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.630840 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e4b588d-3ad0-49bd-8f84-aef03fc71bfc-config-data" (OuterVolumeSpecName: "config-data") pod "5e4b588d-3ad0-49bd-8f84-aef03fc71bfc" (UID: "5e4b588d-3ad0-49bd-8f84-aef03fc71bfc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.708236 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d246a4c-8b90-4173-87e2-03299f8f2196-logs\") pod \"nova-api-0\" (UID: \"1d246a4c-8b90-4173-87e2-03299f8f2196\") " pod="openstack/nova-api-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.708292 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b744de33-30ad-4367-aa56-0c683d87b925-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b744de33-30ad-4367-aa56-0c683d87b925\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.708321 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d246a4c-8b90-4173-87e2-03299f8f2196-config-data\") pod \"nova-api-0\" (UID: \"1d246a4c-8b90-4173-87e2-03299f8f2196\") " pod="openstack/nova-api-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.708346 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4fkf\" (UniqueName: \"kubernetes.io/projected/b744de33-30ad-4367-aa56-0c683d87b925-kube-api-access-m4fkf\") pod \"nova-cell1-conductor-0\" (UID: \"b744de33-30ad-4367-aa56-0c683d87b925\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.708368 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c273580-a161-44a8-8b56-e0c4a01cadce-config-data\") pod \"nova-metadata-0\" (UID: \"2c273580-a161-44a8-8b56-e0c4a01cadce\") " pod="openstack/nova-metadata-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.708407 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c273580-a161-44a8-8b56-e0c4a01cadce-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2c273580-a161-44a8-8b56-e0c4a01cadce\") " pod="openstack/nova-metadata-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.708424 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2c273580-a161-44a8-8b56-e0c4a01cadce-logs\") pod \"nova-metadata-0\" (UID: \"2c273580-a161-44a8-8b56-e0c4a01cadce\") " pod="openstack/nova-metadata-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.708444 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx7m2\" (UniqueName: \"kubernetes.io/projected/2c273580-a161-44a8-8b56-e0c4a01cadce-kube-api-access-rx7m2\") pod \"nova-metadata-0\" (UID: \"2c273580-a161-44a8-8b56-e0c4a01cadce\") " pod="openstack/nova-metadata-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.708774 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d246a4c-8b90-4173-87e2-03299f8f2196-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1d246a4c-8b90-4173-87e2-03299f8f2196\") " pod="openstack/nova-api-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.708846 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b744de33-30ad-4367-aa56-0c683d87b925-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b744de33-30ad-4367-aa56-0c683d87b925\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.708875 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrjxq\" (UniqueName: \"kubernetes.io/projected/1d246a4c-8b90-4173-87e2-03299f8f2196-kube-api-access-jrjxq\") pod \"nova-api-0\" (UID: \"1d246a4c-8b90-4173-87e2-03299f8f2196\") " pod="openstack/nova-api-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.709306 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4b588d-3ad0-49bd-8f84-aef03fc71bfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.709387 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e4b588d-3ad0-49bd-8f84-aef03fc71bfc-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.709452 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chrjp\" (UniqueName: \"kubernetes.io/projected/5e4b588d-3ad0-49bd-8f84-aef03fc71bfc-kube-api-access-chrjp\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.810805 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c273580-a161-44a8-8b56-e0c4a01cadce-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2c273580-a161-44a8-8b56-e0c4a01cadce\") " pod="openstack/nova-metadata-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.810852 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c273580-a161-44a8-8b56-e0c4a01cadce-logs\") pod \"nova-metadata-0\" (UID: \"2c273580-a161-44a8-8b56-e0c4a01cadce\") " pod="openstack/nova-metadata-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.810880 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx7m2\" (UniqueName: \"kubernetes.io/projected/2c273580-a161-44a8-8b56-e0c4a01cadce-kube-api-access-rx7m2\") pod 
\"nova-metadata-0\" (UID: \"2c273580-a161-44a8-8b56-e0c4a01cadce\") " pod="openstack/nova-metadata-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.810907 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d246a4c-8b90-4173-87e2-03299f8f2196-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1d246a4c-8b90-4173-87e2-03299f8f2196\") " pod="openstack/nova-api-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.810932 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b744de33-30ad-4367-aa56-0c683d87b925-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b744de33-30ad-4367-aa56-0c683d87b925\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.810946 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrjxq\" (UniqueName: \"kubernetes.io/projected/1d246a4c-8b90-4173-87e2-03299f8f2196-kube-api-access-jrjxq\") pod \"nova-api-0\" (UID: \"1d246a4c-8b90-4173-87e2-03299f8f2196\") " pod="openstack/nova-api-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.811001 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d246a4c-8b90-4173-87e2-03299f8f2196-logs\") pod \"nova-api-0\" (UID: \"1d246a4c-8b90-4173-87e2-03299f8f2196\") " pod="openstack/nova-api-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.811024 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b744de33-30ad-4367-aa56-0c683d87b925-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b744de33-30ad-4367-aa56-0c683d87b925\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.811050 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d246a4c-8b90-4173-87e2-03299f8f2196-config-data\") pod \"nova-api-0\" (UID: \"1d246a4c-8b90-4173-87e2-03299f8f2196\") " pod="openstack/nova-api-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.811071 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4fkf\" (UniqueName: \"kubernetes.io/projected/b744de33-30ad-4367-aa56-0c683d87b925-kube-api-access-m4fkf\") pod \"nova-cell1-conductor-0\" (UID: \"b744de33-30ad-4367-aa56-0c683d87b925\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.811089 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c273580-a161-44a8-8b56-e0c4a01cadce-config-data\") pod \"nova-metadata-0\" (UID: \"2c273580-a161-44a8-8b56-e0c4a01cadce\") " pod="openstack/nova-metadata-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.812408 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d246a4c-8b90-4173-87e2-03299f8f2196-logs\") pod \"nova-api-0\" (UID: \"1d246a4c-8b90-4173-87e2-03299f8f2196\") " pod="openstack/nova-api-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.814267 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c273580-a161-44a8-8b56-e0c4a01cadce-logs\") 
pod \"nova-metadata-0\" (UID: \"2c273580-a161-44a8-8b56-e0c4a01cadce\") " pod="openstack/nova-metadata-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.818859 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d246a4c-8b90-4173-87e2-03299f8f2196-config-data\") pod \"nova-api-0\" (UID: \"1d246a4c-8b90-4173-87e2-03299f8f2196\") " pod="openstack/nova-api-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.819339 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d246a4c-8b90-4173-87e2-03299f8f2196-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1d246a4c-8b90-4173-87e2-03299f8f2196\") " pod="openstack/nova-api-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.820168 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b744de33-30ad-4367-aa56-0c683d87b925-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b744de33-30ad-4367-aa56-0c683d87b925\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.822785 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c273580-a161-44a8-8b56-e0c4a01cadce-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2c273580-a161-44a8-8b56-e0c4a01cadce\") " pod="openstack/nova-metadata-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.822948 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.823285 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b744de33-30ad-4367-aa56-0c683d87b925-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b744de33-30ad-4367-aa56-0c683d87b925\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.831795 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c273580-a161-44a8-8b56-e0c4a01cadce-config-data\") pod \"nova-metadata-0\" (UID: \"2c273580-a161-44a8-8b56-e0c4a01cadce\") " pod="openstack/nova-metadata-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.837520 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4fkf\" (UniqueName: \"kubernetes.io/projected/b744de33-30ad-4367-aa56-0c683d87b925-kube-api-access-m4fkf\") pod \"nova-cell1-conductor-0\" (UID: \"b744de33-30ad-4367-aa56-0c683d87b925\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.839769 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx7m2\" (UniqueName: \"kubernetes.io/projected/2c273580-a161-44a8-8b56-e0c4a01cadce-kube-api-access-rx7m2\") pod \"nova-metadata-0\" (UID: \"2c273580-a161-44a8-8b56-e0c4a01cadce\") " pod="openstack/nova-metadata-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.844689 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrjxq\" (UniqueName: \"kubernetes.io/projected/1d246a4c-8b90-4173-87e2-03299f8f2196-kube-api-access-jrjxq\") pod \"nova-api-0\" (UID: \"1d246a4c-8b90-4173-87e2-03299f8f2196\") " pod="openstack/nova-api-0" Dec 06 09:06:22 crc 
kubenswrapper[4895]: I1206 09:06:22.846349 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.856614 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.858584 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.874426 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.913152 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.996531 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76ff5d4687-cbsc6"] Dec 06 09:06:22 crc kubenswrapper[4895]: I1206 09:06:22.996920 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" podUID="e41a06e0-4b8a-473a-8eb9-5681761909f2" containerName="dnsmasq-dns" containerID="cri-o://3df3af1f372e58295549dffc35932253860465b0a91f62995c3e4a5b8b9eb924" gracePeriod=10 Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.104605 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" podUID="e41a06e0-4b8a-473a-8eb9-5681761909f2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.64:5353: connect: connection refused" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.395508 4895 generic.go:334] "Generic (PLEG): container finished" podID="e41a06e0-4b8a-473a-8eb9-5681761909f2" containerID="3df3af1f372e58295549dffc35932253860465b0a91f62995c3e4a5b8b9eb924" exitCode=0 Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.395755 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" event={"ID":"e41a06e0-4b8a-473a-8eb9-5681761909f2","Type":"ContainerDied","Data":"3df3af1f372e58295549dffc35932253860465b0a91f62995c3e4a5b8b9eb924"} Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.401102 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.403917 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5e4b588d-3ad0-49bd-8f84-aef03fc71bfc","Type":"ContainerDied","Data":"5a3c6391b5971e118b8b6c14b7074190550b26d45ab249886265f0e87220f472"} Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.403995 4895 scope.go:117] "RemoveContainer" containerID="6c62373cc910fcc8a1bee2f95196bc9c8e38621de4778be3a23a93bad4675bfd" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.418629 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.506069 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.526840 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.540323 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.563060 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.580541 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.594023 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.596035 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.598418 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.621603 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.639444 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/559a97b3-560f-477b-b92a-e3b611a40713-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"559a97b3-560f-477b-b92a-e3b611a40713\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.639789 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5jzg\" (UniqueName: \"kubernetes.io/projected/559a97b3-560f-477b-b92a-e3b611a40713-kube-api-access-f5jzg\") pod \"nova-scheduler-0\" (UID: \"559a97b3-560f-477b-b92a-e3b611a40713\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.639947 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/559a97b3-560f-477b-b92a-e3b611a40713-config-data\") pod \"nova-scheduler-0\" (UID: \"559a97b3-560f-477b-b92a-e3b611a40713\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.642677 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.742142 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-ovsdbserver-nb\") pod \"e41a06e0-4b8a-473a-8eb9-5681761909f2\" (UID: \"e41a06e0-4b8a-473a-8eb9-5681761909f2\") " Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.742596 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-config\") pod \"e41a06e0-4b8a-473a-8eb9-5681761909f2\" (UID: \"e41a06e0-4b8a-473a-8eb9-5681761909f2\") " Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.742718 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-dns-svc\") pod \"e41a06e0-4b8a-473a-8eb9-5681761909f2\" (UID: \"e41a06e0-4b8a-473a-8eb9-5681761909f2\") " Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.742774 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-ovsdbserver-sb\") pod \"e41a06e0-4b8a-473a-8eb9-5681761909f2\" (UID: \"e41a06e0-4b8a-473a-8eb9-5681761909f2\") " Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.742962 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmq2s\" (UniqueName: \"kubernetes.io/projected/e41a06e0-4b8a-473a-8eb9-5681761909f2-kube-api-access-lmq2s\") pod \"e41a06e0-4b8a-473a-8eb9-5681761909f2\" (UID: \"e41a06e0-4b8a-473a-8eb9-5681761909f2\") " Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.743604 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/559a97b3-560f-477b-b92a-e3b611a40713-config-data\") pod \"nova-scheduler-0\" (UID: \"559a97b3-560f-477b-b92a-e3b611a40713\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.743644 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5jzg\" (UniqueName: \"kubernetes.io/projected/559a97b3-560f-477b-b92a-e3b611a40713-kube-api-access-f5jzg\") pod \"nova-scheduler-0\" (UID: \"559a97b3-560f-477b-b92a-e3b611a40713\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.743761 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/559a97b3-560f-477b-b92a-e3b611a40713-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"559a97b3-560f-477b-b92a-e3b611a40713\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.748607 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/559a97b3-560f-477b-b92a-e3b611a40713-config-data\") pod \"nova-scheduler-0\" (UID: \"559a97b3-560f-477b-b92a-e3b611a40713\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.750274 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e41a06e0-4b8a-473a-8eb9-5681761909f2-kube-api-access-lmq2s" (OuterVolumeSpecName: "kube-api-access-lmq2s") pod 
"e41a06e0-4b8a-473a-8eb9-5681761909f2" (UID: "e41a06e0-4b8a-473a-8eb9-5681761909f2"). InnerVolumeSpecName "kube-api-access-lmq2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.750973 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/559a97b3-560f-477b-b92a-e3b611a40713-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"559a97b3-560f-477b-b92a-e3b611a40713\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.809429 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5jzg\" (UniqueName: \"kubernetes.io/projected/559a97b3-560f-477b-b92a-e3b611a40713-kube-api-access-f5jzg\") pod \"nova-scheduler-0\" (UID: \"559a97b3-560f-477b-b92a-e3b611a40713\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.850463 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmq2s\" (UniqueName: \"kubernetes.io/projected/e41a06e0-4b8a-473a-8eb9-5681761909f2-kube-api-access-lmq2s\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.875412 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e41a06e0-4b8a-473a-8eb9-5681761909f2" (UID: "e41a06e0-4b8a-473a-8eb9-5681761909f2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.879662 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-config" (OuterVolumeSpecName: "config") pod "e41a06e0-4b8a-473a-8eb9-5681761909f2" (UID: "e41a06e0-4b8a-473a-8eb9-5681761909f2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.882389 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e41a06e0-4b8a-473a-8eb9-5681761909f2" (UID: "e41a06e0-4b8a-473a-8eb9-5681761909f2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.902863 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e41a06e0-4b8a-473a-8eb9-5681761909f2" (UID: "e41a06e0-4b8a-473a-8eb9-5681761909f2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.938556 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.951696 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.951748 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-config\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.951763 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:23 crc kubenswrapper[4895]: I1206 09:06:23.951778 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e41a06e0-4b8a-473a-8eb9-5681761909f2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.116889 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e4b588d-3ad0-49bd-8f84-aef03fc71bfc" path="/var/lib/kubelet/pods/5e4b588d-3ad0-49bd-8f84-aef03fc71bfc/volumes" Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.118975 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="715d61f1-f150-4bf6-af77-d09bd2d956e2" path="/var/lib/kubelet/pods/715d61f1-f150-4bf6-af77-d09bd2d956e2/volumes" Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.120120 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0d674af-bf62-42f8-8f23-107ce53459e2" path="/var/lib/kubelet/pods/b0d674af-bf62-42f8-8f23-107ce53459e2/volumes" Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.431877 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.432242 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" event={"ID":"e41a06e0-4b8a-473a-8eb9-5681761909f2","Type":"ContainerDied","Data":"ca75a68b98be4334755c0e35396d598b66772dba85205c39f6dda7c4d59ec8f0"} Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.431961 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76ff5d4687-cbsc6" Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.432280 4895 scope.go:117] "RemoveContainer" containerID="3df3af1f372e58295549dffc35932253860465b0a91f62995c3e4a5b8b9eb924" Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.434849 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b744de33-30ad-4367-aa56-0c683d87b925","Type":"ContainerStarted","Data":"32b5d8e4745cdfd0ba01be060e74a4e14f1fc0f688a9fd87a1b161bf279a795f"} Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.434930 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b744de33-30ad-4367-aa56-0c683d87b925","Type":"ContainerStarted","Data":"b16e40496d3b5c4ec970cb2e5173912bd8ba77daea57a25f0fac60065b9d359e"} Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.435114 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 06 09:06:24 crc kubenswrapper[4895]: W1206 09:06:24.440600 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod559a97b3_560f_477b_b92a_e3b611a40713.slice/crio-8d36504946ca1b7c726ce2f0b3c8a46e2bc5934db0c6c10649f04dd13756b595 WatchSource:0}: Error finding container 8d36504946ca1b7c726ce2f0b3c8a46e2bc5934db0c6c10649f04dd13756b595: Status 404 returned error can't find the container with id 8d36504946ca1b7c726ce2f0b3c8a46e2bc5934db0c6c10649f04dd13756b595 Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.442305 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2c273580-a161-44a8-8b56-e0c4a01cadce","Type":"ContainerStarted","Data":"638b93bee941bf5dae8a630fdfaf3704d048ac7f8ab9ef1bcbbadbb656b1b7da"} Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.442368 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2c273580-a161-44a8-8b56-e0c4a01cadce","Type":"ContainerStarted","Data":"775f0e66e526e43b715dc1f2e97da26708212a878eee5a852e452f85c38e04ae"} Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.442383 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2c273580-a161-44a8-8b56-e0c4a01cadce","Type":"ContainerStarted","Data":"236ddb828f7040ec95d35d1c79634f8accfec0164d2f00a521b79893bfdd8805"} Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.448573 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1d246a4c-8b90-4173-87e2-03299f8f2196","Type":"ContainerStarted","Data":"6e885f76b91bf385159978ebef1f77fe529ab8667d39e778ac302270b35a1cbd"} Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.448612 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1d246a4c-8b90-4173-87e2-03299f8f2196","Type":"ContainerStarted","Data":"953a3adf7eda96068dc966f830643a0a48dec31a748777167ba0b4451dc2fe7e"} Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.448623 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1d246a4c-8b90-4173-87e2-03299f8f2196","Type":"ContainerStarted","Data":"eadf3c7af7fc4b91382a404da8788e6ef81b84bd5d50cb62040ccf4c934332c8"} Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.455298 4895 scope.go:117] "RemoveContainer" containerID="5649b679f0f020742fce6be012928a71cd0d285569df520e791ad16884c73e65" 
Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.462197 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.462154943 podStartE2EDuration="2.462154943s" podCreationTimestamp="2025-12-06 09:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:06:24.451763154 +0000 UTC m=+7746.853152044" watchObservedRunningTime="2025-12-06 09:06:24.462154943 +0000 UTC m=+7746.863543823" Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.485422 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.485403688 podStartE2EDuration="2.485403688s" podCreationTimestamp="2025-12-06 09:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:06:24.475662356 +0000 UTC m=+7746.877051226" watchObservedRunningTime="2025-12-06 09:06:24.485403688 +0000 UTC m=+7746.886792558" Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.515382 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5153533919999997 podStartE2EDuration="2.515353392s" podCreationTimestamp="2025-12-06 09:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:06:24.49815541 +0000 UTC m=+7746.899544300" watchObservedRunningTime="2025-12-06 09:06:24.515353392 +0000 UTC m=+7746.916742272" Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.531102 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76ff5d4687-cbsc6"] Dec 06 09:06:24 crc kubenswrapper[4895]: I1206 09:06:24.540304 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76ff5d4687-cbsc6"] Dec 06 09:06:25 crc kubenswrapper[4895]: I1206 09:06:25.466744 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"559a97b3-560f-477b-b92a-e3b611a40713","Type":"ContainerStarted","Data":"bc585c163a520456f0160c683995149f2d5494de3bd33b9ae55d650841b1313b"} Dec 06 09:06:25 crc kubenswrapper[4895]: I1206 09:06:25.466845 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"559a97b3-560f-477b-b92a-e3b611a40713","Type":"ContainerStarted","Data":"8d36504946ca1b7c726ce2f0b3c8a46e2bc5934db0c6c10649f04dd13756b595"} Dec 06 09:06:25 crc kubenswrapper[4895]: I1206 09:06:25.497856 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.497834678 podStartE2EDuration="2.497834678s" podCreationTimestamp="2025-12-06 09:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:06:25.48602622 +0000 UTC m=+7747.887415090" watchObservedRunningTime="2025-12-06 09:06:25.497834678 +0000 UTC m=+7747.899223558" Dec 06 09:06:26 crc kubenswrapper[4895]: I1206 09:06:26.068688 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e41a06e0-4b8a-473a-8eb9-5681761909f2" path="/var/lib/kubelet/pods/e41a06e0-4b8a-473a-8eb9-5681761909f2/volumes" Dec 06 09:06:27 crc kubenswrapper[4895]: I1206 09:06:27.913523 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Dec 06 09:06:27 crc kubenswrapper[4895]: I1206 09:06:27.913890 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 09:06:28 crc kubenswrapper[4895]: I1206 09:06:28.939844 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 06 09:06:29 crc kubenswrapper[4895]: I1206 09:06:29.695907 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:06:29 crc kubenswrapper[4895]: I1206 09:06:29.696013 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:06:32 crc kubenswrapper[4895]: I1206 09:06:32.858980 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 09:06:32 crc kubenswrapper[4895]: I1206 09:06:32.859783 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 09:06:32 crc kubenswrapper[4895]: I1206 09:06:32.913716 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 09:06:32 crc kubenswrapper[4895]: I1206 09:06:32.913767 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 09:06:32 crc kubenswrapper[4895]: I1206 09:06:32.922430 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.391763 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-xh7zd"] Dec 06 09:06:33 crc kubenswrapper[4895]: E1206 09:06:33.392128 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41a06e0-4b8a-473a-8eb9-5681761909f2" containerName="dnsmasq-dns" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.392145 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41a06e0-4b8a-473a-8eb9-5681761909f2" containerName="dnsmasq-dns" Dec 06 09:06:33 crc kubenswrapper[4895]: E1206 09:06:33.392161 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41a06e0-4b8a-473a-8eb9-5681761909f2" containerName="init" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.392167 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41a06e0-4b8a-473a-8eb9-5681761909f2" containerName="init" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.392348 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e41a06e0-4b8a-473a-8eb9-5681761909f2" containerName="dnsmasq-dns" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.392967 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xh7zd" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.397026 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.398158 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.412146 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xh7zd"] Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.560535 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cb73be7-9428-40ee-826b-f1af8a0c1838-scripts\") pod \"nova-cell1-cell-mapping-xh7zd\" (UID: \"1cb73be7-9428-40ee-826b-f1af8a0c1838\") " pod="openstack/nova-cell1-cell-mapping-xh7zd" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.560970 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb73be7-9428-40ee-826b-f1af8a0c1838-config-data\") pod \"nova-cell1-cell-mapping-xh7zd\" (UID: \"1cb73be7-9428-40ee-826b-f1af8a0c1838\") " pod="openstack/nova-cell1-cell-mapping-xh7zd" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.561168 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb73be7-9428-40ee-826b-f1af8a0c1838-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xh7zd\" (UID: \"1cb73be7-9428-40ee-826b-f1af8a0c1838\") " pod="openstack/nova-cell1-cell-mapping-xh7zd" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.561356 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znlrg\" (UniqueName: \"kubernetes.io/projected/1cb73be7-9428-40ee-826b-f1af8a0c1838-kube-api-access-znlrg\") pod \"nova-cell1-cell-mapping-xh7zd\" (UID: \"1cb73be7-9428-40ee-826b-f1af8a0c1838\") " pod="openstack/nova-cell1-cell-mapping-xh7zd" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.662837 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znlrg\" (UniqueName: \"kubernetes.io/projected/1cb73be7-9428-40ee-826b-f1af8a0c1838-kube-api-access-znlrg\") pod \"nova-cell1-cell-mapping-xh7zd\" (UID: \"1cb73be7-9428-40ee-826b-f1af8a0c1838\") " pod="openstack/nova-cell1-cell-mapping-xh7zd" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.662911 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cb73be7-9428-40ee-826b-f1af8a0c1838-scripts\") pod \"nova-cell1-cell-mapping-xh7zd\" (UID: \"1cb73be7-9428-40ee-826b-f1af8a0c1838\") " pod="openstack/nova-cell1-cell-mapping-xh7zd" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.662959 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb73be7-9428-40ee-826b-f1af8a0c1838-config-data\") pod \"nova-cell1-cell-mapping-xh7zd\" (UID: \"1cb73be7-9428-40ee-826b-f1af8a0c1838\") " pod="openstack/nova-cell1-cell-mapping-xh7zd" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.662999 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1cb73be7-9428-40ee-826b-f1af8a0c1838-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xh7zd\" (UID: \"1cb73be7-9428-40ee-826b-f1af8a0c1838\") " pod="openstack/nova-cell1-cell-mapping-xh7zd" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.671011 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cb73be7-9428-40ee-826b-f1af8a0c1838-scripts\") pod \"nova-cell1-cell-mapping-xh7zd\" (UID: \"1cb73be7-9428-40ee-826b-f1af8a0c1838\") " pod="openstack/nova-cell1-cell-mapping-xh7zd" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.671162 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb73be7-9428-40ee-826b-f1af8a0c1838-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xh7zd\" (UID: \"1cb73be7-9428-40ee-826b-f1af8a0c1838\") " pod="openstack/nova-cell1-cell-mapping-xh7zd" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.671551 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb73be7-9428-40ee-826b-f1af8a0c1838-config-data\") pod \"nova-cell1-cell-mapping-xh7zd\" (UID: \"1cb73be7-9428-40ee-826b-f1af8a0c1838\") " pod="openstack/nova-cell1-cell-mapping-xh7zd" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.686944 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znlrg\" (UniqueName: \"kubernetes.io/projected/1cb73be7-9428-40ee-826b-f1af8a0c1838-kube-api-access-znlrg\") pod \"nova-cell1-cell-mapping-xh7zd\" (UID: \"1cb73be7-9428-40ee-826b-f1af8a0c1838\") " pod="openstack/nova-cell1-cell-mapping-xh7zd" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.714778 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xh7zd" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.940646 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.941674 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1d246a4c-8b90-4173-87e2-03299f8f2196" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.82:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.942033 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1d246a4c-8b90-4173-87e2-03299f8f2196" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.82:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 09:06:33 crc kubenswrapper[4895]: I1206 09:06:33.990040 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 06 09:06:34 crc kubenswrapper[4895]: I1206 09:06:34.031617 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2c273580-a161-44a8-8b56-e0c4a01cadce" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.84:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 09:06:34 crc kubenswrapper[4895]: I1206 09:06:34.031914 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2c273580-a161-44a8-8b56-e0c4a01cadce" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.84:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 09:06:34 crc kubenswrapper[4895]: I1206 09:06:34.209040 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xh7zd"] Dec 06 09:06:34 crc kubenswrapper[4895]: W1206 09:06:34.214753 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cb73be7_9428_40ee_826b_f1af8a0c1838.slice/crio-808d347b5df7359b74041300834a9d7c5e1aaa3726c964cfe97a50fb6a505cb5 WatchSource:0}: Error finding container 808d347b5df7359b74041300834a9d7c5e1aaa3726c964cfe97a50fb6a505cb5: Status 404 returned error can't find the container with id 808d347b5df7359b74041300834a9d7c5e1aaa3726c964cfe97a50fb6a505cb5 Dec 06 09:06:34 crc kubenswrapper[4895]: I1206 09:06:34.579581 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xh7zd" event={"ID":"1cb73be7-9428-40ee-826b-f1af8a0c1838","Type":"ContainerStarted","Data":"ed4a8d5a4d75eab2c5dc35b2869a835da7cecba5dd0c664cd363e91c056df55f"} Dec 06 09:06:34 crc kubenswrapper[4895]: I1206 09:06:34.579646 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xh7zd" event={"ID":"1cb73be7-9428-40ee-826b-f1af8a0c1838","Type":"ContainerStarted","Data":"808d347b5df7359b74041300834a9d7c5e1aaa3726c964cfe97a50fb6a505cb5"} Dec 06 09:06:34 crc kubenswrapper[4895]: I1206 09:06:34.613622 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-xh7zd" podStartSLOduration=1.613593234 podStartE2EDuration="1.613593234s" podCreationTimestamp="2025-12-06 09:06:33 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:06:34.598726815 +0000 UTC m=+7757.000115685" watchObservedRunningTime="2025-12-06 09:06:34.613593234 +0000 UTC m=+7757.014982134" Dec 06 09:06:34 crc kubenswrapper[4895]: I1206 09:06:34.619868 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 06 09:06:39 crc kubenswrapper[4895]: I1206 09:06:39.639960 4895 generic.go:334] "Generic (PLEG): container finished" podID="1cb73be7-9428-40ee-826b-f1af8a0c1838" containerID="ed4a8d5a4d75eab2c5dc35b2869a835da7cecba5dd0c664cd363e91c056df55f" exitCode=0 Dec 06 09:06:39 crc kubenswrapper[4895]: I1206 09:06:39.640038 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xh7zd" event={"ID":"1cb73be7-9428-40ee-826b-f1af8a0c1838","Type":"ContainerDied","Data":"ed4a8d5a4d75eab2c5dc35b2869a835da7cecba5dd0c664cd363e91c056df55f"} Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.065167 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xh7zd" Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.133545 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cb73be7-9428-40ee-826b-f1af8a0c1838-scripts\") pod \"1cb73be7-9428-40ee-826b-f1af8a0c1838\" (UID: \"1cb73be7-9428-40ee-826b-f1af8a0c1838\") " Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.133672 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb73be7-9428-40ee-826b-f1af8a0c1838-config-data\") pod \"1cb73be7-9428-40ee-826b-f1af8a0c1838\" (UID: \"1cb73be7-9428-40ee-826b-f1af8a0c1838\") " Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.133723 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znlrg\" (UniqueName: \"kubernetes.io/projected/1cb73be7-9428-40ee-826b-f1af8a0c1838-kube-api-access-znlrg\") pod \"1cb73be7-9428-40ee-826b-f1af8a0c1838\" (UID: \"1cb73be7-9428-40ee-826b-f1af8a0c1838\") " Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.133785 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb73be7-9428-40ee-826b-f1af8a0c1838-combined-ca-bundle\") pod \"1cb73be7-9428-40ee-826b-f1af8a0c1838\" (UID: \"1cb73be7-9428-40ee-826b-f1af8a0c1838\") " Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.140862 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb73be7-9428-40ee-826b-f1af8a0c1838-kube-api-access-znlrg" (OuterVolumeSpecName: "kube-api-access-znlrg") pod "1cb73be7-9428-40ee-826b-f1af8a0c1838" (UID: "1cb73be7-9428-40ee-826b-f1af8a0c1838"). InnerVolumeSpecName "kube-api-access-znlrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.141375 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cb73be7-9428-40ee-826b-f1af8a0c1838-scripts" (OuterVolumeSpecName: "scripts") pod "1cb73be7-9428-40ee-826b-f1af8a0c1838" (UID: "1cb73be7-9428-40ee-826b-f1af8a0c1838"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.166035 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cb73be7-9428-40ee-826b-f1af8a0c1838-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cb73be7-9428-40ee-826b-f1af8a0c1838" (UID: "1cb73be7-9428-40ee-826b-f1af8a0c1838"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.185993 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cb73be7-9428-40ee-826b-f1af8a0c1838-config-data" (OuterVolumeSpecName: "config-data") pod "1cb73be7-9428-40ee-826b-f1af8a0c1838" (UID: "1cb73be7-9428-40ee-826b-f1af8a0c1838"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.235382 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb73be7-9428-40ee-826b-f1af8a0c1838-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.235420 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znlrg\" (UniqueName: \"kubernetes.io/projected/1cb73be7-9428-40ee-826b-f1af8a0c1838-kube-api-access-znlrg\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.235432 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb73be7-9428-40ee-826b-f1af8a0c1838-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.235441 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cb73be7-9428-40ee-826b-f1af8a0c1838-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.660101 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xh7zd" event={"ID":"1cb73be7-9428-40ee-826b-f1af8a0c1838","Type":"ContainerDied","Data":"808d347b5df7359b74041300834a9d7c5e1aaa3726c964cfe97a50fb6a505cb5"} Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.660140 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xh7zd" Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.660148 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="808d347b5df7359b74041300834a9d7c5e1aaa3726c964cfe97a50fb6a505cb5" Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.886503 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.886855 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1d246a4c-8b90-4173-87e2-03299f8f2196" containerName="nova-api-log" containerID="cri-o://953a3adf7eda96068dc966f830643a0a48dec31a748777167ba0b4451dc2fe7e" gracePeriod=30 Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.886905 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1d246a4c-8b90-4173-87e2-03299f8f2196" containerName="nova-api-api" containerID="cri-o://6e885f76b91bf385159978ebef1f77fe529ab8667d39e778ac302270b35a1cbd" gracePeriod=30 Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.896395 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.897037 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="559a97b3-560f-477b-b92a-e3b611a40713" containerName="nova-scheduler-scheduler" containerID="cri-o://bc585c163a520456f0160c683995149f2d5494de3bd33b9ae55d650841b1313b" gracePeriod=30 Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.935939 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.936207 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2c273580-a161-44a8-8b56-e0c4a01cadce" containerName="nova-metadata-log" containerID="cri-o://775f0e66e526e43b715dc1f2e97da26708212a878eee5a852e452f85c38e04ae" gracePeriod=30 Dec 06 09:06:41 crc kubenswrapper[4895]: I1206 09:06:41.936358 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2c273580-a161-44a8-8b56-e0c4a01cadce" containerName="nova-metadata-metadata" containerID="cri-o://638b93bee941bf5dae8a630fdfaf3704d048ac7f8ab9ef1bcbbadbb656b1b7da" gracePeriod=30 Dec 06 09:06:41 crc kubenswrapper[4895]: E1206 09:06:41.993313 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d246a4c_8b90_4173_87e2_03299f8f2196.slice/crio-953a3adf7eda96068dc966f830643a0a48dec31a748777167ba0b4451dc2fe7e.scope\": RecentStats: unable to find data in memory cache]" Dec 06 09:06:42 crc kubenswrapper[4895]: I1206 09:06:42.674539 4895 generic.go:334] "Generic (PLEG): container finished" podID="2c273580-a161-44a8-8b56-e0c4a01cadce" containerID="775f0e66e526e43b715dc1f2e97da26708212a878eee5a852e452f85c38e04ae" exitCode=143 Dec 06 09:06:42 crc kubenswrapper[4895]: I1206 09:06:42.674625 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2c273580-a161-44a8-8b56-e0c4a01cadce","Type":"ContainerDied","Data":"775f0e66e526e43b715dc1f2e97da26708212a878eee5a852e452f85c38e04ae"} Dec 06 09:06:42 crc kubenswrapper[4895]: I1206 09:06:42.677455 4895 generic.go:334] 
"Generic (PLEG): container finished" podID="1d246a4c-8b90-4173-87e2-03299f8f2196" containerID="953a3adf7eda96068dc966f830643a0a48dec31a748777167ba0b4451dc2fe7e" exitCode=143 Dec 06 09:06:42 crc kubenswrapper[4895]: I1206 09:06:42.677494 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1d246a4c-8b90-4173-87e2-03299f8f2196","Type":"ContainerDied","Data":"953a3adf7eda96068dc966f830643a0a48dec31a748777167ba0b4451dc2fe7e"} Dec 06 09:06:43 crc kubenswrapper[4895]: E1206 09:06:43.943272 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bc585c163a520456f0160c683995149f2d5494de3bd33b9ae55d650841b1313b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 09:06:43 crc kubenswrapper[4895]: E1206 09:06:43.946046 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bc585c163a520456f0160c683995149f2d5494de3bd33b9ae55d650841b1313b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 09:06:43 crc kubenswrapper[4895]: E1206 09:06:43.947990 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bc585c163a520456f0160c683995149f2d5494de3bd33b9ae55d650841b1313b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 09:06:43 crc kubenswrapper[4895]: E1206 09:06:43.948089 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="559a97b3-560f-477b-b92a-e3b611a40713" containerName="nova-scheduler-scheduler" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.558396 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.564024 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.706631 4895 generic.go:334] "Generic (PLEG): container finished" podID="2c273580-a161-44a8-8b56-e0c4a01cadce" containerID="638b93bee941bf5dae8a630fdfaf3704d048ac7f8ab9ef1bcbbadbb656b1b7da" exitCode=0 Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.706711 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.706703 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2c273580-a161-44a8-8b56-e0c4a01cadce","Type":"ContainerDied","Data":"638b93bee941bf5dae8a630fdfaf3704d048ac7f8ab9ef1bcbbadbb656b1b7da"} Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.707169 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2c273580-a161-44a8-8b56-e0c4a01cadce","Type":"ContainerDied","Data":"236ddb828f7040ec95d35d1c79634f8accfec0164d2f00a521b79893bfdd8805"} Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.707193 4895 scope.go:117] "RemoveContainer" containerID="638b93bee941bf5dae8a630fdfaf3704d048ac7f8ab9ef1bcbbadbb656b1b7da" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.710892 4895 generic.go:334] "Generic (PLEG): container finished" podID="1d246a4c-8b90-4173-87e2-03299f8f2196" containerID="6e885f76b91bf385159978ebef1f77fe529ab8667d39e778ac302270b35a1cbd" exitCode=0 Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.710939 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1d246a4c-8b90-4173-87e2-03299f8f2196","Type":"ContainerDied","Data":"6e885f76b91bf385159978ebef1f77fe529ab8667d39e778ac302270b35a1cbd"} Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.710989 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1d246a4c-8b90-4173-87e2-03299f8f2196","Type":"ContainerDied","Data":"eadf3c7af7fc4b91382a404da8788e6ef81b84bd5d50cb62040ccf4c934332c8"} Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.710949 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.724557 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d246a4c-8b90-4173-87e2-03299f8f2196-combined-ca-bundle\") pod \"1d246a4c-8b90-4173-87e2-03299f8f2196\" (UID: \"1d246a4c-8b90-4173-87e2-03299f8f2196\") " Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.724601 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d246a4c-8b90-4173-87e2-03299f8f2196-logs\") pod \"1d246a4c-8b90-4173-87e2-03299f8f2196\" (UID: \"1d246a4c-8b90-4173-87e2-03299f8f2196\") " Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.724664 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c273580-a161-44a8-8b56-e0c4a01cadce-combined-ca-bundle\") pod \"2c273580-a161-44a8-8b56-e0c4a01cadce\" (UID: \"2c273580-a161-44a8-8b56-e0c4a01cadce\") " Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.724721 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c273580-a161-44a8-8b56-e0c4a01cadce-logs\") pod \"2c273580-a161-44a8-8b56-e0c4a01cadce\" (UID: \"2c273580-a161-44a8-8b56-e0c4a01cadce\") " Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.724861 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx7m2\" (UniqueName: \"kubernetes.io/projected/2c273580-a161-44a8-8b56-e0c4a01cadce-kube-api-access-rx7m2\") pod \"2c273580-a161-44a8-8b56-e0c4a01cadce\" (UID: \"2c273580-a161-44a8-8b56-e0c4a01cadce\") " Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.724883 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrjxq\" (UniqueName: \"kubernetes.io/projected/1d246a4c-8b90-4173-87e2-03299f8f2196-kube-api-access-jrjxq\") pod \"1d246a4c-8b90-4173-87e2-03299f8f2196\" (UID: \"1d246a4c-8b90-4173-87e2-03299f8f2196\") " Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.724937 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c273580-a161-44a8-8b56-e0c4a01cadce-config-data\") pod \"2c273580-a161-44a8-8b56-e0c4a01cadce\" (UID: \"2c273580-a161-44a8-8b56-e0c4a01cadce\") " Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.724972 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d246a4c-8b90-4173-87e2-03299f8f2196-config-data\") pod \"1d246a4c-8b90-4173-87e2-03299f8f2196\" (UID: \"1d246a4c-8b90-4173-87e2-03299f8f2196\") " Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.725277 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d246a4c-8b90-4173-87e2-03299f8f2196-logs" (OuterVolumeSpecName: "logs") pod "1d246a4c-8b90-4173-87e2-03299f8f2196" (UID: "1d246a4c-8b90-4173-87e2-03299f8f2196"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.725763 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c273580-a161-44a8-8b56-e0c4a01cadce-logs" (OuterVolumeSpecName: "logs") pod "2c273580-a161-44a8-8b56-e0c4a01cadce" (UID: "2c273580-a161-44a8-8b56-e0c4a01cadce"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.742028 4895 scope.go:117] "RemoveContainer" containerID="775f0e66e526e43b715dc1f2e97da26708212a878eee5a852e452f85c38e04ae" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.742128 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d246a4c-8b90-4173-87e2-03299f8f2196-kube-api-access-jrjxq" (OuterVolumeSpecName: "kube-api-access-jrjxq") pod "1d246a4c-8b90-4173-87e2-03299f8f2196" (UID: "1d246a4c-8b90-4173-87e2-03299f8f2196"). InnerVolumeSpecName "kube-api-access-jrjxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.742196 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c273580-a161-44a8-8b56-e0c4a01cadce-kube-api-access-rx7m2" (OuterVolumeSpecName: "kube-api-access-rx7m2") pod "2c273580-a161-44a8-8b56-e0c4a01cadce" (UID: "2c273580-a161-44a8-8b56-e0c4a01cadce"). InnerVolumeSpecName "kube-api-access-rx7m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.755982 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c273580-a161-44a8-8b56-e0c4a01cadce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c273580-a161-44a8-8b56-e0c4a01cadce" (UID: "2c273580-a161-44a8-8b56-e0c4a01cadce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.756158 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d246a4c-8b90-4173-87e2-03299f8f2196-config-data" (OuterVolumeSpecName: "config-data") pod "1d246a4c-8b90-4173-87e2-03299f8f2196" (UID: "1d246a4c-8b90-4173-87e2-03299f8f2196"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.759397 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c273580-a161-44a8-8b56-e0c4a01cadce-config-data" (OuterVolumeSpecName: "config-data") pod "2c273580-a161-44a8-8b56-e0c4a01cadce" (UID: "2c273580-a161-44a8-8b56-e0c4a01cadce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.760013 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d246a4c-8b90-4173-87e2-03299f8f2196-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d246a4c-8b90-4173-87e2-03299f8f2196" (UID: "1d246a4c-8b90-4173-87e2-03299f8f2196"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.827160 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d246a4c-8b90-4173-87e2-03299f8f2196-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.827373 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d246a4c-8b90-4173-87e2-03299f8f2196-logs\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.827462 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c273580-a161-44a8-8b56-e0c4a01cadce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.827639 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c273580-a161-44a8-8b56-e0c4a01cadce-logs\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.827718 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx7m2\" (UniqueName: \"kubernetes.io/projected/2c273580-a161-44a8-8b56-e0c4a01cadce-kube-api-access-rx7m2\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.827795 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrjxq\" (UniqueName: \"kubernetes.io/projected/1d246a4c-8b90-4173-87e2-03299f8f2196-kube-api-access-jrjxq\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.827866 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c273580-a161-44a8-8b56-e0c4a01cadce-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.827935 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d246a4c-8b90-4173-87e2-03299f8f2196-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.852956 4895 scope.go:117] "RemoveContainer" containerID="638b93bee941bf5dae8a630fdfaf3704d048ac7f8ab9ef1bcbbadbb656b1b7da" Dec 06 09:06:45 crc kubenswrapper[4895]: E1206 09:06:45.853910 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"638b93bee941bf5dae8a630fdfaf3704d048ac7f8ab9ef1bcbbadbb656b1b7da\": container with ID starting with 638b93bee941bf5dae8a630fdfaf3704d048ac7f8ab9ef1bcbbadbb656b1b7da not found: ID does not exist" containerID="638b93bee941bf5dae8a630fdfaf3704d048ac7f8ab9ef1bcbbadbb656b1b7da" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.854290 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"638b93bee941bf5dae8a630fdfaf3704d048ac7f8ab9ef1bcbbadbb656b1b7da"} err="failed to get container status \"638b93bee941bf5dae8a630fdfaf3704d048ac7f8ab9ef1bcbbadbb656b1b7da\": rpc error: code = NotFound desc = could not find container \"638b93bee941bf5dae8a630fdfaf3704d048ac7f8ab9ef1bcbbadbb656b1b7da\": container with ID starting with 638b93bee941bf5dae8a630fdfaf3704d048ac7f8ab9ef1bcbbadbb656b1b7da not found: ID does not exist" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.854384 4895 scope.go:117] "RemoveContainer" 
containerID="775f0e66e526e43b715dc1f2e97da26708212a878eee5a852e452f85c38e04ae" Dec 06 09:06:45 crc kubenswrapper[4895]: E1206 09:06:45.855315 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"775f0e66e526e43b715dc1f2e97da26708212a878eee5a852e452f85c38e04ae\": container with ID starting with 775f0e66e526e43b715dc1f2e97da26708212a878eee5a852e452f85c38e04ae not found: ID does not exist" containerID="775f0e66e526e43b715dc1f2e97da26708212a878eee5a852e452f85c38e04ae" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.855351 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"775f0e66e526e43b715dc1f2e97da26708212a878eee5a852e452f85c38e04ae"} err="failed to get container status \"775f0e66e526e43b715dc1f2e97da26708212a878eee5a852e452f85c38e04ae\": rpc error: code = NotFound desc = could not find container \"775f0e66e526e43b715dc1f2e97da26708212a878eee5a852e452f85c38e04ae\": container with ID starting with 775f0e66e526e43b715dc1f2e97da26708212a878eee5a852e452f85c38e04ae not found: ID does not exist" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.855373 4895 scope.go:117] "RemoveContainer" containerID="6e885f76b91bf385159978ebef1f77fe529ab8667d39e778ac302270b35a1cbd" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.878830 4895 scope.go:117] "RemoveContainer" containerID="953a3adf7eda96068dc966f830643a0a48dec31a748777167ba0b4451dc2fe7e" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.896906 4895 scope.go:117] "RemoveContainer" containerID="6e885f76b91bf385159978ebef1f77fe529ab8667d39e778ac302270b35a1cbd" Dec 06 09:06:45 crc kubenswrapper[4895]: E1206 09:06:45.897452 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e885f76b91bf385159978ebef1f77fe529ab8667d39e778ac302270b35a1cbd\": container with ID starting with 6e885f76b91bf385159978ebef1f77fe529ab8667d39e778ac302270b35a1cbd not found: ID does not exist" containerID="6e885f76b91bf385159978ebef1f77fe529ab8667d39e778ac302270b35a1cbd" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.897527 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e885f76b91bf385159978ebef1f77fe529ab8667d39e778ac302270b35a1cbd"} err="failed to get container status \"6e885f76b91bf385159978ebef1f77fe529ab8667d39e778ac302270b35a1cbd\": rpc error: code = NotFound desc = could not find container \"6e885f76b91bf385159978ebef1f77fe529ab8667d39e778ac302270b35a1cbd\": container with ID starting with 6e885f76b91bf385159978ebef1f77fe529ab8667d39e778ac302270b35a1cbd not found: ID does not exist" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.897580 4895 scope.go:117] "RemoveContainer" containerID="953a3adf7eda96068dc966f830643a0a48dec31a748777167ba0b4451dc2fe7e" Dec 06 09:06:45 crc kubenswrapper[4895]: E1206 09:06:45.898077 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"953a3adf7eda96068dc966f830643a0a48dec31a748777167ba0b4451dc2fe7e\": container with ID starting with 953a3adf7eda96068dc966f830643a0a48dec31a748777167ba0b4451dc2fe7e not found: ID does not exist" containerID="953a3adf7eda96068dc966f830643a0a48dec31a748777167ba0b4451dc2fe7e" Dec 06 09:06:45 crc kubenswrapper[4895]: I1206 09:06:45.898157 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"953a3adf7eda96068dc966f830643a0a48dec31a748777167ba0b4451dc2fe7e"} err="failed to get container status \"953a3adf7eda96068dc966f830643a0a48dec31a748777167ba0b4451dc2fe7e\": rpc error: code = NotFound desc = could not find container \"953a3adf7eda96068dc966f830643a0a48dec31a748777167ba0b4451dc2fe7e\": container with ID starting with 953a3adf7eda96068dc966f830643a0a48dec31a748777167ba0b4451dc2fe7e not found: ID does not exist" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.087078 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.101978 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.128934 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.139241 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.159573 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:06:46 crc kubenswrapper[4895]: E1206 09:06:46.160024 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb73be7-9428-40ee-826b-f1af8a0c1838" containerName="nova-manage" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.160037 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb73be7-9428-40ee-826b-f1af8a0c1838" containerName="nova-manage" Dec 06 09:06:46 crc kubenswrapper[4895]: E1206 09:06:46.160050 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c273580-a161-44a8-8b56-e0c4a01cadce" containerName="nova-metadata-metadata" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.160057 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c273580-a161-44a8-8b56-e0c4a01cadce" containerName="nova-metadata-metadata" Dec 06 09:06:46 crc kubenswrapper[4895]: E1206 09:06:46.160068 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d246a4c-8b90-4173-87e2-03299f8f2196" containerName="nova-api-api" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.160074 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d246a4c-8b90-4173-87e2-03299f8f2196" containerName="nova-api-api" Dec 06 09:06:46 crc kubenswrapper[4895]: E1206 09:06:46.160085 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c273580-a161-44a8-8b56-e0c4a01cadce" containerName="nova-metadata-log" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.160091 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c273580-a161-44a8-8b56-e0c4a01cadce" containerName="nova-metadata-log" Dec 06 09:06:46 crc kubenswrapper[4895]: E1206 09:06:46.160114 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d246a4c-8b90-4173-87e2-03299f8f2196" containerName="nova-api-log" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.160119 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d246a4c-8b90-4173-87e2-03299f8f2196" containerName="nova-api-log" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.160374 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d246a4c-8b90-4173-87e2-03299f8f2196" containerName="nova-api-log" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.160384 4895 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1cb73be7-9428-40ee-826b-f1af8a0c1838" containerName="nova-manage" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.160395 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c273580-a161-44a8-8b56-e0c4a01cadce" containerName="nova-metadata-metadata" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.160406 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c273580-a161-44a8-8b56-e0c4a01cadce" containerName="nova-metadata-log" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.160422 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d246a4c-8b90-4173-87e2-03299f8f2196" containerName="nova-api-api" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.161382 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.167172 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.168050 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.179671 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.181211 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.183822 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.196838 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.239694 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.241943 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-logs\") pod \"nova-metadata-0\" (UID: \"d3b4bf62-fef8-4b66-beb6-46ab772abf2e\") " pod="openstack/nova-metadata-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.242144 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b9t8\" (UniqueName: \"kubernetes.io/projected/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-kube-api-access-4b9t8\") pod \"nova-metadata-0\" (UID: \"d3b4bf62-fef8-4b66-beb6-46ab772abf2e\") " pod="openstack/nova-metadata-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.242209 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-config-data\") pod \"nova-metadata-0\" (UID: \"d3b4bf62-fef8-4b66-beb6-46ab772abf2e\") " pod="openstack/nova-metadata-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.242243 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d3b4bf62-fef8-4b66-beb6-46ab772abf2e\") " pod="openstack/nova-metadata-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.344170 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/559a97b3-560f-477b-b92a-e3b611a40713-config-data\") pod \"559a97b3-560f-477b-b92a-e3b611a40713\" (UID: \"559a97b3-560f-477b-b92a-e3b611a40713\") " Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.344367 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/559a97b3-560f-477b-b92a-e3b611a40713-combined-ca-bundle\") pod \"559a97b3-560f-477b-b92a-e3b611a40713\" (UID: \"559a97b3-560f-477b-b92a-e3b611a40713\") " Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.344584 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5jzg\" (UniqueName: \"kubernetes.io/projected/559a97b3-560f-477b-b92a-e3b611a40713-kube-api-access-f5jzg\") pod \"559a97b3-560f-477b-b92a-e3b611a40713\" (UID: \"559a97b3-560f-477b-b92a-e3b611a40713\") " Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.344863 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-logs\") pod \"nova-metadata-0\" (UID: \"d3b4bf62-fef8-4b66-beb6-46ab772abf2e\") " pod="openstack/nova-metadata-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.344909 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh4gm\" (UniqueName: \"kubernetes.io/projected/27a59e68-15e8-46f2-8918-c01bd353d3d3-kube-api-access-rh4gm\") pod \"nova-api-0\" (UID: \"27a59e68-15e8-46f2-8918-c01bd353d3d3\") " pod="openstack/nova-api-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.344934 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/27a59e68-15e8-46f2-8918-c01bd353d3d3-logs\") pod \"nova-api-0\" (UID: \"27a59e68-15e8-46f2-8918-c01bd353d3d3\") " pod="openstack/nova-api-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.344987 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b9t8\" (UniqueName: \"kubernetes.io/projected/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-kube-api-access-4b9t8\") pod \"nova-metadata-0\" (UID: \"d3b4bf62-fef8-4b66-beb6-46ab772abf2e\") " pod="openstack/nova-metadata-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.345009 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-config-data\") pod \"nova-metadata-0\" (UID: \"d3b4bf62-fef8-4b66-beb6-46ab772abf2e\") " pod="openstack/nova-metadata-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.345028 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d3b4bf62-fef8-4b66-beb6-46ab772abf2e\") " pod="openstack/nova-metadata-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.345153 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a59e68-15e8-46f2-8918-c01bd353d3d3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"27a59e68-15e8-46f2-8918-c01bd353d3d3\") " pod="openstack/nova-api-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.345190 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a59e68-15e8-46f2-8918-c01bd353d3d3-config-data\") pod \"nova-api-0\" (UID: \"27a59e68-15e8-46f2-8918-c01bd353d3d3\") " pod="openstack/nova-api-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.346826 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-logs\") pod \"nova-metadata-0\" (UID: \"d3b4bf62-fef8-4b66-beb6-46ab772abf2e\") " pod="openstack/nova-metadata-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.350981 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/559a97b3-560f-477b-b92a-e3b611a40713-kube-api-access-f5jzg" (OuterVolumeSpecName: "kube-api-access-f5jzg") pod "559a97b3-560f-477b-b92a-e3b611a40713" (UID: "559a97b3-560f-477b-b92a-e3b611a40713"). InnerVolumeSpecName "kube-api-access-f5jzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.351067 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-config-data\") pod \"nova-metadata-0\" (UID: \"d3b4bf62-fef8-4b66-beb6-46ab772abf2e\") " pod="openstack/nova-metadata-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.351408 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d3b4bf62-fef8-4b66-beb6-46ab772abf2e\") " pod="openstack/nova-metadata-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.363586 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b9t8\" (UniqueName: \"kubernetes.io/projected/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-kube-api-access-4b9t8\") pod \"nova-metadata-0\" (UID: \"d3b4bf62-fef8-4b66-beb6-46ab772abf2e\") " pod="openstack/nova-metadata-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.373200 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/559a97b3-560f-477b-b92a-e3b611a40713-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "559a97b3-560f-477b-b92a-e3b611a40713" (UID: "559a97b3-560f-477b-b92a-e3b611a40713"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.382243 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/559a97b3-560f-477b-b92a-e3b611a40713-config-data" (OuterVolumeSpecName: "config-data") pod "559a97b3-560f-477b-b92a-e3b611a40713" (UID: "559a97b3-560f-477b-b92a-e3b611a40713"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.447235 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a59e68-15e8-46f2-8918-c01bd353d3d3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"27a59e68-15e8-46f2-8918-c01bd353d3d3\") " pod="openstack/nova-api-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.447287 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a59e68-15e8-46f2-8918-c01bd353d3d3-config-data\") pod \"nova-api-0\" (UID: \"27a59e68-15e8-46f2-8918-c01bd353d3d3\") " pod="openstack/nova-api-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.447337 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh4gm\" (UniqueName: \"kubernetes.io/projected/27a59e68-15e8-46f2-8918-c01bd353d3d3-kube-api-access-rh4gm\") pod \"nova-api-0\" (UID: \"27a59e68-15e8-46f2-8918-c01bd353d3d3\") " pod="openstack/nova-api-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.447356 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27a59e68-15e8-46f2-8918-c01bd353d3d3-logs\") pod \"nova-api-0\" (UID: \"27a59e68-15e8-46f2-8918-c01bd353d3d3\") " pod="openstack/nova-api-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.447434 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/559a97b3-560f-477b-b92a-e3b611a40713-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.447446 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5jzg\" (UniqueName: \"kubernetes.io/projected/559a97b3-560f-477b-b92a-e3b611a40713-kube-api-access-f5jzg\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.447455 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/559a97b3-560f-477b-b92a-e3b611a40713-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.447824 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27a59e68-15e8-46f2-8918-c01bd353d3d3-logs\") pod \"nova-api-0\" (UID: \"27a59e68-15e8-46f2-8918-c01bd353d3d3\") " pod="openstack/nova-api-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.451213 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a59e68-15e8-46f2-8918-c01bd353d3d3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"27a59e68-15e8-46f2-8918-c01bd353d3d3\") " pod="openstack/nova-api-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.452331 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a59e68-15e8-46f2-8918-c01bd353d3d3-config-data\") pod \"nova-api-0\" (UID: \"27a59e68-15e8-46f2-8918-c01bd353d3d3\") " pod="openstack/nova-api-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.464016 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh4gm\" (UniqueName: \"kubernetes.io/projected/27a59e68-15e8-46f2-8918-c01bd353d3d3-kube-api-access-rh4gm\") pod 
\"nova-api-0\" (UID: \"27a59e68-15e8-46f2-8918-c01bd353d3d3\") " pod="openstack/nova-api-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.567144 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.576034 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.725106 4895 generic.go:334] "Generic (PLEG): container finished" podID="559a97b3-560f-477b-b92a-e3b611a40713" containerID="bc585c163a520456f0160c683995149f2d5494de3bd33b9ae55d650841b1313b" exitCode=0 Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.725188 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"559a97b3-560f-477b-b92a-e3b611a40713","Type":"ContainerDied","Data":"bc585c163a520456f0160c683995149f2d5494de3bd33b9ae55d650841b1313b"} Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.725288 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"559a97b3-560f-477b-b92a-e3b611a40713","Type":"ContainerDied","Data":"8d36504946ca1b7c726ce2f0b3c8a46e2bc5934db0c6c10649f04dd13756b595"} Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.725315 4895 scope.go:117] "RemoveContainer" containerID="bc585c163a520456f0160c683995149f2d5494de3bd33b9ae55d650841b1313b" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.725310 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.762631 4895 scope.go:117] "RemoveContainer" containerID="bc585c163a520456f0160c683995149f2d5494de3bd33b9ae55d650841b1313b" Dec 06 09:06:46 crc kubenswrapper[4895]: E1206 09:06:46.763621 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc585c163a520456f0160c683995149f2d5494de3bd33b9ae55d650841b1313b\": container with ID starting with bc585c163a520456f0160c683995149f2d5494de3bd33b9ae55d650841b1313b not found: ID does not exist" containerID="bc585c163a520456f0160c683995149f2d5494de3bd33b9ae55d650841b1313b" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.763676 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc585c163a520456f0160c683995149f2d5494de3bd33b9ae55d650841b1313b"} err="failed to get container status \"bc585c163a520456f0160c683995149f2d5494de3bd33b9ae55d650841b1313b\": rpc error: code = NotFound desc = could not find container \"bc585c163a520456f0160c683995149f2d5494de3bd33b9ae55d650841b1313b\": container with ID starting with bc585c163a520456f0160c683995149f2d5494de3bd33b9ae55d650841b1313b not found: ID does not exist" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.810188 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.823634 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.833180 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:06:46 crc kubenswrapper[4895]: E1206 09:06:46.833641 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="559a97b3-560f-477b-b92a-e3b611a40713" containerName="nova-scheduler-scheduler" Dec 06 
09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.833659 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="559a97b3-560f-477b-b92a-e3b611a40713" containerName="nova-scheduler-scheduler" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.833926 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="559a97b3-560f-477b-b92a-e3b611a40713" containerName="nova-scheduler-scheduler" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.834636 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.837994 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.845719 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.898875 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.955371 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt2wm\" (UniqueName: \"kubernetes.io/projected/d7c353df-e876-42b4-9844-035e06b1f5a2-kube-api-access-bt2wm\") pod \"nova-scheduler-0\" (UID: \"d7c353df-e876-42b4-9844-035e06b1f5a2\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.955454 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c353df-e876-42b4-9844-035e06b1f5a2-config-data\") pod \"nova-scheduler-0\" (UID: \"d7c353df-e876-42b4-9844-035e06b1f5a2\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:46 crc kubenswrapper[4895]: I1206 09:06:46.955626 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c353df-e876-42b4-9844-035e06b1f5a2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d7c353df-e876-42b4-9844-035e06b1f5a2\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:47 crc kubenswrapper[4895]: I1206 09:06:47.057833 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c353df-e876-42b4-9844-035e06b1f5a2-config-data\") pod \"nova-scheduler-0\" (UID: \"d7c353df-e876-42b4-9844-035e06b1f5a2\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:47 crc kubenswrapper[4895]: I1206 09:06:47.058333 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c353df-e876-42b4-9844-035e06b1f5a2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d7c353df-e876-42b4-9844-035e06b1f5a2\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:47 crc kubenswrapper[4895]: I1206 09:06:47.058383 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt2wm\" (UniqueName: \"kubernetes.io/projected/d7c353df-e876-42b4-9844-035e06b1f5a2-kube-api-access-bt2wm\") pod \"nova-scheduler-0\" (UID: \"d7c353df-e876-42b4-9844-035e06b1f5a2\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:47 crc kubenswrapper[4895]: I1206 09:06:47.061539 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c353df-e876-42b4-9844-035e06b1f5a2-config-data\") 
pod \"nova-scheduler-0\" (UID: \"d7c353df-e876-42b4-9844-035e06b1f5a2\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:47 crc kubenswrapper[4895]: I1206 09:06:47.061652 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c353df-e876-42b4-9844-035e06b1f5a2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d7c353df-e876-42b4-9844-035e06b1f5a2\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:47 crc kubenswrapper[4895]: I1206 09:06:47.074243 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt2wm\" (UniqueName: \"kubernetes.io/projected/d7c353df-e876-42b4-9844-035e06b1f5a2-kube-api-access-bt2wm\") pod \"nova-scheduler-0\" (UID: \"d7c353df-e876-42b4-9844-035e06b1f5a2\") " pod="openstack/nova-scheduler-0" Dec 06 09:06:47 crc kubenswrapper[4895]: I1206 09:06:47.172335 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:06:47 crc kubenswrapper[4895]: I1206 09:06:47.174325 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 09:06:47 crc kubenswrapper[4895]: W1206 09:06:47.178977 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3b4bf62_fef8_4b66_beb6_46ab772abf2e.slice/crio-96c49fb716d5a2b57a7984f9af1e732dcb6ad50aac6a12b9945ccc0a7d914084 WatchSource:0}: Error finding container 96c49fb716d5a2b57a7984f9af1e732dcb6ad50aac6a12b9945ccc0a7d914084: Status 404 returned error can't find the container with id 96c49fb716d5a2b57a7984f9af1e732dcb6ad50aac6a12b9945ccc0a7d914084 Dec 06 09:06:47 crc kubenswrapper[4895]: I1206 09:06:47.628177 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:06:47 crc kubenswrapper[4895]: W1206 09:06:47.632058 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7c353df_e876_42b4_9844_035e06b1f5a2.slice/crio-a469c84eaa02790631fd9e576917c5c9a76cbf21c5a7c921c7ecc0fc6272ebcb WatchSource:0}: Error finding container a469c84eaa02790631fd9e576917c5c9a76cbf21c5a7c921c7ecc0fc6272ebcb: Status 404 returned error can't find the container with id a469c84eaa02790631fd9e576917c5c9a76cbf21c5a7c921c7ecc0fc6272ebcb Dec 06 09:06:47 crc kubenswrapper[4895]: I1206 09:06:47.741873 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27a59e68-15e8-46f2-8918-c01bd353d3d3","Type":"ContainerStarted","Data":"c07a9dd3c2507e698e41f660238b36709a5f3d4b6de7c85a77b973171b264bbc"} Dec 06 09:06:47 crc kubenswrapper[4895]: I1206 09:06:47.741930 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27a59e68-15e8-46f2-8918-c01bd353d3d3","Type":"ContainerStarted","Data":"046b571a70bf9e6b3b0aaca8f4cf98e3b4c117c904aa9344ca877dd010d4de55"} Dec 06 09:06:47 crc kubenswrapper[4895]: I1206 09:06:47.741945 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27a59e68-15e8-46f2-8918-c01bd353d3d3","Type":"ContainerStarted","Data":"376178755fec14a0f7f3e8fd6cd4d107b1e967716e324ddc95e455ea5cfc1a32"} Dec 06 09:06:47 crc kubenswrapper[4895]: I1206 09:06:47.745077 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"d7c353df-e876-42b4-9844-035e06b1f5a2","Type":"ContainerStarted","Data":"a469c84eaa02790631fd9e576917c5c9a76cbf21c5a7c921c7ecc0fc6272ebcb"} Dec 06 09:06:47 crc kubenswrapper[4895]: I1206 09:06:47.752564 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3b4bf62-fef8-4b66-beb6-46ab772abf2e","Type":"ContainerStarted","Data":"204389569ef5bc1ad77941b1dc930ab7e0b267048e418ac309622be59ce098ec"} Dec 06 09:06:47 crc kubenswrapper[4895]: I1206 09:06:47.752778 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3b4bf62-fef8-4b66-beb6-46ab772abf2e","Type":"ContainerStarted","Data":"ca6c7d681907aeca82a28f09d6b859089ec8c0ea1e94e3b37c6c2e3cd2d8418a"} Dec 06 09:06:47 crc kubenswrapper[4895]: I1206 09:06:47.752853 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3b4bf62-fef8-4b66-beb6-46ab772abf2e","Type":"ContainerStarted","Data":"96c49fb716d5a2b57a7984f9af1e732dcb6ad50aac6a12b9945ccc0a7d914084"} Dec 06 09:06:47 crc kubenswrapper[4895]: I1206 09:06:47.769922 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.7699033069999999 podStartE2EDuration="1.769903307s" podCreationTimestamp="2025-12-06 09:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:06:47.762088317 +0000 UTC m=+7770.163477207" watchObservedRunningTime="2025-12-06 09:06:47.769903307 +0000 UTC m=+7770.171292187" Dec 06 09:06:47 crc kubenswrapper[4895]: I1206 09:06:47.784606 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.7845816810000001 podStartE2EDuration="1.784581681s" podCreationTimestamp="2025-12-06 09:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:06:47.783875732 +0000 UTC m=+7770.185264612" watchObservedRunningTime="2025-12-06 09:06:47.784581681 +0000 UTC m=+7770.185970551" Dec 06 09:06:48 crc kubenswrapper[4895]: I1206 09:06:48.069417 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d246a4c-8b90-4173-87e2-03299f8f2196" path="/var/lib/kubelet/pods/1d246a4c-8b90-4173-87e2-03299f8f2196/volumes" Dec 06 09:06:48 crc kubenswrapper[4895]: I1206 09:06:48.071325 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c273580-a161-44a8-8b56-e0c4a01cadce" path="/var/lib/kubelet/pods/2c273580-a161-44a8-8b56-e0c4a01cadce/volumes" Dec 06 09:06:48 crc kubenswrapper[4895]: I1206 09:06:48.072574 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="559a97b3-560f-477b-b92a-e3b611a40713" path="/var/lib/kubelet/pods/559a97b3-560f-477b-b92a-e3b611a40713/volumes" Dec 06 09:06:48 crc kubenswrapper[4895]: I1206 09:06:48.764708 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d7c353df-e876-42b4-9844-035e06b1f5a2","Type":"ContainerStarted","Data":"50c11dd7ec808824b3bbcd77a5776164e6e75669cd0e8afabc25d053c2fcc91c"} Dec 06 09:06:48 crc kubenswrapper[4895]: I1206 09:06:48.789038 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.789009137 podStartE2EDuration="2.789009137s" podCreationTimestamp="2025-12-06 09:06:46 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:06:48.782399509 +0000 UTC m=+7771.183788379" watchObservedRunningTime="2025-12-06 09:06:48.789009137 +0000 UTC m=+7771.190398017" Dec 06 09:06:51 crc kubenswrapper[4895]: I1206 09:06:51.567724 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 09:06:51 crc kubenswrapper[4895]: I1206 09:06:51.568020 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 09:06:52 crc kubenswrapper[4895]: I1206 09:06:52.174517 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 06 09:06:56 crc kubenswrapper[4895]: I1206 09:06:56.567981 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 09:06:56 crc kubenswrapper[4895]: I1206 09:06:56.568548 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 09:06:56 crc kubenswrapper[4895]: I1206 09:06:56.577136 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 09:06:56 crc kubenswrapper[4895]: I1206 09:06:56.577235 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 09:06:57 crc kubenswrapper[4895]: I1206 09:06:57.174662 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 06 09:06:57 crc kubenswrapper[4895]: I1206 09:06:57.210548 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 06 09:06:57 crc kubenswrapper[4895]: I1206 09:06:57.731657 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d3b4bf62-fef8-4b66-beb6-46ab772abf2e" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.87:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 09:06:57 crc kubenswrapper[4895]: I1206 09:06:57.731677 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="27a59e68-15e8-46f2-8918-c01bd353d3d3" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.88:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 09:06:57 crc kubenswrapper[4895]: I1206 09:06:57.731742 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="27a59e68-15e8-46f2-8918-c01bd353d3d3" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.88:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 09:06:57 crc kubenswrapper[4895]: I1206 09:06:57.731682 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d3b4bf62-fef8-4b66-beb6-46ab772abf2e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.87:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 09:06:57 crc kubenswrapper[4895]: I1206 09:06:57.902099 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 06 09:06:59 crc kubenswrapper[4895]: I1206 09:06:59.695609 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:06:59 crc kubenswrapper[4895]: I1206 09:06:59.696059 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:06:59 crc kubenswrapper[4895]: I1206 09:06:59.696130 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 09:06:59 crc kubenswrapper[4895]: I1206 09:06:59.697293 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a08bae6a77367b44b45e288465887eadb17c31ec1d0ea904bd0118b481a1ef4"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 09:06:59 crc kubenswrapper[4895]: I1206 09:06:59.697388 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://6a08bae6a77367b44b45e288465887eadb17c31ec1d0ea904bd0118b481a1ef4" gracePeriod=600 Dec 06 09:06:59 crc kubenswrapper[4895]: I1206 09:06:59.896133 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="6a08bae6a77367b44b45e288465887eadb17c31ec1d0ea904bd0118b481a1ef4" exitCode=0 Dec 06 09:06:59 crc kubenswrapper[4895]: I1206 09:06:59.896178 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"6a08bae6a77367b44b45e288465887eadb17c31ec1d0ea904bd0118b481a1ef4"} Dec 06 09:06:59 crc kubenswrapper[4895]: I1206 09:06:59.896219 4895 scope.go:117] "RemoveContainer" containerID="f88e25cbe6d859821d460ed80f467a4274f4ce7c573da3fb57047cd01eda435e" Dec 06 09:07:00 crc kubenswrapper[4895]: I1206 09:07:00.918176 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef"} Dec 06 09:07:06 crc kubenswrapper[4895]: I1206 09:07:06.570436 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 09:07:06 crc kubenswrapper[4895]: I1206 09:07:06.571113 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 09:07:06 crc kubenswrapper[4895]: I1206 09:07:06.573044 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 09:07:06 crc kubenswrapper[4895]: I1206 09:07:06.573778 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 09:07:06 crc kubenswrapper[4895]: I1206 09:07:06.579575 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Dec 06 09:07:06 crc kubenswrapper[4895]: I1206 09:07:06.579898 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 06 09:07:06 crc kubenswrapper[4895]: I1206 09:07:06.583377 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 06 09:07:06 crc kubenswrapper[4895]: I1206 09:07:06.587739 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 06 09:07:06 crc kubenswrapper[4895]: I1206 09:07:06.982008 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 06 09:07:06 crc kubenswrapper[4895]: I1206 09:07:06.985752 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 06 09:07:07 crc kubenswrapper[4895]: I1206 09:07:07.174947 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f469c5dff-kcf5q"] Dec 06 09:07:07 crc kubenswrapper[4895]: I1206 09:07:07.176601 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:07:07 crc kubenswrapper[4895]: I1206 09:07:07.201761 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f469c5dff-kcf5q"] Dec 06 09:07:07 crc kubenswrapper[4895]: I1206 09:07:07.279138 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-dns-svc\") pod \"dnsmasq-dns-f469c5dff-kcf5q\" (UID: \"8ca2021c-ce9a-412f-ac4f-731953d5471e\") " pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:07:07 crc kubenswrapper[4895]: I1206 09:07:07.279249 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5fgv\" (UniqueName: \"kubernetes.io/projected/8ca2021c-ce9a-412f-ac4f-731953d5471e-kube-api-access-d5fgv\") pod \"dnsmasq-dns-f469c5dff-kcf5q\" (UID: \"8ca2021c-ce9a-412f-ac4f-731953d5471e\") " pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:07:07 crc kubenswrapper[4895]: I1206 09:07:07.279326 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-ovsdbserver-nb\") pod \"dnsmasq-dns-f469c5dff-kcf5q\" (UID: \"8ca2021c-ce9a-412f-ac4f-731953d5471e\") " pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:07:07 crc kubenswrapper[4895]: I1206 09:07:07.279400 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-config\") pod \"dnsmasq-dns-f469c5dff-kcf5q\" (UID: \"8ca2021c-ce9a-412f-ac4f-731953d5471e\") " pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:07:07 crc kubenswrapper[4895]: I1206 09:07:07.279534 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-ovsdbserver-sb\") pod \"dnsmasq-dns-f469c5dff-kcf5q\" (UID: \"8ca2021c-ce9a-412f-ac4f-731953d5471e\") " pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:07:07 crc kubenswrapper[4895]: I1206 09:07:07.381390 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5fgv\" (UniqueName: 
\"kubernetes.io/projected/8ca2021c-ce9a-412f-ac4f-731953d5471e-kube-api-access-d5fgv\") pod \"dnsmasq-dns-f469c5dff-kcf5q\" (UID: \"8ca2021c-ce9a-412f-ac4f-731953d5471e\") " pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:07:07 crc kubenswrapper[4895]: I1206 09:07:07.381454 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-ovsdbserver-nb\") pod \"dnsmasq-dns-f469c5dff-kcf5q\" (UID: \"8ca2021c-ce9a-412f-ac4f-731953d5471e\") " pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:07:07 crc kubenswrapper[4895]: I1206 09:07:07.381528 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-config\") pod \"dnsmasq-dns-f469c5dff-kcf5q\" (UID: \"8ca2021c-ce9a-412f-ac4f-731953d5471e\") " pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:07:07 crc kubenswrapper[4895]: I1206 09:07:07.381553 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-ovsdbserver-sb\") pod \"dnsmasq-dns-f469c5dff-kcf5q\" (UID: \"8ca2021c-ce9a-412f-ac4f-731953d5471e\") " pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:07:07 crc kubenswrapper[4895]: I1206 09:07:07.381677 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-dns-svc\") pod \"dnsmasq-dns-f469c5dff-kcf5q\" (UID: \"8ca2021c-ce9a-412f-ac4f-731953d5471e\") " pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:07:07 crc kubenswrapper[4895]: I1206 09:07:07.382747 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-config\") pod \"dnsmasq-dns-f469c5dff-kcf5q\" (UID: \"8ca2021c-ce9a-412f-ac4f-731953d5471e\") " pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:07:07 crc kubenswrapper[4895]: I1206 09:07:07.382774 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-dns-svc\") pod \"dnsmasq-dns-f469c5dff-kcf5q\" (UID: \"8ca2021c-ce9a-412f-ac4f-731953d5471e\") " pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:07:07 crc kubenswrapper[4895]: I1206 09:07:07.383314 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-ovsdbserver-nb\") pod \"dnsmasq-dns-f469c5dff-kcf5q\" (UID: \"8ca2021c-ce9a-412f-ac4f-731953d5471e\") " pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:07:07 crc kubenswrapper[4895]: I1206 09:07:07.383576 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-ovsdbserver-sb\") pod \"dnsmasq-dns-f469c5dff-kcf5q\" (UID: \"8ca2021c-ce9a-412f-ac4f-731953d5471e\") " pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:07:07 crc kubenswrapper[4895]: I1206 09:07:07.406827 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5fgv\" (UniqueName: \"kubernetes.io/projected/8ca2021c-ce9a-412f-ac4f-731953d5471e-kube-api-access-d5fgv\") pod \"dnsmasq-dns-f469c5dff-kcf5q\" (UID: 
\"8ca2021c-ce9a-412f-ac4f-731953d5471e\") " pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:07:07 crc kubenswrapper[4895]: I1206 09:07:07.521986 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:07:07 crc kubenswrapper[4895]: I1206 09:07:07.985168 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f469c5dff-kcf5q"] Dec 06 09:07:07 crc kubenswrapper[4895]: W1206 09:07:07.991495 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ca2021c_ce9a_412f_ac4f_731953d5471e.slice/crio-23137fd0f7038eaef28709c4b661cf80e710e757b8ff3497b878c8dc87d7e918 WatchSource:0}: Error finding container 23137fd0f7038eaef28709c4b661cf80e710e757b8ff3497b878c8dc87d7e918: Status 404 returned error can't find the container with id 23137fd0f7038eaef28709c4b661cf80e710e757b8ff3497b878c8dc87d7e918 Dec 06 09:07:08 crc kubenswrapper[4895]: I1206 09:07:08.999439 4895 generic.go:334] "Generic (PLEG): container finished" podID="8ca2021c-ce9a-412f-ac4f-731953d5471e" containerID="f0197fc659da8112f057b379ef9c12798251968d08344767cf1e8e2de40e0979" exitCode=0 Dec 06 09:07:08 crc kubenswrapper[4895]: I1206 09:07:08.999557 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" event={"ID":"8ca2021c-ce9a-412f-ac4f-731953d5471e","Type":"ContainerDied","Data":"f0197fc659da8112f057b379ef9c12798251968d08344767cf1e8e2de40e0979"} Dec 06 09:07:09 crc kubenswrapper[4895]: I1206 09:07:08.999822 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" event={"ID":"8ca2021c-ce9a-412f-ac4f-731953d5471e","Type":"ContainerStarted","Data":"23137fd0f7038eaef28709c4b661cf80e710e757b8ff3497b878c8dc87d7e918"} Dec 06 09:07:10 crc kubenswrapper[4895]: I1206 09:07:10.010091 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" event={"ID":"8ca2021c-ce9a-412f-ac4f-731953d5471e","Type":"ContainerStarted","Data":"6c62f59181cf2a53a044d6420ef00f341410e2fe1fe7ce04772b9294d4d9cf3d"} Dec 06 09:07:10 crc kubenswrapper[4895]: I1206 09:07:10.010709 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:07:10 crc kubenswrapper[4895]: I1206 09:07:10.031896 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" podStartSLOduration=3.031858225 podStartE2EDuration="3.031858225s" podCreationTimestamp="2025-12-06 09:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:07:10.03093134 +0000 UTC m=+7792.432320220" watchObservedRunningTime="2025-12-06 09:07:10.031858225 +0000 UTC m=+7792.433247095" Dec 06 09:07:17 crc kubenswrapper[4895]: I1206 09:07:17.523686 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:07:17 crc kubenswrapper[4895]: I1206 09:07:17.613868 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fd8bf4d65-wnvnc"] Dec 06 09:07:17 crc kubenswrapper[4895]: I1206 09:07:17.614408 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" podUID="ea340439-debf-49d8-aec1-002d6299334c" containerName="dnsmasq-dns" 
containerID="cri-o://44e7a66b43ca519e7b7dbe47399fdbeea08ae232a1aac81419dde5d48e60c6f5" gracePeriod=10 Dec 06 09:07:18 crc kubenswrapper[4895]: I1206 09:07:18.114611 4895 generic.go:334] "Generic (PLEG): container finished" podID="ea340439-debf-49d8-aec1-002d6299334c" containerID="44e7a66b43ca519e7b7dbe47399fdbeea08ae232a1aac81419dde5d48e60c6f5" exitCode=0 Dec 06 09:07:18 crc kubenswrapper[4895]: I1206 09:07:18.115222 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" event={"ID":"ea340439-debf-49d8-aec1-002d6299334c","Type":"ContainerDied","Data":"44e7a66b43ca519e7b7dbe47399fdbeea08ae232a1aac81419dde5d48e60c6f5"} Dec 06 09:07:18 crc kubenswrapper[4895]: I1206 09:07:18.115281 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" event={"ID":"ea340439-debf-49d8-aec1-002d6299334c","Type":"ContainerDied","Data":"27627d4d699fbf446c6bbdf0ffc1bce8333e1fb66256fffc0154994c7efe74bc"} Dec 06 09:07:18 crc kubenswrapper[4895]: I1206 09:07:18.115299 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27627d4d699fbf446c6bbdf0ffc1bce8333e1fb66256fffc0154994c7efe74bc" Dec 06 09:07:18 crc kubenswrapper[4895]: I1206 09:07:18.178358 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" Dec 06 09:07:18 crc kubenswrapper[4895]: I1206 09:07:18.338062 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-ovsdbserver-nb\") pod \"ea340439-debf-49d8-aec1-002d6299334c\" (UID: \"ea340439-debf-49d8-aec1-002d6299334c\") " Dec 06 09:07:18 crc kubenswrapper[4895]: I1206 09:07:18.338145 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnhvb\" (UniqueName: \"kubernetes.io/projected/ea340439-debf-49d8-aec1-002d6299334c-kube-api-access-qnhvb\") pod \"ea340439-debf-49d8-aec1-002d6299334c\" (UID: \"ea340439-debf-49d8-aec1-002d6299334c\") " Dec 06 09:07:18 crc kubenswrapper[4895]: I1206 09:07:18.338219 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-dns-svc\") pod \"ea340439-debf-49d8-aec1-002d6299334c\" (UID: \"ea340439-debf-49d8-aec1-002d6299334c\") " Dec 06 09:07:18 crc kubenswrapper[4895]: I1206 09:07:18.338257 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-ovsdbserver-sb\") pod \"ea340439-debf-49d8-aec1-002d6299334c\" (UID: \"ea340439-debf-49d8-aec1-002d6299334c\") " Dec 06 09:07:18 crc kubenswrapper[4895]: I1206 09:07:18.338372 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-config\") pod \"ea340439-debf-49d8-aec1-002d6299334c\" (UID: \"ea340439-debf-49d8-aec1-002d6299334c\") " Dec 06 09:07:18 crc kubenswrapper[4895]: I1206 09:07:18.360107 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea340439-debf-49d8-aec1-002d6299334c-kube-api-access-qnhvb" (OuterVolumeSpecName: "kube-api-access-qnhvb") pod "ea340439-debf-49d8-aec1-002d6299334c" (UID: "ea340439-debf-49d8-aec1-002d6299334c"). InnerVolumeSpecName "kube-api-access-qnhvb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:07:18 crc kubenswrapper[4895]: I1206 09:07:18.441162 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnhvb\" (UniqueName: \"kubernetes.io/projected/ea340439-debf-49d8-aec1-002d6299334c-kube-api-access-qnhvb\") on node \"crc\" DevicePath \"\"" Dec 06 09:07:18 crc kubenswrapper[4895]: I1206 09:07:18.468228 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ea340439-debf-49d8-aec1-002d6299334c" (UID: "ea340439-debf-49d8-aec1-002d6299334c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:07:18 crc kubenswrapper[4895]: I1206 09:07:18.476163 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ea340439-debf-49d8-aec1-002d6299334c" (UID: "ea340439-debf-49d8-aec1-002d6299334c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:07:18 crc kubenswrapper[4895]: I1206 09:07:18.502637 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-config" (OuterVolumeSpecName: "config") pod "ea340439-debf-49d8-aec1-002d6299334c" (UID: "ea340439-debf-49d8-aec1-002d6299334c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:07:18 crc kubenswrapper[4895]: I1206 09:07:18.508038 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ea340439-debf-49d8-aec1-002d6299334c" (UID: "ea340439-debf-49d8-aec1-002d6299334c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:07:18 crc kubenswrapper[4895]: I1206 09:07:18.544936 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 09:07:18 crc kubenswrapper[4895]: I1206 09:07:18.544981 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 09:07:18 crc kubenswrapper[4895]: I1206 09:07:18.544992 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-config\") on node \"crc\" DevicePath \"\"" Dec 06 09:07:18 crc kubenswrapper[4895]: I1206 09:07:18.545001 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea340439-debf-49d8-aec1-002d6299334c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 09:07:19 crc kubenswrapper[4895]: I1206 09:07:19.122597 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" Dec 06 09:07:19 crc kubenswrapper[4895]: I1206 09:07:19.153890 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fd8bf4d65-wnvnc"] Dec 06 09:07:19 crc kubenswrapper[4895]: I1206 09:07:19.161923 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fd8bf4d65-wnvnc"] Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.062885 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea340439-debf-49d8-aec1-002d6299334c" path="/var/lib/kubelet/pods/ea340439-debf-49d8-aec1-002d6299334c/volumes" Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.587099 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-kjp4n"] Dec 06 09:07:20 crc kubenswrapper[4895]: E1206 09:07:20.587580 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea340439-debf-49d8-aec1-002d6299334c" containerName="dnsmasq-dns" Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.587597 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea340439-debf-49d8-aec1-002d6299334c" containerName="dnsmasq-dns" Dec 06 09:07:20 crc kubenswrapper[4895]: E1206 09:07:20.587612 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea340439-debf-49d8-aec1-002d6299334c" containerName="init" Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.587618 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea340439-debf-49d8-aec1-002d6299334c" containerName="init" Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.587799 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea340439-debf-49d8-aec1-002d6299334c" containerName="dnsmasq-dns" Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.588466 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kjp4n" Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.596508 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kjp4n"] Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.678821 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-899b-account-create-update-67t5l"] Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.683327 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19aaae59-6595-4164-8f28-f2bec39e3b96-operator-scripts\") pod \"cinder-db-create-kjp4n\" (UID: \"19aaae59-6595-4164-8f28-f2bec39e3b96\") " pod="openstack/cinder-db-create-kjp4n" Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.683410 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4m8m\" (UniqueName: \"kubernetes.io/projected/19aaae59-6595-4164-8f28-f2bec39e3b96-kube-api-access-q4m8m\") pod \"cinder-db-create-kjp4n\" (UID: \"19aaae59-6595-4164-8f28-f2bec39e3b96\") " pod="openstack/cinder-db-create-kjp4n" Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.687024 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-899b-account-create-update-67t5l" Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.689164 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.694360 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-899b-account-create-update-67t5l"] Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.784539 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27528cdd-37d6-487a-9987-37d2b13a199c-operator-scripts\") pod \"cinder-899b-account-create-update-67t5l\" (UID: \"27528cdd-37d6-487a-9987-37d2b13a199c\") " pod="openstack/cinder-899b-account-create-update-67t5l" Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.784878 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19aaae59-6595-4164-8f28-f2bec39e3b96-operator-scripts\") pod \"cinder-db-create-kjp4n\" (UID: \"19aaae59-6595-4164-8f28-f2bec39e3b96\") " pod="openstack/cinder-db-create-kjp4n" Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.785670 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19aaae59-6595-4164-8f28-f2bec39e3b96-operator-scripts\") pod \"cinder-db-create-kjp4n\" (UID: \"19aaae59-6595-4164-8f28-f2bec39e3b96\") " pod="openstack/cinder-db-create-kjp4n" Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.785874 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngzvn\" (UniqueName: \"kubernetes.io/projected/27528cdd-37d6-487a-9987-37d2b13a199c-kube-api-access-ngzvn\") pod \"cinder-899b-account-create-update-67t5l\" (UID: \"27528cdd-37d6-487a-9987-37d2b13a199c\") " pod="openstack/cinder-899b-account-create-update-67t5l" Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.786060 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4m8m\" (UniqueName: \"kubernetes.io/projected/19aaae59-6595-4164-8f28-f2bec39e3b96-kube-api-access-q4m8m\") pod \"cinder-db-create-kjp4n\" (UID: \"19aaae59-6595-4164-8f28-f2bec39e3b96\") " pod="openstack/cinder-db-create-kjp4n" Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.807087 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4m8m\" (UniqueName: \"kubernetes.io/projected/19aaae59-6595-4164-8f28-f2bec39e3b96-kube-api-access-q4m8m\") pod \"cinder-db-create-kjp4n\" (UID: \"19aaae59-6595-4164-8f28-f2bec39e3b96\") " pod="openstack/cinder-db-create-kjp4n" Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.888207 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngzvn\" (UniqueName: \"kubernetes.io/projected/27528cdd-37d6-487a-9987-37d2b13a199c-kube-api-access-ngzvn\") pod \"cinder-899b-account-create-update-67t5l\" (UID: \"27528cdd-37d6-487a-9987-37d2b13a199c\") " pod="openstack/cinder-899b-account-create-update-67t5l" Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.888343 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27528cdd-37d6-487a-9987-37d2b13a199c-operator-scripts\") pod 
\"cinder-899b-account-create-update-67t5l\" (UID: \"27528cdd-37d6-487a-9987-37d2b13a199c\") " pod="openstack/cinder-899b-account-create-update-67t5l" Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.889255 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27528cdd-37d6-487a-9987-37d2b13a199c-operator-scripts\") pod \"cinder-899b-account-create-update-67t5l\" (UID: \"27528cdd-37d6-487a-9987-37d2b13a199c\") " pod="openstack/cinder-899b-account-create-update-67t5l" Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.906194 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngzvn\" (UniqueName: \"kubernetes.io/projected/27528cdd-37d6-487a-9987-37d2b13a199c-kube-api-access-ngzvn\") pod \"cinder-899b-account-create-update-67t5l\" (UID: \"27528cdd-37d6-487a-9987-37d2b13a199c\") " pod="openstack/cinder-899b-account-create-update-67t5l" Dec 06 09:07:20 crc kubenswrapper[4895]: I1206 09:07:20.916637 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kjp4n" Dec 06 09:07:21 crc kubenswrapper[4895]: I1206 09:07:21.017240 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-899b-account-create-update-67t5l" Dec 06 09:07:21 crc kubenswrapper[4895]: I1206 09:07:21.521618 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kjp4n"] Dec 06 09:07:21 crc kubenswrapper[4895]: I1206 09:07:21.652611 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-899b-account-create-update-67t5l"] Dec 06 09:07:21 crc kubenswrapper[4895]: W1206 09:07:21.666145 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27528cdd_37d6_487a_9987_37d2b13a199c.slice/crio-1a1e1074dadee9410e470393416398c1b4b1bbc702902135a645a9a5cd8fdc75 WatchSource:0}: Error finding container 1a1e1074dadee9410e470393416398c1b4b1bbc702902135a645a9a5cd8fdc75: Status 404 returned error can't find the container with id 1a1e1074dadee9410e470393416398c1b4b1bbc702902135a645a9a5cd8fdc75 Dec 06 09:07:22 crc kubenswrapper[4895]: I1206 09:07:22.171818 4895 generic.go:334] "Generic (PLEG): container finished" podID="19aaae59-6595-4164-8f28-f2bec39e3b96" containerID="375f325bc2758a9f3ccf06aec709f32be17ed32dc88720e8941e73a7af9f77b3" exitCode=0 Dec 06 09:07:22 crc kubenswrapper[4895]: I1206 09:07:22.171899 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kjp4n" event={"ID":"19aaae59-6595-4164-8f28-f2bec39e3b96","Type":"ContainerDied","Data":"375f325bc2758a9f3ccf06aec709f32be17ed32dc88720e8941e73a7af9f77b3"} Dec 06 09:07:22 crc kubenswrapper[4895]: I1206 09:07:22.172001 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kjp4n" event={"ID":"19aaae59-6595-4164-8f28-f2bec39e3b96","Type":"ContainerStarted","Data":"ef0e8252c668f2471c39f68f47353e4a7db6aae8d929d51df2c433545c50cb7d"} Dec 06 09:07:22 crc kubenswrapper[4895]: I1206 09:07:22.174211 4895 generic.go:334] "Generic (PLEG): container finished" podID="27528cdd-37d6-487a-9987-37d2b13a199c" containerID="605f5340b95cd9633471d379d4b58f982c22a3b9b26db2afb2e8bbffb105500e" exitCode=0 Dec 06 09:07:22 crc kubenswrapper[4895]: I1206 09:07:22.174280 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-899b-account-create-update-67t5l" 
event={"ID":"27528cdd-37d6-487a-9987-37d2b13a199c","Type":"ContainerDied","Data":"605f5340b95cd9633471d379d4b58f982c22a3b9b26db2afb2e8bbffb105500e"} Dec 06 09:07:22 crc kubenswrapper[4895]: I1206 09:07:22.174379 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-899b-account-create-update-67t5l" event={"ID":"27528cdd-37d6-487a-9987-37d2b13a199c","Type":"ContainerStarted","Data":"1a1e1074dadee9410e470393416398c1b4b1bbc702902135a645a9a5cd8fdc75"} Dec 06 09:07:22 crc kubenswrapper[4895]: I1206 09:07:22.856809 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6fd8bf4d65-wnvnc" podUID="ea340439-debf-49d8-aec1-002d6299334c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.80:5353: i/o timeout" Dec 06 09:07:23 crc kubenswrapper[4895]: I1206 09:07:23.716170 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-899b-account-create-update-67t5l" Dec 06 09:07:23 crc kubenswrapper[4895]: I1206 09:07:23.722891 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kjp4n" Dec 06 09:07:23 crc kubenswrapper[4895]: I1206 09:07:23.853720 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngzvn\" (UniqueName: \"kubernetes.io/projected/27528cdd-37d6-487a-9987-37d2b13a199c-kube-api-access-ngzvn\") pod \"27528cdd-37d6-487a-9987-37d2b13a199c\" (UID: \"27528cdd-37d6-487a-9987-37d2b13a199c\") " Dec 06 09:07:23 crc kubenswrapper[4895]: I1206 09:07:23.853829 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27528cdd-37d6-487a-9987-37d2b13a199c-operator-scripts\") pod \"27528cdd-37d6-487a-9987-37d2b13a199c\" (UID: \"27528cdd-37d6-487a-9987-37d2b13a199c\") " Dec 06 09:07:23 crc kubenswrapper[4895]: I1206 09:07:23.854022 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19aaae59-6595-4164-8f28-f2bec39e3b96-operator-scripts\") pod \"19aaae59-6595-4164-8f28-f2bec39e3b96\" (UID: \"19aaae59-6595-4164-8f28-f2bec39e3b96\") " Dec 06 09:07:23 crc kubenswrapper[4895]: I1206 09:07:23.854069 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4m8m\" (UniqueName: \"kubernetes.io/projected/19aaae59-6595-4164-8f28-f2bec39e3b96-kube-api-access-q4m8m\") pod \"19aaae59-6595-4164-8f28-f2bec39e3b96\" (UID: \"19aaae59-6595-4164-8f28-f2bec39e3b96\") " Dec 06 09:07:23 crc kubenswrapper[4895]: I1206 09:07:23.854669 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27528cdd-37d6-487a-9987-37d2b13a199c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27528cdd-37d6-487a-9987-37d2b13a199c" (UID: "27528cdd-37d6-487a-9987-37d2b13a199c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:07:23 crc kubenswrapper[4895]: I1206 09:07:23.855152 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19aaae59-6595-4164-8f28-f2bec39e3b96-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19aaae59-6595-4164-8f28-f2bec39e3b96" (UID: "19aaae59-6595-4164-8f28-f2bec39e3b96"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:07:23 crc kubenswrapper[4895]: I1206 09:07:23.860273 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19aaae59-6595-4164-8f28-f2bec39e3b96-kube-api-access-q4m8m" (OuterVolumeSpecName: "kube-api-access-q4m8m") pod "19aaae59-6595-4164-8f28-f2bec39e3b96" (UID: "19aaae59-6595-4164-8f28-f2bec39e3b96"). InnerVolumeSpecName "kube-api-access-q4m8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:07:23 crc kubenswrapper[4895]: I1206 09:07:23.860584 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27528cdd-37d6-487a-9987-37d2b13a199c-kube-api-access-ngzvn" (OuterVolumeSpecName: "kube-api-access-ngzvn") pod "27528cdd-37d6-487a-9987-37d2b13a199c" (UID: "27528cdd-37d6-487a-9987-37d2b13a199c"). InnerVolumeSpecName "kube-api-access-ngzvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:07:23 crc kubenswrapper[4895]: I1206 09:07:23.955813 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27528cdd-37d6-487a-9987-37d2b13a199c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:07:23 crc kubenswrapper[4895]: I1206 09:07:23.955843 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19aaae59-6595-4164-8f28-f2bec39e3b96-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:07:23 crc kubenswrapper[4895]: I1206 09:07:23.955852 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4m8m\" (UniqueName: \"kubernetes.io/projected/19aaae59-6595-4164-8f28-f2bec39e3b96-kube-api-access-q4m8m\") on node \"crc\" DevicePath \"\"" Dec 06 09:07:23 crc kubenswrapper[4895]: I1206 09:07:23.955865 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngzvn\" (UniqueName: \"kubernetes.io/projected/27528cdd-37d6-487a-9987-37d2b13a199c-kube-api-access-ngzvn\") on node \"crc\" DevicePath \"\"" Dec 06 09:07:24 crc kubenswrapper[4895]: I1206 09:07:24.200930 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kjp4n" Dec 06 09:07:24 crc kubenswrapper[4895]: I1206 09:07:24.200921 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kjp4n" event={"ID":"19aaae59-6595-4164-8f28-f2bec39e3b96","Type":"ContainerDied","Data":"ef0e8252c668f2471c39f68f47353e4a7db6aae8d929d51df2c433545c50cb7d"} Dec 06 09:07:24 crc kubenswrapper[4895]: I1206 09:07:24.201130 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef0e8252c668f2471c39f68f47353e4a7db6aae8d929d51df2c433545c50cb7d" Dec 06 09:07:24 crc kubenswrapper[4895]: I1206 09:07:24.203720 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-899b-account-create-update-67t5l" event={"ID":"27528cdd-37d6-487a-9987-37d2b13a199c","Type":"ContainerDied","Data":"1a1e1074dadee9410e470393416398c1b4b1bbc702902135a645a9a5cd8fdc75"} Dec 06 09:07:24 crc kubenswrapper[4895]: I1206 09:07:24.203772 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a1e1074dadee9410e470393416398c1b4b1bbc702902135a645a9a5cd8fdc75" Dec 06 09:07:24 crc kubenswrapper[4895]: I1206 09:07:24.203821 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-899b-account-create-update-67t5l" Dec 06 09:07:25 crc kubenswrapper[4895]: I1206 09:07:25.924085 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-wmtqd"] Dec 06 09:07:25 crc kubenswrapper[4895]: E1206 09:07:25.924822 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27528cdd-37d6-487a-9987-37d2b13a199c" containerName="mariadb-account-create-update" Dec 06 09:07:25 crc kubenswrapper[4895]: I1206 09:07:25.924840 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="27528cdd-37d6-487a-9987-37d2b13a199c" containerName="mariadb-account-create-update" Dec 06 09:07:25 crc kubenswrapper[4895]: E1206 09:07:25.924873 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19aaae59-6595-4164-8f28-f2bec39e3b96" containerName="mariadb-database-create" Dec 06 09:07:25 crc kubenswrapper[4895]: I1206 09:07:25.924881 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="19aaae59-6595-4164-8f28-f2bec39e3b96" containerName="mariadb-database-create" Dec 06 09:07:25 crc kubenswrapper[4895]: I1206 09:07:25.925150 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="27528cdd-37d6-487a-9987-37d2b13a199c" containerName="mariadb-account-create-update" Dec 06 09:07:25 crc kubenswrapper[4895]: I1206 09:07:25.925170 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="19aaae59-6595-4164-8f28-f2bec39e3b96" containerName="mariadb-database-create" Dec 06 09:07:25 crc kubenswrapper[4895]: I1206 09:07:25.926003 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:07:25 crc kubenswrapper[4895]: I1206 09:07:25.935985 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 06 09:07:25 crc kubenswrapper[4895]: I1206 09:07:25.936939 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 06 09:07:25 crc kubenswrapper[4895]: I1206 09:07:25.938643 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kq56j" Dec 06 09:07:25 crc kubenswrapper[4895]: I1206 09:07:25.948268 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wmtqd"] Dec 06 09:07:25 crc kubenswrapper[4895]: I1206 09:07:25.997608 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-combined-ca-bundle\") pod \"cinder-db-sync-wmtqd\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:07:25 crc kubenswrapper[4895]: I1206 09:07:25.997676 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-etc-machine-id\") pod \"cinder-db-sync-wmtqd\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:07:25 crc kubenswrapper[4895]: I1206 09:07:25.997708 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-scripts\") pod \"cinder-db-sync-wmtqd\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:07:25 crc kubenswrapper[4895]: I1206 09:07:25.997752 
4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-config-data\") pod \"cinder-db-sync-wmtqd\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:07:25 crc kubenswrapper[4895]: I1206 09:07:25.997773 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6lst\" (UniqueName: \"kubernetes.io/projected/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-kube-api-access-v6lst\") pod \"cinder-db-sync-wmtqd\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:07:25 crc kubenswrapper[4895]: I1206 09:07:25.997793 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-db-sync-config-data\") pod \"cinder-db-sync-wmtqd\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:07:26 crc kubenswrapper[4895]: I1206 09:07:26.099625 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-etc-machine-id\") pod \"cinder-db-sync-wmtqd\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:07:26 crc kubenswrapper[4895]: I1206 09:07:26.099675 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-scripts\") pod \"cinder-db-sync-wmtqd\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:07:26 crc kubenswrapper[4895]: I1206 09:07:26.099721 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-config-data\") pod \"cinder-db-sync-wmtqd\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:07:26 crc kubenswrapper[4895]: I1206 09:07:26.099747 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6lst\" (UniqueName: \"kubernetes.io/projected/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-kube-api-access-v6lst\") pod \"cinder-db-sync-wmtqd\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:07:26 crc kubenswrapper[4895]: I1206 09:07:26.099755 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-etc-machine-id\") pod \"cinder-db-sync-wmtqd\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:07:26 crc kubenswrapper[4895]: I1206 09:07:26.099768 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-db-sync-config-data\") pod \"cinder-db-sync-wmtqd\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:07:26 crc kubenswrapper[4895]: I1206 09:07:26.100207 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-combined-ca-bundle\") pod \"cinder-db-sync-wmtqd\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:07:26 crc kubenswrapper[4895]: I1206 09:07:26.105424 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-db-sync-config-data\") pod \"cinder-db-sync-wmtqd\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:07:26 crc kubenswrapper[4895]: I1206 09:07:26.105982 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-config-data\") pod \"cinder-db-sync-wmtqd\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:07:26 crc kubenswrapper[4895]: I1206 09:07:26.106179 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-scripts\") pod \"cinder-db-sync-wmtqd\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:07:26 crc kubenswrapper[4895]: I1206 09:07:26.108076 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-combined-ca-bundle\") pod \"cinder-db-sync-wmtqd\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:07:26 crc kubenswrapper[4895]: I1206 09:07:26.119437 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6lst\" (UniqueName: \"kubernetes.io/projected/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-kube-api-access-v6lst\") pod \"cinder-db-sync-wmtqd\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:07:26 crc kubenswrapper[4895]: I1206 09:07:26.251350 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:07:26 crc kubenswrapper[4895]: I1206 09:07:26.700960 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wmtqd"] Dec 06 09:07:26 crc kubenswrapper[4895]: W1206 09:07:26.708545 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb38c8f0e_1d97_46dd_bb5b_5468398bad0e.slice/crio-06edccf1639f9a55156d00a16b9ff8cb4322acf341ebac196b716db448490dfd WatchSource:0}: Error finding container 06edccf1639f9a55156d00a16b9ff8cb4322acf341ebac196b716db448490dfd: Status 404 returned error can't find the container with id 06edccf1639f9a55156d00a16b9ff8cb4322acf341ebac196b716db448490dfd Dec 06 09:07:26 crc kubenswrapper[4895]: I1206 09:07:26.711219 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 09:07:27 crc kubenswrapper[4895]: I1206 09:07:27.233268 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wmtqd" event={"ID":"b38c8f0e-1d97-46dd-bb5b-5468398bad0e","Type":"ContainerStarted","Data":"06edccf1639f9a55156d00a16b9ff8cb4322acf341ebac196b716db448490dfd"} Dec 06 09:07:46 crc kubenswrapper[4895]: E1206 09:07:46.631741 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:c3923531bcda0b0811b2d5053f189beb" Dec 06 09:07:46 crc kubenswrapper[4895]: E1206 09:07:46.632543 4895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:c3923531bcda0b0811b2d5053f189beb" Dec 06 09:07:46 crc kubenswrapper[4895]: E1206 09:07:46.632718 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v6lst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-wmtqd_openstack(b38c8f0e-1d97-46dd-bb5b-5468398bad0e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 09:07:46 crc kubenswrapper[4895]: E1206 09:07:46.633888 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-wmtqd" podUID="b38c8f0e-1d97-46dd-bb5b-5468398bad0e" Dec 06 09:07:47 crc kubenswrapper[4895]: E1206 09:07:47.435998 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/cinder-db-sync-wmtqd" podUID="b38c8f0e-1d97-46dd-bb5b-5468398bad0e" Dec 06 09:08:01 crc kubenswrapper[4895]: I1206 09:08:01.618026 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wmtqd" event={"ID":"b38c8f0e-1d97-46dd-bb5b-5468398bad0e","Type":"ContainerStarted","Data":"bf0f8147065ca93fc41e855a31bbe67377c7612cb12d605c0f53665c890bf45a"} Dec 06 09:08:01 crc kubenswrapper[4895]: I1206 09:08:01.658906 4895 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-wmtqd" podStartSLOduration=3.072159174 podStartE2EDuration="36.658864328s" podCreationTimestamp="2025-12-06 09:07:25 +0000 UTC" firstStartedPulling="2025-12-06 09:07:26.710925583 +0000 UTC m=+7809.112314453" lastFinishedPulling="2025-12-06 09:08:00.297630697 +0000 UTC m=+7842.699019607" observedRunningTime="2025-12-06 09:08:01.645335606 +0000 UTC m=+7844.046724486" watchObservedRunningTime="2025-12-06 09:08:01.658864328 +0000 UTC m=+7844.060253208" Dec 06 09:08:03 crc kubenswrapper[4895]: I1206 09:08:03.644601 4895 generic.go:334] "Generic (PLEG): container finished" podID="b38c8f0e-1d97-46dd-bb5b-5468398bad0e" containerID="bf0f8147065ca93fc41e855a31bbe67377c7612cb12d605c0f53665c890bf45a" exitCode=0 Dec 06 09:08:03 crc kubenswrapper[4895]: I1206 09:08:03.644744 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wmtqd" event={"ID":"b38c8f0e-1d97-46dd-bb5b-5468398bad0e","Type":"ContainerDied","Data":"bf0f8147065ca93fc41e855a31bbe67377c7612cb12d605c0f53665c890bf45a"} Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.125672 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.193707 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-etc-machine-id\") pod \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.193825 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b38c8f0e-1d97-46dd-bb5b-5468398bad0e" (UID: "b38c8f0e-1d97-46dd-bb5b-5468398bad0e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.193905 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-scripts\") pod \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.193954 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-db-sync-config-data\") pod \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.194030 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6lst\" (UniqueName: \"kubernetes.io/projected/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-kube-api-access-v6lst\") pod \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.194140 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-config-data\") pod \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.194217 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-combined-ca-bundle\") pod \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\" (UID: \"b38c8f0e-1d97-46dd-bb5b-5468398bad0e\") " Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.194899 4895 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.200589 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-scripts" (OuterVolumeSpecName: "scripts") pod "b38c8f0e-1d97-46dd-bb5b-5468398bad0e" (UID: "b38c8f0e-1d97-46dd-bb5b-5468398bad0e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.201178 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b38c8f0e-1d97-46dd-bb5b-5468398bad0e" (UID: "b38c8f0e-1d97-46dd-bb5b-5468398bad0e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.202704 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-kube-api-access-v6lst" (OuterVolumeSpecName: "kube-api-access-v6lst") pod "b38c8f0e-1d97-46dd-bb5b-5468398bad0e" (UID: "b38c8f0e-1d97-46dd-bb5b-5468398bad0e"). InnerVolumeSpecName "kube-api-access-v6lst". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.233603 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b38c8f0e-1d97-46dd-bb5b-5468398bad0e" (UID: "b38c8f0e-1d97-46dd-bb5b-5468398bad0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.262921 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-config-data" (OuterVolumeSpecName: "config-data") pod "b38c8f0e-1d97-46dd-bb5b-5468398bad0e" (UID: "b38c8f0e-1d97-46dd-bb5b-5468398bad0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.296509 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.296558 4895 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.296573 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6lst\" (UniqueName: \"kubernetes.io/projected/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-kube-api-access-v6lst\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.296589 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.296602 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38c8f0e-1d97-46dd-bb5b-5468398bad0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.675152 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wmtqd" event={"ID":"b38c8f0e-1d97-46dd-bb5b-5468398bad0e","Type":"ContainerDied","Data":"06edccf1639f9a55156d00a16b9ff8cb4322acf341ebac196b716db448490dfd"} Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.675201 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06edccf1639f9a55156d00a16b9ff8cb4322acf341ebac196b716db448490dfd" Dec 06 09:08:05 crc kubenswrapper[4895]: I1206 09:08:05.675204 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-wmtqd" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.111427 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84ff7868ff-6874d"] Dec 06 09:08:06 crc kubenswrapper[4895]: E1206 09:08:06.112193 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b38c8f0e-1d97-46dd-bb5b-5468398bad0e" containerName="cinder-db-sync" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.112217 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b38c8f0e-1d97-46dd-bb5b-5468398bad0e" containerName="cinder-db-sync" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.112469 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b38c8f0e-1d97-46dd-bb5b-5468398bad0e" containerName="cinder-db-sync" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.113692 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.131856 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84ff7868ff-6874d"] Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.204356 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.208608 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.211068 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kq56j" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.211249 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.211394 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.211981 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.212559 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-dns-svc\") pod \"dnsmasq-dns-84ff7868ff-6874d\" (UID: \"879b9282-5648-4b4d-b93c-0272225d0caa\") " pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.212596 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdwtq\" (UniqueName: \"kubernetes.io/projected/879b9282-5648-4b4d-b93c-0272225d0caa-kube-api-access-tdwtq\") pod \"dnsmasq-dns-84ff7868ff-6874d\" (UID: \"879b9282-5648-4b4d-b93c-0272225d0caa\") " pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.212697 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-ovsdbserver-sb\") pod \"dnsmasq-dns-84ff7868ff-6874d\" (UID: \"879b9282-5648-4b4d-b93c-0272225d0caa\") " pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.212933 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-config\") pod \"dnsmasq-dns-84ff7868ff-6874d\" (UID: \"879b9282-5648-4b4d-b93c-0272225d0caa\") " pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.212972 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-ovsdbserver-nb\") pod \"dnsmasq-dns-84ff7868ff-6874d\" (UID: \"879b9282-5648-4b4d-b93c-0272225d0caa\") " pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.314514 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-dns-svc\") pod \"dnsmasq-dns-84ff7868ff-6874d\" (UID: \"879b9282-5648-4b4d-b93c-0272225d0caa\") " pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.314676 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdwtq\" (UniqueName: \"kubernetes.io/projected/879b9282-5648-4b4d-b93c-0272225d0caa-kube-api-access-tdwtq\") pod \"dnsmasq-dns-84ff7868ff-6874d\" (UID: \"879b9282-5648-4b4d-b93c-0272225d0caa\") " pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.314806 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-ovsdbserver-sb\") pod \"dnsmasq-dns-84ff7868ff-6874d\" (UID: \"879b9282-5648-4b4d-b93c-0272225d0caa\") " pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.314944 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-config\") pod \"dnsmasq-dns-84ff7868ff-6874d\" (UID: \"879b9282-5648-4b4d-b93c-0272225d0caa\") " pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.315023 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-ovsdbserver-nb\") pod \"dnsmasq-dns-84ff7868ff-6874d\" (UID: \"879b9282-5648-4b4d-b93c-0272225d0caa\") " pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.316020 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-ovsdbserver-nb\") pod \"dnsmasq-dns-84ff7868ff-6874d\" (UID: \"879b9282-5648-4b4d-b93c-0272225d0caa\") " pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.316688 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-dns-svc\") pod \"dnsmasq-dns-84ff7868ff-6874d\" (UID: \"879b9282-5648-4b4d-b93c-0272225d0caa\") " pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.317568 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-ovsdbserver-sb\") pod 
\"dnsmasq-dns-84ff7868ff-6874d\" (UID: \"879b9282-5648-4b4d-b93c-0272225d0caa\") " pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.318187 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-config\") pod \"dnsmasq-dns-84ff7868ff-6874d\" (UID: \"879b9282-5648-4b4d-b93c-0272225d0caa\") " pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.332725 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdwtq\" (UniqueName: \"kubernetes.io/projected/879b9282-5648-4b4d-b93c-0272225d0caa-kube-api-access-tdwtq\") pod \"dnsmasq-dns-84ff7868ff-6874d\" (UID: \"879b9282-5648-4b4d-b93c-0272225d0caa\") " pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.385106 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.416169 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2sjl\" (UniqueName: \"kubernetes.io/projected/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-kube-api-access-r2sjl\") pod \"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.416226 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-config-data-custom\") pod \"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.416263 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-scripts\") pod \"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.416422 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-logs\") pod \"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.416503 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.416654 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-config-data\") pod \"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.416938 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-combined-ca-bundle\") pod 
\"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.438179 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.519231 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-config-data-custom\") pod \"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.519292 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-scripts\") pod \"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.519341 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-logs\") pod \"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.519376 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.519416 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-config-data\") pod \"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.519518 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.519594 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2sjl\" (UniqueName: \"kubernetes.io/projected/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-kube-api-access-r2sjl\") pod \"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.519629 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.520088 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-logs\") pod \"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.525162 4895 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-config-data-custom\") pod \"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.526357 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-config-data\") pod \"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.527846 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-scripts\") pod \"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.528495 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.538629 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2sjl\" (UniqueName: \"kubernetes.io/projected/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-kube-api-access-r2sjl\") pod \"cinder-api-0\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") " pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.830174 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 06 09:08:06 crc kubenswrapper[4895]: I1206 09:08:06.963888 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84ff7868ff-6874d"] Dec 06 09:08:07 crc kubenswrapper[4895]: W1206 09:08:07.336844 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f2dc123_e8e2_41b9_a296_4e41a3f02f6a.slice/crio-b129941f466ba02b7b487ac9dacedbcd17c0a0092e081cc926186d896f78ea51 WatchSource:0}: Error finding container b129941f466ba02b7b487ac9dacedbcd17c0a0092e081cc926186d896f78ea51: Status 404 returned error can't find the container with id b129941f466ba02b7b487ac9dacedbcd17c0a0092e081cc926186d896f78ea51 Dec 06 09:08:07 crc kubenswrapper[4895]: I1206 09:08:07.336903 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 06 09:08:07 crc kubenswrapper[4895]: I1206 09:08:07.720080 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a","Type":"ContainerStarted","Data":"b129941f466ba02b7b487ac9dacedbcd17c0a0092e081cc926186d896f78ea51"} Dec 06 09:08:07 crc kubenswrapper[4895]: I1206 09:08:07.723111 4895 generic.go:334] "Generic (PLEG): container finished" podID="879b9282-5648-4b4d-b93c-0272225d0caa" containerID="5d44e349a0455bffa90d3665da1db453746e3572c2ee3821e50ce8b4556eec14" exitCode=0 Dec 06 09:08:07 crc kubenswrapper[4895]: I1206 09:08:07.723155 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84ff7868ff-6874d" event={"ID":"879b9282-5648-4b4d-b93c-0272225d0caa","Type":"ContainerDied","Data":"5d44e349a0455bffa90d3665da1db453746e3572c2ee3821e50ce8b4556eec14"} Dec 06 09:08:07 crc 
kubenswrapper[4895]: I1206 09:08:07.723189 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84ff7868ff-6874d" event={"ID":"879b9282-5648-4b4d-b93c-0272225d0caa","Type":"ContainerStarted","Data":"0fdb4ff90c123fb72fd998192db8d13938b5ef13c5ab5655cfa845f5a0c45b71"} Dec 06 09:08:08 crc kubenswrapper[4895]: I1206 09:08:08.745453 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84ff7868ff-6874d" event={"ID":"879b9282-5648-4b4d-b93c-0272225d0caa","Type":"ContainerStarted","Data":"473cfd9ffe81fdb89920d89f828e46407fdf90ef17991c7ab3038fb3e0dd01a2"} Dec 06 09:08:08 crc kubenswrapper[4895]: I1206 09:08:08.745943 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:08:08 crc kubenswrapper[4895]: I1206 09:08:08.749847 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a","Type":"ContainerStarted","Data":"9a4f6b91f465937b8351ce5908814fa56fe8ab7e65c8e82ad1918378ae816195"} Dec 06 09:08:08 crc kubenswrapper[4895]: I1206 09:08:08.749885 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a","Type":"ContainerStarted","Data":"47645399dfa4ed6c177ec5043c5f56add65db8bf274a90d72fedd1347c52f651"} Dec 06 09:08:08 crc kubenswrapper[4895]: I1206 09:08:08.750070 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 06 09:08:08 crc kubenswrapper[4895]: I1206 09:08:08.780127 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84ff7868ff-6874d" podStartSLOduration=2.78010448 podStartE2EDuration="2.78010448s" podCreationTimestamp="2025-12-06 09:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:08:08.764788769 +0000 UTC m=+7851.166177639" watchObservedRunningTime="2025-12-06 09:08:08.78010448 +0000 UTC m=+7851.181493350" Dec 06 09:08:08 crc kubenswrapper[4895]: I1206 09:08:08.793387 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.793360857 podStartE2EDuration="2.793360857s" podCreationTimestamp="2025-12-06 09:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:08:08.787347295 +0000 UTC m=+7851.188736175" watchObservedRunningTime="2025-12-06 09:08:08.793360857 +0000 UTC m=+7851.194749727" Dec 06 09:08:16 crc kubenswrapper[4895]: I1206 09:08:16.439626 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:08:16 crc kubenswrapper[4895]: I1206 09:08:16.513501 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f469c5dff-kcf5q"] Dec 06 09:08:16 crc kubenswrapper[4895]: I1206 09:08:16.513820 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" podUID="8ca2021c-ce9a-412f-ac4f-731953d5471e" containerName="dnsmasq-dns" containerID="cri-o://6c62f59181cf2a53a044d6420ef00f341410e2fe1fe7ce04772b9294d4d9cf3d" gracePeriod=10 Dec 06 09:08:16 crc kubenswrapper[4895]: I1206 09:08:16.845174 4895 generic.go:334] "Generic (PLEG): container finished" podID="8ca2021c-ce9a-412f-ac4f-731953d5471e" 
containerID="6c62f59181cf2a53a044d6420ef00f341410e2fe1fe7ce04772b9294d4d9cf3d" exitCode=0 Dec 06 09:08:16 crc kubenswrapper[4895]: I1206 09:08:16.845213 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" event={"ID":"8ca2021c-ce9a-412f-ac4f-731953d5471e","Type":"ContainerDied","Data":"6c62f59181cf2a53a044d6420ef00f341410e2fe1fe7ce04772b9294d4d9cf3d"} Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.075100 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.261693 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-ovsdbserver-nb\") pod \"8ca2021c-ce9a-412f-ac4f-731953d5471e\" (UID: \"8ca2021c-ce9a-412f-ac4f-731953d5471e\") " Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.261756 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-config\") pod \"8ca2021c-ce9a-412f-ac4f-731953d5471e\" (UID: \"8ca2021c-ce9a-412f-ac4f-731953d5471e\") " Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.261824 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-dns-svc\") pod \"8ca2021c-ce9a-412f-ac4f-731953d5471e\" (UID: \"8ca2021c-ce9a-412f-ac4f-731953d5471e\") " Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.261922 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5fgv\" (UniqueName: \"kubernetes.io/projected/8ca2021c-ce9a-412f-ac4f-731953d5471e-kube-api-access-d5fgv\") pod \"8ca2021c-ce9a-412f-ac4f-731953d5471e\" (UID: \"8ca2021c-ce9a-412f-ac4f-731953d5471e\") " Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.262088 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-ovsdbserver-sb\") pod \"8ca2021c-ce9a-412f-ac4f-731953d5471e\" (UID: \"8ca2021c-ce9a-412f-ac4f-731953d5471e\") " Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.276803 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ca2021c-ce9a-412f-ac4f-731953d5471e-kube-api-access-d5fgv" (OuterVolumeSpecName: "kube-api-access-d5fgv") pod "8ca2021c-ce9a-412f-ac4f-731953d5471e" (UID: "8ca2021c-ce9a-412f-ac4f-731953d5471e"). InnerVolumeSpecName "kube-api-access-d5fgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.309013 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8ca2021c-ce9a-412f-ac4f-731953d5471e" (UID: "8ca2021c-ce9a-412f-ac4f-731953d5471e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.319863 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ca2021c-ce9a-412f-ac4f-731953d5471e" (UID: "8ca2021c-ce9a-412f-ac4f-731953d5471e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.325688 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8ca2021c-ce9a-412f-ac4f-731953d5471e" (UID: "8ca2021c-ce9a-412f-ac4f-731953d5471e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.333068 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-config" (OuterVolumeSpecName: "config") pod "8ca2021c-ce9a-412f-ac4f-731953d5471e" (UID: "8ca2021c-ce9a-412f-ac4f-731953d5471e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.363985 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5fgv\" (UniqueName: \"kubernetes.io/projected/8ca2021c-ce9a-412f-ac4f-731953d5471e-kube-api-access-d5fgv\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.364021 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.364032 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.364044 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-config\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.364055 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ca2021c-ce9a-412f-ac4f-731953d5471e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.716431 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.716785 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d3b4bf62-fef8-4b66-beb6-46ab772abf2e" containerName="nova-metadata-log" containerID="cri-o://ca6c7d681907aeca82a28f09d6b859089ec8c0ea1e94e3b37c6c2e3cd2d8418a" gracePeriod=30 Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.716896 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d3b4bf62-fef8-4b66-beb6-46ab772abf2e" containerName="nova-metadata-metadata" containerID="cri-o://204389569ef5bc1ad77941b1dc930ab7e0b267048e418ac309622be59ce098ec" gracePeriod=30 Dec 06 09:08:17 crc 
kubenswrapper[4895]: I1206 09:08:17.739695 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.740019 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="fad45921-7278-4840-a086-3e463498662e" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://132f052994eaaba5fac58183547476a1efb02158f0c5553e9127fcd8d665714b" gracePeriod=30 Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.784289 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.784667 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="27a59e68-15e8-46f2-8918-c01bd353d3d3" containerName="nova-api-log" containerID="cri-o://046b571a70bf9e6b3b0aaca8f4cf98e3b4c117c904aa9344ca877dd010d4de55" gracePeriod=30 Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.785149 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="27a59e68-15e8-46f2-8918-c01bd353d3d3" containerName="nova-api-api" containerID="cri-o://c07a9dd3c2507e698e41f660238b36709a5f3d4b6de7c85a77b973171b264bbc" gracePeriod=30 Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.812635 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.821352 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="fad45921-7278-4840-a086-3e463498662e" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"http://10.217.1.79:6080/vnc_lite.html\": dial tcp 10.217.1.79:6080: connect: connection refused" Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.842927 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.843199 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="12645969-044a-4bbf-945f-076d512123df" containerName="nova-cell0-conductor-conductor" containerID="cri-o://a5d70ddb3bb899638c07d3492c14b70ae689b6b0aec041570ee438d537105800" gracePeriod=30 Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.865848 4895 generic.go:334] "Generic (PLEG): container finished" podID="d3b4bf62-fef8-4b66-beb6-46ab772abf2e" containerID="ca6c7d681907aeca82a28f09d6b859089ec8c0ea1e94e3b37c6c2e3cd2d8418a" exitCode=143 Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.865916 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3b4bf62-fef8-4b66-beb6-46ab772abf2e","Type":"ContainerDied","Data":"ca6c7d681907aeca82a28f09d6b859089ec8c0ea1e94e3b37c6c2e3cd2d8418a"} Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.868057 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" event={"ID":"8ca2021c-ce9a-412f-ac4f-731953d5471e","Type":"ContainerDied","Data":"23137fd0f7038eaef28709c4b661cf80e710e757b8ff3497b878c8dc87d7e918"} Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.868102 4895 scope.go:117] "RemoveContainer" containerID="6c62f59181cf2a53a044d6420ef00f341410e2fe1fe7ce04772b9294d4d9cf3d" Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.868122 4895 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f469c5dff-kcf5q" Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.868238 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d7c353df-e876-42b4-9844-035e06b1f5a2" containerName="nova-scheduler-scheduler" containerID="cri-o://50c11dd7ec808824b3bbcd77a5776164e6e75669cd0e8afabc25d053c2fcc91c" gracePeriod=30 Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.905739 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f469c5dff-kcf5q"] Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.914646 4895 scope.go:117] "RemoveContainer" containerID="f0197fc659da8112f057b379ef9c12798251968d08344767cf1e8e2de40e0979" Dec 06 09:08:17 crc kubenswrapper[4895]: I1206 09:08:17.915330 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f469c5dff-kcf5q"] Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.063399 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ca2021c-ce9a-412f-ac4f-731953d5471e" path="/var/lib/kubelet/pods/8ca2021c-ce9a-412f-ac4f-731953d5471e/volumes" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.576421 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.687181 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad45921-7278-4840-a086-3e463498662e-combined-ca-bundle\") pod \"fad45921-7278-4840-a086-3e463498662e\" (UID: \"fad45921-7278-4840-a086-3e463498662e\") " Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.687317 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad45921-7278-4840-a086-3e463498662e-config-data\") pod \"fad45921-7278-4840-a086-3e463498662e\" (UID: \"fad45921-7278-4840-a086-3e463498662e\") " Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.687377 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6kxr\" (UniqueName: \"kubernetes.io/projected/fad45921-7278-4840-a086-3e463498662e-kube-api-access-v6kxr\") pod \"fad45921-7278-4840-a086-3e463498662e\" (UID: \"fad45921-7278-4840-a086-3e463498662e\") " Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.698813 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fad45921-7278-4840-a086-3e463498662e-kube-api-access-v6kxr" (OuterVolumeSpecName: "kube-api-access-v6kxr") pod "fad45921-7278-4840-a086-3e463498662e" (UID: "fad45921-7278-4840-a086-3e463498662e"). InnerVolumeSpecName "kube-api-access-v6kxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.719953 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad45921-7278-4840-a086-3e463498662e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fad45921-7278-4840-a086-3e463498662e" (UID: "fad45921-7278-4840-a086-3e463498662e"). InnerVolumeSpecName "combined-ca-bundle". 
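
The "Probe failed ... connection refused" against nova-cell1-novncproxy-0 above was logged moments after the kubelet began killing that same container, so it is expected shutdown noise rather than a health regression; the same pattern recurs further down when nova-scheduler-0's readiness exec probe races its termination. A sketch that keeps only probe failures for pods with no kill already in flight, under the same assumptions as before (one journal entry per line, regexes fitted to these messages):

import re
import sys

# A "Killing container with a grace period" entry marks an intentional stop;
# probe failures for the same pod after that point are expected shutdown noise.
KILL_RX  = re.compile(r'"Killing container with a grace period" pod="(?P<pod>[^"]+)"')
PROBE_RX = re.compile(r'"Probe (?:failed|errored)".*?pod="(?P<pod>[^"]+)"')

def suspicious_probe_failures(lines):
    """Yield probe failures for pods with no kill seen earlier in the stream."""
    being_killed = set()
    for line in lines:
        kill = KILL_RX.search(line)
        if kill:
            being_killed.add(kill.group("pod"))
            continue
        probe = PROBE_RX.search(line)
        if probe and probe.group("pod") not in being_killed:
            yield line.rstrip()

if __name__ == "__main__":
    for line in suspicious_probe_failures(sys.stdin):
        print(line)
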
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.730107 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad45921-7278-4840-a086-3e463498662e-config-data" (OuterVolumeSpecName: "config-data") pod "fad45921-7278-4840-a086-3e463498662e" (UID: "fad45921-7278-4840-a086-3e463498662e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.789020 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad45921-7278-4840-a086-3e463498662e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.789062 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad45921-7278-4840-a086-3e463498662e-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.789075 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6kxr\" (UniqueName: \"kubernetes.io/projected/fad45921-7278-4840-a086-3e463498662e-kube-api-access-v6kxr\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.882825 4895 generic.go:334] "Generic (PLEG): container finished" podID="27a59e68-15e8-46f2-8918-c01bd353d3d3" containerID="046b571a70bf9e6b3b0aaca8f4cf98e3b4c117c904aa9344ca877dd010d4de55" exitCode=143 Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.882906 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27a59e68-15e8-46f2-8918-c01bd353d3d3","Type":"ContainerDied","Data":"046b571a70bf9e6b3b0aaca8f4cf98e3b4c117c904aa9344ca877dd010d4de55"} Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.886552 4895 generic.go:334] "Generic (PLEG): container finished" podID="fad45921-7278-4840-a086-3e463498662e" containerID="132f052994eaaba5fac58183547476a1efb02158f0c5553e9127fcd8d665714b" exitCode=0 Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.886593 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fad45921-7278-4840-a086-3e463498662e","Type":"ContainerDied","Data":"132f052994eaaba5fac58183547476a1efb02158f0c5553e9127fcd8d665714b"} Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.886616 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fad45921-7278-4840-a086-3e463498662e","Type":"ContainerDied","Data":"7c3af5de7f4b903486d03955e8fe7e22d77f99ae5dd4bbb6acc335dc0c4c5eec"} Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.886632 4895 scope.go:117] "RemoveContainer" containerID="132f052994eaaba5fac58183547476a1efb02158f0c5553e9127fcd8d665714b" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.886726 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.911958 4895 scope.go:117] "RemoveContainer" containerID="132f052994eaaba5fac58183547476a1efb02158f0c5553e9127fcd8d665714b" Dec 06 09:08:18 crc kubenswrapper[4895]: E1206 09:08:18.913256 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"132f052994eaaba5fac58183547476a1efb02158f0c5553e9127fcd8d665714b\": container with ID starting with 132f052994eaaba5fac58183547476a1efb02158f0c5553e9127fcd8d665714b not found: ID does not exist" containerID="132f052994eaaba5fac58183547476a1efb02158f0c5553e9127fcd8d665714b" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.913382 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"132f052994eaaba5fac58183547476a1efb02158f0c5553e9127fcd8d665714b"} err="failed to get container status \"132f052994eaaba5fac58183547476a1efb02158f0c5553e9127fcd8d665714b\": rpc error: code = NotFound desc = could not find container \"132f052994eaaba5fac58183547476a1efb02158f0c5553e9127fcd8d665714b\": container with ID starting with 132f052994eaaba5fac58183547476a1efb02158f0c5553e9127fcd8d665714b not found: ID does not exist" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.925170 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.934543 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.941587 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.954850 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 09:08:18 crc kubenswrapper[4895]: E1206 09:08:18.955240 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad45921-7278-4840-a086-3e463498662e" containerName="nova-cell1-novncproxy-novncproxy" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.955255 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad45921-7278-4840-a086-3e463498662e" containerName="nova-cell1-novncproxy-novncproxy" Dec 06 09:08:18 crc kubenswrapper[4895]: E1206 09:08:18.955265 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca2021c-ce9a-412f-ac4f-731953d5471e" containerName="init" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.955271 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ca2021c-ce9a-412f-ac4f-731953d5471e" containerName="init" Dec 06 09:08:18 crc kubenswrapper[4895]: E1206 09:08:18.955306 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca2021c-ce9a-412f-ac4f-731953d5471e" containerName="dnsmasq-dns" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.955312 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ca2021c-ce9a-412f-ac4f-731953d5471e" containerName="dnsmasq-dns" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.955496 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad45921-7278-4840-a086-3e463498662e" containerName="nova-cell1-novncproxy-novncproxy" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.955517 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ca2021c-ce9a-412f-ac4f-731953d5471e" 
containerName="dnsmasq-dns" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.956139 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.957511 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 06 09:08:18 crc kubenswrapper[4895]: I1206 09:08:18.985663 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 09:08:19 crc kubenswrapper[4895]: I1206 09:08:19.095715 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnvbw\" (UniqueName: \"kubernetes.io/projected/0835d72d-0e88-4dad-811c-a8a6dd197975-kube-api-access-jnvbw\") pod \"nova-cell1-novncproxy-0\" (UID: \"0835d72d-0e88-4dad-811c-a8a6dd197975\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:08:19 crc kubenswrapper[4895]: I1206 09:08:19.095901 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0835d72d-0e88-4dad-811c-a8a6dd197975-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0835d72d-0e88-4dad-811c-a8a6dd197975\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:08:19 crc kubenswrapper[4895]: I1206 09:08:19.096045 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0835d72d-0e88-4dad-811c-a8a6dd197975-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0835d72d-0e88-4dad-811c-a8a6dd197975\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:08:19 crc kubenswrapper[4895]: I1206 09:08:19.198408 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0835d72d-0e88-4dad-811c-a8a6dd197975-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0835d72d-0e88-4dad-811c-a8a6dd197975\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:08:19 crc kubenswrapper[4895]: I1206 09:08:19.198487 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnvbw\" (UniqueName: \"kubernetes.io/projected/0835d72d-0e88-4dad-811c-a8a6dd197975-kube-api-access-jnvbw\") pod \"nova-cell1-novncproxy-0\" (UID: \"0835d72d-0e88-4dad-811c-a8a6dd197975\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:08:19 crc kubenswrapper[4895]: I1206 09:08:19.198567 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0835d72d-0e88-4dad-811c-a8a6dd197975-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0835d72d-0e88-4dad-811c-a8a6dd197975\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:08:19 crc kubenswrapper[4895]: I1206 09:08:19.203457 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0835d72d-0e88-4dad-811c-a8a6dd197975-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0835d72d-0e88-4dad-811c-a8a6dd197975\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:08:19 crc kubenswrapper[4895]: I1206 09:08:19.213350 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0835d72d-0e88-4dad-811c-a8a6dd197975-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"0835d72d-0e88-4dad-811c-a8a6dd197975\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:08:19 crc kubenswrapper[4895]: I1206 09:08:19.216266 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnvbw\" (UniqueName: \"kubernetes.io/projected/0835d72d-0e88-4dad-811c-a8a6dd197975-kube-api-access-jnvbw\") pod \"nova-cell1-novncproxy-0\" (UID: \"0835d72d-0e88-4dad-811c-a8a6dd197975\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:08:19 crc kubenswrapper[4895]: I1206 09:08:19.273027 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:08:19 crc kubenswrapper[4895]: I1206 09:08:19.589443 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 09:08:19 crc kubenswrapper[4895]: I1206 09:08:19.907056 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0835d72d-0e88-4dad-811c-a8a6dd197975","Type":"ContainerStarted","Data":"74772621bf6f89e5a0d373052a067c4c23af5b827e010db37dd6cb013dda362c"} Dec 06 09:08:19 crc kubenswrapper[4895]: I1206 09:08:19.907491 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0835d72d-0e88-4dad-811c-a8a6dd197975","Type":"ContainerStarted","Data":"b0c406112dcaccc9094ec588a9ecb1a43cdb073fc00cd6bcbfa56312be7b01dd"} Dec 06 09:08:19 crc kubenswrapper[4895]: I1206 09:08:19.935553 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.935525457 podStartE2EDuration="1.935525457s" podCreationTimestamp="2025-12-06 09:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:08:19.928649682 +0000 UTC m=+7862.330038562" watchObservedRunningTime="2025-12-06 09:08:19.935525457 +0000 UTC m=+7862.336914337" Dec 06 09:08:20 crc kubenswrapper[4895]: I1206 09:08:20.062569 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fad45921-7278-4840-a086-3e463498662e" path="/var/lib/kubelet/pods/fad45921-7278-4840-a086-3e463498662e/volumes" Dec 06 09:08:20 crc kubenswrapper[4895]: I1206 09:08:20.478656 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 06 09:08:20 crc kubenswrapper[4895]: I1206 09:08:20.642417 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sntw4\" (UniqueName: \"kubernetes.io/projected/12645969-044a-4bbf-945f-076d512123df-kube-api-access-sntw4\") pod \"12645969-044a-4bbf-945f-076d512123df\" (UID: \"12645969-044a-4bbf-945f-076d512123df\") " Dec 06 09:08:20 crc kubenswrapper[4895]: I1206 09:08:20.642526 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12645969-044a-4bbf-945f-076d512123df-combined-ca-bundle\") pod \"12645969-044a-4bbf-945f-076d512123df\" (UID: \"12645969-044a-4bbf-945f-076d512123df\") " Dec 06 09:08:20 crc kubenswrapper[4895]: I1206 09:08:20.642602 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12645969-044a-4bbf-945f-076d512123df-config-data\") pod \"12645969-044a-4bbf-945f-076d512123df\" (UID: \"12645969-044a-4bbf-945f-076d512123df\") " Dec 06 09:08:20 crc kubenswrapper[4895]: I1206 09:08:20.648655 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12645969-044a-4bbf-945f-076d512123df-kube-api-access-sntw4" (OuterVolumeSpecName: "kube-api-access-sntw4") pod "12645969-044a-4bbf-945f-076d512123df" (UID: "12645969-044a-4bbf-945f-076d512123df"). InnerVolumeSpecName "kube-api-access-sntw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:08:20 crc kubenswrapper[4895]: I1206 09:08:20.677434 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12645969-044a-4bbf-945f-076d512123df-config-data" (OuterVolumeSpecName: "config-data") pod "12645969-044a-4bbf-945f-076d512123df" (UID: "12645969-044a-4bbf-945f-076d512123df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:08:20 crc kubenswrapper[4895]: I1206 09:08:20.681367 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12645969-044a-4bbf-945f-076d512123df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12645969-044a-4bbf-945f-076d512123df" (UID: "12645969-044a-4bbf-945f-076d512123df"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:08:20 crc kubenswrapper[4895]: I1206 09:08:20.744959 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sntw4\" (UniqueName: \"kubernetes.io/projected/12645969-044a-4bbf-945f-076d512123df-kube-api-access-sntw4\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:20 crc kubenswrapper[4895]: I1206 09:08:20.745045 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12645969-044a-4bbf-945f-076d512123df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:20 crc kubenswrapper[4895]: I1206 09:08:20.745066 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12645969-044a-4bbf-945f-076d512123df-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:20 crc kubenswrapper[4895]: I1206 09:08:20.923347 4895 generic.go:334] "Generic (PLEG): container finished" podID="12645969-044a-4bbf-945f-076d512123df" containerID="a5d70ddb3bb899638c07d3492c14b70ae689b6b0aec041570ee438d537105800" exitCode=0 Dec 06 09:08:20 crc kubenswrapper[4895]: I1206 09:08:20.923428 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 06 09:08:20 crc kubenswrapper[4895]: I1206 09:08:20.923437 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"12645969-044a-4bbf-945f-076d512123df","Type":"ContainerDied","Data":"a5d70ddb3bb899638c07d3492c14b70ae689b6b0aec041570ee438d537105800"} Dec 06 09:08:20 crc kubenswrapper[4895]: I1206 09:08:20.923521 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"12645969-044a-4bbf-945f-076d512123df","Type":"ContainerDied","Data":"cd7f2e5f1efae9a0fe28aa928f673cbb8b1a53bb992a9925fc9587a23f74f6ae"} Dec 06 09:08:20 crc kubenswrapper[4895]: I1206 09:08:20.923546 4895 scope.go:117] "RemoveContainer" containerID="a5d70ddb3bb899638c07d3492c14b70ae689b6b0aec041570ee438d537105800" Dec 06 09:08:20 crc kubenswrapper[4895]: I1206 09:08:20.950367 4895 scope.go:117] "RemoveContainer" containerID="a5d70ddb3bb899638c07d3492c14b70ae689b6b0aec041570ee438d537105800" Dec 06 09:08:20 crc kubenswrapper[4895]: E1206 09:08:20.950877 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5d70ddb3bb899638c07d3492c14b70ae689b6b0aec041570ee438d537105800\": container with ID starting with a5d70ddb3bb899638c07d3492c14b70ae689b6b0aec041570ee438d537105800 not found: ID does not exist" containerID="a5d70ddb3bb899638c07d3492c14b70ae689b6b0aec041570ee438d537105800" Dec 06 09:08:20 crc kubenswrapper[4895]: I1206 09:08:20.950954 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5d70ddb3bb899638c07d3492c14b70ae689b6b0aec041570ee438d537105800"} err="failed to get container status \"a5d70ddb3bb899638c07d3492c14b70ae689b6b0aec041570ee438d537105800\": rpc error: code = NotFound desc = could not find container \"a5d70ddb3bb899638c07d3492c14b70ae689b6b0aec041570ee438d537105800\": container with ID starting with a5d70ddb3bb899638c07d3492c14b70ae689b6b0aec041570ee438d537105800 not found: ID does not exist" Dec 06 09:08:20 crc kubenswrapper[4895]: I1206 09:08:20.978459 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 09:08:20 crc kubenswrapper[4895]: I1206 
09:08:20.978716 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="b744de33-30ad-4367-aa56-0c683d87b925" containerName="nova-cell1-conductor-conductor" containerID="cri-o://32b5d8e4745cdfd0ba01be060e74a4e14f1fc0f688a9fd87a1b161bf279a795f" gracePeriod=30 Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.012023 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.023546 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.035914 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 09:08:21 crc kubenswrapper[4895]: E1206 09:08:21.036421 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12645969-044a-4bbf-945f-076d512123df" containerName="nova-cell0-conductor-conductor" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.036443 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="12645969-044a-4bbf-945f-076d512123df" containerName="nova-cell0-conductor-conductor" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.036775 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="12645969-044a-4bbf-945f-076d512123df" containerName="nova-cell0-conductor-conductor" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.037598 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.040738 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.046623 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.159624 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea971646-f4b7-4a2f-bea6-baa488438ed2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ea971646-f4b7-4a2f-bea6-baa488438ed2\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.159689 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjz56\" (UniqueName: \"kubernetes.io/projected/ea971646-f4b7-4a2f-bea6-baa488438ed2-kube-api-access-zjz56\") pod \"nova-cell0-conductor-0\" (UID: \"ea971646-f4b7-4a2f-bea6-baa488438ed2\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.159815 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea971646-f4b7-4a2f-bea6-baa488438ed2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ea971646-f4b7-4a2f-bea6-baa488438ed2\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.265498 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea971646-f4b7-4a2f-bea6-baa488438ed2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ea971646-f4b7-4a2f-bea6-baa488438ed2\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:08:21 crc 
kubenswrapper[4895]: I1206 09:08:21.265557 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjz56\" (UniqueName: \"kubernetes.io/projected/ea971646-f4b7-4a2f-bea6-baa488438ed2-kube-api-access-zjz56\") pod \"nova-cell0-conductor-0\" (UID: \"ea971646-f4b7-4a2f-bea6-baa488438ed2\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.265633 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea971646-f4b7-4a2f-bea6-baa488438ed2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ea971646-f4b7-4a2f-bea6-baa488438ed2\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.276415 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea971646-f4b7-4a2f-bea6-baa488438ed2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ea971646-f4b7-4a2f-bea6-baa488438ed2\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.278735 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea971646-f4b7-4a2f-bea6-baa488438ed2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ea971646-f4b7-4a2f-bea6-baa488438ed2\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.288394 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjz56\" (UniqueName: \"kubernetes.io/projected/ea971646-f4b7-4a2f-bea6-baa488438ed2-kube-api-access-zjz56\") pod \"nova-cell0-conductor-0\" (UID: \"ea971646-f4b7-4a2f-bea6-baa488438ed2\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.335989 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.380335 4895 util.go:30] "No sandbox for pod can be found. 
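
The replacement nova-cell0-conductor-0 pod's volumes pass through the same three logged mount steps as cinder-api-0 did earlier: "operationExecutor.VerifyControllerAttachedVolume started", "operationExecutor.MountVolume started", then "MountVolume.SetUp succeeded". A sketch that measures, per volume, the wall time from the first step to the last using the journal's syslog-style timestamps; the prefix format and the hard-coded year are assumptions, as is one journal entry per line:

import re
import sys
from datetime import datetime

TS_RX = re.compile(r'^(?P<ts>\w{3} [ \d]\d \d\d:\d\d:\d\d) ')
STEP_RX = re.compile(r'"(?P<step>operationExecutor\.VerifyControllerAttachedVolume started|'
                     r'operationExecutor\.MountVolume started|'
                     r'MountVolume\.SetUp succeeded) for volume \\?"(?P<vol>[^"\\]+)')

def mount_latency(lines, year=2025):
    """Seconds per volume between the first and last logged mount step."""
    first, last = {}, {}
    for line in lines:
        ts_m, step_m = TS_RX.match(line), STEP_RX.search(line)
        if not (ts_m and step_m):
            continue
        t = datetime.strptime(f"{year} {ts_m.group('ts')}", "%Y %b %d %H:%M:%S")
        vol = step_m.group("vol")
        first.setdefault(vol, t)
        last[vol] = t
    return {v: (last[v] - first[v]).total_seconds() for v in first}

if __name__ == "__main__":
    for vol, secs in sorted(mount_latency(sys.stdin).items()):
        print(f"{vol}: {secs:.0f}s from VerifyControllerAttachedVolume to SetUp")
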
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.468208 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27a59e68-15e8-46f2-8918-c01bd353d3d3-logs\") pod \"27a59e68-15e8-46f2-8918-c01bd353d3d3\" (UID: \"27a59e68-15e8-46f2-8918-c01bd353d3d3\") " Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.468316 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a59e68-15e8-46f2-8918-c01bd353d3d3-config-data\") pod \"27a59e68-15e8-46f2-8918-c01bd353d3d3\" (UID: \"27a59e68-15e8-46f2-8918-c01bd353d3d3\") " Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.468416 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a59e68-15e8-46f2-8918-c01bd353d3d3-combined-ca-bundle\") pod \"27a59e68-15e8-46f2-8918-c01bd353d3d3\" (UID: \"27a59e68-15e8-46f2-8918-c01bd353d3d3\") " Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.468453 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh4gm\" (UniqueName: \"kubernetes.io/projected/27a59e68-15e8-46f2-8918-c01bd353d3d3-kube-api-access-rh4gm\") pod \"27a59e68-15e8-46f2-8918-c01bd353d3d3\" (UID: \"27a59e68-15e8-46f2-8918-c01bd353d3d3\") " Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.468771 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27a59e68-15e8-46f2-8918-c01bd353d3d3-logs" (OuterVolumeSpecName: "logs") pod "27a59e68-15e8-46f2-8918-c01bd353d3d3" (UID: "27a59e68-15e8-46f2-8918-c01bd353d3d3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.469165 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27a59e68-15e8-46f2-8918-c01bd353d3d3-logs\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.474009 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a59e68-15e8-46f2-8918-c01bd353d3d3-kube-api-access-rh4gm" (OuterVolumeSpecName: "kube-api-access-rh4gm") pod "27a59e68-15e8-46f2-8918-c01bd353d3d3" (UID: "27a59e68-15e8-46f2-8918-c01bd353d3d3"). InnerVolumeSpecName "kube-api-access-rh4gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.503434 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a59e68-15e8-46f2-8918-c01bd353d3d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27a59e68-15e8-46f2-8918-c01bd353d3d3" (UID: "27a59e68-15e8-46f2-8918-c01bd353d3d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.512986 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a59e68-15e8-46f2-8918-c01bd353d3d3-config-data" (OuterVolumeSpecName: "config-data") pod "27a59e68-15e8-46f2-8918-c01bd353d3d3" (UID: "27a59e68-15e8-46f2-8918-c01bd353d3d3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.524274 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.571105 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a59e68-15e8-46f2-8918-c01bd353d3d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.571132 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh4gm\" (UniqueName: \"kubernetes.io/projected/27a59e68-15e8-46f2-8918-c01bd353d3d3-kube-api-access-rh4gm\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.571143 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a59e68-15e8-46f2-8918-c01bd353d3d3-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.675510 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b9t8\" (UniqueName: \"kubernetes.io/projected/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-kube-api-access-4b9t8\") pod \"d3b4bf62-fef8-4b66-beb6-46ab772abf2e\" (UID: \"d3b4bf62-fef8-4b66-beb6-46ab772abf2e\") " Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.675969 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-logs\") pod \"d3b4bf62-fef8-4b66-beb6-46ab772abf2e\" (UID: \"d3b4bf62-fef8-4b66-beb6-46ab772abf2e\") " Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.676098 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-combined-ca-bundle\") pod \"d3b4bf62-fef8-4b66-beb6-46ab772abf2e\" (UID: \"d3b4bf62-fef8-4b66-beb6-46ab772abf2e\") " Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.676209 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-config-data\") pod \"d3b4bf62-fef8-4b66-beb6-46ab772abf2e\" (UID: \"d3b4bf62-fef8-4b66-beb6-46ab772abf2e\") " Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.676388 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-logs" (OuterVolumeSpecName: "logs") pod "d3b4bf62-fef8-4b66-beb6-46ab772abf2e" (UID: "d3b4bf62-fef8-4b66-beb6-46ab772abf2e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.676793 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-logs\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.681929 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-kube-api-access-4b9t8" (OuterVolumeSpecName: "kube-api-access-4b9t8") pod "d3b4bf62-fef8-4b66-beb6-46ab772abf2e" (UID: "d3b4bf62-fef8-4b66-beb6-46ab772abf2e"). InnerVolumeSpecName "kube-api-access-4b9t8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.703962 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-config-data" (OuterVolumeSpecName: "config-data") pod "d3b4bf62-fef8-4b66-beb6-46ab772abf2e" (UID: "d3b4bf62-fef8-4b66-beb6-46ab772abf2e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.721045 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3b4bf62-fef8-4b66-beb6-46ab772abf2e" (UID: "d3b4bf62-fef8-4b66-beb6-46ab772abf2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.781365 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.781414 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.781430 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b9t8\" (UniqueName: \"kubernetes.io/projected/d3b4bf62-fef8-4b66-beb6-46ab772abf2e-kube-api-access-4b9t8\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.901793 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 09:08:21 crc kubenswrapper[4895]: W1206 09:08:21.903619 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea971646_f4b7_4a2f_bea6_baa488438ed2.slice/crio-a0a07be87328ae162be88674956171f78106c85ac594cee8d6432502fdaa31fc WatchSource:0}: Error finding container a0a07be87328ae162be88674956171f78106c85ac594cee8d6432502fdaa31fc: Status 404 returned error can't find the container with id a0a07be87328ae162be88674956171f78106c85ac594cee8d6432502fdaa31fc Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.936317 4895 generic.go:334] "Generic (PLEG): container finished" podID="d3b4bf62-fef8-4b66-beb6-46ab772abf2e" containerID="204389569ef5bc1ad77941b1dc930ab7e0b267048e418ac309622be59ce098ec" exitCode=0 Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.936374 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3b4bf62-fef8-4b66-beb6-46ab772abf2e","Type":"ContainerDied","Data":"204389569ef5bc1ad77941b1dc930ab7e0b267048e418ac309622be59ce098ec"} Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.936402 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3b4bf62-fef8-4b66-beb6-46ab772abf2e","Type":"ContainerDied","Data":"96c49fb716d5a2b57a7984f9af1e732dcb6ad50aac6a12b9945ccc0a7d914084"} Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.936420 4895 scope.go:117] "RemoveContainer" containerID="204389569ef5bc1ad77941b1dc930ab7e0b267048e418ac309622be59ce098ec" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.936528 4895 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.953778 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ea971646-f4b7-4a2f-bea6-baa488438ed2","Type":"ContainerStarted","Data":"a0a07be87328ae162be88674956171f78106c85ac594cee8d6432502fdaa31fc"} Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.958582 4895 generic.go:334] "Generic (PLEG): container finished" podID="27a59e68-15e8-46f2-8918-c01bd353d3d3" containerID="c07a9dd3c2507e698e41f660238b36709a5f3d4b6de7c85a77b973171b264bbc" exitCode=0 Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.958629 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27a59e68-15e8-46f2-8918-c01bd353d3d3","Type":"ContainerDied","Data":"c07a9dd3c2507e698e41f660238b36709a5f3d4b6de7c85a77b973171b264bbc"} Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.958675 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27a59e68-15e8-46f2-8918-c01bd353d3d3","Type":"ContainerDied","Data":"376178755fec14a0f7f3e8fd6cd4d107b1e967716e324ddc95e455ea5cfc1a32"} Dec 06 09:08:21 crc kubenswrapper[4895]: I1206 09:08:21.958731 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.004072 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.020771 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.040973 4895 scope.go:117] "RemoveContainer" containerID="ca6c7d681907aeca82a28f09d6b859089ec8c0ea1e94e3b37c6c2e3cd2d8418a" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.096657 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12645969-044a-4bbf-945f-076d512123df" path="/var/lib/kubelet/pods/12645969-044a-4bbf-945f-076d512123df/volumes" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.100946 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3b4bf62-fef8-4b66-beb6-46ab772abf2e" path="/var/lib/kubelet/pods/d3b4bf62-fef8-4b66-beb6-46ab772abf2e/volumes" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.102058 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.102103 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.102121 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 06 09:08:22 crc kubenswrapper[4895]: E1206 09:08:22.106117 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a59e68-15e8-46f2-8918-c01bd353d3d3" containerName="nova-api-api" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.106192 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a59e68-15e8-46f2-8918-c01bd353d3d3" containerName="nova-api-api" Dec 06 09:08:22 crc kubenswrapper[4895]: E1206 09:08:22.106235 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3b4bf62-fef8-4b66-beb6-46ab772abf2e" containerName="nova-metadata-metadata" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.106306 4895 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d3b4bf62-fef8-4b66-beb6-46ab772abf2e" containerName="nova-metadata-metadata" Dec 06 09:08:22 crc kubenswrapper[4895]: E1206 09:08:22.106322 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3b4bf62-fef8-4b66-beb6-46ab772abf2e" containerName="nova-metadata-log" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.106348 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3b4bf62-fef8-4b66-beb6-46ab772abf2e" containerName="nova-metadata-log" Dec 06 09:08:22 crc kubenswrapper[4895]: E1206 09:08:22.106365 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a59e68-15e8-46f2-8918-c01bd353d3d3" containerName="nova-api-log" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.106371 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a59e68-15e8-46f2-8918-c01bd353d3d3" containerName="nova-api-log" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.109250 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a59e68-15e8-46f2-8918-c01bd353d3d3" containerName="nova-api-log" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.109274 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3b4bf62-fef8-4b66-beb6-46ab772abf2e" containerName="nova-metadata-log" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.109288 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3b4bf62-fef8-4b66-beb6-46ab772abf2e" containerName="nova-metadata-metadata" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.109305 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a59e68-15e8-46f2-8918-c01bd353d3d3" containerName="nova-api-api" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.112360 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.112465 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.115060 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.117434 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.117614 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.120040 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.127880 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.130960 4895 scope.go:117] "RemoveContainer" containerID="204389569ef5bc1ad77941b1dc930ab7e0b267048e418ac309622be59ce098ec" Dec 06 09:08:22 crc kubenswrapper[4895]: E1206 09:08:22.131318 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"204389569ef5bc1ad77941b1dc930ab7e0b267048e418ac309622be59ce098ec\": container with ID starting with 204389569ef5bc1ad77941b1dc930ab7e0b267048e418ac309622be59ce098ec not found: ID does not exist" containerID="204389569ef5bc1ad77941b1dc930ab7e0b267048e418ac309622be59ce098ec" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.131353 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"204389569ef5bc1ad77941b1dc930ab7e0b267048e418ac309622be59ce098ec"} err="failed to get container status \"204389569ef5bc1ad77941b1dc930ab7e0b267048e418ac309622be59ce098ec\": rpc error: code = NotFound desc = could not find container \"204389569ef5bc1ad77941b1dc930ab7e0b267048e418ac309622be59ce098ec\": container with ID starting with 204389569ef5bc1ad77941b1dc930ab7e0b267048e418ac309622be59ce098ec not found: ID does not exist" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.131375 4895 scope.go:117] "RemoveContainer" containerID="ca6c7d681907aeca82a28f09d6b859089ec8c0ea1e94e3b37c6c2e3cd2d8418a" Dec 06 09:08:22 crc kubenswrapper[4895]: E1206 09:08:22.131843 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca6c7d681907aeca82a28f09d6b859089ec8c0ea1e94e3b37c6c2e3cd2d8418a\": container with ID starting with ca6c7d681907aeca82a28f09d6b859089ec8c0ea1e94e3b37c6c2e3cd2d8418a not found: ID does not exist" containerID="ca6c7d681907aeca82a28f09d6b859089ec8c0ea1e94e3b37c6c2e3cd2d8418a" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.131867 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca6c7d681907aeca82a28f09d6b859089ec8c0ea1e94e3b37c6c2e3cd2d8418a"} err="failed to get container status \"ca6c7d681907aeca82a28f09d6b859089ec8c0ea1e94e3b37c6c2e3cd2d8418a\": rpc error: code = NotFound desc = could not find container \"ca6c7d681907aeca82a28f09d6b859089ec8c0ea1e94e3b37c6c2e3cd2d8418a\": container with ID starting with ca6c7d681907aeca82a28f09d6b859089ec8c0ea1e94e3b37c6c2e3cd2d8418a not found: ID does not exist" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.131883 4895 scope.go:117] "RemoveContainer" containerID="c07a9dd3c2507e698e41f660238b36709a5f3d4b6de7c85a77b973171b264bbc" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.164310 4895 scope.go:117] "RemoveContainer" containerID="046b571a70bf9e6b3b0aaca8f4cf98e3b4c117c904aa9344ca877dd010d4de55" Dec 06 09:08:22 crc 
kubenswrapper[4895]: E1206 09:08:22.176762 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50c11dd7ec808824b3bbcd77a5776164e6e75669cd0e8afabc25d053c2fcc91c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 09:08:22 crc kubenswrapper[4895]: E1206 09:08:22.178337 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50c11dd7ec808824b3bbcd77a5776164e6e75669cd0e8afabc25d053c2fcc91c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 09:08:22 crc kubenswrapper[4895]: E1206 09:08:22.179680 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50c11dd7ec808824b3bbcd77a5776164e6e75669cd0e8afabc25d053c2fcc91c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 09:08:22 crc kubenswrapper[4895]: E1206 09:08:22.179741 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d7c353df-e876-42b4-9844-035e06b1f5a2" containerName="nova-scheduler-scheduler" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.189180 4895 scope.go:117] "RemoveContainer" containerID="c07a9dd3c2507e698e41f660238b36709a5f3d4b6de7c85a77b973171b264bbc" Dec 06 09:08:22 crc kubenswrapper[4895]: E1206 09:08:22.189455 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c07a9dd3c2507e698e41f660238b36709a5f3d4b6de7c85a77b973171b264bbc\": container with ID starting with c07a9dd3c2507e698e41f660238b36709a5f3d4b6de7c85a77b973171b264bbc not found: ID does not exist" containerID="c07a9dd3c2507e698e41f660238b36709a5f3d4b6de7c85a77b973171b264bbc" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.189517 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c07a9dd3c2507e698e41f660238b36709a5f3d4b6de7c85a77b973171b264bbc"} err="failed to get container status \"c07a9dd3c2507e698e41f660238b36709a5f3d4b6de7c85a77b973171b264bbc\": rpc error: code = NotFound desc = could not find container \"c07a9dd3c2507e698e41f660238b36709a5f3d4b6de7c85a77b973171b264bbc\": container with ID starting with c07a9dd3c2507e698e41f660238b36709a5f3d4b6de7c85a77b973171b264bbc not found: ID does not exist" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.189544 4895 scope.go:117] "RemoveContainer" containerID="046b571a70bf9e6b3b0aaca8f4cf98e3b4c117c904aa9344ca877dd010d4de55" Dec 06 09:08:22 crc kubenswrapper[4895]: E1206 09:08:22.189847 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"046b571a70bf9e6b3b0aaca8f4cf98e3b4c117c904aa9344ca877dd010d4de55\": container with ID starting with 046b571a70bf9e6b3b0aaca8f4cf98e3b4c117c904aa9344ca877dd010d4de55 not found: ID does not exist" containerID="046b571a70bf9e6b3b0aaca8f4cf98e3b4c117c904aa9344ca877dd010d4de55" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.189889 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"046b571a70bf9e6b3b0aaca8f4cf98e3b4c117c904aa9344ca877dd010d4de55"} err="failed to get container status \"046b571a70bf9e6b3b0aaca8f4cf98e3b4c117c904aa9344ca877dd010d4de55\": rpc error: code = NotFound desc = could not find container \"046b571a70bf9e6b3b0aaca8f4cf98e3b4c117c904aa9344ca877dd010d4de55\": container with ID starting with 046b571a70bf9e6b3b0aaca8f4cf98e3b4c117c904aa9344ca877dd010d4de55 not found: ID does not exist" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.302025 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mktnm\" (UniqueName: \"kubernetes.io/projected/e7bb05f3-943f-467a-98d8-75b754bd0de6-kube-api-access-mktnm\") pod \"nova-metadata-0\" (UID: \"e7bb05f3-943f-467a-98d8-75b754bd0de6\") " pod="openstack/nova-metadata-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.304464 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czcfg\" (UniqueName: \"kubernetes.io/projected/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-kube-api-access-czcfg\") pod \"nova-api-0\" (UID: \"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6\") " pod="openstack/nova-api-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.304607 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7bb05f3-943f-467a-98d8-75b754bd0de6-logs\") pod \"nova-metadata-0\" (UID: \"e7bb05f3-943f-467a-98d8-75b754bd0de6\") " pod="openstack/nova-metadata-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.304768 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7bb05f3-943f-467a-98d8-75b754bd0de6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7bb05f3-943f-467a-98d8-75b754bd0de6\") " pod="openstack/nova-metadata-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.304833 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-config-data\") pod \"nova-api-0\" (UID: \"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6\") " pod="openstack/nova-api-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.304970 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6\") " pod="openstack/nova-api-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.305179 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7bb05f3-943f-467a-98d8-75b754bd0de6-config-data\") pod \"nova-metadata-0\" (UID: \"e7bb05f3-943f-467a-98d8-75b754bd0de6\") " pod="openstack/nova-metadata-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.305353 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-logs\") pod \"nova-api-0\" (UID: \"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6\") " pod="openstack/nova-api-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.407722 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-logs\") pod \"nova-api-0\" (UID: \"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6\") " pod="openstack/nova-api-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.407807 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mktnm\" (UniqueName: \"kubernetes.io/projected/e7bb05f3-943f-467a-98d8-75b754bd0de6-kube-api-access-mktnm\") pod \"nova-metadata-0\" (UID: \"e7bb05f3-943f-467a-98d8-75b754bd0de6\") " pod="openstack/nova-metadata-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.407890 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czcfg\" (UniqueName: \"kubernetes.io/projected/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-kube-api-access-czcfg\") pod \"nova-api-0\" (UID: \"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6\") " pod="openstack/nova-api-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.407935 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7bb05f3-943f-467a-98d8-75b754bd0de6-logs\") pod \"nova-metadata-0\" (UID: \"e7bb05f3-943f-467a-98d8-75b754bd0de6\") " pod="openstack/nova-metadata-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.407967 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7bb05f3-943f-467a-98d8-75b754bd0de6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7bb05f3-943f-467a-98d8-75b754bd0de6\") " pod="openstack/nova-metadata-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.407985 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-config-data\") pod \"nova-api-0\" (UID: \"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6\") " pod="openstack/nova-api-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.408029 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6\") " pod="openstack/nova-api-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.408087 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7bb05f3-943f-467a-98d8-75b754bd0de6-config-data\") pod \"nova-metadata-0\" (UID: \"e7bb05f3-943f-467a-98d8-75b754bd0de6\") " pod="openstack/nova-metadata-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.408125 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-logs\") pod \"nova-api-0\" (UID: \"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6\") " pod="openstack/nova-api-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.408323 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7bb05f3-943f-467a-98d8-75b754bd0de6-logs\") pod \"nova-metadata-0\" (UID: \"e7bb05f3-943f-467a-98d8-75b754bd0de6\") " pod="openstack/nova-metadata-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.416236 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-config-data\") pod \"nova-api-0\" (UID: \"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6\") " pod="openstack/nova-api-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.416827 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6\") " pod="openstack/nova-api-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.421388 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7bb05f3-943f-467a-98d8-75b754bd0de6-config-data\") pod \"nova-metadata-0\" (UID: \"e7bb05f3-943f-467a-98d8-75b754bd0de6\") " pod="openstack/nova-metadata-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.428029 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7bb05f3-943f-467a-98d8-75b754bd0de6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7bb05f3-943f-467a-98d8-75b754bd0de6\") " pod="openstack/nova-metadata-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.428032 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czcfg\" (UniqueName: \"kubernetes.io/projected/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-kube-api-access-czcfg\") pod \"nova-api-0\" (UID: \"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6\") " pod="openstack/nova-api-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.430297 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mktnm\" (UniqueName: \"kubernetes.io/projected/e7bb05f3-943f-467a-98d8-75b754bd0de6-kube-api-access-mktnm\") pod \"nova-metadata-0\" (UID: \"e7bb05f3-943f-467a-98d8-75b754bd0de6\") " pod="openstack/nova-metadata-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.456652 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.466181 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.547951 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.714176 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b744de33-30ad-4367-aa56-0c683d87b925-combined-ca-bundle\") pod \"b744de33-30ad-4367-aa56-0c683d87b925\" (UID: \"b744de33-30ad-4367-aa56-0c683d87b925\") " Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.714374 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4fkf\" (UniqueName: \"kubernetes.io/projected/b744de33-30ad-4367-aa56-0c683d87b925-kube-api-access-m4fkf\") pod \"b744de33-30ad-4367-aa56-0c683d87b925\" (UID: \"b744de33-30ad-4367-aa56-0c683d87b925\") " Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.714413 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b744de33-30ad-4367-aa56-0c683d87b925-config-data\") pod \"b744de33-30ad-4367-aa56-0c683d87b925\" (UID: \"b744de33-30ad-4367-aa56-0c683d87b925\") " Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.721201 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b744de33-30ad-4367-aa56-0c683d87b925-kube-api-access-m4fkf" (OuterVolumeSpecName: "kube-api-access-m4fkf") pod "b744de33-30ad-4367-aa56-0c683d87b925" (UID: "b744de33-30ad-4367-aa56-0c683d87b925"). InnerVolumeSpecName "kube-api-access-m4fkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.746697 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b744de33-30ad-4367-aa56-0c683d87b925-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b744de33-30ad-4367-aa56-0c683d87b925" (UID: "b744de33-30ad-4367-aa56-0c683d87b925"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.752559 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b744de33-30ad-4367-aa56-0c683d87b925-config-data" (OuterVolumeSpecName: "config-data") pod "b744de33-30ad-4367-aa56-0c683d87b925" (UID: "b744de33-30ad-4367-aa56-0c683d87b925"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.816680 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b744de33-30ad-4367-aa56-0c683d87b925-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.816747 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4fkf\" (UniqueName: \"kubernetes.io/projected/b744de33-30ad-4367-aa56-0c683d87b925-kube-api-access-m4fkf\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.816764 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b744de33-30ad-4367-aa56-0c683d87b925-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.870530 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rwtsw"] Dec 06 09:08:22 crc kubenswrapper[4895]: E1206 09:08:22.871249 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b744de33-30ad-4367-aa56-0c683d87b925" containerName="nova-cell1-conductor-conductor" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.871261 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b744de33-30ad-4367-aa56-0c683d87b925" containerName="nova-cell1-conductor-conductor" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.871462 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b744de33-30ad-4367-aa56-0c683d87b925" containerName="nova-cell1-conductor-conductor" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.872863 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rwtsw" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.874839 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rwtsw"] Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.972492 4895 generic.go:334] "Generic (PLEG): container finished" podID="b744de33-30ad-4367-aa56-0c683d87b925" containerID="32b5d8e4745cdfd0ba01be060e74a4e14f1fc0f688a9fd87a1b161bf279a795f" exitCode=0 Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.972560 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.972583 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b744de33-30ad-4367-aa56-0c683d87b925","Type":"ContainerDied","Data":"32b5d8e4745cdfd0ba01be060e74a4e14f1fc0f688a9fd87a1b161bf279a795f"} Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.972638 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b744de33-30ad-4367-aa56-0c683d87b925","Type":"ContainerDied","Data":"b16e40496d3b5c4ec970cb2e5173912bd8ba77daea57a25f0fac60065b9d359e"} Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.972658 4895 scope.go:117] "RemoveContainer" containerID="32b5d8e4745cdfd0ba01be060e74a4e14f1fc0f688a9fd87a1b161bf279a795f" Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.981366 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ea971646-f4b7-4a2f-bea6-baa488438ed2","Type":"ContainerStarted","Data":"ba60628ed020cae25edfaaf7241bf064c0126384dd8dbb19ea52fe7814d9c195"} Dec 06 09:08:22 crc kubenswrapper[4895]: I1206 09:08:22.981734 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.000809 4895 scope.go:117] "RemoveContainer" containerID="32b5d8e4745cdfd0ba01be060e74a4e14f1fc0f688a9fd87a1b161bf279a795f" Dec 06 09:08:23 crc kubenswrapper[4895]: E1206 09:08:23.001862 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32b5d8e4745cdfd0ba01be060e74a4e14f1fc0f688a9fd87a1b161bf279a795f\": container with ID starting with 32b5d8e4745cdfd0ba01be060e74a4e14f1fc0f688a9fd87a1b161bf279a795f not found: ID does not exist" containerID="32b5d8e4745cdfd0ba01be060e74a4e14f1fc0f688a9fd87a1b161bf279a795f" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.001922 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32b5d8e4745cdfd0ba01be060e74a4e14f1fc0f688a9fd87a1b161bf279a795f"} err="failed to get container status \"32b5d8e4745cdfd0ba01be060e74a4e14f1fc0f688a9fd87a1b161bf279a795f\": rpc error: code = NotFound desc = could not find container \"32b5d8e4745cdfd0ba01be060e74a4e14f1fc0f688a9fd87a1b161bf279a795f\": container with ID starting with 32b5d8e4745cdfd0ba01be060e74a4e14f1fc0f688a9fd87a1b161bf279a795f not found: ID does not exist" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.012986 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.012961866 podStartE2EDuration="2.012961866s" podCreationTimestamp="2025-12-06 09:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:08:23.006778439 +0000 UTC m=+7865.408167309" watchObservedRunningTime="2025-12-06 09:08:23.012961866 +0000 UTC m=+7865.414350736" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.020179 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz6pz\" (UniqueName: \"kubernetes.io/projected/155719e6-a3ba-4a2b-bc43-6f884d5b2908-kube-api-access-hz6pz\") pod \"redhat-operators-rwtsw\" (UID: \"155719e6-a3ba-4a2b-bc43-6f884d5b2908\") " 
pod="openshift-marketplace/redhat-operators-rwtsw" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.020270 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/155719e6-a3ba-4a2b-bc43-6f884d5b2908-utilities\") pod \"redhat-operators-rwtsw\" (UID: \"155719e6-a3ba-4a2b-bc43-6f884d5b2908\") " pod="openshift-marketplace/redhat-operators-rwtsw" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.020333 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/155719e6-a3ba-4a2b-bc43-6f884d5b2908-catalog-content\") pod \"redhat-operators-rwtsw\" (UID: \"155719e6-a3ba-4a2b-bc43-6f884d5b2908\") " pod="openshift-marketplace/redhat-operators-rwtsw" Dec 06 09:08:23 crc kubenswrapper[4895]: W1206 09:08:23.032397 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7bb05f3_943f_467a_98d8_75b754bd0de6.slice/crio-e3e6074499fec1857a273824b49c2fb486589a0f2e26cd4150d7ce45e1f0def0 WatchSource:0}: Error finding container e3e6074499fec1857a273824b49c2fb486589a0f2e26cd4150d7ce45e1f0def0: Status 404 returned error can't find the container with id e3e6074499fec1857a273824b49c2fb486589a0f2e26cd4150d7ce45e1f0def0 Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.032467 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.050859 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.059739 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.073099 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.074759 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.083751 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.087070 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.122439 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz6pz\" (UniqueName: \"kubernetes.io/projected/155719e6-a3ba-4a2b-bc43-6f884d5b2908-kube-api-access-hz6pz\") pod \"redhat-operators-rwtsw\" (UID: \"155719e6-a3ba-4a2b-bc43-6f884d5b2908\") " pod="openshift-marketplace/redhat-operators-rwtsw" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.122579 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/155719e6-a3ba-4a2b-bc43-6f884d5b2908-utilities\") pod \"redhat-operators-rwtsw\" (UID: \"155719e6-a3ba-4a2b-bc43-6f884d5b2908\") " pod="openshift-marketplace/redhat-operators-rwtsw" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.122671 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/155719e6-a3ba-4a2b-bc43-6f884d5b2908-catalog-content\") pod \"redhat-operators-rwtsw\" (UID: \"155719e6-a3ba-4a2b-bc43-6f884d5b2908\") " pod="openshift-marketplace/redhat-operators-rwtsw" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.125463 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/155719e6-a3ba-4a2b-bc43-6f884d5b2908-utilities\") pod \"redhat-operators-rwtsw\" (UID: \"155719e6-a3ba-4a2b-bc43-6f884d5b2908\") " pod="openshift-marketplace/redhat-operators-rwtsw" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.125583 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/155719e6-a3ba-4a2b-bc43-6f884d5b2908-catalog-content\") pod \"redhat-operators-rwtsw\" (UID: \"155719e6-a3ba-4a2b-bc43-6f884d5b2908\") " pod="openshift-marketplace/redhat-operators-rwtsw" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.128446 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.146756 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz6pz\" (UniqueName: \"kubernetes.io/projected/155719e6-a3ba-4a2b-bc43-6f884d5b2908-kube-api-access-hz6pz\") pod \"redhat-operators-rwtsw\" (UID: \"155719e6-a3ba-4a2b-bc43-6f884d5b2908\") " pod="openshift-marketplace/redhat-operators-rwtsw" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.197919 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rwtsw" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.225038 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7585ce4-8758-48b8-b730-86ae49031ba4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f7585ce4-8758-48b8-b730-86ae49031ba4\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.225097 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7585ce4-8758-48b8-b730-86ae49031ba4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f7585ce4-8758-48b8-b730-86ae49031ba4\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.225182 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl574\" (UniqueName: \"kubernetes.io/projected/f7585ce4-8758-48b8-b730-86ae49031ba4-kube-api-access-rl574\") pod \"nova-cell1-conductor-0\" (UID: \"f7585ce4-8758-48b8-b730-86ae49031ba4\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.327310 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7585ce4-8758-48b8-b730-86ae49031ba4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f7585ce4-8758-48b8-b730-86ae49031ba4\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.327630 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7585ce4-8758-48b8-b730-86ae49031ba4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f7585ce4-8758-48b8-b730-86ae49031ba4\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.327691 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl574\" (UniqueName: \"kubernetes.io/projected/f7585ce4-8758-48b8-b730-86ae49031ba4-kube-api-access-rl574\") pod \"nova-cell1-conductor-0\" (UID: \"f7585ce4-8758-48b8-b730-86ae49031ba4\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.334820 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7585ce4-8758-48b8-b730-86ae49031ba4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f7585ce4-8758-48b8-b730-86ae49031ba4\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.337506 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7585ce4-8758-48b8-b730-86ae49031ba4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f7585ce4-8758-48b8-b730-86ae49031ba4\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.357348 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl574\" (UniqueName: \"kubernetes.io/projected/f7585ce4-8758-48b8-b730-86ae49031ba4-kube-api-access-rl574\") pod \"nova-cell1-conductor-0\" (UID: \"f7585ce4-8758-48b8-b730-86ae49031ba4\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:08:23 crc 
kubenswrapper[4895]: I1206 09:08:23.538986 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 06 09:08:23 crc kubenswrapper[4895]: I1206 09:08:23.724051 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rwtsw"] Dec 06 09:08:24 crc kubenswrapper[4895]: I1206 09:08:24.012271 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 09:08:24 crc kubenswrapper[4895]: I1206 09:08:24.018973 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwtsw" event={"ID":"155719e6-a3ba-4a2b-bc43-6f884d5b2908","Type":"ContainerStarted","Data":"1ab11517fc5726060b3832c1b46b5681ab9488c879c01cb114cfc50ff7db6d39"} Dec 06 09:08:24 crc kubenswrapper[4895]: I1206 09:08:24.023932 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7bb05f3-943f-467a-98d8-75b754bd0de6","Type":"ContainerStarted","Data":"95d1c98e9f4ab799948c8bc317eb7fc0385257cea45600e4476a66fd8f653b40"} Dec 06 09:08:24 crc kubenswrapper[4895]: I1206 09:08:24.023985 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7bb05f3-943f-467a-98d8-75b754bd0de6","Type":"ContainerStarted","Data":"bf99679ca790d924780c6b4fbf4c0fb9de2ebb3222a9d2dffa525283bb02f73a"} Dec 06 09:08:24 crc kubenswrapper[4895]: I1206 09:08:24.024000 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7bb05f3-943f-467a-98d8-75b754bd0de6","Type":"ContainerStarted","Data":"e3e6074499fec1857a273824b49c2fb486589a0f2e26cd4150d7ce45e1f0def0"} Dec 06 09:08:24 crc kubenswrapper[4895]: I1206 09:08:24.040638 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6","Type":"ContainerStarted","Data":"c6452e616295cad0bf524278d0603d42e3176fa357b2252bc899778e436c854f"} Dec 06 09:08:24 crc kubenswrapper[4895]: I1206 09:08:24.040687 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6","Type":"ContainerStarted","Data":"69c79637304ed4434bfded663f373e17e19d4805fe16eabfa0addf2068329e44"} Dec 06 09:08:24 crc kubenswrapper[4895]: I1206 09:08:24.040699 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6","Type":"ContainerStarted","Data":"bfc2bbafe8e1b49e544414e13de59dc2b0f17bf34d9bc528161751b19cab5f6e"} Dec 06 09:08:24 crc kubenswrapper[4895]: I1206 09:08:24.060540 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.06051935 podStartE2EDuration="2.06051935s" podCreationTimestamp="2025-12-06 09:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:08:24.058513865 +0000 UTC m=+7866.459902735" watchObservedRunningTime="2025-12-06 09:08:24.06051935 +0000 UTC m=+7866.461908220" Dec 06 09:08:24 crc kubenswrapper[4895]: I1206 09:08:24.070447 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27a59e68-15e8-46f2-8918-c01bd353d3d3" path="/var/lib/kubelet/pods/27a59e68-15e8-46f2-8918-c01bd353d3d3/volumes" Dec 06 09:08:24 crc kubenswrapper[4895]: I1206 09:08:24.071219 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b744de33-30ad-4367-aa56-0c683d87b925" path="/var/lib/kubelet/pods/b744de33-30ad-4367-aa56-0c683d87b925/volumes" Dec 06 09:08:24 crc kubenswrapper[4895]: I1206 09:08:24.093366 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.093344131 podStartE2EDuration="2.093344131s" podCreationTimestamp="2025-12-06 09:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:08:24.089862628 +0000 UTC m=+7866.491251498" watchObservedRunningTime="2025-12-06 09:08:24.093344131 +0000 UTC m=+7866.494733001" Dec 06 09:08:24 crc kubenswrapper[4895]: I1206 09:08:24.273462 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:08:25 crc kubenswrapper[4895]: I1206 09:08:25.065884 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f7585ce4-8758-48b8-b730-86ae49031ba4","Type":"ContainerStarted","Data":"eab9d09da741a8941c8ea7f417a168fe4281a0288a30f1fd3602b09aac105d2c"} Dec 06 09:08:25 crc kubenswrapper[4895]: I1206 09:08:25.066283 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f7585ce4-8758-48b8-b730-86ae49031ba4","Type":"ContainerStarted","Data":"69c8f65235206afa0246cf80a2ea4b5f7af92038f330962cbe2debb34a742df9"} Dec 06 09:08:25 crc kubenswrapper[4895]: I1206 09:08:25.067981 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 06 09:08:25 crc kubenswrapper[4895]: I1206 09:08:25.069161 4895 generic.go:334] "Generic (PLEG): container finished" podID="155719e6-a3ba-4a2b-bc43-6f884d5b2908" containerID="088192484cd961880112c76075fc9c060d9b528f57077644dc34387052d50468" exitCode=0 Dec 06 09:08:25 crc kubenswrapper[4895]: I1206 09:08:25.069243 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwtsw" event={"ID":"155719e6-a3ba-4a2b-bc43-6f884d5b2908","Type":"ContainerDied","Data":"088192484cd961880112c76075fc9c060d9b528f57077644dc34387052d50468"} Dec 06 09:08:25 crc kubenswrapper[4895]: I1206 09:08:25.082451 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.082431205 podStartE2EDuration="2.082431205s" podCreationTimestamp="2025-12-06 09:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:08:25.081693375 +0000 UTC m=+7867.483082245" watchObservedRunningTime="2025-12-06 09:08:25.082431205 +0000 UTC m=+7867.483820095" Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.080639 4895 generic.go:334] "Generic (PLEG): container finished" podID="d7c353df-e876-42b4-9844-035e06b1f5a2" containerID="50c11dd7ec808824b3bbcd77a5776164e6e75669cd0e8afabc25d053c2fcc91c" exitCode=0 Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.080714 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d7c353df-e876-42b4-9844-035e06b1f5a2","Type":"ContainerDied","Data":"50c11dd7ec808824b3bbcd77a5776164e6e75669cd0e8afabc25d053c2fcc91c"} Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.083752 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwtsw" 
event={"ID":"155719e6-a3ba-4a2b-bc43-6f884d5b2908","Type":"ContainerStarted","Data":"9b2f96c64839e642db401b0a6532ad618655aaeac639291eedfc5c981a1baec2"} Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.237154 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r89gk"] Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.239378 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r89gk" Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.251848 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r89gk"] Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.303305 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.389515 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fec40b4-11ed-408b-9c64-fe7ec5ad2390-utilities\") pod \"redhat-marketplace-r89gk\" (UID: \"4fec40b4-11ed-408b-9c64-fe7ec5ad2390\") " pod="openshift-marketplace/redhat-marketplace-r89gk" Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.389560 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fec40b4-11ed-408b-9c64-fe7ec5ad2390-catalog-content\") pod \"redhat-marketplace-r89gk\" (UID: \"4fec40b4-11ed-408b-9c64-fe7ec5ad2390\") " pod="openshift-marketplace/redhat-marketplace-r89gk" Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.389895 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm2wc\" (UniqueName: \"kubernetes.io/projected/4fec40b4-11ed-408b-9c64-fe7ec5ad2390-kube-api-access-wm2wc\") pod \"redhat-marketplace-r89gk\" (UID: \"4fec40b4-11ed-408b-9c64-fe7ec5ad2390\") " pod="openshift-marketplace/redhat-marketplace-r89gk" Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.491925 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c353df-e876-42b4-9844-035e06b1f5a2-config-data\") pod \"d7c353df-e876-42b4-9844-035e06b1f5a2\" (UID: \"d7c353df-e876-42b4-9844-035e06b1f5a2\") " Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.492121 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt2wm\" (UniqueName: \"kubernetes.io/projected/d7c353df-e876-42b4-9844-035e06b1f5a2-kube-api-access-bt2wm\") pod \"d7c353df-e876-42b4-9844-035e06b1f5a2\" (UID: \"d7c353df-e876-42b4-9844-035e06b1f5a2\") " Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.492246 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c353df-e876-42b4-9844-035e06b1f5a2-combined-ca-bundle\") pod \"d7c353df-e876-42b4-9844-035e06b1f5a2\" (UID: \"d7c353df-e876-42b4-9844-035e06b1f5a2\") " Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.492617 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fec40b4-11ed-408b-9c64-fe7ec5ad2390-utilities\") pod \"redhat-marketplace-r89gk\" (UID: \"4fec40b4-11ed-408b-9c64-fe7ec5ad2390\") " 
pod="openshift-marketplace/redhat-marketplace-r89gk" Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.492652 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fec40b4-11ed-408b-9c64-fe7ec5ad2390-catalog-content\") pod \"redhat-marketplace-r89gk\" (UID: \"4fec40b4-11ed-408b-9c64-fe7ec5ad2390\") " pod="openshift-marketplace/redhat-marketplace-r89gk" Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.492684 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm2wc\" (UniqueName: \"kubernetes.io/projected/4fec40b4-11ed-408b-9c64-fe7ec5ad2390-kube-api-access-wm2wc\") pod \"redhat-marketplace-r89gk\" (UID: \"4fec40b4-11ed-408b-9c64-fe7ec5ad2390\") " pod="openshift-marketplace/redhat-marketplace-r89gk" Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.493121 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fec40b4-11ed-408b-9c64-fe7ec5ad2390-utilities\") pod \"redhat-marketplace-r89gk\" (UID: \"4fec40b4-11ed-408b-9c64-fe7ec5ad2390\") " pod="openshift-marketplace/redhat-marketplace-r89gk" Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.493413 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fec40b4-11ed-408b-9c64-fe7ec5ad2390-catalog-content\") pod \"redhat-marketplace-r89gk\" (UID: \"4fec40b4-11ed-408b-9c64-fe7ec5ad2390\") " pod="openshift-marketplace/redhat-marketplace-r89gk" Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.499815 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7c353df-e876-42b4-9844-035e06b1f5a2-kube-api-access-bt2wm" (OuterVolumeSpecName: "kube-api-access-bt2wm") pod "d7c353df-e876-42b4-9844-035e06b1f5a2" (UID: "d7c353df-e876-42b4-9844-035e06b1f5a2"). InnerVolumeSpecName "kube-api-access-bt2wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.518510 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7c353df-e876-42b4-9844-035e06b1f5a2-config-data" (OuterVolumeSpecName: "config-data") pod "d7c353df-e876-42b4-9844-035e06b1f5a2" (UID: "d7c353df-e876-42b4-9844-035e06b1f5a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.523633 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7c353df-e876-42b4-9844-035e06b1f5a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7c353df-e876-42b4-9844-035e06b1f5a2" (UID: "d7c353df-e876-42b4-9844-035e06b1f5a2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.526544 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm2wc\" (UniqueName: \"kubernetes.io/projected/4fec40b4-11ed-408b-9c64-fe7ec5ad2390-kube-api-access-wm2wc\") pod \"redhat-marketplace-r89gk\" (UID: \"4fec40b4-11ed-408b-9c64-fe7ec5ad2390\") " pod="openshift-marketplace/redhat-marketplace-r89gk" Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.594257 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt2wm\" (UniqueName: \"kubernetes.io/projected/d7c353df-e876-42b4-9844-035e06b1f5a2-kube-api-access-bt2wm\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.594308 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c353df-e876-42b4-9844-035e06b1f5a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.594324 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c353df-e876-42b4-9844-035e06b1f5a2-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:26 crc kubenswrapper[4895]: I1206 09:08:26.616681 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r89gk" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.092459 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d7c353df-e876-42b4-9844-035e06b1f5a2","Type":"ContainerDied","Data":"a469c84eaa02790631fd9e576917c5c9a76cbf21c5a7c921c7ecc0fc6272ebcb"} Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.092832 4895 scope.go:117] "RemoveContainer" containerID="50c11dd7ec808824b3bbcd77a5776164e6e75669cd0e8afabc25d053c2fcc91c" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.092988 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.113042 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r89gk"] Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.146555 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.164643 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.178492 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:08:27 crc kubenswrapper[4895]: E1206 09:08:27.178904 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c353df-e876-42b4-9844-035e06b1f5a2" containerName="nova-scheduler-scheduler" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.178926 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c353df-e876-42b4-9844-035e06b1f5a2" containerName="nova-scheduler-scheduler" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.179155 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7c353df-e876-42b4-9844-035e06b1f5a2" containerName="nova-scheduler-scheduler" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.179858 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.181962 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.189423 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.228751 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac7b8cb-0d41-42a2-9044-f0b986ed3503-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bac7b8cb-0d41-42a2-9044-f0b986ed3503\") " pod="openstack/nova-scheduler-0" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.228858 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7w78\" (UniqueName: \"kubernetes.io/projected/bac7b8cb-0d41-42a2-9044-f0b986ed3503-kube-api-access-h7w78\") pod \"nova-scheduler-0\" (UID: \"bac7b8cb-0d41-42a2-9044-f0b986ed3503\") " pod="openstack/nova-scheduler-0" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.230743 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bac7b8cb-0d41-42a2-9044-f0b986ed3503-config-data\") pod \"nova-scheduler-0\" (UID: \"bac7b8cb-0d41-42a2-9044-f0b986ed3503\") " pod="openstack/nova-scheduler-0" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.267980 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rnfv4"] Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.269863 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rnfv4" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.318187 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rnfv4"] Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.332025 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7w78\" (UniqueName: \"kubernetes.io/projected/bac7b8cb-0d41-42a2-9044-f0b986ed3503-kube-api-access-h7w78\") pod \"nova-scheduler-0\" (UID: \"bac7b8cb-0d41-42a2-9044-f0b986ed3503\") " pod="openstack/nova-scheduler-0" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.332094 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqtnl\" (UniqueName: \"kubernetes.io/projected/dfef7c12-3ea7-4444-aa09-f33eb59c5f8c-kube-api-access-bqtnl\") pod \"community-operators-rnfv4\" (UID: \"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c\") " pod="openshift-marketplace/community-operators-rnfv4" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.332171 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfef7c12-3ea7-4444-aa09-f33eb59c5f8c-utilities\") pod \"community-operators-rnfv4\" (UID: \"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c\") " pod="openshift-marketplace/community-operators-rnfv4" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.332235 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bac7b8cb-0d41-42a2-9044-f0b986ed3503-config-data\") pod \"nova-scheduler-0\" (UID: \"bac7b8cb-0d41-42a2-9044-f0b986ed3503\") " pod="openstack/nova-scheduler-0" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.332303 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac7b8cb-0d41-42a2-9044-f0b986ed3503-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bac7b8cb-0d41-42a2-9044-f0b986ed3503\") " pod="openstack/nova-scheduler-0" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.332336 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfef7c12-3ea7-4444-aa09-f33eb59c5f8c-catalog-content\") pod \"community-operators-rnfv4\" (UID: \"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c\") " pod="openshift-marketplace/community-operators-rnfv4" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.342850 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac7b8cb-0d41-42a2-9044-f0b986ed3503-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bac7b8cb-0d41-42a2-9044-f0b986ed3503\") " pod="openstack/nova-scheduler-0" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.344431 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bac7b8cb-0d41-42a2-9044-f0b986ed3503-config-data\") pod \"nova-scheduler-0\" (UID: \"bac7b8cb-0d41-42a2-9044-f0b986ed3503\") " pod="openstack/nova-scheduler-0" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.357206 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7w78\" (UniqueName: 
\"kubernetes.io/projected/bac7b8cb-0d41-42a2-9044-f0b986ed3503-kube-api-access-h7w78\") pod \"nova-scheduler-0\" (UID: \"bac7b8cb-0d41-42a2-9044-f0b986ed3503\") " pod="openstack/nova-scheduler-0" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.434303 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfef7c12-3ea7-4444-aa09-f33eb59c5f8c-catalog-content\") pod \"community-operators-rnfv4\" (UID: \"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c\") " pod="openshift-marketplace/community-operators-rnfv4" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.434434 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqtnl\" (UniqueName: \"kubernetes.io/projected/dfef7c12-3ea7-4444-aa09-f33eb59c5f8c-kube-api-access-bqtnl\") pod \"community-operators-rnfv4\" (UID: \"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c\") " pod="openshift-marketplace/community-operators-rnfv4" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.434526 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfef7c12-3ea7-4444-aa09-f33eb59c5f8c-utilities\") pod \"community-operators-rnfv4\" (UID: \"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c\") " pod="openshift-marketplace/community-operators-rnfv4" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.435031 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfef7c12-3ea7-4444-aa09-f33eb59c5f8c-utilities\") pod \"community-operators-rnfv4\" (UID: \"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c\") " pod="openshift-marketplace/community-operators-rnfv4" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.440685 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfef7c12-3ea7-4444-aa09-f33eb59c5f8c-catalog-content\") pod \"community-operators-rnfv4\" (UID: \"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c\") " pod="openshift-marketplace/community-operators-rnfv4" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.456762 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqtnl\" (UniqueName: \"kubernetes.io/projected/dfef7c12-3ea7-4444-aa09-f33eb59c5f8c-kube-api-access-bqtnl\") pod \"community-operators-rnfv4\" (UID: \"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c\") " pod="openshift-marketplace/community-operators-rnfv4" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.466715 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.467655 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.551910 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.728877 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rnfv4" Dec 06 09:08:27 crc kubenswrapper[4895]: I1206 09:08:27.876359 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:08:28 crc kubenswrapper[4895]: I1206 09:08:28.064718 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7c353df-e876-42b4-9844-035e06b1f5a2" path="/var/lib/kubelet/pods/d7c353df-e876-42b4-9844-035e06b1f5a2/volumes" Dec 06 09:08:28 crc kubenswrapper[4895]: I1206 09:08:28.110613 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bac7b8cb-0d41-42a2-9044-f0b986ed3503","Type":"ContainerStarted","Data":"5163281b862b691e377d6f4882f859acd28f3df9d4ce7ea812682c64c5c6132e"} Dec 06 09:08:28 crc kubenswrapper[4895]: I1206 09:08:28.112420 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r89gk" event={"ID":"4fec40b4-11ed-408b-9c64-fe7ec5ad2390","Type":"ContainerStarted","Data":"4b81c447ca4d0c87dffa55698dc0ba41975be6e87507da487ee41b27b51a5b43"} Dec 06 09:08:28 crc kubenswrapper[4895]: I1206 09:08:28.112453 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r89gk" event={"ID":"4fec40b4-11ed-408b-9c64-fe7ec5ad2390","Type":"ContainerStarted","Data":"d6b07585f91a0e03cc6ad2246835f16020be6db0f642fae11c7acbfbb034b8c4"} Dec 06 09:08:28 crc kubenswrapper[4895]: I1206 09:08:28.341260 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rnfv4"] Dec 06 09:08:28 crc kubenswrapper[4895]: W1206 09:08:28.343355 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfef7c12_3ea7_4444_aa09_f33eb59c5f8c.slice/crio-3ed89a03ebb6f15159b4127aa162a30a81b0c38042698c7518f600cd4e92a69b WatchSource:0}: Error finding container 3ed89a03ebb6f15159b4127aa162a30a81b0c38042698c7518f600cd4e92a69b: Status 404 returned error can't find the container with id 3ed89a03ebb6f15159b4127aa162a30a81b0c38042698c7518f600cd4e92a69b Dec 06 09:08:29 crc kubenswrapper[4895]: I1206 09:08:29.122677 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnfv4" event={"ID":"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c","Type":"ContainerStarted","Data":"3ed89a03ebb6f15159b4127aa162a30a81b0c38042698c7518f600cd4e92a69b"} Dec 06 09:08:29 crc kubenswrapper[4895]: I1206 09:08:29.124915 4895 generic.go:334] "Generic (PLEG): container finished" podID="155719e6-a3ba-4a2b-bc43-6f884d5b2908" containerID="9b2f96c64839e642db401b0a6532ad618655aaeac639291eedfc5c981a1baec2" exitCode=0 Dec 06 09:08:29 crc kubenswrapper[4895]: I1206 09:08:29.124983 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwtsw" event={"ID":"155719e6-a3ba-4a2b-bc43-6f884d5b2908","Type":"ContainerDied","Data":"9b2f96c64839e642db401b0a6532ad618655aaeac639291eedfc5c981a1baec2"} Dec 06 09:08:29 crc kubenswrapper[4895]: I1206 09:08:29.126661 4895 generic.go:334] "Generic (PLEG): container finished" podID="4fec40b4-11ed-408b-9c64-fe7ec5ad2390" containerID="4b81c447ca4d0c87dffa55698dc0ba41975be6e87507da487ee41b27b51a5b43" exitCode=0 Dec 06 09:08:29 crc kubenswrapper[4895]: I1206 09:08:29.126704 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r89gk" 
event={"ID":"4fec40b4-11ed-408b-9c64-fe7ec5ad2390","Type":"ContainerDied","Data":"4b81c447ca4d0c87dffa55698dc0ba41975be6e87507da487ee41b27b51a5b43"} Dec 06 09:08:29 crc kubenswrapper[4895]: I1206 09:08:29.129415 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bac7b8cb-0d41-42a2-9044-f0b986ed3503","Type":"ContainerStarted","Data":"d520443f2c08ee1165b6cadddd32557a9a131054b57ce3f085b5ee6b79e3e65e"} Dec 06 09:08:29 crc kubenswrapper[4895]: I1206 09:08:29.273134 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:08:29 crc kubenswrapper[4895]: I1206 09:08:29.284981 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:08:30 crc kubenswrapper[4895]: I1206 09:08:30.145976 4895 generic.go:334] "Generic (PLEG): container finished" podID="dfef7c12-3ea7-4444-aa09-f33eb59c5f8c" containerID="4080ff2b7697192bdd1bfb71118bd45b501ada78c469c61112bd24b00ae3ee46" exitCode=0 Dec 06 09:08:30 crc kubenswrapper[4895]: I1206 09:08:30.146173 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnfv4" event={"ID":"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c","Type":"ContainerDied","Data":"4080ff2b7697192bdd1bfb71118bd45b501ada78c469c61112bd24b00ae3ee46"} Dec 06 09:08:30 crc kubenswrapper[4895]: I1206 09:08:30.151841 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwtsw" event={"ID":"155719e6-a3ba-4a2b-bc43-6f884d5b2908","Type":"ContainerStarted","Data":"628fc069c5273063ff61b5eb109bb68cfaae9d80b25e32503c90ac5adf1a4df6"} Dec 06 09:08:30 crc kubenswrapper[4895]: I1206 09:08:30.155858 4895 generic.go:334] "Generic (PLEG): container finished" podID="4fec40b4-11ed-408b-9c64-fe7ec5ad2390" containerID="8d23247a22d8c047c4a4c2ac84ec217c7523de8ffb8d884099ea7cb7b418904d" exitCode=0 Dec 06 09:08:30 crc kubenswrapper[4895]: I1206 09:08:30.156014 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r89gk" event={"ID":"4fec40b4-11ed-408b-9c64-fe7ec5ad2390","Type":"ContainerDied","Data":"8d23247a22d8c047c4a4c2ac84ec217c7523de8ffb8d884099ea7cb7b418904d"} Dec 06 09:08:30 crc kubenswrapper[4895]: I1206 09:08:30.176403 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 06 09:08:30 crc kubenswrapper[4895]: I1206 09:08:30.192355 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rwtsw" podStartSLOduration=3.735186101 podStartE2EDuration="8.192333899s" podCreationTimestamp="2025-12-06 09:08:22 +0000 UTC" firstStartedPulling="2025-12-06 09:08:25.073791752 +0000 UTC m=+7867.475180622" lastFinishedPulling="2025-12-06 09:08:29.53093955 +0000 UTC m=+7871.932328420" observedRunningTime="2025-12-06 09:08:30.187452589 +0000 UTC m=+7872.588841459" watchObservedRunningTime="2025-12-06 09:08:30.192333899 +0000 UTC m=+7872.593722769" Dec 06 09:08:30 crc kubenswrapper[4895]: I1206 09:08:30.232099 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.232081568 podStartE2EDuration="3.232081568s" podCreationTimestamp="2025-12-06 09:08:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:08:30.223617269 +0000 UTC 
m=+7872.625006159" watchObservedRunningTime="2025-12-06 09:08:30.232081568 +0000 UTC m=+7872.633470438" Dec 06 09:08:31 crc kubenswrapper[4895]: I1206 09:08:31.171311 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r89gk" event={"ID":"4fec40b4-11ed-408b-9c64-fe7ec5ad2390","Type":"ContainerStarted","Data":"d1e71b081a16055399c28766db0e27ea3deea62c9d849fe5efe865b32c84f821"} Dec 06 09:08:31 crc kubenswrapper[4895]: I1206 09:08:31.173613 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnfv4" event={"ID":"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c","Type":"ContainerStarted","Data":"f77e2c7ae408111616bed7f338f8209160ad449b2218777464ccb2fbe678d444"} Dec 06 09:08:31 crc kubenswrapper[4895]: I1206 09:08:31.198953 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r89gk" podStartSLOduration=3.5598902089999998 podStartE2EDuration="5.198929833s" podCreationTimestamp="2025-12-06 09:08:26 +0000 UTC" firstStartedPulling="2025-12-06 09:08:29.128773256 +0000 UTC m=+7871.530162116" lastFinishedPulling="2025-12-06 09:08:30.76781286 +0000 UTC m=+7873.169201740" observedRunningTime="2025-12-06 09:08:31.192194092 +0000 UTC m=+7873.593582972" watchObservedRunningTime="2025-12-06 09:08:31.198929833 +0000 UTC m=+7873.600318703" Dec 06 09:08:31 crc kubenswrapper[4895]: I1206 09:08:31.410813 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 06 09:08:32 crc kubenswrapper[4895]: I1206 09:08:32.184927 4895 generic.go:334] "Generic (PLEG): container finished" podID="dfef7c12-3ea7-4444-aa09-f33eb59c5f8c" containerID="f77e2c7ae408111616bed7f338f8209160ad449b2218777464ccb2fbe678d444" exitCode=0 Dec 06 09:08:32 crc kubenswrapper[4895]: I1206 09:08:32.184995 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnfv4" event={"ID":"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c","Type":"ContainerDied","Data":"f77e2c7ae408111616bed7f338f8209160ad449b2218777464ccb2fbe678d444"} Dec 06 09:08:32 crc kubenswrapper[4895]: I1206 09:08:32.464082 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 09:08:32 crc kubenswrapper[4895]: I1206 09:08:32.464144 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 09:08:32 crc kubenswrapper[4895]: I1206 09:08:32.466545 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 09:08:32 crc kubenswrapper[4895]: I1206 09:08:32.466578 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 09:08:32 crc kubenswrapper[4895]: I1206 09:08:32.553661 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 06 09:08:33 crc kubenswrapper[4895]: I1206 09:08:33.198820 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnfv4" event={"ID":"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c","Type":"ContainerStarted","Data":"034f4c3e663df3f1221134154a645c84fde56b072e14945f8f7636eec8d71ad1"} Dec 06 09:08:33 crc kubenswrapper[4895]: I1206 09:08:33.200343 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rwtsw" Dec 06 09:08:33 crc kubenswrapper[4895]: I1206 
09:08:33.200406 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rwtsw" Dec 06 09:08:33 crc kubenswrapper[4895]: I1206 09:08:33.225451 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rnfv4" podStartSLOduration=3.602079308 podStartE2EDuration="6.225433398s" podCreationTimestamp="2025-12-06 09:08:27 +0000 UTC" firstStartedPulling="2025-12-06 09:08:30.148178503 +0000 UTC m=+7872.549567383" lastFinishedPulling="2025-12-06 09:08:32.771532603 +0000 UTC m=+7875.172921473" observedRunningTime="2025-12-06 09:08:33.219157459 +0000 UTC m=+7875.620546329" watchObservedRunningTime="2025-12-06 09:08:33.225433398 +0000 UTC m=+7875.626822258" Dec 06 09:08:33 crc kubenswrapper[4895]: I1206 09:08:33.571767 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 06 09:08:33 crc kubenswrapper[4895]: I1206 09:08:33.632883 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e7bb05f3-943f-467a-98d8-75b754bd0de6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.99:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 09:08:33 crc kubenswrapper[4895]: I1206 09:08:33.632917 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1a1405e0-ea5d-4617-a1f0-7cff7e9abee6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.98:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 09:08:33 crc kubenswrapper[4895]: I1206 09:08:33.633290 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1a1405e0-ea5d-4617-a1f0-7cff7e9abee6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.98:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 09:08:33 crc kubenswrapper[4895]: I1206 09:08:33.633329 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e7bb05f3-943f-467a-98d8-75b754bd0de6" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.99:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 09:08:34 crc kubenswrapper[4895]: I1206 09:08:34.259407 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rwtsw" podUID="155719e6-a3ba-4a2b-bc43-6f884d5b2908" containerName="registry-server" probeResult="failure" output=< Dec 06 09:08:34 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Dec 06 09:08:34 crc kubenswrapper[4895]: > Dec 06 09:08:36 crc kubenswrapper[4895]: I1206 09:08:36.653362 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r89gk" Dec 06 09:08:36 crc kubenswrapper[4895]: I1206 09:08:36.654066 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r89gk" Dec 06 09:08:36 crc kubenswrapper[4895]: I1206 09:08:36.719297 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r89gk" Dec 06 09:08:37 crc kubenswrapper[4895]: I1206 09:08:37.302439 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-r89gk" Dec 06 09:08:37 crc kubenswrapper[4895]: I1206 09:08:37.552872 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 06 09:08:37 crc kubenswrapper[4895]: I1206 09:08:37.580919 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 06 09:08:37 crc kubenswrapper[4895]: I1206 09:08:37.731791 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rnfv4" Dec 06 09:08:37 crc kubenswrapper[4895]: I1206 09:08:37.731849 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rnfv4" Dec 06 09:08:37 crc kubenswrapper[4895]: I1206 09:08:37.774437 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rnfv4" Dec 06 09:08:38 crc kubenswrapper[4895]: I1206 09:08:38.295338 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 06 09:08:38 crc kubenswrapper[4895]: I1206 09:08:38.305612 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rnfv4" Dec 06 09:08:38 crc kubenswrapper[4895]: I1206 09:08:38.635003 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r89gk"] Dec 06 09:08:39 crc kubenswrapper[4895]: I1206 09:08:39.257929 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r89gk" podUID="4fec40b4-11ed-408b-9c64-fe7ec5ad2390" containerName="registry-server" containerID="cri-o://d1e71b081a16055399c28766db0e27ea3deea62c9d849fe5efe865b32c84f821" gracePeriod=2 Dec 06 09:08:39 crc kubenswrapper[4895]: I1206 09:08:39.713072 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r89gk" Dec 06 09:08:39 crc kubenswrapper[4895]: I1206 09:08:39.756032 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fec40b4-11ed-408b-9c64-fe7ec5ad2390-utilities\") pod \"4fec40b4-11ed-408b-9c64-fe7ec5ad2390\" (UID: \"4fec40b4-11ed-408b-9c64-fe7ec5ad2390\") " Dec 06 09:08:39 crc kubenswrapper[4895]: I1206 09:08:39.756189 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fec40b4-11ed-408b-9c64-fe7ec5ad2390-catalog-content\") pod \"4fec40b4-11ed-408b-9c64-fe7ec5ad2390\" (UID: \"4fec40b4-11ed-408b-9c64-fe7ec5ad2390\") " Dec 06 09:08:39 crc kubenswrapper[4895]: I1206 09:08:39.757135 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fec40b4-11ed-408b-9c64-fe7ec5ad2390-utilities" (OuterVolumeSpecName: "utilities") pod "4fec40b4-11ed-408b-9c64-fe7ec5ad2390" (UID: "4fec40b4-11ed-408b-9c64-fe7ec5ad2390"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:08:39 crc kubenswrapper[4895]: I1206 09:08:39.766676 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm2wc\" (UniqueName: \"kubernetes.io/projected/4fec40b4-11ed-408b-9c64-fe7ec5ad2390-kube-api-access-wm2wc\") pod \"4fec40b4-11ed-408b-9c64-fe7ec5ad2390\" (UID: \"4fec40b4-11ed-408b-9c64-fe7ec5ad2390\") " Dec 06 09:08:39 crc kubenswrapper[4895]: I1206 09:08:39.767399 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fec40b4-11ed-408b-9c64-fe7ec5ad2390-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:39 crc kubenswrapper[4895]: I1206 09:08:39.776230 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fec40b4-11ed-408b-9c64-fe7ec5ad2390-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fec40b4-11ed-408b-9c64-fe7ec5ad2390" (UID: "4fec40b4-11ed-408b-9c64-fe7ec5ad2390"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:08:39 crc kubenswrapper[4895]: I1206 09:08:39.777749 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fec40b4-11ed-408b-9c64-fe7ec5ad2390-kube-api-access-wm2wc" (OuterVolumeSpecName: "kube-api-access-wm2wc") pod "4fec40b4-11ed-408b-9c64-fe7ec5ad2390" (UID: "4fec40b4-11ed-408b-9c64-fe7ec5ad2390"). InnerVolumeSpecName "kube-api-access-wm2wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:08:39 crc kubenswrapper[4895]: I1206 09:08:39.868731 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fec40b4-11ed-408b-9c64-fe7ec5ad2390-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:39 crc kubenswrapper[4895]: I1206 09:08:39.868770 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm2wc\" (UniqueName: \"kubernetes.io/projected/4fec40b4-11ed-408b-9c64-fe7ec5ad2390-kube-api-access-wm2wc\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:40 crc kubenswrapper[4895]: I1206 09:08:40.286563 4895 generic.go:334] "Generic (PLEG): container finished" podID="4fec40b4-11ed-408b-9c64-fe7ec5ad2390" containerID="d1e71b081a16055399c28766db0e27ea3deea62c9d849fe5efe865b32c84f821" exitCode=0 Dec 06 09:08:40 crc kubenswrapper[4895]: I1206 09:08:40.286595 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r89gk" Dec 06 09:08:40 crc kubenswrapper[4895]: I1206 09:08:40.286613 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r89gk" event={"ID":"4fec40b4-11ed-408b-9c64-fe7ec5ad2390","Type":"ContainerDied","Data":"d1e71b081a16055399c28766db0e27ea3deea62c9d849fe5efe865b32c84f821"} Dec 06 09:08:40 crc kubenswrapper[4895]: I1206 09:08:40.286764 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r89gk" event={"ID":"4fec40b4-11ed-408b-9c64-fe7ec5ad2390","Type":"ContainerDied","Data":"d6b07585f91a0e03cc6ad2246835f16020be6db0f642fae11c7acbfbb034b8c4"} Dec 06 09:08:40 crc kubenswrapper[4895]: I1206 09:08:40.286782 4895 scope.go:117] "RemoveContainer" containerID="d1e71b081a16055399c28766db0e27ea3deea62c9d849fe5efe865b32c84f821" Dec 06 09:08:40 crc kubenswrapper[4895]: I1206 09:08:40.313996 4895 scope.go:117] "RemoveContainer" containerID="8d23247a22d8c047c4a4c2ac84ec217c7523de8ffb8d884099ea7cb7b418904d" Dec 06 09:08:40 crc kubenswrapper[4895]: I1206 09:08:40.315460 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r89gk"] Dec 06 09:08:40 crc kubenswrapper[4895]: I1206 09:08:40.325725 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r89gk"] Dec 06 09:08:40 crc kubenswrapper[4895]: I1206 09:08:40.351888 4895 scope.go:117] "RemoveContainer" containerID="4b81c447ca4d0c87dffa55698dc0ba41975be6e87507da487ee41b27b51a5b43" Dec 06 09:08:40 crc kubenswrapper[4895]: I1206 09:08:40.387602 4895 scope.go:117] "RemoveContainer" containerID="d1e71b081a16055399c28766db0e27ea3deea62c9d849fe5efe865b32c84f821" Dec 06 09:08:40 crc kubenswrapper[4895]: E1206 09:08:40.388326 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1e71b081a16055399c28766db0e27ea3deea62c9d849fe5efe865b32c84f821\": container with ID starting with d1e71b081a16055399c28766db0e27ea3deea62c9d849fe5efe865b32c84f821 not found: ID does not exist" containerID="d1e71b081a16055399c28766db0e27ea3deea62c9d849fe5efe865b32c84f821" Dec 06 09:08:40 crc kubenswrapper[4895]: I1206 09:08:40.388454 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1e71b081a16055399c28766db0e27ea3deea62c9d849fe5efe865b32c84f821"} err="failed to get container status \"d1e71b081a16055399c28766db0e27ea3deea62c9d849fe5efe865b32c84f821\": rpc error: code = NotFound desc = could not find container \"d1e71b081a16055399c28766db0e27ea3deea62c9d849fe5efe865b32c84f821\": container with ID starting with d1e71b081a16055399c28766db0e27ea3deea62c9d849fe5efe865b32c84f821 not found: ID does not exist" Dec 06 09:08:40 crc kubenswrapper[4895]: I1206 09:08:40.388603 4895 scope.go:117] "RemoveContainer" containerID="8d23247a22d8c047c4a4c2ac84ec217c7523de8ffb8d884099ea7cb7b418904d" Dec 06 09:08:40 crc kubenswrapper[4895]: E1206 09:08:40.389054 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d23247a22d8c047c4a4c2ac84ec217c7523de8ffb8d884099ea7cb7b418904d\": container with ID starting with 8d23247a22d8c047c4a4c2ac84ec217c7523de8ffb8d884099ea7cb7b418904d not found: ID does not exist" containerID="8d23247a22d8c047c4a4c2ac84ec217c7523de8ffb8d884099ea7cb7b418904d" Dec 06 09:08:40 crc kubenswrapper[4895]: I1206 09:08:40.389095 4895 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d23247a22d8c047c4a4c2ac84ec217c7523de8ffb8d884099ea7cb7b418904d"} err="failed to get container status \"8d23247a22d8c047c4a4c2ac84ec217c7523de8ffb8d884099ea7cb7b418904d\": rpc error: code = NotFound desc = could not find container \"8d23247a22d8c047c4a4c2ac84ec217c7523de8ffb8d884099ea7cb7b418904d\": container with ID starting with 8d23247a22d8c047c4a4c2ac84ec217c7523de8ffb8d884099ea7cb7b418904d not found: ID does not exist" Dec 06 09:08:40 crc kubenswrapper[4895]: I1206 09:08:40.389123 4895 scope.go:117] "RemoveContainer" containerID="4b81c447ca4d0c87dffa55698dc0ba41975be6e87507da487ee41b27b51a5b43" Dec 06 09:08:40 crc kubenswrapper[4895]: E1206 09:08:40.389515 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b81c447ca4d0c87dffa55698dc0ba41975be6e87507da487ee41b27b51a5b43\": container with ID starting with 4b81c447ca4d0c87dffa55698dc0ba41975be6e87507da487ee41b27b51a5b43 not found: ID does not exist" containerID="4b81c447ca4d0c87dffa55698dc0ba41975be6e87507da487ee41b27b51a5b43" Dec 06 09:08:40 crc kubenswrapper[4895]: I1206 09:08:40.389629 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b81c447ca4d0c87dffa55698dc0ba41975be6e87507da487ee41b27b51a5b43"} err="failed to get container status \"4b81c447ca4d0c87dffa55698dc0ba41975be6e87507da487ee41b27b51a5b43\": rpc error: code = NotFound desc = could not find container \"4b81c447ca4d0c87dffa55698dc0ba41975be6e87507da487ee41b27b51a5b43\": container with ID starting with 4b81c447ca4d0c87dffa55698dc0ba41975be6e87507da487ee41b27b51a5b43 not found: ID does not exist" Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.065994 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fec40b4-11ed-408b-9c64-fe7ec5ad2390" path="/var/lib/kubelet/pods/4fec40b4-11ed-408b-9c64-fe7ec5ad2390/volumes" Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.528914 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.530306 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.533756 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.536778 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.543842 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.579236 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.579548 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.861148 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 09:08:42 crc kubenswrapper[4895]: E1206 09:08:42.861562 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fec40b4-11ed-408b-9c64-fe7ec5ad2390" containerName="extract-utilities" Dec 06 09:08:42 crc 
kubenswrapper[4895]: I1206 09:08:42.861578 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fec40b4-11ed-408b-9c64-fe7ec5ad2390" containerName="extract-utilities" Dec 06 09:08:42 crc kubenswrapper[4895]: E1206 09:08:42.861610 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fec40b4-11ed-408b-9c64-fe7ec5ad2390" containerName="extract-content" Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.861616 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fec40b4-11ed-408b-9c64-fe7ec5ad2390" containerName="extract-content" Dec 06 09:08:42 crc kubenswrapper[4895]: E1206 09:08:42.861628 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fec40b4-11ed-408b-9c64-fe7ec5ad2390" containerName="registry-server" Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.861634 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fec40b4-11ed-408b-9c64-fe7ec5ad2390" containerName="registry-server" Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.861800 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fec40b4-11ed-408b-9c64-fe7ec5ad2390" containerName="registry-server" Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.862703 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.864761 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.876733 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.925605 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-scripts\") pod \"cinder-scheduler-0\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.925801 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.925970 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.926231 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-config-data\") pod \"cinder-scheduler-0\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.926266 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:42 crc kubenswrapper[4895]: I1206 09:08:42.926329 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfv6w\" (UniqueName: \"kubernetes.io/projected/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-kube-api-access-qfv6w\") pod \"cinder-scheduler-0\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.027231 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rnfv4"] Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.027515 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rnfv4" podUID="dfef7c12-3ea7-4444-aa09-f33eb59c5f8c" containerName="registry-server" containerID="cri-o://034f4c3e663df3f1221134154a645c84fde56b072e14945f8f7636eec8d71ad1" gracePeriod=2 Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.028272 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.028356 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.028454 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-config-data\") pod \"cinder-scheduler-0\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.028579 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.028629 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfv6w\" (UniqueName: \"kubernetes.io/projected/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-kube-api-access-qfv6w\") pod \"cinder-scheduler-0\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.028700 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-scripts\") pod \"cinder-scheduler-0\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.029878 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.034975 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-scripts\") pod \"cinder-scheduler-0\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.035394 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.035696 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-config-data\") pod \"cinder-scheduler-0\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.037391 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.053461 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfv6w\" (UniqueName: \"kubernetes.io/projected/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-kube-api-access-qfv6w\") pod \"cinder-scheduler-0\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.188560 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.331758 4895 generic.go:334] "Generic (PLEG): container finished" podID="dfef7c12-3ea7-4444-aa09-f33eb59c5f8c" containerID="034f4c3e663df3f1221134154a645c84fde56b072e14945f8f7636eec8d71ad1" exitCode=0 Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.331944 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnfv4" event={"ID":"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c","Type":"ContainerDied","Data":"034f4c3e663df3f1221134154a645c84fde56b072e14945f8f7636eec8d71ad1"} Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.332593 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.338745 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.342113 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.511642 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rnfv4" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.637872 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqtnl\" (UniqueName: \"kubernetes.io/projected/dfef7c12-3ea7-4444-aa09-f33eb59c5f8c-kube-api-access-bqtnl\") pod \"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c\" (UID: \"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c\") " Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.638829 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfef7c12-3ea7-4444-aa09-f33eb59c5f8c-catalog-content\") pod \"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c\" (UID: \"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c\") " Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.638976 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfef7c12-3ea7-4444-aa09-f33eb59c5f8c-utilities\") pod \"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c\" (UID: \"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c\") " Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.639885 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfef7c12-3ea7-4444-aa09-f33eb59c5f8c-utilities" (OuterVolumeSpecName: "utilities") pod "dfef7c12-3ea7-4444-aa09-f33eb59c5f8c" (UID: "dfef7c12-3ea7-4444-aa09-f33eb59c5f8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.642819 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfef7c12-3ea7-4444-aa09-f33eb59c5f8c-kube-api-access-bqtnl" (OuterVolumeSpecName: "kube-api-access-bqtnl") pod "dfef7c12-3ea7-4444-aa09-f33eb59c5f8c" (UID: "dfef7c12-3ea7-4444-aa09-f33eb59c5f8c"). InnerVolumeSpecName "kube-api-access-bqtnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.689659 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfef7c12-3ea7-4444-aa09-f33eb59c5f8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dfef7c12-3ea7-4444-aa09-f33eb59c5f8c" (UID: "dfef7c12-3ea7-4444-aa09-f33eb59c5f8c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:08:43 crc kubenswrapper[4895]: W1206 09:08:43.708961 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb00d9ff_7085_4d77_9ba8_f75d234c48ca.slice/crio-89827e843998032048f9758b4255d09349b589efad8603be38479e9ffb5bde0b WatchSource:0}: Error finding container 89827e843998032048f9758b4255d09349b589efad8603be38479e9ffb5bde0b: Status 404 returned error can't find the container with id 89827e843998032048f9758b4255d09349b589efad8603be38479e9ffb5bde0b Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.710638 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.740874 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqtnl\" (UniqueName: \"kubernetes.io/projected/dfef7c12-3ea7-4444-aa09-f33eb59c5f8c-kube-api-access-bqtnl\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.740910 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfef7c12-3ea7-4444-aa09-f33eb59c5f8c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:43 crc kubenswrapper[4895]: I1206 09:08:43.740921 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfef7c12-3ea7-4444-aa09-f33eb59c5f8c-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.262310 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rwtsw" podUID="155719e6-a3ba-4a2b-bc43-6f884d5b2908" containerName="registry-server" probeResult="failure" output=< Dec 06 09:08:44 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Dec 06 09:08:44 crc kubenswrapper[4895]: > Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.343331 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cb00d9ff-7085-4d77-9ba8-f75d234c48ca","Type":"ContainerStarted","Data":"89827e843998032048f9758b4255d09349b589efad8603be38479e9ffb5bde0b"} Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.356188 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnfv4" event={"ID":"dfef7c12-3ea7-4444-aa09-f33eb59c5f8c","Type":"ContainerDied","Data":"3ed89a03ebb6f15159b4127aa162a30a81b0c38042698c7518f600cd4e92a69b"} Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.356259 4895 scope.go:117] "RemoveContainer" containerID="034f4c3e663df3f1221134154a645c84fde56b072e14945f8f7636eec8d71ad1" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.356465 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rnfv4" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.364340 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.364585 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="5f2dc123-e8e2-41b9-a296-4e41a3f02f6a" containerName="cinder-api-log" containerID="cri-o://47645399dfa4ed6c177ec5043c5f56add65db8bf274a90d72fedd1347c52f651" gracePeriod=30 Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.364707 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="5f2dc123-e8e2-41b9-a296-4e41a3f02f6a" containerName="cinder-api" containerID="cri-o://9a4f6b91f465937b8351ce5908814fa56fe8ab7e65c8e82ad1918378ae816195" gracePeriod=30 Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.392376 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rnfv4"] Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.393729 4895 scope.go:117] "RemoveContainer" containerID="f77e2c7ae408111616bed7f338f8209160ad449b2218777464ccb2fbe678d444" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.413085 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rnfv4"] Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.426056 4895 scope.go:117] "RemoveContainer" containerID="4080ff2b7697192bdd1bfb71118bd45b501ada78c469c61112bd24b00ae3ee46" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.894713 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Dec 06 09:08:44 crc kubenswrapper[4895]: E1206 09:08:44.895243 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfef7c12-3ea7-4444-aa09-f33eb59c5f8c" containerName="extract-content" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.895259 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfef7c12-3ea7-4444-aa09-f33eb59c5f8c" containerName="extract-content" Dec 06 09:08:44 crc kubenswrapper[4895]: E1206 09:08:44.895316 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfef7c12-3ea7-4444-aa09-f33eb59c5f8c" containerName="registry-server" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.895323 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfef7c12-3ea7-4444-aa09-f33eb59c5f8c" containerName="registry-server" Dec 06 09:08:44 crc kubenswrapper[4895]: E1206 09:08:44.895333 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfef7c12-3ea7-4444-aa09-f33eb59c5f8c" containerName="extract-utilities" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.895340 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfef7c12-3ea7-4444-aa09-f33eb59c5f8c" containerName="extract-utilities" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.895748 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfef7c12-3ea7-4444-aa09-f33eb59c5f8c" containerName="registry-server" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.897183 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.905038 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.953052 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.966661 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.968540 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.968587 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mszb\" (UniqueName: \"kubernetes.io/projected/d209ea91-858a-4a98-8d51-743b79811346-kube-api-access-7mszb\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.968624 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d209ea91-858a-4a98-8d51-743b79811346-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.968651 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d209ea91-858a-4a98-8d51-743b79811346-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.968704 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.968810 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.968873 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-sys\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 
06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.968990 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.969048 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d209ea91-858a-4a98-8d51-743b79811346-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.969065 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d209ea91-858a-4a98-8d51-743b79811346-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.969087 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d209ea91-858a-4a98-8d51-743b79811346-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.969123 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.969227 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.969250 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-run\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:44 crc kubenswrapper[4895]: I1206 09:08:44.969276 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-dev\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.072875 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-sys\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.072990 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.073036 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d209ea91-858a-4a98-8d51-743b79811346-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.073050 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d209ea91-858a-4a98-8d51-743b79811346-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.073075 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d209ea91-858a-4a98-8d51-743b79811346-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.073104 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.073173 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.073196 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-run\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.073219 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-dev\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.073249 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.073264 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-var-locks-brick\") pod \"cinder-volume-volume1-0\" 
(UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.073282 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mszb\" (UniqueName: \"kubernetes.io/projected/d209ea91-858a-4a98-8d51-743b79811346-kube-api-access-7mszb\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.073319 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d209ea91-858a-4a98-8d51-743b79811346-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.073339 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d209ea91-858a-4a98-8d51-743b79811346-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.073369 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.073388 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.073483 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.073521 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-sys\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.073540 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.080890 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.081681 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.082160 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.082387 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.082446 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-run\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.082871 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.085031 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d209ea91-858a-4a98-8d51-743b79811346-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.085078 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d209ea91-858a-4a98-8d51-743b79811346-dev\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.085360 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d209ea91-858a-4a98-8d51-743b79811346-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.085905 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d209ea91-858a-4a98-8d51-743b79811346-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.090324 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d209ea91-858a-4a98-8d51-743b79811346-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0" Dec 06 09:08:45 crc kubenswrapper[4895]: 
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.090428 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d209ea91-858a-4a98-8d51-743b79811346-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.111063 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mszb\" (UniqueName: \"kubernetes.io/projected/d209ea91-858a-4a98-8d51-743b79811346-kube-api-access-7mszb\") pod \"cinder-volume-volume1-0\" (UID: \"d209ea91-858a-4a98-8d51-743b79811346\") " pod="openstack/cinder-volume-volume1-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.281534 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.368975 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cb00d9ff-7085-4d77-9ba8-f75d234c48ca","Type":"ContainerStarted","Data":"d10f9691bf8ad12edc767b4cd628895f9915617b7d9c91e31581e48a2b71c5a7"}
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.369377 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cb00d9ff-7085-4d77-9ba8-f75d234c48ca","Type":"ContainerStarted","Data":"df1445c998deb5c178e2f0ae0088268d53bf06fce7c4cca3ef4636f7a96ef400"}
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.372240 4895 generic.go:334] "Generic (PLEG): container finished" podID="5f2dc123-e8e2-41b9-a296-4e41a3f02f6a" containerID="47645399dfa4ed6c177ec5043c5f56add65db8bf274a90d72fedd1347c52f651" exitCode=143
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.372331 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a","Type":"ContainerDied","Data":"47645399dfa4ed6c177ec5043c5f56add65db8bf274a90d72fedd1347c52f651"}
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.390839 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.102004938 podStartE2EDuration="3.390817757s" podCreationTimestamp="2025-12-06 09:08:42 +0000 UTC" firstStartedPulling="2025-12-06 09:08:43.711380928 +0000 UTC m=+7886.112769798" lastFinishedPulling="2025-12-06 09:08:44.000193717 +0000 UTC m=+7886.401582617" observedRunningTime="2025-12-06 09:08:45.383816259 +0000 UTC m=+7887.785205149" watchObservedRunningTime="2025-12-06 09:08:45.390817757 +0000 UTC m=+7887.792206628"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.565953 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"]
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.568036 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.570150 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.588268 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.683896 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-etc-nvme\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.683947 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-config-data\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.683976 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-sys\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.683999 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-dev\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.684191 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.684239 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-ceph\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.684280 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.684323 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-run\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.684347 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49zt2\" (UniqueName: \"kubernetes.io/projected/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-kube-api-access-49zt2\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.684484 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-scripts\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.684606 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.684747 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.684820 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.684915 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-lib-modules\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.685084 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-config-data-custom\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.685207 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.786541 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.786583 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-ceph\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.786635 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.786690 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.786799 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.786824 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-run\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.786715 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-run\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.786874 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49zt2\" (UniqueName: \"kubernetes.io/projected/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-kube-api-access-49zt2\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.786898 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-scripts\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.786928 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.787786 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.787820 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.787846 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.787874 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-lib-modules\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.787846 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-lib-modules\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.787941 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.788043 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-config-data-custom\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.788126 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.788221 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.788291 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-etc-nvme\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.788318 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-config-data\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.788353 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-sys\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.788382 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-dev\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.788510 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-dev\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.788540 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-sys\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.788571 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-etc-nvme\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.792787 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.793371 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-config-data\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.793879 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-config-data-custom\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.799229 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-ceph\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.803128 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-scripts\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.808659 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49zt2\" (UniqueName: \"kubernetes.io/projected/4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786-kube-api-access-49zt2\") pod \"cinder-backup-0\" (UID: \"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786\") " pod="openstack/cinder-backup-0"
Dec 06 09:08:45 crc kubenswrapper[4895]: W1206 09:08:45.888311 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd209ea91_858a_4a98_8d51_743b79811346.slice/crio-5c414d8109b673b8e6ac09aece43ae77a065c6c406a13076d868ce77b865a9eb WatchSource:0}: Error finding container 5c414d8109b673b8e6ac09aece43ae77a065c6c406a13076d868ce77b865a9eb: Status 404 returned error can't find the container with id 5c414d8109b673b8e6ac09aece43ae77a065c6c406a13076d868ce77b865a9eb
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.888743 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Dec 06 09:08:45 crc kubenswrapper[4895]: I1206 09:08:45.893944 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Dec 06 09:08:46 crc kubenswrapper[4895]: I1206 09:08:46.077137 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfef7c12-3ea7-4444-aa09-f33eb59c5f8c" path="/var/lib/kubelet/pods/dfef7c12-3ea7-4444-aa09-f33eb59c5f8c/volumes"
Dec 06 09:08:46 crc kubenswrapper[4895]: I1206 09:08:46.383109 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"d209ea91-858a-4a98-8d51-743b79811346","Type":"ContainerStarted","Data":"5c414d8109b673b8e6ac09aece43ae77a065c6c406a13076d868ce77b865a9eb"}
Dec 06 09:08:46 crc kubenswrapper[4895]: I1206 09:08:46.472803 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Dec 06 09:08:46 crc kubenswrapper[4895]: W1206 09:08:46.476643 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e7ecb7a_ac0e_4762_abb8_9da9d6bcf786.slice/crio-c0c5525e565435ffc1f51babd1ca6879923c76f7f4f97f8abd4a9f5cc838445f WatchSource:0}: Error finding container c0c5525e565435ffc1f51babd1ca6879923c76f7f4f97f8abd4a9f5cc838445f: Status 404 returned error can't find the container with id c0c5525e565435ffc1f51babd1ca6879923c76f7f4f97f8abd4a9f5cc838445f
Dec 06 09:08:47 crc kubenswrapper[4895]: I1206 09:08:47.407532 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"d209ea91-858a-4a98-8d51-743b79811346","Type":"ContainerStarted","Data":"3b0c4ce4b432118aec05d7017d17ef661f09934e6311845dec3c4594d2e5b227"}
Dec 06 09:08:47 crc kubenswrapper[4895]: I1206 09:08:47.408227 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"d209ea91-858a-4a98-8d51-743b79811346","Type":"ContainerStarted","Data":"c4eceaaf49dc22db287c32bc71674c339f79732563cdb877562fa88b5656474d"}
Dec 06 09:08:47 crc kubenswrapper[4895]: I1206 09:08:47.411541 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786","Type":"ContainerStarted","Data":"eb11d9d8770e772a87e839c60fbfdcb6754ceca78b15e69e0da4a53e17f17344"}
Dec 06 09:08:47 crc kubenswrapper[4895]: I1206 09:08:47.411582 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786","Type":"ContainerStarted","Data":"569b68492e6f416655996dd743e2a9cfb539ad227b22aa589c90f0ab2e48c51a"}
Dec 06 09:08:47 crc kubenswrapper[4895]: I1206 09:08:47.411596 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786","Type":"ContainerStarted","Data":"c0c5525e565435ffc1f51babd1ca6879923c76f7f4f97f8abd4a9f5cc838445f"}
Dec 06 09:08:47 crc kubenswrapper[4895]: I1206 09:08:47.445126 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.066828846 podStartE2EDuration="3.445107259s" podCreationTimestamp="2025-12-06 09:08:44 +0000 UTC" firstStartedPulling="2025-12-06 09:08:45.890651266 +0000 UTC m=+7888.292040136" lastFinishedPulling="2025-12-06 09:08:46.268929679 +0000 UTC m=+7888.670318549" observedRunningTime="2025-12-06 09:08:47.44105703 +0000 UTC m=+7889.842445910" watchObservedRunningTime="2025-12-06 09:08:47.445107259 +0000 UTC m=+7889.846496129"
Dec 06 09:08:47 crc kubenswrapper[4895]: I1206 09:08:47.472585 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.143507756 podStartE2EDuration="2.472559097s" podCreationTimestamp="2025-12-06 09:08:45 +0000 UTC" firstStartedPulling="2025-12-06 09:08:46.479103305 +0000 UTC m=+7888.880492175" lastFinishedPulling="2025-12-06 09:08:46.808154646 +0000 UTC m=+7889.209543516" observedRunningTime="2025-12-06 09:08:47.462794484 +0000 UTC m=+7889.864183354" watchObservedRunningTime="2025-12-06 09:08:47.472559097 +0000 UTC m=+7889.873947997"
Dec 06 09:08:47 crc kubenswrapper[4895]: I1206 09:08:47.516394 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="5f2dc123-e8e2-41b9-a296-4e41a3f02f6a" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.1.95:8776/healthcheck\": read tcp 10.217.0.2:34184->10.217.1.95:8776: read: connection reset by peer"
Dec 06 09:08:47 crc kubenswrapper[4895]: I1206 09:08:47.962958 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.031608 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-config-data\") pod \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") "
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.031699 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-logs\") pod \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") "
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.031736 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-combined-ca-bundle\") pod \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") "
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.031780 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2sjl\" (UniqueName: \"kubernetes.io/projected/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-kube-api-access-r2sjl\") pod \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") "
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.031893 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-config-data-custom\") pod \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") "
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.031956 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-etc-machine-id\") pod \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") "
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.031985 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-scripts\") pod \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\" (UID: \"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a\") "
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.034776 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5f2dc123-e8e2-41b9-a296-4e41a3f02f6a" (UID: "5f2dc123-e8e2-41b9-a296-4e41a3f02f6a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.035271 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-logs" (OuterVolumeSpecName: "logs") pod "5f2dc123-e8e2-41b9-a296-4e41a3f02f6a" (UID: "5f2dc123-e8e2-41b9-a296-4e41a3f02f6a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.042575 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5f2dc123-e8e2-41b9-a296-4e41a3f02f6a" (UID: "5f2dc123-e8e2-41b9-a296-4e41a3f02f6a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.042766 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-kube-api-access-r2sjl" (OuterVolumeSpecName: "kube-api-access-r2sjl") pod "5f2dc123-e8e2-41b9-a296-4e41a3f02f6a" (UID: "5f2dc123-e8e2-41b9-a296-4e41a3f02f6a"). InnerVolumeSpecName "kube-api-access-r2sjl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.047358 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-scripts" (OuterVolumeSpecName: "scripts") pod "5f2dc123-e8e2-41b9-a296-4e41a3f02f6a" (UID: "5f2dc123-e8e2-41b9-a296-4e41a3f02f6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.091994 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f2dc123-e8e2-41b9-a296-4e41a3f02f6a" (UID: "5f2dc123-e8e2-41b9-a296-4e41a3f02f6a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.106629 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-config-data" (OuterVolumeSpecName: "config-data") pod "5f2dc123-e8e2-41b9-a296-4e41a3f02f6a" (UID: "5f2dc123-e8e2-41b9-a296-4e41a3f02f6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.142101 4895 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-etc-machine-id\") on node \"crc\" DevicePath \"\""
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.142153 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.142171 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.142187 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-logs\") on node \"crc\" DevicePath \"\""
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.142204 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.142220 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2sjl\" (UniqueName: \"kubernetes.io/projected/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-kube-api-access-r2sjl\") on node \"crc\" DevicePath \"\""
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.142239 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.188796 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.423173 4895 generic.go:334] "Generic (PLEG): container finished" podID="5f2dc123-e8e2-41b9-a296-4e41a3f02f6a" containerID="9a4f6b91f465937b8351ce5908814fa56fe8ab7e65c8e82ad1918378ae816195" exitCode=0
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.423212 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a","Type":"ContainerDied","Data":"9a4f6b91f465937b8351ce5908814fa56fe8ab7e65c8e82ad1918378ae816195"}
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.423708 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f2dc123-e8e2-41b9-a296-4e41a3f02f6a","Type":"ContainerDied","Data":"b129941f466ba02b7b487ac9dacedbcd17c0a0092e081cc926186d896f78ea51"}
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.423746 4895 scope.go:117] "RemoveContainer" containerID="9a4f6b91f465937b8351ce5908814fa56fe8ab7e65c8e82ad1918378ae816195"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.423316 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.462702 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.463778 4895 scope.go:117] "RemoveContainer" containerID="47645399dfa4ed6c177ec5043c5f56add65db8bf274a90d72fedd1347c52f651"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.479614 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.488606 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Dec 06 09:08:48 crc kubenswrapper[4895]: E1206 09:08:48.489117 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2dc123-e8e2-41b9-a296-4e41a3f02f6a" containerName="cinder-api"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.489139 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2dc123-e8e2-41b9-a296-4e41a3f02f6a" containerName="cinder-api"
Dec 06 09:08:48 crc kubenswrapper[4895]: E1206 09:08:48.489157 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2dc123-e8e2-41b9-a296-4e41a3f02f6a" containerName="cinder-api-log"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.489166 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2dc123-e8e2-41b9-a296-4e41a3f02f6a" containerName="cinder-api-log"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.489416 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f2dc123-e8e2-41b9-a296-4e41a3f02f6a" containerName="cinder-api-log"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.489446 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f2dc123-e8e2-41b9-a296-4e41a3f02f6a" containerName="cinder-api"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.490799 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.494200 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.507199 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.513898 4895 scope.go:117] "RemoveContainer" containerID="9a4f6b91f465937b8351ce5908814fa56fe8ab7e65c8e82ad1918378ae816195"
Dec 06 09:08:48 crc kubenswrapper[4895]: E1206 09:08:48.514372 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a4f6b91f465937b8351ce5908814fa56fe8ab7e65c8e82ad1918378ae816195\": container with ID starting with 9a4f6b91f465937b8351ce5908814fa56fe8ab7e65c8e82ad1918378ae816195 not found: ID does not exist" containerID="9a4f6b91f465937b8351ce5908814fa56fe8ab7e65c8e82ad1918378ae816195"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.514406 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a4f6b91f465937b8351ce5908814fa56fe8ab7e65c8e82ad1918378ae816195"} err="failed to get container status \"9a4f6b91f465937b8351ce5908814fa56fe8ab7e65c8e82ad1918378ae816195\": rpc error: code = NotFound desc = could not find container \"9a4f6b91f465937b8351ce5908814fa56fe8ab7e65c8e82ad1918378ae816195\": container with ID starting with 9a4f6b91f465937b8351ce5908814fa56fe8ab7e65c8e82ad1918378ae816195 not found: ID does not exist"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.514460 4895 scope.go:117] "RemoveContainer" containerID="47645399dfa4ed6c177ec5043c5f56add65db8bf274a90d72fedd1347c52f651"
Dec 06 09:08:48 crc kubenswrapper[4895]: E1206 09:08:48.514715 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47645399dfa4ed6c177ec5043c5f56add65db8bf274a90d72fedd1347c52f651\": container with ID starting with 47645399dfa4ed6c177ec5043c5f56add65db8bf274a90d72fedd1347c52f651 not found: ID does not exist" containerID="47645399dfa4ed6c177ec5043c5f56add65db8bf274a90d72fedd1347c52f651"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.514738 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47645399dfa4ed6c177ec5043c5f56add65db8bf274a90d72fedd1347c52f651"} err="failed to get container status \"47645399dfa4ed6c177ec5043c5f56add65db8bf274a90d72fedd1347c52f651\": rpc error: code = NotFound desc = could not find container \"47645399dfa4ed6c177ec5043c5f56add65db8bf274a90d72fedd1347c52f651\": container with ID starting with 47645399dfa4ed6c177ec5043c5f56add65db8bf274a90d72fedd1347c52f651 not found: ID does not exist"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.552454 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6hw6\" (UniqueName: \"kubernetes.io/projected/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-kube-api-access-d6hw6\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.552553 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-config-data-custom\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.552585 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-logs\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.552643 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-scripts\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.552722 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.552764 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.552787 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-config-data\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.655563 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-logs\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.655674 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-scripts\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.655773 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.655816 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.655841 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-config-data\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.655917 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6hw6\" (UniqueName: \"kubernetes.io/projected/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-kube-api-access-d6hw6\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.655960 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-logs\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.655989 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-config-data-custom\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.656022 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.662879 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-config-data-custom\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.665660 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-scripts\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.666793 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.678506 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-config-data\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.682884 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6hw6\" (UniqueName: \"kubernetes.io/projected/8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3-kube-api-access-d6hw6\") pod \"cinder-api-0\" (UID: \"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3\") " pod="openstack/cinder-api-0"
Dec 06 09:08:48 crc kubenswrapper[4895]: I1206 09:08:48.822055 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 06 09:08:49 crc kubenswrapper[4895]: I1206 09:08:49.293668 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Dec 06 09:08:49 crc kubenswrapper[4895]: I1206 09:08:49.435910 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3","Type":"ContainerStarted","Data":"f71b99f0973921f45b9d7cf2015c1c09a8e674347c23ba9e0fe43cbb37b70641"}
Dec 06 09:08:50 crc kubenswrapper[4895]: I1206 09:08:50.073166 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f2dc123-e8e2-41b9-a296-4e41a3f02f6a" path="/var/lib/kubelet/pods/5f2dc123-e8e2-41b9-a296-4e41a3f02f6a/volumes"
Dec 06 09:08:50 crc kubenswrapper[4895]: I1206 09:08:50.282604 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0"
Dec 06 09:08:50 crc kubenswrapper[4895]: I1206 09:08:50.448279 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3","Type":"ContainerStarted","Data":"abdbaf97e2ba80fa7eed0875c209402c0af7fa331d3e1145d2b9e3bdf0e0ea50"}
Dec 06 09:08:50 crc kubenswrapper[4895]: I1206 09:08:50.894893 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0"
Dec 06 09:08:51 crc kubenswrapper[4895]: I1206 09:08:51.471758 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3","Type":"ContainerStarted","Data":"046366f0740e4bee98ca4225c8ba1424abd6f72eb04aa48f4f66ba42d20003c4"}
Dec 06 09:08:51 crc kubenswrapper[4895]: I1206 09:08:51.472785 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Dec 06 09:08:51 crc kubenswrapper[4895]: I1206 09:08:51.507154 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.507099039 podStartE2EDuration="3.507099039s" podCreationTimestamp="2025-12-06 09:08:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:08:51.500315828 +0000 UTC m=+7893.901704708" watchObservedRunningTime="2025-12-06 09:08:51.507099039 +0000 UTC m=+7893.908487949"
Dec 06 09:08:53 crc kubenswrapper[4895]: I1206 09:08:53.247039 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rwtsw"
Dec 06 09:08:53 crc kubenswrapper[4895]: I1206 09:08:53.298308 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rwtsw"
Dec 06 09:08:53 crc kubenswrapper[4895]: I1206 09:08:53.414775 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Dec 06 09:08:53 crc kubenswrapper[4895]: I1206 09:08:53.456024 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 06 09:08:53 crc kubenswrapper[4895]: I1206 09:08:53.480287 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rwtsw"]
Dec 06 09:08:53 crc kubenswrapper[4895]: I1206 09:08:53.494516 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="cb00d9ff-7085-4d77-9ba8-f75d234c48ca" containerName="cinder-scheduler" containerID="cri-o://df1445c998deb5c178e2f0ae0088268d53bf06fce7c4cca3ef4636f7a96ef400" gracePeriod=30
Dec 06 09:08:53 crc kubenswrapper[4895]: I1206 09:08:53.494581 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="cb00d9ff-7085-4d77-9ba8-f75d234c48ca" containerName="probe" containerID="cri-o://d10f9691bf8ad12edc767b4cd628895f9915617b7d9c91e31581e48a2b71c5a7" gracePeriod=30
Dec 06 09:08:54 crc kubenswrapper[4895]: I1206 09:08:54.509272 4895 generic.go:334] "Generic (PLEG): container finished" podID="cb00d9ff-7085-4d77-9ba8-f75d234c48ca" containerID="d10f9691bf8ad12edc767b4cd628895f9915617b7d9c91e31581e48a2b71c5a7" exitCode=0
Dec 06 09:08:54 crc kubenswrapper[4895]: I1206 09:08:54.509337 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cb00d9ff-7085-4d77-9ba8-f75d234c48ca","Type":"ContainerDied","Data":"d10f9691bf8ad12edc767b4cd628895f9915617b7d9c91e31581e48a2b71c5a7"}
Dec 06 09:08:54 crc kubenswrapper[4895]: I1206 09:08:54.509987 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rwtsw" podUID="155719e6-a3ba-4a2b-bc43-6f884d5b2908" containerName="registry-server" containerID="cri-o://628fc069c5273063ff61b5eb109bb68cfaae9d80b25e32503c90ac5adf1a4df6" gracePeriod=2
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.085655 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rwtsw"
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.184231 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/155719e6-a3ba-4a2b-bc43-6f884d5b2908-catalog-content\") pod \"155719e6-a3ba-4a2b-bc43-6f884d5b2908\" (UID: \"155719e6-a3ba-4a2b-bc43-6f884d5b2908\") "
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.184386 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz6pz\" (UniqueName: \"kubernetes.io/projected/155719e6-a3ba-4a2b-bc43-6f884d5b2908-kube-api-access-hz6pz\") pod \"155719e6-a3ba-4a2b-bc43-6f884d5b2908\" (UID: \"155719e6-a3ba-4a2b-bc43-6f884d5b2908\") "
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.184547 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/155719e6-a3ba-4a2b-bc43-6f884d5b2908-utilities\") pod \"155719e6-a3ba-4a2b-bc43-6f884d5b2908\" (UID: \"155719e6-a3ba-4a2b-bc43-6f884d5b2908\") "
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.185703 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/155719e6-a3ba-4a2b-bc43-6f884d5b2908-utilities" (OuterVolumeSpecName: "utilities") pod "155719e6-a3ba-4a2b-bc43-6f884d5b2908" (UID: "155719e6-a3ba-4a2b-bc43-6f884d5b2908"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.192071 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/155719e6-a3ba-4a2b-bc43-6f884d5b2908-kube-api-access-hz6pz" (OuterVolumeSpecName: "kube-api-access-hz6pz") pod "155719e6-a3ba-4a2b-bc43-6f884d5b2908" (UID: "155719e6-a3ba-4a2b-bc43-6f884d5b2908"). InnerVolumeSpecName "kube-api-access-hz6pz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.286437 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz6pz\" (UniqueName: \"kubernetes.io/projected/155719e6-a3ba-4a2b-bc43-6f884d5b2908-kube-api-access-hz6pz\") on node \"crc\" DevicePath \"\""
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.286463 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/155719e6-a3ba-4a2b-bc43-6f884d5b2908-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.291555 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/155719e6-a3ba-4a2b-bc43-6f884d5b2908-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "155719e6-a3ba-4a2b-bc43-6f884d5b2908" (UID: "155719e6-a3ba-4a2b-bc43-6f884d5b2908"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.294199 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.399127 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/155719e6-a3ba-4a2b-bc43-6f884d5b2908-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.500328 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-config-data-custom\") pod \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") "
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.501282 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfv6w\" (UniqueName: \"kubernetes.io/projected/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-kube-api-access-qfv6w\") pod \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") "
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.501313 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-combined-ca-bundle\") pod \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") "
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.501357 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-scripts\") pod \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") "
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.501395 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-config-data\") pod \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") "
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.501446 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-etc-machine-id\") pod \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\" (UID: \"cb00d9ff-7085-4d77-9ba8-f75d234c48ca\") "
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.501841 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cb00d9ff-7085-4d77-9ba8-f75d234c48ca" (UID: "cb00d9ff-7085-4d77-9ba8-f75d234c48ca"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.504660 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-scripts" (OuterVolumeSpecName: "scripts") pod "cb00d9ff-7085-4d77-9ba8-f75d234c48ca" (UID: "cb00d9ff-7085-4d77-9ba8-f75d234c48ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.506871 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cb00d9ff-7085-4d77-9ba8-f75d234c48ca" (UID: "cb00d9ff-7085-4d77-9ba8-f75d234c48ca"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.507021 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-kube-api-access-qfv6w" (OuterVolumeSpecName: "kube-api-access-qfv6w") pod "cb00d9ff-7085-4d77-9ba8-f75d234c48ca" (UID: "cb00d9ff-7085-4d77-9ba8-f75d234c48ca"). InnerVolumeSpecName "kube-api-access-qfv6w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.507208 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0"
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.523487 4895 generic.go:334] "Generic (PLEG): container finished" podID="155719e6-a3ba-4a2b-bc43-6f884d5b2908" containerID="628fc069c5273063ff61b5eb109bb68cfaae9d80b25e32503c90ac5adf1a4df6" exitCode=0
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.524008 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rwtsw"
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.525105 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwtsw" event={"ID":"155719e6-a3ba-4a2b-bc43-6f884d5b2908","Type":"ContainerDied","Data":"628fc069c5273063ff61b5eb109bb68cfaae9d80b25e32503c90ac5adf1a4df6"}
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.525284 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwtsw" event={"ID":"155719e6-a3ba-4a2b-bc43-6f884d5b2908","Type":"ContainerDied","Data":"1ab11517fc5726060b3832c1b46b5681ab9488c879c01cb114cfc50ff7db6d39"}
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.525434 4895 scope.go:117] "RemoveContainer" containerID="628fc069c5273063ff61b5eb109bb68cfaae9d80b25e32503c90ac5adf1a4df6"
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.525828 4895 generic.go:334] "Generic (PLEG): container finished" podID="cb00d9ff-7085-4d77-9ba8-f75d234c48ca" containerID="df1445c998deb5c178e2f0ae0088268d53bf06fce7c4cca3ef4636f7a96ef400" exitCode=0
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.525890 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cb00d9ff-7085-4d77-9ba8-f75d234c48ca","Type":"ContainerDied","Data":"df1445c998deb5c178e2f0ae0088268d53bf06fce7c4cca3ef4636f7a96ef400"}
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.525961 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cb00d9ff-7085-4d77-9ba8-f75d234c48ca","Type":"ContainerDied","Data":"89827e843998032048f9758b4255d09349b589efad8603be38479e9ffb5bde0b"}
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.526014 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.581205 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb00d9ff-7085-4d77-9ba8-f75d234c48ca" (UID: "cb00d9ff-7085-4d77-9ba8-f75d234c48ca"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.603289 4895 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.603595 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.603705 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfv6w\" (UniqueName: \"kubernetes.io/projected/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-kube-api-access-qfv6w\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.603780 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.603848 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.616955 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-config-data" (OuterVolumeSpecName: "config-data") pod "cb00d9ff-7085-4d77-9ba8-f75d234c48ca" (UID: "cb00d9ff-7085-4d77-9ba8-f75d234c48ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.683037 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rwtsw"] Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.683464 4895 scope.go:117] "RemoveContainer" containerID="9b2f96c64839e642db401b0a6532ad618655aaeac639291eedfc5c981a1baec2" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.693735 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rwtsw"] Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.704391 4895 scope.go:117] "RemoveContainer" containerID="088192484cd961880112c76075fc9c060d9b528f57077644dc34387052d50468" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.705065 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb00d9ff-7085-4d77-9ba8-f75d234c48ca-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.740996 4895 scope.go:117] "RemoveContainer" containerID="628fc069c5273063ff61b5eb109bb68cfaae9d80b25e32503c90ac5adf1a4df6" Dec 06 09:08:55 crc kubenswrapper[4895]: E1206 09:08:55.741457 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"628fc069c5273063ff61b5eb109bb68cfaae9d80b25e32503c90ac5adf1a4df6\": container with ID starting with 628fc069c5273063ff61b5eb109bb68cfaae9d80b25e32503c90ac5adf1a4df6 not found: ID does not exist" containerID="628fc069c5273063ff61b5eb109bb68cfaae9d80b25e32503c90ac5adf1a4df6" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.741527 4895 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"628fc069c5273063ff61b5eb109bb68cfaae9d80b25e32503c90ac5adf1a4df6"} err="failed to get container status \"628fc069c5273063ff61b5eb109bb68cfaae9d80b25e32503c90ac5adf1a4df6\": rpc error: code = NotFound desc = could not find container \"628fc069c5273063ff61b5eb109bb68cfaae9d80b25e32503c90ac5adf1a4df6\": container with ID starting with 628fc069c5273063ff61b5eb109bb68cfaae9d80b25e32503c90ac5adf1a4df6 not found: ID does not exist" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.741554 4895 scope.go:117] "RemoveContainer" containerID="9b2f96c64839e642db401b0a6532ad618655aaeac639291eedfc5c981a1baec2" Dec 06 09:08:55 crc kubenswrapper[4895]: E1206 09:08:55.741945 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b2f96c64839e642db401b0a6532ad618655aaeac639291eedfc5c981a1baec2\": container with ID starting with 9b2f96c64839e642db401b0a6532ad618655aaeac639291eedfc5c981a1baec2 not found: ID does not exist" containerID="9b2f96c64839e642db401b0a6532ad618655aaeac639291eedfc5c981a1baec2" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.742033 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b2f96c64839e642db401b0a6532ad618655aaeac639291eedfc5c981a1baec2"} err="failed to get container status \"9b2f96c64839e642db401b0a6532ad618655aaeac639291eedfc5c981a1baec2\": rpc error: code = NotFound desc = could not find container \"9b2f96c64839e642db401b0a6532ad618655aaeac639291eedfc5c981a1baec2\": container with ID starting with 9b2f96c64839e642db401b0a6532ad618655aaeac639291eedfc5c981a1baec2 not found: ID does not exist" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.742119 4895 scope.go:117] "RemoveContainer" containerID="088192484cd961880112c76075fc9c060d9b528f57077644dc34387052d50468" Dec 06 09:08:55 crc kubenswrapper[4895]: E1206 09:08:55.742657 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"088192484cd961880112c76075fc9c060d9b528f57077644dc34387052d50468\": container with ID starting with 088192484cd961880112c76075fc9c060d9b528f57077644dc34387052d50468 not found: ID does not exist" containerID="088192484cd961880112c76075fc9c060d9b528f57077644dc34387052d50468" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.742739 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"088192484cd961880112c76075fc9c060d9b528f57077644dc34387052d50468"} err="failed to get container status \"088192484cd961880112c76075fc9c060d9b528f57077644dc34387052d50468\": rpc error: code = NotFound desc = could not find container \"088192484cd961880112c76075fc9c060d9b528f57077644dc34387052d50468\": container with ID starting with 088192484cd961880112c76075fc9c060d9b528f57077644dc34387052d50468 not found: ID does not exist" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.742805 4895 scope.go:117] "RemoveContainer" containerID="d10f9691bf8ad12edc767b4cd628895f9915617b7d9c91e31581e48a2b71c5a7" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.766429 4895 scope.go:117] "RemoveContainer" containerID="df1445c998deb5c178e2f0ae0088268d53bf06fce7c4cca3ef4636f7a96ef400" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.794686 4895 scope.go:117] "RemoveContainer" containerID="d10f9691bf8ad12edc767b4cd628895f9915617b7d9c91e31581e48a2b71c5a7" Dec 06 09:08:55 crc kubenswrapper[4895]: E1206 09:08:55.798346 4895 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d10f9691bf8ad12edc767b4cd628895f9915617b7d9c91e31581e48a2b71c5a7\": container with ID starting with d10f9691bf8ad12edc767b4cd628895f9915617b7d9c91e31581e48a2b71c5a7 not found: ID does not exist" containerID="d10f9691bf8ad12edc767b4cd628895f9915617b7d9c91e31581e48a2b71c5a7" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.798389 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d10f9691bf8ad12edc767b4cd628895f9915617b7d9c91e31581e48a2b71c5a7"} err="failed to get container status \"d10f9691bf8ad12edc767b4cd628895f9915617b7d9c91e31581e48a2b71c5a7\": rpc error: code = NotFound desc = could not find container \"d10f9691bf8ad12edc767b4cd628895f9915617b7d9c91e31581e48a2b71c5a7\": container with ID starting with d10f9691bf8ad12edc767b4cd628895f9915617b7d9c91e31581e48a2b71c5a7 not found: ID does not exist" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.798419 4895 scope.go:117] "RemoveContainer" containerID="df1445c998deb5c178e2f0ae0088268d53bf06fce7c4cca3ef4636f7a96ef400" Dec 06 09:08:55 crc kubenswrapper[4895]: E1206 09:08:55.799047 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df1445c998deb5c178e2f0ae0088268d53bf06fce7c4cca3ef4636f7a96ef400\": container with ID starting with df1445c998deb5c178e2f0ae0088268d53bf06fce7c4cca3ef4636f7a96ef400 not found: ID does not exist" containerID="df1445c998deb5c178e2f0ae0088268d53bf06fce7c4cca3ef4636f7a96ef400" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.799093 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df1445c998deb5c178e2f0ae0088268d53bf06fce7c4cca3ef4636f7a96ef400"} err="failed to get container status \"df1445c998deb5c178e2f0ae0088268d53bf06fce7c4cca3ef4636f7a96ef400\": rpc error: code = NotFound desc = could not find container \"df1445c998deb5c178e2f0ae0088268d53bf06fce7c4cca3ef4636f7a96ef400\": container with ID starting with df1445c998deb5c178e2f0ae0088268d53bf06fce7c4cca3ef4636f7a96ef400 not found: ID does not exist" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.860263 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.868576 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.933460 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 09:08:55 crc kubenswrapper[4895]: E1206 09:08:55.933888 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb00d9ff-7085-4d77-9ba8-f75d234c48ca" containerName="cinder-scheduler" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.933908 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb00d9ff-7085-4d77-9ba8-f75d234c48ca" containerName="cinder-scheduler" Dec 06 09:08:55 crc kubenswrapper[4895]: E1206 09:08:55.933932 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="155719e6-a3ba-4a2b-bc43-6f884d5b2908" containerName="registry-server" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.933940 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="155719e6-a3ba-4a2b-bc43-6f884d5b2908" containerName="registry-server" Dec 06 09:08:55 crc kubenswrapper[4895]: E1206 09:08:55.933956 4895 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="155719e6-a3ba-4a2b-bc43-6f884d5b2908" containerName="extract-content" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.933962 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="155719e6-a3ba-4a2b-bc43-6f884d5b2908" containerName="extract-content" Dec 06 09:08:55 crc kubenswrapper[4895]: E1206 09:08:55.933971 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="155719e6-a3ba-4a2b-bc43-6f884d5b2908" containerName="extract-utilities" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.933977 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="155719e6-a3ba-4a2b-bc43-6f884d5b2908" containerName="extract-utilities" Dec 06 09:08:55 crc kubenswrapper[4895]: E1206 09:08:55.933989 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb00d9ff-7085-4d77-9ba8-f75d234c48ca" containerName="probe" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.933995 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb00d9ff-7085-4d77-9ba8-f75d234c48ca" containerName="probe" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.934161 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="155719e6-a3ba-4a2b-bc43-6f884d5b2908" containerName="registry-server" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.934182 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb00d9ff-7085-4d77-9ba8-f75d234c48ca" containerName="cinder-scheduler" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.934191 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb00d9ff-7085-4d77-9ba8-f75d234c48ca" containerName="probe" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.935173 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.951765 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 06 09:08:55 crc kubenswrapper[4895]: I1206 09:08:55.970753 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.010803 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.010898 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68kpt\" (UniqueName: \"kubernetes.io/projected/bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56-kube-api-access-68kpt\") pod \"cinder-scheduler-0\" (UID: \"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.010923 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56-config-data\") pod \"cinder-scheduler-0\" (UID: \"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.010947 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56-scripts\") pod \"cinder-scheduler-0\" (UID: \"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.010963 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.010996 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.069691 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="155719e6-a3ba-4a2b-bc43-6f884d5b2908" path="/var/lib/kubelet/pods/155719e6-a3ba-4a2b-bc43-6f884d5b2908/volumes" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.070368 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb00d9ff-7085-4d77-9ba8-f75d234c48ca" path="/var/lib/kubelet/pods/cb00d9ff-7085-4d77-9ba8-f75d234c48ca/volumes" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.112360 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68kpt\" (UniqueName: \"kubernetes.io/projected/bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56-kube-api-access-68kpt\") pod \"cinder-scheduler-0\" (UID: 
\"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.112398 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56-config-data\") pod \"cinder-scheduler-0\" (UID: \"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.112422 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56-scripts\") pod \"cinder-scheduler-0\" (UID: \"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.112438 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.112484 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.112575 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.112972 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.117767 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.118305 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56-config-data\") pod \"cinder-scheduler-0\" (UID: \"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.118679 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56-scripts\") pod \"cinder-scheduler-0\" (UID: \"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.118870 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.128611 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68kpt\" (UniqueName: \"kubernetes.io/projected/bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56-kube-api-access-68kpt\") pod \"cinder-scheduler-0\" (UID: \"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56\") " pod="openstack/cinder-scheduler-0" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.157964 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.260151 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 09:08:56 crc kubenswrapper[4895]: I1206 09:08:56.699235 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 09:08:57 crc kubenswrapper[4895]: I1206 09:08:57.555835 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56","Type":"ContainerStarted","Data":"0e66925d8e5e19f8219230d5382fb63555da047095ae5d206bee45ac5ec00ed0"} Dec 06 09:08:57 crc kubenswrapper[4895]: I1206 09:08:57.556121 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56","Type":"ContainerStarted","Data":"a00e19882de1a43915c4e5f687636d44d427738d9ad8673e279b7e3d8fdb8fab"} Dec 06 09:08:58 crc kubenswrapper[4895]: I1206 09:08:58.579138 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56","Type":"ContainerStarted","Data":"0960685e3a833dd02c277df91e97cc9dcc39e5f05a0c35ee757b9c43ea39dd93"} Dec 06 09:08:58 crc kubenswrapper[4895]: I1206 09:08:58.610537 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.610514153 podStartE2EDuration="3.610514153s" podCreationTimestamp="2025-12-06 09:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:08:58.609189427 +0000 UTC m=+7901.010578317" watchObservedRunningTime="2025-12-06 09:08:58.610514153 +0000 UTC m=+7901.011903043" Dec 06 09:09:00 crc kubenswrapper[4895]: I1206 09:09:00.629773 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 06 09:09:01 crc kubenswrapper[4895]: I1206 09:09:01.260840 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 06 09:09:07 crc kubenswrapper[4895]: I1206 09:09:07.011325 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 06 09:09:29 crc kubenswrapper[4895]: I1206 09:09:29.696445 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:09:29 crc kubenswrapper[4895]: I1206 09:09:29.697120 4895 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:09:31 crc kubenswrapper[4895]: I1206 09:09:31.072892 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-h9dcw"] Dec 06 09:09:31 crc kubenswrapper[4895]: I1206 09:09:31.087051 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0c77-account-create-update-tdrqp"] Dec 06 09:09:31 crc kubenswrapper[4895]: I1206 09:09:31.098089 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0c77-account-create-update-tdrqp"] Dec 06 09:09:31 crc kubenswrapper[4895]: I1206 09:09:31.117769 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-h9dcw"] Dec 06 09:09:32 crc kubenswrapper[4895]: I1206 09:09:32.066325 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bfa4223-c0d1-4dbd-94cc-65f200123d19" path="/var/lib/kubelet/pods/1bfa4223-c0d1-4dbd-94cc-65f200123d19/volumes" Dec 06 09:09:32 crc kubenswrapper[4895]: I1206 09:09:32.067909 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="366a0162-e23c-45f2-8c00-3718d0c8cfbf" path="/var/lib/kubelet/pods/366a0162-e23c-45f2-8c00-3718d0c8cfbf/volumes" Dec 06 09:09:43 crc kubenswrapper[4895]: I1206 09:09:43.044573 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-ttzbm"] Dec 06 09:09:43 crc kubenswrapper[4895]: I1206 09:09:43.052724 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-ttzbm"] Dec 06 09:09:44 crc kubenswrapper[4895]: I1206 09:09:44.064855 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e8de9a5-f43e-4960-b492-679e9cb276f3" path="/var/lib/kubelet/pods/4e8de9a5-f43e-4960-b492-679e9cb276f3/volumes" Dec 06 09:09:57 crc kubenswrapper[4895]: I1206 09:09:57.040311 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jj4z9"] Dec 06 09:09:57 crc kubenswrapper[4895]: I1206 09:09:57.048964 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jj4z9"] Dec 06 09:09:58 crc kubenswrapper[4895]: I1206 09:09:58.064806 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eace17d-b195-4e67-bda4-f1a2c830b508" path="/var/lib/kubelet/pods/1eace17d-b195-4e67-bda4-f1a2c830b508/volumes" Dec 06 09:09:59 crc kubenswrapper[4895]: I1206 09:09:59.695914 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:09:59 crc kubenswrapper[4895]: I1206 09:09:59.695996 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:10:16 crc kubenswrapper[4895]: I1206 09:10:16.881116 4895 scope.go:117] "RemoveContainer" containerID="09f4663736ebfe30af3ccd63aafdffd5225ad5ec08673976104a3ef926132850" Dec 06 09:10:16 crc kubenswrapper[4895]: 
I1206 09:10:16.923104 4895 scope.go:117] "RemoveContainer" containerID="c7e7653d8002a71c3aeeda3730ce21b91798bf09784abfa7a1dd15aeff221918" Dec 06 09:10:16 crc kubenswrapper[4895]: I1206 09:10:16.947294 4895 scope.go:117] "RemoveContainer" containerID="04f79716124a94bf7f9282bddbc1df524e710ceee64800a23322dde82c9ec760" Dec 06 09:10:16 crc kubenswrapper[4895]: I1206 09:10:16.985009 4895 scope.go:117] "RemoveContainer" containerID="039500ee577b8f415dcc704c5b2c38c647fad33cabf125a59744529b6019f04a" Dec 06 09:10:29 crc kubenswrapper[4895]: E1206 09:10:29.685445 4895 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.132:57138->38.129.56.132:44665: write tcp 38.129.56.132:57138->38.129.56.132:44665: write: broken pipe Dec 06 09:10:29 crc kubenswrapper[4895]: I1206 09:10:29.696001 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:10:29 crc kubenswrapper[4895]: I1206 09:10:29.696059 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:10:29 crc kubenswrapper[4895]: I1206 09:10:29.696106 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 09:10:29 crc kubenswrapper[4895]: I1206 09:10:29.696837 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 09:10:29 crc kubenswrapper[4895]: I1206 09:10:29.696894 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef" gracePeriod=600 Dec 06 09:10:29 crc kubenswrapper[4895]: E1206 09:10:29.850357 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:10:30 crc kubenswrapper[4895]: I1206 09:10:30.763095 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef" exitCode=0 Dec 06 09:10:30 crc kubenswrapper[4895]: I1206 09:10:30.763172 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" 
event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef"} Dec 06 09:10:30 crc kubenswrapper[4895]: I1206 09:10:30.763230 4895 scope.go:117] "RemoveContainer" containerID="6a08bae6a77367b44b45e288465887eadb17c31ec1d0ea904bd0118b481a1ef4" Dec 06 09:10:30 crc kubenswrapper[4895]: I1206 09:10:30.766466 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef" Dec 06 09:10:30 crc kubenswrapper[4895]: E1206 09:10:30.767239 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:10:38 crc kubenswrapper[4895]: E1206 09:10:38.183647 4895 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.132:57176->38.129.56.132:44665: read tcp 38.129.56.132:57176->38.129.56.132:44665: read: connection reset by peer Dec 06 09:10:38 crc kubenswrapper[4895]: E1206 09:10:38.184721 4895 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.132:57176->38.129.56.132:44665: write tcp 38.129.56.132:57176->38.129.56.132:44665: write: broken pipe Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.715147 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7bcd4cd89c-hk5gh"] Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.720359 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bcd4cd89c-hk5gh" Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.733581 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.733792 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.733931 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-992gg" Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.734069 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.756150 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bcd4cd89c-hk5gh"] Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.784540 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.784874 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="09a717e6-54eb-4b96-903d-7f4c4eb4bb96" containerName="glance-log" containerID="cri-o://dd6fe7684aef1ce8a38dbb7087ff6e70616cc01823cab4032a10f68987b10e32" gracePeriod=30 Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.785452 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="09a717e6-54eb-4b96-903d-7f4c4eb4bb96" containerName="glance-httpd" containerID="cri-o://6e338e8680df6ac273ad153cfb2ccd95cb7476e8eada9e81b9c30ec7cefa067d" gracePeriod=30 Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.824739 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.824995 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a81b9697-f347-4805-99b1-eb21de602965" containerName="glance-log" containerID="cri-o://590d89714fa8b5dce42072c30d38b26490195aec7dfa7937421207e7768ab369" gracePeriod=30 Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.825536 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a81b9697-f347-4805-99b1-eb21de602965" containerName="glance-httpd" containerID="cri-o://8b11a91c8849de0b070469b77d03f2e11b48f2b390c63035ca9b74652ce89683" gracePeriod=30 Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.827087 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a087a078-9055-4c51-a4cc-7c3d86d4fecd-horizon-secret-key\") pod \"horizon-7bcd4cd89c-hk5gh\" (UID: \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\") " pod="openstack/horizon-7bcd4cd89c-hk5gh" Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.827133 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5zh8\" (UniqueName: \"kubernetes.io/projected/a087a078-9055-4c51-a4cc-7c3d86d4fecd-kube-api-access-g5zh8\") pod \"horizon-7bcd4cd89c-hk5gh\" (UID: \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\") " pod="openstack/horizon-7bcd4cd89c-hk5gh" Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.827151 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a087a078-9055-4c51-a4cc-7c3d86d4fecd-scripts\") pod \"horizon-7bcd4cd89c-hk5gh\" (UID: \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\") " pod="openstack/horizon-7bcd4cd89c-hk5gh" Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.827192 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a087a078-9055-4c51-a4cc-7c3d86d4fecd-logs\") pod \"horizon-7bcd4cd89c-hk5gh\" (UID: \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\") " pod="openstack/horizon-7bcd4cd89c-hk5gh" Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.827233 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a087a078-9055-4c51-a4cc-7c3d86d4fecd-config-data\") pod \"horizon-7bcd4cd89c-hk5gh\" (UID: \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\") " pod="openstack/horizon-7bcd4cd89c-hk5gh" Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.836614 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7c84d5b777-ndnrf"] Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.838836 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.876172 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c84d5b777-ndnrf"] Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.929410 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a087a078-9055-4c51-a4cc-7c3d86d4fecd-config-data\") pod \"horizon-7bcd4cd89c-hk5gh\" (UID: \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\") " pod="openstack/horizon-7bcd4cd89c-hk5gh" Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.929932 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a087a078-9055-4c51-a4cc-7c3d86d4fecd-horizon-secret-key\") pod \"horizon-7bcd4cd89c-hk5gh\" (UID: \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\") " pod="openstack/horizon-7bcd4cd89c-hk5gh" Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.930317 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5zh8\" (UniqueName: \"kubernetes.io/projected/a087a078-9055-4c51-a4cc-7c3d86d4fecd-kube-api-access-g5zh8\") pod \"horizon-7bcd4cd89c-hk5gh\" (UID: \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\") " pod="openstack/horizon-7bcd4cd89c-hk5gh" Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.930448 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a087a078-9055-4c51-a4cc-7c3d86d4fecd-scripts\") pod \"horizon-7bcd4cd89c-hk5gh\" (UID: \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\") " pod="openstack/horizon-7bcd4cd89c-hk5gh" Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.930632 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a087a078-9055-4c51-a4cc-7c3d86d4fecd-logs\") pod \"horizon-7bcd4cd89c-hk5gh\" (UID: \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\") " pod="openstack/horizon-7bcd4cd89c-hk5gh" Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.931244 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a087a078-9055-4c51-a4cc-7c3d86d4fecd-logs\") pod \"horizon-7bcd4cd89c-hk5gh\" (UID: \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\") " pod="openstack/horizon-7bcd4cd89c-hk5gh" Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.931436 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a087a078-9055-4c51-a4cc-7c3d86d4fecd-config-data\") pod \"horizon-7bcd4cd89c-hk5gh\" (UID: \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\") " pod="openstack/horizon-7bcd4cd89c-hk5gh" Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.932274 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a087a078-9055-4c51-a4cc-7c3d86d4fecd-scripts\") pod \"horizon-7bcd4cd89c-hk5gh\" (UID: \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\") " pod="openstack/horizon-7bcd4cd89c-hk5gh" Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.951108 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a087a078-9055-4c51-a4cc-7c3d86d4fecd-horizon-secret-key\") pod \"horizon-7bcd4cd89c-hk5gh\" (UID: \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\") " pod="openstack/horizon-7bcd4cd89c-hk5gh" Dec 06 09:10:39 crc kubenswrapper[4895]: I1206 09:10:39.955297 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5zh8\" (UniqueName: \"kubernetes.io/projected/a087a078-9055-4c51-a4cc-7c3d86d4fecd-kube-api-access-g5zh8\") pod \"horizon-7bcd4cd89c-hk5gh\" (UID: \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\") " pod="openstack/horizon-7bcd4cd89c-hk5gh" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.032062 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c53114e3-eac5-40d1-ac02-35bb2f216687-scripts\") pod \"horizon-7c84d5b777-ndnrf\" (UID: \"c53114e3-eac5-40d1-ac02-35bb2f216687\") " pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.032151 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c53114e3-eac5-40d1-ac02-35bb2f216687-logs\") pod \"horizon-7c84d5b777-ndnrf\" (UID: \"c53114e3-eac5-40d1-ac02-35bb2f216687\") " pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.032215 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c53114e3-eac5-40d1-ac02-35bb2f216687-horizon-secret-key\") pod \"horizon-7c84d5b777-ndnrf\" (UID: \"c53114e3-eac5-40d1-ac02-35bb2f216687\") " pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.032683 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9rcc\" (UniqueName: \"kubernetes.io/projected/c53114e3-eac5-40d1-ac02-35bb2f216687-kube-api-access-g9rcc\") pod \"horizon-7c84d5b777-ndnrf\" (UID: \"c53114e3-eac5-40d1-ac02-35bb2f216687\") " pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.032914 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/c53114e3-eac5-40d1-ac02-35bb2f216687-config-data\") pod \"horizon-7c84d5b777-ndnrf\" (UID: \"c53114e3-eac5-40d1-ac02-35bb2f216687\") " pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.067551 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bcd4cd89c-hk5gh" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.134024 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c53114e3-eac5-40d1-ac02-35bb2f216687-scripts\") pod \"horizon-7c84d5b777-ndnrf\" (UID: \"c53114e3-eac5-40d1-ac02-35bb2f216687\") " pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.134112 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c53114e3-eac5-40d1-ac02-35bb2f216687-logs\") pod \"horizon-7c84d5b777-ndnrf\" (UID: \"c53114e3-eac5-40d1-ac02-35bb2f216687\") " pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.134175 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c53114e3-eac5-40d1-ac02-35bb2f216687-horizon-secret-key\") pod \"horizon-7c84d5b777-ndnrf\" (UID: \"c53114e3-eac5-40d1-ac02-35bb2f216687\") " pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.134248 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9rcc\" (UniqueName: \"kubernetes.io/projected/c53114e3-eac5-40d1-ac02-35bb2f216687-kube-api-access-g9rcc\") pod \"horizon-7c84d5b777-ndnrf\" (UID: \"c53114e3-eac5-40d1-ac02-35bb2f216687\") " pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.134300 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c53114e3-eac5-40d1-ac02-35bb2f216687-config-data\") pod \"horizon-7c84d5b777-ndnrf\" (UID: \"c53114e3-eac5-40d1-ac02-35bb2f216687\") " pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.135066 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c53114e3-eac5-40d1-ac02-35bb2f216687-logs\") pod \"horizon-7c84d5b777-ndnrf\" (UID: \"c53114e3-eac5-40d1-ac02-35bb2f216687\") " pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.137434 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c53114e3-eac5-40d1-ac02-35bb2f216687-scripts\") pod \"horizon-7c84d5b777-ndnrf\" (UID: \"c53114e3-eac5-40d1-ac02-35bb2f216687\") " pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.138036 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c53114e3-eac5-40d1-ac02-35bb2f216687-config-data\") pod \"horizon-7c84d5b777-ndnrf\" (UID: \"c53114e3-eac5-40d1-ac02-35bb2f216687\") " pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.139139 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/c53114e3-eac5-40d1-ac02-35bb2f216687-horizon-secret-key\") pod \"horizon-7c84d5b777-ndnrf\" (UID: \"c53114e3-eac5-40d1-ac02-35bb2f216687\") " pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.151397 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9rcc\" (UniqueName: \"kubernetes.io/projected/c53114e3-eac5-40d1-ac02-35bb2f216687-kube-api-access-g9rcc\") pod \"horizon-7c84d5b777-ndnrf\" (UID: \"c53114e3-eac5-40d1-ac02-35bb2f216687\") " pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.173338 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.455058 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bcd4cd89c-hk5gh"] Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.509666 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-57b6458b8f-mj9hj"] Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.511488 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.545843 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9l2s\" (UniqueName: \"kubernetes.io/projected/e9e07b07-5f94-4787-88fb-2daf876f572c-kube-api-access-p9l2s\") pod \"horizon-57b6458b8f-mj9hj\" (UID: \"e9e07b07-5f94-4787-88fb-2daf876f572c\") " pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.545906 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9e07b07-5f94-4787-88fb-2daf876f572c-scripts\") pod \"horizon-57b6458b8f-mj9hj\" (UID: \"e9e07b07-5f94-4787-88fb-2daf876f572c\") " pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.545990 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e9e07b07-5f94-4787-88fb-2daf876f572c-horizon-secret-key\") pod \"horizon-57b6458b8f-mj9hj\" (UID: \"e9e07b07-5f94-4787-88fb-2daf876f572c\") " pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.546144 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9e07b07-5f94-4787-88fb-2daf876f572c-config-data\") pod \"horizon-57b6458b8f-mj9hj\" (UID: \"e9e07b07-5f94-4787-88fb-2daf876f572c\") " pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.546199 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9e07b07-5f94-4787-88fb-2daf876f572c-logs\") pod \"horizon-57b6458b8f-mj9hj\" (UID: \"e9e07b07-5f94-4787-88fb-2daf876f572c\") " pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.551574 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57b6458b8f-mj9hj"] Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.613116 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/horizon-7bcd4cd89c-hk5gh"] Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.648220 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e9e07b07-5f94-4787-88fb-2daf876f572c-horizon-secret-key\") pod \"horizon-57b6458b8f-mj9hj\" (UID: \"e9e07b07-5f94-4787-88fb-2daf876f572c\") " pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.648363 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9e07b07-5f94-4787-88fb-2daf876f572c-config-data\") pod \"horizon-57b6458b8f-mj9hj\" (UID: \"e9e07b07-5f94-4787-88fb-2daf876f572c\") " pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.648402 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9e07b07-5f94-4787-88fb-2daf876f572c-logs\") pod \"horizon-57b6458b8f-mj9hj\" (UID: \"e9e07b07-5f94-4787-88fb-2daf876f572c\") " pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.648430 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9l2s\" (UniqueName: \"kubernetes.io/projected/e9e07b07-5f94-4787-88fb-2daf876f572c-kube-api-access-p9l2s\") pod \"horizon-57b6458b8f-mj9hj\" (UID: \"e9e07b07-5f94-4787-88fb-2daf876f572c\") " pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.648465 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9e07b07-5f94-4787-88fb-2daf876f572c-scripts\") pod \"horizon-57b6458b8f-mj9hj\" (UID: \"e9e07b07-5f94-4787-88fb-2daf876f572c\") " pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.649420 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9e07b07-5f94-4787-88fb-2daf876f572c-scripts\") pod \"horizon-57b6458b8f-mj9hj\" (UID: \"e9e07b07-5f94-4787-88fb-2daf876f572c\") " pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.649691 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9e07b07-5f94-4787-88fb-2daf876f572c-logs\") pod \"horizon-57b6458b8f-mj9hj\" (UID: \"e9e07b07-5f94-4787-88fb-2daf876f572c\") " pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.650098 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9e07b07-5f94-4787-88fb-2daf876f572c-config-data\") pod \"horizon-57b6458b8f-mj9hj\" (UID: \"e9e07b07-5f94-4787-88fb-2daf876f572c\") " pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.654769 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e9e07b07-5f94-4787-88fb-2daf876f572c-horizon-secret-key\") pod \"horizon-57b6458b8f-mj9hj\" (UID: \"e9e07b07-5f94-4787-88fb-2daf876f572c\") " pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.664696 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9l2s\" 
(UniqueName: \"kubernetes.io/projected/e9e07b07-5f94-4787-88fb-2daf876f572c-kube-api-access-p9l2s\") pod \"horizon-57b6458b8f-mj9hj\" (UID: \"e9e07b07-5f94-4787-88fb-2daf876f572c\") " pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:10:40 crc kubenswrapper[4895]: W1206 09:10:40.758079 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc53114e3_eac5_40d1_ac02_35bb2f216687.slice/crio-ad04ceed1f1c1659b43821529dee97e428e9868ead7508d414a3770c5d397132 WatchSource:0}: Error finding container ad04ceed1f1c1659b43821529dee97e428e9868ead7508d414a3770c5d397132: Status 404 returned error can't find the container with id ad04ceed1f1c1659b43821529dee97e428e9868ead7508d414a3770c5d397132 Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.764621 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c84d5b777-ndnrf"] Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.845213 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.897313 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bcd4cd89c-hk5gh" event={"ID":"a087a078-9055-4c51-a4cc-7c3d86d4fecd","Type":"ContainerStarted","Data":"b02798a6f2a89c269fa667da992a7516dd94f152b187d168e9b0c9cea479f993"} Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.900612 4895 generic.go:334] "Generic (PLEG): container finished" podID="a81b9697-f347-4805-99b1-eb21de602965" containerID="590d89714fa8b5dce42072c30d38b26490195aec7dfa7937421207e7768ab369" exitCode=143 Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.900664 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a81b9697-f347-4805-99b1-eb21de602965","Type":"ContainerDied","Data":"590d89714fa8b5dce42072c30d38b26490195aec7dfa7937421207e7768ab369"} Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.901939 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c84d5b777-ndnrf" event={"ID":"c53114e3-eac5-40d1-ac02-35bb2f216687","Type":"ContainerStarted","Data":"ad04ceed1f1c1659b43821529dee97e428e9868ead7508d414a3770c5d397132"} Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.903779 4895 generic.go:334] "Generic (PLEG): container finished" podID="09a717e6-54eb-4b96-903d-7f4c4eb4bb96" containerID="dd6fe7684aef1ce8a38dbb7087ff6e70616cc01823cab4032a10f68987b10e32" exitCode=143 Dec 06 09:10:40 crc kubenswrapper[4895]: I1206 09:10:40.903812 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09a717e6-54eb-4b96-903d-7f4c4eb4bb96","Type":"ContainerDied","Data":"dd6fe7684aef1ce8a38dbb7087ff6e70616cc01823cab4032a10f68987b10e32"} Dec 06 09:10:41 crc kubenswrapper[4895]: I1206 09:10:41.348901 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57b6458b8f-mj9hj"] Dec 06 09:10:41 crc kubenswrapper[4895]: I1206 09:10:41.915678 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57b6458b8f-mj9hj" event={"ID":"e9e07b07-5f94-4787-88fb-2daf876f572c","Type":"ContainerStarted","Data":"fe21735b8543cf53f899d42bf7f92e032b7c08f8efebf3a838d18d393703a188"} Dec 06 09:10:42 crc kubenswrapper[4895]: I1206 09:10:42.051008 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef" Dec 06 09:10:42 crc 
kubenswrapper[4895]: E1206 09:10:42.051241 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.580552 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.608615 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-scripts\") pod \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.608683 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-config-data\") pod \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.608722 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvptq\" (UniqueName: \"kubernetes.io/projected/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-kube-api-access-mvptq\") pod \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.608763 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-ceph\") pod \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.608796 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-logs\") pod \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.608841 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-httpd-run\") pod \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.608885 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-combined-ca-bundle\") pod \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\" (UID: \"09a717e6-54eb-4b96-903d-7f4c4eb4bb96\") " Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.616525 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-logs" (OuterVolumeSpecName: "logs") pod "09a717e6-54eb-4b96-903d-7f4c4eb4bb96" (UID: "09a717e6-54eb-4b96-903d-7f4c4eb4bb96"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.616795 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "09a717e6-54eb-4b96-903d-7f4c4eb4bb96" (UID: "09a717e6-54eb-4b96-903d-7f4c4eb4bb96"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.618562 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-kube-api-access-mvptq" (OuterVolumeSpecName: "kube-api-access-mvptq") pod "09a717e6-54eb-4b96-903d-7f4c4eb4bb96" (UID: "09a717e6-54eb-4b96-903d-7f4c4eb4bb96"). InnerVolumeSpecName "kube-api-access-mvptq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.618845 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-scripts" (OuterVolumeSpecName: "scripts") pod "09a717e6-54eb-4b96-903d-7f4c4eb4bb96" (UID: "09a717e6-54eb-4b96-903d-7f4c4eb4bb96"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.619321 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-ceph" (OuterVolumeSpecName: "ceph") pod "09a717e6-54eb-4b96-903d-7f4c4eb4bb96" (UID: "09a717e6-54eb-4b96-903d-7f4c4eb4bb96"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.662955 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09a717e6-54eb-4b96-903d-7f4c4eb4bb96" (UID: "09a717e6-54eb-4b96-903d-7f4c4eb4bb96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.671132 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.710699 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8zx5\" (UniqueName: \"kubernetes.io/projected/a81b9697-f347-4805-99b1-eb21de602965-kube-api-access-d8zx5\") pod \"a81b9697-f347-4805-99b1-eb21de602965\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.710753 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a81b9697-f347-4805-99b1-eb21de602965-ceph\") pod \"a81b9697-f347-4805-99b1-eb21de602965\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.710832 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81b9697-f347-4805-99b1-eb21de602965-combined-ca-bundle\") pod \"a81b9697-f347-4805-99b1-eb21de602965\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.710875 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a81b9697-f347-4805-99b1-eb21de602965-httpd-run\") pod \"a81b9697-f347-4805-99b1-eb21de602965\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.710931 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81b9697-f347-4805-99b1-eb21de602965-config-data\") pod \"a81b9697-f347-4805-99b1-eb21de602965\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.710991 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a81b9697-f347-4805-99b1-eb21de602965-scripts\") pod \"a81b9697-f347-4805-99b1-eb21de602965\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.711013 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a81b9697-f347-4805-99b1-eb21de602965-logs\") pod \"a81b9697-f347-4805-99b1-eb21de602965\" (UID: \"a81b9697-f347-4805-99b1-eb21de602965\") " Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.711711 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.711732 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.711744 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvptq\" (UniqueName: \"kubernetes.io/projected/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-kube-api-access-mvptq\") on node \"crc\" DevicePath \"\"" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.711760 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 
09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.711773 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-logs\") on node \"crc\" DevicePath \"\"" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.711784 4895 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.711871 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a81b9697-f347-4805-99b1-eb21de602965-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a81b9697-f347-4805-99b1-eb21de602965" (UID: "a81b9697-f347-4805-99b1-eb21de602965"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.712236 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a81b9697-f347-4805-99b1-eb21de602965-logs" (OuterVolumeSpecName: "logs") pod "a81b9697-f347-4805-99b1-eb21de602965" (UID: "a81b9697-f347-4805-99b1-eb21de602965"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.713902 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a81b9697-f347-4805-99b1-eb21de602965-kube-api-access-d8zx5" (OuterVolumeSpecName: "kube-api-access-d8zx5") pod "a81b9697-f347-4805-99b1-eb21de602965" (UID: "a81b9697-f347-4805-99b1-eb21de602965"). InnerVolumeSpecName "kube-api-access-d8zx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.714645 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a81b9697-f347-4805-99b1-eb21de602965-ceph" (OuterVolumeSpecName: "ceph") pod "a81b9697-f347-4805-99b1-eb21de602965" (UID: "a81b9697-f347-4805-99b1-eb21de602965"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.715311 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-config-data" (OuterVolumeSpecName: "config-data") pod "09a717e6-54eb-4b96-903d-7f4c4eb4bb96" (UID: "09a717e6-54eb-4b96-903d-7f4c4eb4bb96"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.719145 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81b9697-f347-4805-99b1-eb21de602965-scripts" (OuterVolumeSpecName: "scripts") pod "a81b9697-f347-4805-99b1-eb21de602965" (UID: "a81b9697-f347-4805-99b1-eb21de602965"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.764219 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81b9697-f347-4805-99b1-eb21de602965-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a81b9697-f347-4805-99b1-eb21de602965" (UID: "a81b9697-f347-4805-99b1-eb21de602965"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.775598 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81b9697-f347-4805-99b1-eb21de602965-config-data" (OuterVolumeSpecName: "config-data") pod "a81b9697-f347-4805-99b1-eb21de602965" (UID: "a81b9697-f347-4805-99b1-eb21de602965"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.813450 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a717e6-54eb-4b96-903d-7f4c4eb4bb96-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.813713 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a81b9697-f347-4805-99b1-eb21de602965-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.813781 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a81b9697-f347-4805-99b1-eb21de602965-logs\") on node \"crc\" DevicePath \"\"" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.813917 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8zx5\" (UniqueName: \"kubernetes.io/projected/a81b9697-f347-4805-99b1-eb21de602965-kube-api-access-d8zx5\") on node \"crc\" DevicePath \"\"" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.813975 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a81b9697-f347-4805-99b1-eb21de602965-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.814026 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81b9697-f347-4805-99b1-eb21de602965-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.814083 4895 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a81b9697-f347-4805-99b1-eb21de602965-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.814141 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81b9697-f347-4805-99b1-eb21de602965-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.962331 4895 generic.go:334] "Generic (PLEG): container finished" podID="a81b9697-f347-4805-99b1-eb21de602965" containerID="8b11a91c8849de0b070469b77d03f2e11b48f2b390c63035ca9b74652ce89683" exitCode=0 Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.962400 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a81b9697-f347-4805-99b1-eb21de602965","Type":"ContainerDied","Data":"8b11a91c8849de0b070469b77d03f2e11b48f2b390c63035ca9b74652ce89683"} Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.962428 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a81b9697-f347-4805-99b1-eb21de602965","Type":"ContainerDied","Data":"6d96b32cd64f7ba9e6e6c588c6fa2d51c057d65cc9f672ff33826ddc8a7efda4"} Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.962445 4895 scope.go:117] "RemoveContainer" 
containerID="8b11a91c8849de0b070469b77d03f2e11b48f2b390c63035ca9b74652ce89683" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.962632 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.968978 4895 generic.go:334] "Generic (PLEG): container finished" podID="09a717e6-54eb-4b96-903d-7f4c4eb4bb96" containerID="6e338e8680df6ac273ad153cfb2ccd95cb7476e8eada9e81b9c30ec7cefa067d" exitCode=0 Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.969026 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09a717e6-54eb-4b96-903d-7f4c4eb4bb96","Type":"ContainerDied","Data":"6e338e8680df6ac273ad153cfb2ccd95cb7476e8eada9e81b9c30ec7cefa067d"} Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.969056 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09a717e6-54eb-4b96-903d-7f4c4eb4bb96","Type":"ContainerDied","Data":"585abe516277927de89a39ae8c3fc3226d1a491751e77a18a48333d6d4a5a84b"} Dec 06 09:10:43 crc kubenswrapper[4895]: I1206 09:10:43.969117 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.000595 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.015333 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.030534 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 09:10:44 crc kubenswrapper[4895]: E1206 09:10:44.030991 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a81b9697-f347-4805-99b1-eb21de602965" containerName="glance-log" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.031007 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a81b9697-f347-4805-99b1-eb21de602965" containerName="glance-log" Dec 06 09:10:44 crc kubenswrapper[4895]: E1206 09:10:44.031017 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09a717e6-54eb-4b96-903d-7f4c4eb4bb96" containerName="glance-log" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.031024 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="09a717e6-54eb-4b96-903d-7f4c4eb4bb96" containerName="glance-log" Dec 06 09:10:44 crc kubenswrapper[4895]: E1206 09:10:44.031043 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a81b9697-f347-4805-99b1-eb21de602965" containerName="glance-httpd" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.031050 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a81b9697-f347-4805-99b1-eb21de602965" containerName="glance-httpd" Dec 06 09:10:44 crc kubenswrapper[4895]: E1206 09:10:44.031097 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09a717e6-54eb-4b96-903d-7f4c4eb4bb96" containerName="glance-httpd" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.031104 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="09a717e6-54eb-4b96-903d-7f4c4eb4bb96" containerName="glance-httpd" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.031282 4895 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="09a717e6-54eb-4b96-903d-7f4c4eb4bb96" containerName="glance-log" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.031299 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a81b9697-f347-4805-99b1-eb21de602965" containerName="glance-httpd" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.031310 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a81b9697-f347-4805-99b1-eb21de602965" containerName="glance-log" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.031326 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="09a717e6-54eb-4b96-903d-7f4c4eb4bb96" containerName="glance-httpd" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.032337 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.050604 4895 scope.go:117] "RemoveContainer" containerID="590d89714fa8b5dce42072c30d38b26490195aec7dfa7937421207e7768ab369" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.051003 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8t2wf" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.051861 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.051905 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.088877 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a81b9697-f347-4805-99b1-eb21de602965" path="/var/lib/kubelet/pods/a81b9697-f347-4805-99b1-eb21de602965/volumes" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.089653 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.089693 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.117317 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.134743 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.137028 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.138921 4895 scope.go:117] "RemoveContainer" containerID="8b11a91c8849de0b070469b77d03f2e11b48f2b390c63035ca9b74652ce89683" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.140607 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 06 09:10:44 crc kubenswrapper[4895]: E1206 09:10:44.140630 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b11a91c8849de0b070469b77d03f2e11b48f2b390c63035ca9b74652ce89683\": container with ID starting with 8b11a91c8849de0b070469b77d03f2e11b48f2b390c63035ca9b74652ce89683 not found: ID does not exist" containerID="8b11a91c8849de0b070469b77d03f2e11b48f2b390c63035ca9b74652ce89683" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.140781 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b11a91c8849de0b070469b77d03f2e11b48f2b390c63035ca9b74652ce89683"} err="failed to get container status \"8b11a91c8849de0b070469b77d03f2e11b48f2b390c63035ca9b74652ce89683\": rpc error: code = NotFound desc = could not find container \"8b11a91c8849de0b070469b77d03f2e11b48f2b390c63035ca9b74652ce89683\": container with ID starting with 8b11a91c8849de0b070469b77d03f2e11b48f2b390c63035ca9b74652ce89683 not found: ID does not exist" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.140823 4895 scope.go:117] "RemoveContainer" containerID="590d89714fa8b5dce42072c30d38b26490195aec7dfa7937421207e7768ab369" Dec 06 09:10:44 crc kubenswrapper[4895]: E1206 09:10:44.145753 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"590d89714fa8b5dce42072c30d38b26490195aec7dfa7937421207e7768ab369\": container with ID starting with 590d89714fa8b5dce42072c30d38b26490195aec7dfa7937421207e7768ab369 not found: ID does not exist" containerID="590d89714fa8b5dce42072c30d38b26490195aec7dfa7937421207e7768ab369" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.145797 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"590d89714fa8b5dce42072c30d38b26490195aec7dfa7937421207e7768ab369"} err="failed to get container status \"590d89714fa8b5dce42072c30d38b26490195aec7dfa7937421207e7768ab369\": rpc error: code = NotFound desc = could not find container \"590d89714fa8b5dce42072c30d38b26490195aec7dfa7937421207e7768ab369\": container with ID starting with 590d89714fa8b5dce42072c30d38b26490195aec7dfa7937421207e7768ab369 not found: ID does not exist" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.145831 4895 scope.go:117] "RemoveContainer" containerID="6e338e8680df6ac273ad153cfb2ccd95cb7476e8eada9e81b9c30ec7cefa067d" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.156421 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.202892 4895 scope.go:117] "RemoveContainer" containerID="dd6fe7684aef1ce8a38dbb7087ff6e70616cc01823cab4032a10f68987b10e32" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.226441 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00465a36-698f-4971-ae7b-8f4c38423896-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"00465a36-698f-4971-ae7b-8f4c38423896\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.226525 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blbcv\" (UniqueName: \"kubernetes.io/projected/00465a36-698f-4971-ae7b-8f4c38423896-kube-api-access-blbcv\") pod \"glance-default-internal-api-0\" (UID: \"00465a36-698f-4971-ae7b-8f4c38423896\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.226826 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00465a36-698f-4971-ae7b-8f4c38423896-config-data\") pod \"glance-default-internal-api-0\" (UID: \"00465a36-698f-4971-ae7b-8f4c38423896\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.226905 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00465a36-698f-4971-ae7b-8f4c38423896-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"00465a36-698f-4971-ae7b-8f4c38423896\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.226946 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00465a36-698f-4971-ae7b-8f4c38423896-logs\") pod \"glance-default-internal-api-0\" (UID: \"00465a36-698f-4971-ae7b-8f4c38423896\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.227117 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/00465a36-698f-4971-ae7b-8f4c38423896-ceph\") pod \"glance-default-internal-api-0\" (UID: \"00465a36-698f-4971-ae7b-8f4c38423896\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.227449 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00465a36-698f-4971-ae7b-8f4c38423896-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"00465a36-698f-4971-ae7b-8f4c38423896\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.341806 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00465a36-698f-4971-ae7b-8f4c38423896-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"00465a36-698f-4971-ae7b-8f4c38423896\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.342221 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37f3bb32-a888-4492-ae83-1e0302694950-scripts\") pod \"glance-default-external-api-0\" (UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.342279 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwvdk\" (UniqueName: 
\"kubernetes.io/projected/37f3bb32-a888-4492-ae83-1e0302694950-kube-api-access-rwvdk\") pod \"glance-default-external-api-0\" (UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.342323 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00465a36-698f-4971-ae7b-8f4c38423896-scripts\") pod \"glance-default-internal-api-0\" (UID: \"00465a36-698f-4971-ae7b-8f4c38423896\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.342359 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37f3bb32-a888-4492-ae83-1e0302694950-config-data\") pod \"glance-default-external-api-0\" (UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.342368 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00465a36-698f-4971-ae7b-8f4c38423896-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"00465a36-698f-4971-ae7b-8f4c38423896\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.342379 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37f3bb32-a888-4492-ae83-1e0302694950-logs\") pod \"glance-default-external-api-0\" (UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.342562 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37f3bb32-a888-4492-ae83-1e0302694950-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.342595 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/37f3bb32-a888-4492-ae83-1e0302694950-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.342627 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blbcv\" (UniqueName: \"kubernetes.io/projected/00465a36-698f-4971-ae7b-8f4c38423896-kube-api-access-blbcv\") pod \"glance-default-internal-api-0\" (UID: \"00465a36-698f-4971-ae7b-8f4c38423896\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.342698 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00465a36-698f-4971-ae7b-8f4c38423896-config-data\") pod \"glance-default-internal-api-0\" (UID: \"00465a36-698f-4971-ae7b-8f4c38423896\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.342752 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/00465a36-698f-4971-ae7b-8f4c38423896-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"00465a36-698f-4971-ae7b-8f4c38423896\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.342780 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00465a36-698f-4971-ae7b-8f4c38423896-logs\") pod \"glance-default-internal-api-0\" (UID: \"00465a36-698f-4971-ae7b-8f4c38423896\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.342852 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/00465a36-698f-4971-ae7b-8f4c38423896-ceph\") pod \"glance-default-internal-api-0\" (UID: \"00465a36-698f-4971-ae7b-8f4c38423896\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.342927 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/37f3bb32-a888-4492-ae83-1e0302694950-ceph\") pod \"glance-default-external-api-0\" (UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.343369 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00465a36-698f-4971-ae7b-8f4c38423896-logs\") pod \"glance-default-internal-api-0\" (UID: \"00465a36-698f-4971-ae7b-8f4c38423896\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.354060 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/00465a36-698f-4971-ae7b-8f4c38423896-ceph\") pod \"glance-default-internal-api-0\" (UID: \"00465a36-698f-4971-ae7b-8f4c38423896\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.360912 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00465a36-698f-4971-ae7b-8f4c38423896-scripts\") pod \"glance-default-internal-api-0\" (UID: \"00465a36-698f-4971-ae7b-8f4c38423896\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.361324 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00465a36-698f-4971-ae7b-8f4c38423896-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"00465a36-698f-4971-ae7b-8f4c38423896\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.363043 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blbcv\" (UniqueName: \"kubernetes.io/projected/00465a36-698f-4971-ae7b-8f4c38423896-kube-api-access-blbcv\") pod \"glance-default-internal-api-0\" (UID: \"00465a36-698f-4971-ae7b-8f4c38423896\") " pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.378541 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00465a36-698f-4971-ae7b-8f4c38423896-config-data\") pod \"glance-default-internal-api-0\" (UID: \"00465a36-698f-4971-ae7b-8f4c38423896\") " 
pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.429157 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.444835 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37f3bb32-a888-4492-ae83-1e0302694950-logs\") pod \"glance-default-external-api-0\" (UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.444927 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37f3bb32-a888-4492-ae83-1e0302694950-config-data\") pod \"glance-default-external-api-0\" (UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.444968 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37f3bb32-a888-4492-ae83-1e0302694950-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.444985 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/37f3bb32-a888-4492-ae83-1e0302694950-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.445069 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/37f3bb32-a888-4492-ae83-1e0302694950-ceph\") pod \"glance-default-external-api-0\" (UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.445116 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37f3bb32-a888-4492-ae83-1e0302694950-logs\") pod \"glance-default-external-api-0\" (UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.445136 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37f3bb32-a888-4492-ae83-1e0302694950-scripts\") pod \"glance-default-external-api-0\" (UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.445465 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/37f3bb32-a888-4492-ae83-1e0302694950-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.446522 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwvdk\" (UniqueName: \"kubernetes.io/projected/37f3bb32-a888-4492-ae83-1e0302694950-kube-api-access-rwvdk\") pod \"glance-default-external-api-0\" 
(UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.448309 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/37f3bb32-a888-4492-ae83-1e0302694950-ceph\") pod \"glance-default-external-api-0\" (UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.448553 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37f3bb32-a888-4492-ae83-1e0302694950-config-data\") pod \"glance-default-external-api-0\" (UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.448956 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37f3bb32-a888-4492-ae83-1e0302694950-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.451036 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37f3bb32-a888-4492-ae83-1e0302694950-scripts\") pod \"glance-default-external-api-0\" (UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.467553 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwvdk\" (UniqueName: \"kubernetes.io/projected/37f3bb32-a888-4492-ae83-1e0302694950-kube-api-access-rwvdk\") pod \"glance-default-external-api-0\" (UID: \"37f3bb32-a888-4492-ae83-1e0302694950\") " pod="openstack/glance-default-external-api-0" Dec 06 09:10:44 crc kubenswrapper[4895]: I1206 09:10:44.758185 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 09:10:46 crc kubenswrapper[4895]: I1206 09:10:46.068445 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09a717e6-54eb-4b96-903d-7f4c4eb4bb96" path="/var/lib/kubelet/pods/09a717e6-54eb-4b96-903d-7f4c4eb4bb96/volumes" Dec 06 09:10:49 crc kubenswrapper[4895]: I1206 09:10:49.088836 4895 scope.go:117] "RemoveContainer" containerID="6e338e8680df6ac273ad153cfb2ccd95cb7476e8eada9e81b9c30ec7cefa067d" Dec 06 09:10:49 crc kubenswrapper[4895]: E1206 09:10:49.090262 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e338e8680df6ac273ad153cfb2ccd95cb7476e8eada9e81b9c30ec7cefa067d\": container with ID starting with 6e338e8680df6ac273ad153cfb2ccd95cb7476e8eada9e81b9c30ec7cefa067d not found: ID does not exist" containerID="6e338e8680df6ac273ad153cfb2ccd95cb7476e8eada9e81b9c30ec7cefa067d" Dec 06 09:10:49 crc kubenswrapper[4895]: I1206 09:10:49.090320 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e338e8680df6ac273ad153cfb2ccd95cb7476e8eada9e81b9c30ec7cefa067d"} err="failed to get container status \"6e338e8680df6ac273ad153cfb2ccd95cb7476e8eada9e81b9c30ec7cefa067d\": rpc error: code = NotFound desc = could not find container \"6e338e8680df6ac273ad153cfb2ccd95cb7476e8eada9e81b9c30ec7cefa067d\": container with ID starting with 6e338e8680df6ac273ad153cfb2ccd95cb7476e8eada9e81b9c30ec7cefa067d not found: ID does not exist" Dec 06 09:10:49 crc kubenswrapper[4895]: I1206 09:10:49.090359 4895 scope.go:117] "RemoveContainer" containerID="dd6fe7684aef1ce8a38dbb7087ff6e70616cc01823cab4032a10f68987b10e32" Dec 06 09:10:49 crc kubenswrapper[4895]: E1206 09:10:49.091506 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd6fe7684aef1ce8a38dbb7087ff6e70616cc01823cab4032a10f68987b10e32\": container with ID starting with dd6fe7684aef1ce8a38dbb7087ff6e70616cc01823cab4032a10f68987b10e32 not found: ID does not exist" containerID="dd6fe7684aef1ce8a38dbb7087ff6e70616cc01823cab4032a10f68987b10e32" Dec 06 09:10:49 crc kubenswrapper[4895]: I1206 09:10:49.091564 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd6fe7684aef1ce8a38dbb7087ff6e70616cc01823cab4032a10f68987b10e32"} err="failed to get container status \"dd6fe7684aef1ce8a38dbb7087ff6e70616cc01823cab4032a10f68987b10e32\": rpc error: code = NotFound desc = could not find container \"dd6fe7684aef1ce8a38dbb7087ff6e70616cc01823cab4032a10f68987b10e32\": container with ID starting with dd6fe7684aef1ce8a38dbb7087ff6e70616cc01823cab4032a10f68987b10e32 not found: ID does not exist" Dec 06 09:10:49 crc kubenswrapper[4895]: W1206 09:10:49.885026 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37f3bb32_a888_4492_ae83_1e0302694950.slice/crio-fba524948e5fb82883d21692fbc1b2e1bdac6747a26dd864dc5579573a48ef9c WatchSource:0}: Error finding container fba524948e5fb82883d21692fbc1b2e1bdac6747a26dd864dc5579573a48ef9c: Status 404 returned error can't find the container with id fba524948e5fb82883d21692fbc1b2e1bdac6747a26dd864dc5579573a48ef9c Dec 06 09:10:49 crc kubenswrapper[4895]: I1206 09:10:49.887101 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 09:10:49 crc kubenswrapper[4895]: W1206 
09:10:49.982272 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00465a36_698f_4971_ae7b_8f4c38423896.slice/crio-0359aa9d95c951edac2861acfdd08745ead9b9ba1e2dc9adee54ffa7126bb5c0 WatchSource:0}: Error finding container 0359aa9d95c951edac2861acfdd08745ead9b9ba1e2dc9adee54ffa7126bb5c0: Status 404 returned error can't find the container with id 0359aa9d95c951edac2861acfdd08745ead9b9ba1e2dc9adee54ffa7126bb5c0 Dec 06 09:10:49 crc kubenswrapper[4895]: I1206 09:10:49.983079 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 09:10:50 crc kubenswrapper[4895]: I1206 09:10:50.064384 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bcd4cd89c-hk5gh" podUID="a087a078-9055-4c51-a4cc-7c3d86d4fecd" containerName="horizon-log" containerID="cri-o://f03583912fa76b01a3b88f036215cc33db60de04c134b6b335b3da261ff05871" gracePeriod=30 Dec 06 09:10:50 crc kubenswrapper[4895]: I1206 09:10:50.064706 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bcd4cd89c-hk5gh" podUID="a087a078-9055-4c51-a4cc-7c3d86d4fecd" containerName="horizon" containerID="cri-o://7a50a6494611d40e5817be8ef65e8695911fe2c2cd0e4cd03b9a4e3fc3f2ffa5" gracePeriod=30 Dec 06 09:10:50 crc kubenswrapper[4895]: I1206 09:10:50.070331 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c84d5b777-ndnrf" event={"ID":"c53114e3-eac5-40d1-ac02-35bb2f216687","Type":"ContainerStarted","Data":"707815af6857c5edcdccbd514c03ea2a1818ad95742d1d3456c718828e4d8b2c"} Dec 06 09:10:50 crc kubenswrapper[4895]: I1206 09:10:50.070392 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c84d5b777-ndnrf" event={"ID":"c53114e3-eac5-40d1-ac02-35bb2f216687","Type":"ContainerStarted","Data":"db40b8fe12626d3474b39b7fea3311d64f78121aa32ccf41d3544b50efdc0dcc"} Dec 06 09:10:50 crc kubenswrapper[4895]: I1206 09:10:50.070413 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bcd4cd89c-hk5gh" Dec 06 09:10:50 crc kubenswrapper[4895]: I1206 09:10:50.070428 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"37f3bb32-a888-4492-ae83-1e0302694950","Type":"ContainerStarted","Data":"fba524948e5fb82883d21692fbc1b2e1bdac6747a26dd864dc5579573a48ef9c"} Dec 06 09:10:50 crc kubenswrapper[4895]: I1206 09:10:50.070494 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00465a36-698f-4971-ae7b-8f4c38423896","Type":"ContainerStarted","Data":"0359aa9d95c951edac2861acfdd08745ead9b9ba1e2dc9adee54ffa7126bb5c0"} Dec 06 09:10:50 crc kubenswrapper[4895]: I1206 09:10:50.070509 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bcd4cd89c-hk5gh" event={"ID":"a087a078-9055-4c51-a4cc-7c3d86d4fecd","Type":"ContainerStarted","Data":"7a50a6494611d40e5817be8ef65e8695911fe2c2cd0e4cd03b9a4e3fc3f2ffa5"} Dec 06 09:10:50 crc kubenswrapper[4895]: I1206 09:10:50.070520 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bcd4cd89c-hk5gh" event={"ID":"a087a078-9055-4c51-a4cc-7c3d86d4fecd","Type":"ContainerStarted","Data":"f03583912fa76b01a3b88f036215cc33db60de04c134b6b335b3da261ff05871"} Dec 06 09:10:50 crc kubenswrapper[4895]: I1206 09:10:50.070531 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-57b6458b8f-mj9hj" event={"ID":"e9e07b07-5f94-4787-88fb-2daf876f572c","Type":"ContainerStarted","Data":"cca43bf0d3e36cab245952ebee5fa85a6d032cc75c467d766338ca9bf240ed77"} Dec 06 09:10:50 crc kubenswrapper[4895]: I1206 09:10:50.070550 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57b6458b8f-mj9hj" event={"ID":"e9e07b07-5f94-4787-88fb-2daf876f572c","Type":"ContainerStarted","Data":"2c1e2855057785f4adb79b59abc1e8177e81c638b3c7826cb7b13626674bfe16"} Dec 06 09:10:50 crc kubenswrapper[4895]: I1206 09:10:50.078972 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7c84d5b777-ndnrf" podStartSLOduration=2.42082023 podStartE2EDuration="11.078953592s" podCreationTimestamp="2025-12-06 09:10:39 +0000 UTC" firstStartedPulling="2025-12-06 09:10:40.762564585 +0000 UTC m=+8003.163953465" lastFinishedPulling="2025-12-06 09:10:49.420697957 +0000 UTC m=+8011.822086827" observedRunningTime="2025-12-06 09:10:50.073949997 +0000 UTC m=+8012.475338867" watchObservedRunningTime="2025-12-06 09:10:50.078953592 +0000 UTC m=+8012.480342462" Dec 06 09:10:50 crc kubenswrapper[4895]: I1206 09:10:50.104948 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7bcd4cd89c-hk5gh" podStartSLOduration=2.377038244 podStartE2EDuration="11.10493445s" podCreationTimestamp="2025-12-06 09:10:39 +0000 UTC" firstStartedPulling="2025-12-06 09:10:40.62615537 +0000 UTC m=+8003.027544240" lastFinishedPulling="2025-12-06 09:10:49.354051576 +0000 UTC m=+8011.755440446" observedRunningTime="2025-12-06 09:10:50.10231379 +0000 UTC m=+8012.503702660" watchObservedRunningTime="2025-12-06 09:10:50.10493445 +0000 UTC m=+8012.506323320" Dec 06 09:10:50 crc kubenswrapper[4895]: I1206 09:10:50.122055 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-57b6458b8f-mj9hj" podStartSLOduration=2.128453002 podStartE2EDuration="10.12203808s" podCreationTimestamp="2025-12-06 09:10:40 +0000 UTC" firstStartedPulling="2025-12-06 09:10:41.363292384 +0000 UTC m=+8003.764681254" lastFinishedPulling="2025-12-06 09:10:49.356877462 +0000 UTC m=+8011.758266332" observedRunningTime="2025-12-06 09:10:50.118887285 +0000 UTC m=+8012.520276155" watchObservedRunningTime="2025-12-06 09:10:50.12203808 +0000 UTC m=+8012.523426950" Dec 06 09:10:50 crc kubenswrapper[4895]: I1206 09:10:50.201825 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:10:50 crc kubenswrapper[4895]: I1206 09:10:50.202100 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:10:50 crc kubenswrapper[4895]: I1206 09:10:50.846327 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:10:50 crc kubenswrapper[4895]: I1206 09:10:50.846829 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:10:51 crc kubenswrapper[4895]: I1206 09:10:51.077226 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"37f3bb32-a888-4492-ae83-1e0302694950","Type":"ContainerStarted","Data":"53e54fc708b3bb5d59bef2ceaa54ee1070f3dedcf8eeffd00163946d43a2e32d"} Dec 06 09:10:51 crc kubenswrapper[4895]: I1206 09:10:51.078661 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"00465a36-698f-4971-ae7b-8f4c38423896","Type":"ContainerStarted","Data":"a36ba313b985a2e70602010f3ca7ffade1c614aaa2dc47518a907bee86364118"} Dec 06 09:10:52 crc kubenswrapper[4895]: I1206 09:10:52.098127 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"37f3bb32-a888-4492-ae83-1e0302694950","Type":"ContainerStarted","Data":"415707ae198e56c81afc71c1edc7be89ebd19c9c729f3c4968b02c7f4ae012ca"} Dec 06 09:10:52 crc kubenswrapper[4895]: I1206 09:10:52.110014 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00465a36-698f-4971-ae7b-8f4c38423896","Type":"ContainerStarted","Data":"8ea0625137a6fdebec7c52dc6f515100ef8a09e5424b96cc59724809010e9b11"} Dec 06 09:10:52 crc kubenswrapper[4895]: I1206 09:10:52.177315 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.177291936 podStartE2EDuration="9.177291936s" podCreationTimestamp="2025-12-06 09:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:10:52.166728503 +0000 UTC m=+8014.568117413" watchObservedRunningTime="2025-12-06 09:10:52.177291936 +0000 UTC m=+8014.578680796" Dec 06 09:10:52 crc kubenswrapper[4895]: I1206 09:10:52.181371 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.181356666 podStartE2EDuration="8.181356666s" podCreationTimestamp="2025-12-06 09:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:10:52.138408712 +0000 UTC m=+8014.539797622" watchObservedRunningTime="2025-12-06 09:10:52.181356666 +0000 UTC m=+8014.582745536" Dec 06 09:10:54 crc kubenswrapper[4895]: I1206 09:10:54.050833 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef" Dec 06 09:10:54 crc kubenswrapper[4895]: E1206 09:10:54.051243 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:10:54 crc kubenswrapper[4895]: I1206 09:10:54.429555 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 06 09:10:54 crc kubenswrapper[4895]: I1206 09:10:54.429901 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 06 09:10:54 crc kubenswrapper[4895]: I1206 09:10:54.469935 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 06 09:10:54 crc kubenswrapper[4895]: I1206 09:10:54.486600 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 06 09:10:54 crc kubenswrapper[4895]: I1206 09:10:54.769100 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 06 09:10:54 crc 
kubenswrapper[4895]: I1206 09:10:54.769368 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 06 09:10:54 crc kubenswrapper[4895]: I1206 09:10:54.802667 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 06 09:10:54 crc kubenswrapper[4895]: I1206 09:10:54.811544 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 06 09:10:55 crc kubenswrapper[4895]: I1206 09:10:55.150248 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 06 09:10:55 crc kubenswrapper[4895]: I1206 09:10:55.150280 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 06 09:10:55 crc kubenswrapper[4895]: I1206 09:10:55.150291 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 06 09:10:55 crc kubenswrapper[4895]: I1206 09:10:55.150487 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 06 09:10:57 crc kubenswrapper[4895]: I1206 09:10:57.185079 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 09:10:57 crc kubenswrapper[4895]: I1206 09:10:57.186810 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 09:10:57 crc kubenswrapper[4895]: I1206 09:10:57.787196 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 06 09:10:57 crc kubenswrapper[4895]: I1206 09:10:57.885281 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 06 09:10:58 crc kubenswrapper[4895]: I1206 09:10:58.197569 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 09:10:58 crc kubenswrapper[4895]: I1206 09:10:58.197612 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 09:10:58 crc kubenswrapper[4895]: I1206 09:10:58.261058 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 06 09:10:58 crc kubenswrapper[4895]: I1206 09:10:58.268787 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 06 09:11:00 crc kubenswrapper[4895]: I1206 09:11:00.176671 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7c84d5b777-ndnrf" podUID="c53114e3-eac5-40d1-ac02-35bb2f216687" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.111:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.111:8080: connect: connection refused" Dec 06 09:11:00 crc kubenswrapper[4895]: I1206 09:11:00.848153 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57b6458b8f-mj9hj" podUID="e9e07b07-5f94-4787-88fb-2daf876f572c" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.112:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.112:8080: connect: connection refused" Dec 06 09:11:06 crc kubenswrapper[4895]: I1206 09:11:06.056773 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef" Dec 06 09:11:06 crc kubenswrapper[4895]: E1206 
09:11:06.057406 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:11:12 crc kubenswrapper[4895]: I1206 09:11:12.295232 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:11:12 crc kubenswrapper[4895]: I1206 09:11:12.863274 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:11:13 crc kubenswrapper[4895]: I1206 09:11:13.973928 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7c84d5b777-ndnrf" Dec 06 09:11:14 crc kubenswrapper[4895]: I1206 09:11:14.683039 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:11:14 crc kubenswrapper[4895]: I1206 09:11:14.770332 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c84d5b777-ndnrf"] Dec 06 09:11:14 crc kubenswrapper[4895]: I1206 09:11:14.770611 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7c84d5b777-ndnrf" podUID="c53114e3-eac5-40d1-ac02-35bb2f216687" containerName="horizon-log" containerID="cri-o://db40b8fe12626d3474b39b7fea3311d64f78121aa32ccf41d3544b50efdc0dcc" gracePeriod=30 Dec 06 09:11:14 crc kubenswrapper[4895]: I1206 09:11:14.770743 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7c84d5b777-ndnrf" podUID="c53114e3-eac5-40d1-ac02-35bb2f216687" containerName="horizon" containerID="cri-o://707815af6857c5edcdccbd514c03ea2a1818ad95742d1d3456c718828e4d8b2c" gracePeriod=30 Dec 06 09:11:18 crc kubenswrapper[4895]: I1206 09:11:18.415358 4895 generic.go:334] "Generic (PLEG): container finished" podID="c53114e3-eac5-40d1-ac02-35bb2f216687" containerID="707815af6857c5edcdccbd514c03ea2a1818ad95742d1d3456c718828e4d8b2c" exitCode=0 Dec 06 09:11:18 crc kubenswrapper[4895]: I1206 09:11:18.415620 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c84d5b777-ndnrf" event={"ID":"c53114e3-eac5-40d1-ac02-35bb2f216687","Type":"ContainerDied","Data":"707815af6857c5edcdccbd514c03ea2a1818ad95742d1d3456c718828e4d8b2c"} Dec 06 09:11:19 crc kubenswrapper[4895]: I1206 09:11:19.051769 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef" Dec 06 09:11:19 crc kubenswrapper[4895]: E1206 09:11:19.052280 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.174895 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7c84d5b777-ndnrf" podUID="c53114e3-eac5-40d1-ac02-35bb2f216687" containerName="horizon" probeResult="failure" output="Get 
\"http://10.217.1.111:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.111:8080: connect: connection refused" Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.483151 4895 generic.go:334] "Generic (PLEG): container finished" podID="a087a078-9055-4c51-a4cc-7c3d86d4fecd" containerID="7a50a6494611d40e5817be8ef65e8695911fe2c2cd0e4cd03b9a4e3fc3f2ffa5" exitCode=137 Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.483204 4895 generic.go:334] "Generic (PLEG): container finished" podID="a087a078-9055-4c51-a4cc-7c3d86d4fecd" containerID="f03583912fa76b01a3b88f036215cc33db60de04c134b6b335b3da261ff05871" exitCode=137 Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.483229 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bcd4cd89c-hk5gh" event={"ID":"a087a078-9055-4c51-a4cc-7c3d86d4fecd","Type":"ContainerDied","Data":"7a50a6494611d40e5817be8ef65e8695911fe2c2cd0e4cd03b9a4e3fc3f2ffa5"} Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.483291 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bcd4cd89c-hk5gh" event={"ID":"a087a078-9055-4c51-a4cc-7c3d86d4fecd","Type":"ContainerDied","Data":"f03583912fa76b01a3b88f036215cc33db60de04c134b6b335b3da261ff05871"} Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.613929 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bcd4cd89c-hk5gh" Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.706734 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a087a078-9055-4c51-a4cc-7c3d86d4fecd-logs\") pod \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\" (UID: \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\") " Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.706835 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5zh8\" (UniqueName: \"kubernetes.io/projected/a087a078-9055-4c51-a4cc-7c3d86d4fecd-kube-api-access-g5zh8\") pod \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\" (UID: \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\") " Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.706917 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a087a078-9055-4c51-a4cc-7c3d86d4fecd-scripts\") pod \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\" (UID: \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\") " Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.707073 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a087a078-9055-4c51-a4cc-7c3d86d4fecd-config-data\") pod \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\" (UID: \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\") " Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.707142 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a087a078-9055-4c51-a4cc-7c3d86d4fecd-horizon-secret-key\") pod \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\" (UID: \"a087a078-9055-4c51-a4cc-7c3d86d4fecd\") " Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.707603 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a087a078-9055-4c51-a4cc-7c3d86d4fecd-logs" (OuterVolumeSpecName: "logs") pod "a087a078-9055-4c51-a4cc-7c3d86d4fecd" (UID: "a087a078-9055-4c51-a4cc-7c3d86d4fecd"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.713363 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a087a078-9055-4c51-a4cc-7c3d86d4fecd-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a087a078-9055-4c51-a4cc-7c3d86d4fecd" (UID: "a087a078-9055-4c51-a4cc-7c3d86d4fecd"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.721927 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a087a078-9055-4c51-a4cc-7c3d86d4fecd-kube-api-access-g5zh8" (OuterVolumeSpecName: "kube-api-access-g5zh8") pod "a087a078-9055-4c51-a4cc-7c3d86d4fecd" (UID: "a087a078-9055-4c51-a4cc-7c3d86d4fecd"). InnerVolumeSpecName "kube-api-access-g5zh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.733743 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a087a078-9055-4c51-a4cc-7c3d86d4fecd-config-data" (OuterVolumeSpecName: "config-data") pod "a087a078-9055-4c51-a4cc-7c3d86d4fecd" (UID: "a087a078-9055-4c51-a4cc-7c3d86d4fecd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.753888 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a087a078-9055-4c51-a4cc-7c3d86d4fecd-scripts" (OuterVolumeSpecName: "scripts") pod "a087a078-9055-4c51-a4cc-7c3d86d4fecd" (UID: "a087a078-9055-4c51-a4cc-7c3d86d4fecd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.809621 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a087a078-9055-4c51-a4cc-7c3d86d4fecd-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.809659 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a087a078-9055-4c51-a4cc-7c3d86d4fecd-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.809674 4895 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a087a078-9055-4c51-a4cc-7c3d86d4fecd-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.809690 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a087a078-9055-4c51-a4cc-7c3d86d4fecd-logs\") on node \"crc\" DevicePath \"\"" Dec 06 09:11:20 crc kubenswrapper[4895]: I1206 09:11:20.809704 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5zh8\" (UniqueName: \"kubernetes.io/projected/a087a078-9055-4c51-a4cc-7c3d86d4fecd-kube-api-access-g5zh8\") on node \"crc\" DevicePath \"\"" Dec 06 09:11:21 crc kubenswrapper[4895]: I1206 09:11:21.520322 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bcd4cd89c-hk5gh" event={"ID":"a087a078-9055-4c51-a4cc-7c3d86d4fecd","Type":"ContainerDied","Data":"b02798a6f2a89c269fa667da992a7516dd94f152b187d168e9b0c9cea479f993"} Dec 06 09:11:21 crc kubenswrapper[4895]: I1206 09:11:21.520528 4895 scope.go:117] "RemoveContainer" 
containerID="7a50a6494611d40e5817be8ef65e8695911fe2c2cd0e4cd03b9a4e3fc3f2ffa5" Dec 06 09:11:21 crc kubenswrapper[4895]: I1206 09:11:21.520488 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bcd4cd89c-hk5gh" Dec 06 09:11:21 crc kubenswrapper[4895]: I1206 09:11:21.566960 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bcd4cd89c-hk5gh"] Dec 06 09:11:21 crc kubenswrapper[4895]: I1206 09:11:21.574378 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7bcd4cd89c-hk5gh"] Dec 06 09:11:21 crc kubenswrapper[4895]: I1206 09:11:21.686439 4895 scope.go:117] "RemoveContainer" containerID="f03583912fa76b01a3b88f036215cc33db60de04c134b6b335b3da261ff05871" Dec 06 09:11:22 crc kubenswrapper[4895]: I1206 09:11:22.061761 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a087a078-9055-4c51-a4cc-7c3d86d4fecd" path="/var/lib/kubelet/pods/a087a078-9055-4c51-a4cc-7c3d86d4fecd/volumes" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.661580 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7c89759895-d7j8d"] Dec 06 09:11:23 crc kubenswrapper[4895]: E1206 09:11:23.662002 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a087a078-9055-4c51-a4cc-7c3d86d4fecd" containerName="horizon-log" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.662015 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a087a078-9055-4c51-a4cc-7c3d86d4fecd" containerName="horizon-log" Dec 06 09:11:23 crc kubenswrapper[4895]: E1206 09:11:23.662031 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a087a078-9055-4c51-a4cc-7c3d86d4fecd" containerName="horizon" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.662036 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a087a078-9055-4c51-a4cc-7c3d86d4fecd" containerName="horizon" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.662227 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a087a078-9055-4c51-a4cc-7c3d86d4fecd" containerName="horizon-log" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.662254 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a087a078-9055-4c51-a4cc-7c3d86d4fecd" containerName="horizon" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.663218 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7c89759895-d7j8d" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.683488 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c89759895-d7j8d"] Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.806618 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ca26141-3036-4a4c-896d-671c9fc24037-scripts\") pod \"horizon-7c89759895-d7j8d\" (UID: \"1ca26141-3036-4a4c-896d-671c9fc24037\") " pod="openstack/horizon-7c89759895-d7j8d" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.806704 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ca26141-3036-4a4c-896d-671c9fc24037-logs\") pod \"horizon-7c89759895-d7j8d\" (UID: \"1ca26141-3036-4a4c-896d-671c9fc24037\") " pod="openstack/horizon-7c89759895-d7j8d" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.806735 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx8v8\" (UniqueName: \"kubernetes.io/projected/1ca26141-3036-4a4c-896d-671c9fc24037-kube-api-access-qx8v8\") pod \"horizon-7c89759895-d7j8d\" (UID: \"1ca26141-3036-4a4c-896d-671c9fc24037\") " pod="openstack/horizon-7c89759895-d7j8d" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.806831 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ca26141-3036-4a4c-896d-671c9fc24037-horizon-secret-key\") pod \"horizon-7c89759895-d7j8d\" (UID: \"1ca26141-3036-4a4c-896d-671c9fc24037\") " pod="openstack/horizon-7c89759895-d7j8d" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.806903 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ca26141-3036-4a4c-896d-671c9fc24037-config-data\") pod \"horizon-7c89759895-d7j8d\" (UID: \"1ca26141-3036-4a4c-896d-671c9fc24037\") " pod="openstack/horizon-7c89759895-d7j8d" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.909052 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ca26141-3036-4a4c-896d-671c9fc24037-scripts\") pod \"horizon-7c89759895-d7j8d\" (UID: \"1ca26141-3036-4a4c-896d-671c9fc24037\") " pod="openstack/horizon-7c89759895-d7j8d" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.909131 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ca26141-3036-4a4c-896d-671c9fc24037-logs\") pod \"horizon-7c89759895-d7j8d\" (UID: \"1ca26141-3036-4a4c-896d-671c9fc24037\") " pod="openstack/horizon-7c89759895-d7j8d" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.909161 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx8v8\" (UniqueName: \"kubernetes.io/projected/1ca26141-3036-4a4c-896d-671c9fc24037-kube-api-access-qx8v8\") pod \"horizon-7c89759895-d7j8d\" (UID: \"1ca26141-3036-4a4c-896d-671c9fc24037\") " pod="openstack/horizon-7c89759895-d7j8d" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.909238 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/1ca26141-3036-4a4c-896d-671c9fc24037-horizon-secret-key\") pod \"horizon-7c89759895-d7j8d\" (UID: \"1ca26141-3036-4a4c-896d-671c9fc24037\") " pod="openstack/horizon-7c89759895-d7j8d" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.909289 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ca26141-3036-4a4c-896d-671c9fc24037-config-data\") pod \"horizon-7c89759895-d7j8d\" (UID: \"1ca26141-3036-4a4c-896d-671c9fc24037\") " pod="openstack/horizon-7c89759895-d7j8d" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.911295 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ca26141-3036-4a4c-896d-671c9fc24037-logs\") pod \"horizon-7c89759895-d7j8d\" (UID: \"1ca26141-3036-4a4c-896d-671c9fc24037\") " pod="openstack/horizon-7c89759895-d7j8d" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.911553 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ca26141-3036-4a4c-896d-671c9fc24037-scripts\") pod \"horizon-7c89759895-d7j8d\" (UID: \"1ca26141-3036-4a4c-896d-671c9fc24037\") " pod="openstack/horizon-7c89759895-d7j8d" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.912123 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ca26141-3036-4a4c-896d-671c9fc24037-config-data\") pod \"horizon-7c89759895-d7j8d\" (UID: \"1ca26141-3036-4a4c-896d-671c9fc24037\") " pod="openstack/horizon-7c89759895-d7j8d" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.917266 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ca26141-3036-4a4c-896d-671c9fc24037-horizon-secret-key\") pod \"horizon-7c89759895-d7j8d\" (UID: \"1ca26141-3036-4a4c-896d-671c9fc24037\") " pod="openstack/horizon-7c89759895-d7j8d" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.927181 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx8v8\" (UniqueName: \"kubernetes.io/projected/1ca26141-3036-4a4c-896d-671c9fc24037-kube-api-access-qx8v8\") pod \"horizon-7c89759895-d7j8d\" (UID: \"1ca26141-3036-4a4c-896d-671c9fc24037\") " pod="openstack/horizon-7c89759895-d7j8d" Dec 06 09:11:23 crc kubenswrapper[4895]: I1206 09:11:23.982775 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c89759895-d7j8d" Dec 06 09:11:24 crc kubenswrapper[4895]: I1206 09:11:24.519373 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c89759895-d7j8d"] Dec 06 09:11:24 crc kubenswrapper[4895]: I1206 09:11:24.565845 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c89759895-d7j8d" event={"ID":"1ca26141-3036-4a4c-896d-671c9fc24037","Type":"ContainerStarted","Data":"f1bdf58ea876e1be05102e3ae5c6d590ae906b294114af2fbeb46c0ee24a3408"} Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.063490 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-pmnxk"] Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.065467 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-pmnxk" Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.075523 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-pmnxk"] Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.132380 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqdt4\" (UniqueName: \"kubernetes.io/projected/dddc29b8-1da4-40de-9bd3-076f4276f53d-kube-api-access-zqdt4\") pod \"heat-db-create-pmnxk\" (UID: \"dddc29b8-1da4-40de-9bd3-076f4276f53d\") " pod="openstack/heat-db-create-pmnxk" Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.132429 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dddc29b8-1da4-40de-9bd3-076f4276f53d-operator-scripts\") pod \"heat-db-create-pmnxk\" (UID: \"dddc29b8-1da4-40de-9bd3-076f4276f53d\") " pod="openstack/heat-db-create-pmnxk" Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.171814 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-1a7e-account-create-update-lgpss"] Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.173619 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-1a7e-account-create-update-lgpss" Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.175831 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.206764 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-1a7e-account-create-update-lgpss"] Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.234267 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc2415b2-e46f-4d36-8433-ffcc83f63db8-operator-scripts\") pod \"heat-1a7e-account-create-update-lgpss\" (UID: \"dc2415b2-e46f-4d36-8433-ffcc83f63db8\") " pod="openstack/heat-1a7e-account-create-update-lgpss" Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.234439 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8r6r\" (UniqueName: \"kubernetes.io/projected/dc2415b2-e46f-4d36-8433-ffcc83f63db8-kube-api-access-p8r6r\") pod \"heat-1a7e-account-create-update-lgpss\" (UID: \"dc2415b2-e46f-4d36-8433-ffcc83f63db8\") " pod="openstack/heat-1a7e-account-create-update-lgpss" Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.234513 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqdt4\" (UniqueName: \"kubernetes.io/projected/dddc29b8-1da4-40de-9bd3-076f4276f53d-kube-api-access-zqdt4\") pod \"heat-db-create-pmnxk\" (UID: \"dddc29b8-1da4-40de-9bd3-076f4276f53d\") " pod="openstack/heat-db-create-pmnxk" Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.234543 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dddc29b8-1da4-40de-9bd3-076f4276f53d-operator-scripts\") pod \"heat-db-create-pmnxk\" (UID: \"dddc29b8-1da4-40de-9bd3-076f4276f53d\") " pod="openstack/heat-db-create-pmnxk" Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.235392 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/dddc29b8-1da4-40de-9bd3-076f4276f53d-operator-scripts\") pod \"heat-db-create-pmnxk\" (UID: \"dddc29b8-1da4-40de-9bd3-076f4276f53d\") " pod="openstack/heat-db-create-pmnxk" Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.267353 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqdt4\" (UniqueName: \"kubernetes.io/projected/dddc29b8-1da4-40de-9bd3-076f4276f53d-kube-api-access-zqdt4\") pod \"heat-db-create-pmnxk\" (UID: \"dddc29b8-1da4-40de-9bd3-076f4276f53d\") " pod="openstack/heat-db-create-pmnxk" Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.336605 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8r6r\" (UniqueName: \"kubernetes.io/projected/dc2415b2-e46f-4d36-8433-ffcc83f63db8-kube-api-access-p8r6r\") pod \"heat-1a7e-account-create-update-lgpss\" (UID: \"dc2415b2-e46f-4d36-8433-ffcc83f63db8\") " pod="openstack/heat-1a7e-account-create-update-lgpss" Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.336697 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc2415b2-e46f-4d36-8433-ffcc83f63db8-operator-scripts\") pod \"heat-1a7e-account-create-update-lgpss\" (UID: \"dc2415b2-e46f-4d36-8433-ffcc83f63db8\") " pod="openstack/heat-1a7e-account-create-update-lgpss" Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.337319 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc2415b2-e46f-4d36-8433-ffcc83f63db8-operator-scripts\") pod \"heat-1a7e-account-create-update-lgpss\" (UID: \"dc2415b2-e46f-4d36-8433-ffcc83f63db8\") " pod="openstack/heat-1a7e-account-create-update-lgpss" Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.356650 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8r6r\" (UniqueName: \"kubernetes.io/projected/dc2415b2-e46f-4d36-8433-ffcc83f63db8-kube-api-access-p8r6r\") pod \"heat-1a7e-account-create-update-lgpss\" (UID: \"dc2415b2-e46f-4d36-8433-ffcc83f63db8\") " pod="openstack/heat-1a7e-account-create-update-lgpss" Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.426892 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-pmnxk" Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.490886 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-1a7e-account-create-update-lgpss" Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.598724 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c89759895-d7j8d" event={"ID":"1ca26141-3036-4a4c-896d-671c9fc24037","Type":"ContainerStarted","Data":"49dea56220ffc7e19eaf8c8d4cb42657b635eb84e6d18ee26ae4ccac6a3c7fc7"} Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.599042 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c89759895-d7j8d" event={"ID":"1ca26141-3036-4a4c-896d-671c9fc24037","Type":"ContainerStarted","Data":"0292e839620bcbb160601660f9cf01126e416e40ca1ecdb44a56f8c583bb0f16"} Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.628014 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7c89759895-d7j8d" podStartSLOduration=2.627986065 podStartE2EDuration="2.627986065s" podCreationTimestamp="2025-12-06 09:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:11:25.622725484 +0000 UTC m=+8048.024114354" watchObservedRunningTime="2025-12-06 09:11:25.627986065 +0000 UTC m=+8048.029374935" Dec 06 09:11:25 crc kubenswrapper[4895]: I1206 09:11:25.902515 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-pmnxk"] Dec 06 09:11:26 crc kubenswrapper[4895]: I1206 09:11:26.033789 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-1a7e-account-create-update-lgpss"] Dec 06 09:11:26 crc kubenswrapper[4895]: I1206 09:11:26.610492 4895 generic.go:334] "Generic (PLEG): container finished" podID="dc2415b2-e46f-4d36-8433-ffcc83f63db8" containerID="858dc9a501d66b996412d4705025cf26dc8e4139ff22a267f7043ee04f2981e4" exitCode=0 Dec 06 09:11:26 crc kubenswrapper[4895]: I1206 09:11:26.610588 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-1a7e-account-create-update-lgpss" event={"ID":"dc2415b2-e46f-4d36-8433-ffcc83f63db8","Type":"ContainerDied","Data":"858dc9a501d66b996412d4705025cf26dc8e4139ff22a267f7043ee04f2981e4"} Dec 06 09:11:26 crc kubenswrapper[4895]: I1206 09:11:26.610633 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-1a7e-account-create-update-lgpss" event={"ID":"dc2415b2-e46f-4d36-8433-ffcc83f63db8","Type":"ContainerStarted","Data":"717c36000b387b3941d7da21d052711929fafe224f39ee40767a7bfc8e430772"} Dec 06 09:11:26 crc kubenswrapper[4895]: I1206 09:11:26.612535 4895 generic.go:334] "Generic (PLEG): container finished" podID="dddc29b8-1da4-40de-9bd3-076f4276f53d" containerID="8c8e93cd0d63ba25cade04f9c43799494720338467ee86832b55adfb0d0571ef" exitCode=0 Dec 06 09:11:26 crc kubenswrapper[4895]: I1206 09:11:26.612585 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-pmnxk" event={"ID":"dddc29b8-1da4-40de-9bd3-076f4276f53d","Type":"ContainerDied","Data":"8c8e93cd0d63ba25cade04f9c43799494720338467ee86832b55adfb0d0571ef"} Dec 06 09:11:26 crc kubenswrapper[4895]: I1206 09:11:26.612609 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-pmnxk" event={"ID":"dddc29b8-1da4-40de-9bd3-076f4276f53d","Type":"ContainerStarted","Data":"33a2ba9924bd72b92290d9ab88e1c38c4041c2a8e833920c7b02297f75af53d7"} Dec 06 09:11:28 crc kubenswrapper[4895]: I1206 09:11:28.119921 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-1a7e-account-create-update-lgpss" Dec 06 09:11:28 crc kubenswrapper[4895]: I1206 09:11:28.121962 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-pmnxk" Dec 06 09:11:28 crc kubenswrapper[4895]: I1206 09:11:28.206405 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc2415b2-e46f-4d36-8433-ffcc83f63db8-operator-scripts\") pod \"dc2415b2-e46f-4d36-8433-ffcc83f63db8\" (UID: \"dc2415b2-e46f-4d36-8433-ffcc83f63db8\") " Dec 06 09:11:28 crc kubenswrapper[4895]: I1206 09:11:28.206594 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8r6r\" (UniqueName: \"kubernetes.io/projected/dc2415b2-e46f-4d36-8433-ffcc83f63db8-kube-api-access-p8r6r\") pod \"dc2415b2-e46f-4d36-8433-ffcc83f63db8\" (UID: \"dc2415b2-e46f-4d36-8433-ffcc83f63db8\") " Dec 06 09:11:28 crc kubenswrapper[4895]: I1206 09:11:28.206762 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dddc29b8-1da4-40de-9bd3-076f4276f53d-operator-scripts\") pod \"dddc29b8-1da4-40de-9bd3-076f4276f53d\" (UID: \"dddc29b8-1da4-40de-9bd3-076f4276f53d\") " Dec 06 09:11:28 crc kubenswrapper[4895]: I1206 09:11:28.206809 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqdt4\" (UniqueName: \"kubernetes.io/projected/dddc29b8-1da4-40de-9bd3-076f4276f53d-kube-api-access-zqdt4\") pod \"dddc29b8-1da4-40de-9bd3-076f4276f53d\" (UID: \"dddc29b8-1da4-40de-9bd3-076f4276f53d\") " Dec 06 09:11:28 crc kubenswrapper[4895]: I1206 09:11:28.208285 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dddc29b8-1da4-40de-9bd3-076f4276f53d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dddc29b8-1da4-40de-9bd3-076f4276f53d" (UID: "dddc29b8-1da4-40de-9bd3-076f4276f53d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:11:28 crc kubenswrapper[4895]: I1206 09:11:28.208659 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc2415b2-e46f-4d36-8433-ffcc83f63db8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc2415b2-e46f-4d36-8433-ffcc83f63db8" (UID: "dc2415b2-e46f-4d36-8433-ffcc83f63db8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:11:28 crc kubenswrapper[4895]: I1206 09:11:28.216740 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dddc29b8-1da4-40de-9bd3-076f4276f53d-kube-api-access-zqdt4" (OuterVolumeSpecName: "kube-api-access-zqdt4") pod "dddc29b8-1da4-40de-9bd3-076f4276f53d" (UID: "dddc29b8-1da4-40de-9bd3-076f4276f53d"). InnerVolumeSpecName "kube-api-access-zqdt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:11:28 crc kubenswrapper[4895]: I1206 09:11:28.231791 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc2415b2-e46f-4d36-8433-ffcc83f63db8-kube-api-access-p8r6r" (OuterVolumeSpecName: "kube-api-access-p8r6r") pod "dc2415b2-e46f-4d36-8433-ffcc83f63db8" (UID: "dc2415b2-e46f-4d36-8433-ffcc83f63db8"). InnerVolumeSpecName "kube-api-access-p8r6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:11:28 crc kubenswrapper[4895]: I1206 09:11:28.309052 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dddc29b8-1da4-40de-9bd3-076f4276f53d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:11:28 crc kubenswrapper[4895]: I1206 09:11:28.309101 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqdt4\" (UniqueName: \"kubernetes.io/projected/dddc29b8-1da4-40de-9bd3-076f4276f53d-kube-api-access-zqdt4\") on node \"crc\" DevicePath \"\"" Dec 06 09:11:28 crc kubenswrapper[4895]: I1206 09:11:28.309112 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc2415b2-e46f-4d36-8433-ffcc83f63db8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:11:28 crc kubenswrapper[4895]: I1206 09:11:28.309121 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8r6r\" (UniqueName: \"kubernetes.io/projected/dc2415b2-e46f-4d36-8433-ffcc83f63db8-kube-api-access-p8r6r\") on node \"crc\" DevicePath \"\"" Dec 06 09:11:28 crc kubenswrapper[4895]: I1206 09:11:28.629054 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-1a7e-account-create-update-lgpss" event={"ID":"dc2415b2-e46f-4d36-8433-ffcc83f63db8","Type":"ContainerDied","Data":"717c36000b387b3941d7da21d052711929fafe224f39ee40767a7bfc8e430772"} Dec 06 09:11:28 crc kubenswrapper[4895]: I1206 09:11:28.629093 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="717c36000b387b3941d7da21d052711929fafe224f39ee40767a7bfc8e430772" Dec 06 09:11:28 crc kubenswrapper[4895]: I1206 09:11:28.629141 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-1a7e-account-create-update-lgpss" Dec 06 09:11:28 crc kubenswrapper[4895]: I1206 09:11:28.640593 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-pmnxk" event={"ID":"dddc29b8-1da4-40de-9bd3-076f4276f53d","Type":"ContainerDied","Data":"33a2ba9924bd72b92290d9ab88e1c38c4041c2a8e833920c7b02297f75af53d7"} Dec 06 09:11:28 crc kubenswrapper[4895]: I1206 09:11:28.640659 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-pmnxk" Dec 06 09:11:28 crc kubenswrapper[4895]: I1206 09:11:28.640658 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33a2ba9924bd72b92290d9ab88e1c38c4041c2a8e833920c7b02297f75af53d7" Dec 06 09:11:30 crc kubenswrapper[4895]: I1206 09:11:30.175271 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7c84d5b777-ndnrf" podUID="c53114e3-eac5-40d1-ac02-35bb2f216687" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.111:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.111:8080: connect: connection refused" Dec 06 09:11:30 crc kubenswrapper[4895]: I1206 09:11:30.522557 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-jdtpw"] Dec 06 09:11:30 crc kubenswrapper[4895]: E1206 09:11:30.523221 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dddc29b8-1da4-40de-9bd3-076f4276f53d" containerName="mariadb-database-create" Dec 06 09:11:30 crc kubenswrapper[4895]: I1206 09:11:30.523253 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="dddc29b8-1da4-40de-9bd3-076f4276f53d" containerName="mariadb-database-create" Dec 06 09:11:30 crc kubenswrapper[4895]: E1206 09:11:30.523272 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc2415b2-e46f-4d36-8433-ffcc83f63db8" containerName="mariadb-account-create-update" Dec 06 09:11:30 crc kubenswrapper[4895]: I1206 09:11:30.523282 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc2415b2-e46f-4d36-8433-ffcc83f63db8" containerName="mariadb-account-create-update" Dec 06 09:11:30 crc kubenswrapper[4895]: I1206 09:11:30.523584 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc2415b2-e46f-4d36-8433-ffcc83f63db8" containerName="mariadb-account-create-update" Dec 06 09:11:30 crc kubenswrapper[4895]: I1206 09:11:30.523620 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="dddc29b8-1da4-40de-9bd3-076f4276f53d" containerName="mariadb-database-create" Dec 06 09:11:30 crc kubenswrapper[4895]: I1206 09:11:30.524595 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-jdtpw" Dec 06 09:11:30 crc kubenswrapper[4895]: I1206 09:11:30.528343 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Dec 06 09:11:30 crc kubenswrapper[4895]: I1206 09:11:30.528818 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-nc6hb" Dec 06 09:11:30 crc kubenswrapper[4895]: I1206 09:11:30.540835 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jdtpw"] Dec 06 09:11:30 crc kubenswrapper[4895]: I1206 09:11:30.556408 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eaf0992-ddab-4e55-b3ad-c5b4da3c068f-config-data\") pod \"heat-db-sync-jdtpw\" (UID: \"7eaf0992-ddab-4e55-b3ad-c5b4da3c068f\") " pod="openstack/heat-db-sync-jdtpw" Dec 06 09:11:30 crc kubenswrapper[4895]: I1206 09:11:30.556454 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eaf0992-ddab-4e55-b3ad-c5b4da3c068f-combined-ca-bundle\") pod \"heat-db-sync-jdtpw\" (UID: \"7eaf0992-ddab-4e55-b3ad-c5b4da3c068f\") " pod="openstack/heat-db-sync-jdtpw" Dec 06 09:11:30 crc kubenswrapper[4895]: I1206 09:11:30.556680 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrjqw\" (UniqueName: \"kubernetes.io/projected/7eaf0992-ddab-4e55-b3ad-c5b4da3c068f-kube-api-access-hrjqw\") pod \"heat-db-sync-jdtpw\" (UID: \"7eaf0992-ddab-4e55-b3ad-c5b4da3c068f\") " pod="openstack/heat-db-sync-jdtpw" Dec 06 09:11:30 crc kubenswrapper[4895]: I1206 09:11:30.658509 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrjqw\" (UniqueName: \"kubernetes.io/projected/7eaf0992-ddab-4e55-b3ad-c5b4da3c068f-kube-api-access-hrjqw\") pod \"heat-db-sync-jdtpw\" (UID: \"7eaf0992-ddab-4e55-b3ad-c5b4da3c068f\") " pod="openstack/heat-db-sync-jdtpw" Dec 06 09:11:30 crc kubenswrapper[4895]: I1206 09:11:30.658688 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eaf0992-ddab-4e55-b3ad-c5b4da3c068f-config-data\") pod \"heat-db-sync-jdtpw\" (UID: \"7eaf0992-ddab-4e55-b3ad-c5b4da3c068f\") " pod="openstack/heat-db-sync-jdtpw" Dec 06 09:11:30 crc kubenswrapper[4895]: I1206 09:11:30.658714 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eaf0992-ddab-4e55-b3ad-c5b4da3c068f-combined-ca-bundle\") pod \"heat-db-sync-jdtpw\" (UID: \"7eaf0992-ddab-4e55-b3ad-c5b4da3c068f\") " pod="openstack/heat-db-sync-jdtpw" Dec 06 09:11:30 crc kubenswrapper[4895]: I1206 09:11:30.666230 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eaf0992-ddab-4e55-b3ad-c5b4da3c068f-config-data\") pod \"heat-db-sync-jdtpw\" (UID: \"7eaf0992-ddab-4e55-b3ad-c5b4da3c068f\") " pod="openstack/heat-db-sync-jdtpw" Dec 06 09:11:30 crc kubenswrapper[4895]: I1206 09:11:30.667404 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eaf0992-ddab-4e55-b3ad-c5b4da3c068f-combined-ca-bundle\") pod \"heat-db-sync-jdtpw\" (UID: \"7eaf0992-ddab-4e55-b3ad-c5b4da3c068f\") " pod="openstack/heat-db-sync-jdtpw" 
Dec 06 09:11:30 crc kubenswrapper[4895]: I1206 09:11:30.681417 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrjqw\" (UniqueName: \"kubernetes.io/projected/7eaf0992-ddab-4e55-b3ad-c5b4da3c068f-kube-api-access-hrjqw\") pod \"heat-db-sync-jdtpw\" (UID: \"7eaf0992-ddab-4e55-b3ad-c5b4da3c068f\") " pod="openstack/heat-db-sync-jdtpw"
Dec 06 09:11:30 crc kubenswrapper[4895]: I1206 09:11:30.851136 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jdtpw"
Dec 06 09:11:31 crc kubenswrapper[4895]: I1206 09:11:31.942933 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jdtpw"]
Dec 06 09:11:32 crc kubenswrapper[4895]: I1206 09:11:32.701860 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jdtpw" event={"ID":"7eaf0992-ddab-4e55-b3ad-c5b4da3c068f","Type":"ContainerStarted","Data":"937049ee65e5a83f47d88f5764707f1a4ff0aac8b91d0174764e732077e140ea"}
Dec 06 09:11:33 crc kubenswrapper[4895]: I1206 09:11:33.050558 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef"
Dec 06 09:11:33 crc kubenswrapper[4895]: E1206 09:11:33.050836 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:11:33 crc kubenswrapper[4895]: I1206 09:11:33.983537 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7c89759895-d7j8d"
Dec 06 09:11:33 crc kubenswrapper[4895]: I1206 09:11:33.984423 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7c89759895-d7j8d"
Dec 06 09:11:40 crc kubenswrapper[4895]: I1206 09:11:40.174808 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7c84d5b777-ndnrf" podUID="c53114e3-eac5-40d1-ac02-35bb2f216687" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.111:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.111:8080: connect: connection refused"
Dec 06 09:11:40 crc kubenswrapper[4895]: I1206 09:11:40.175388 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7c84d5b777-ndnrf"
Dec 06 09:11:43 crc kubenswrapper[4895]: I1206 09:11:43.830020 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jdtpw" event={"ID":"7eaf0992-ddab-4e55-b3ad-c5b4da3c068f","Type":"ContainerStarted","Data":"8faea71d98114bbacff04cbbdb98ed91e42014ef4ff0995d26210a3d61622f7b"}
Dec 06 09:11:43 crc kubenswrapper[4895]: I1206 09:11:43.856066 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-jdtpw" podStartSLOduration=2.880015401 podStartE2EDuration="13.856036976s" podCreationTimestamp="2025-12-06 09:11:30 +0000 UTC" firstStartedPulling="2025-12-06 09:11:31.964066453 +0000 UTC m=+8054.365455323" lastFinishedPulling="2025-12-06 09:11:42.940088028 +0000 UTC m=+8065.341476898" observedRunningTime="2025-12-06 09:11:43.849349076 +0000 UTC m=+8066.250737986" watchObservedRunningTime="2025-12-06 09:11:43.856036976 +0000 UTC m=+8066.257425856"
Dec 06 09:11:43 crc kubenswrapper[4895]: I1206 09:11:43.985012 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7c89759895-d7j8d" podUID="1ca26141-3036-4a4c-896d-671c9fc24037" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.115:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8080: connect: connection refused"
Dec 06 09:11:44 crc kubenswrapper[4895]: W1206 09:11:44.785777 4895 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc2415b2_e46f_4d36_8433_ffcc83f63db8.slice/crio-conmon-858dc9a501d66b996412d4705025cf26dc8e4139ff22a267f7043ee04f2981e4.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc2415b2_e46f_4d36_8433_ffcc83f63db8.slice/crio-conmon-858dc9a501d66b996412d4705025cf26dc8e4139ff22a267f7043ee04f2981e4.scope: no such file or directory
Dec 06 09:11:44 crc kubenswrapper[4895]: W1206 09:11:44.786173 4895 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc2415b2_e46f_4d36_8433_ffcc83f63db8.slice/crio-858dc9a501d66b996412d4705025cf26dc8e4139ff22a267f7043ee04f2981e4.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc2415b2_e46f_4d36_8433_ffcc83f63db8.slice/crio-858dc9a501d66b996412d4705025cf26dc8e4139ff22a267f7043ee04f2981e4.scope: no such file or directory
Dec 06 09:11:44 crc kubenswrapper[4895]: I1206 09:11:44.851221 4895 generic.go:334] "Generic (PLEG): container finished" podID="c53114e3-eac5-40d1-ac02-35bb2f216687" containerID="db40b8fe12626d3474b39b7fea3311d64f78121aa32ccf41d3544b50efdc0dcc" exitCode=137
Dec 06 09:11:44 crc kubenswrapper[4895]: I1206 09:11:44.852222 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c84d5b777-ndnrf" event={"ID":"c53114e3-eac5-40d1-ac02-35bb2f216687","Type":"ContainerDied","Data":"db40b8fe12626d3474b39b7fea3311d64f78121aa32ccf41d3544b50efdc0dcc"}
Dec 06 09:11:45 crc kubenswrapper[4895]: E1206 09:11:45.091198 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc53114e3_eac5_40d1_ac02_35bb2f216687.slice/crio-conmon-db40b8fe12626d3474b39b7fea3311d64f78121aa32ccf41d3544b50efdc0dcc.scope\": RecentStats: unable to find data in memory cache]"
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.287867 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c84d5b777-ndnrf"
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.397434 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c53114e3-eac5-40d1-ac02-35bb2f216687-horizon-secret-key\") pod \"c53114e3-eac5-40d1-ac02-35bb2f216687\" (UID: \"c53114e3-eac5-40d1-ac02-35bb2f216687\") "
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.397540 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c53114e3-eac5-40d1-ac02-35bb2f216687-config-data\") pod \"c53114e3-eac5-40d1-ac02-35bb2f216687\" (UID: \"c53114e3-eac5-40d1-ac02-35bb2f216687\") "
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.397701 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c53114e3-eac5-40d1-ac02-35bb2f216687-logs\") pod \"c53114e3-eac5-40d1-ac02-35bb2f216687\" (UID: \"c53114e3-eac5-40d1-ac02-35bb2f216687\") "
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.397762 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c53114e3-eac5-40d1-ac02-35bb2f216687-scripts\") pod \"c53114e3-eac5-40d1-ac02-35bb2f216687\" (UID: \"c53114e3-eac5-40d1-ac02-35bb2f216687\") "
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.397828 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9rcc\" (UniqueName: \"kubernetes.io/projected/c53114e3-eac5-40d1-ac02-35bb2f216687-kube-api-access-g9rcc\") pod \"c53114e3-eac5-40d1-ac02-35bb2f216687\" (UID: \"c53114e3-eac5-40d1-ac02-35bb2f216687\") "
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.398871 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c53114e3-eac5-40d1-ac02-35bb2f216687-logs" (OuterVolumeSpecName: "logs") pod "c53114e3-eac5-40d1-ac02-35bb2f216687" (UID: "c53114e3-eac5-40d1-ac02-35bb2f216687"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.407752 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c53114e3-eac5-40d1-ac02-35bb2f216687-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c53114e3-eac5-40d1-ac02-35bb2f216687" (UID: "c53114e3-eac5-40d1-ac02-35bb2f216687"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.407897 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c53114e3-eac5-40d1-ac02-35bb2f216687-kube-api-access-g9rcc" (OuterVolumeSpecName: "kube-api-access-g9rcc") pod "c53114e3-eac5-40d1-ac02-35bb2f216687" (UID: "c53114e3-eac5-40d1-ac02-35bb2f216687"). InnerVolumeSpecName "kube-api-access-g9rcc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.426853 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c53114e3-eac5-40d1-ac02-35bb2f216687-scripts" (OuterVolumeSpecName: "scripts") pod "c53114e3-eac5-40d1-ac02-35bb2f216687" (UID: "c53114e3-eac5-40d1-ac02-35bb2f216687"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.431404 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c53114e3-eac5-40d1-ac02-35bb2f216687-config-data" (OuterVolumeSpecName: "config-data") pod "c53114e3-eac5-40d1-ac02-35bb2f216687" (UID: "c53114e3-eac5-40d1-ac02-35bb2f216687"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.499786 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c53114e3-eac5-40d1-ac02-35bb2f216687-logs\") on node \"crc\" DevicePath \"\""
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.499820 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c53114e3-eac5-40d1-ac02-35bb2f216687-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.499833 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9rcc\" (UniqueName: \"kubernetes.io/projected/c53114e3-eac5-40d1-ac02-35bb2f216687-kube-api-access-g9rcc\") on node \"crc\" DevicePath \"\""
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.499844 4895 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c53114e3-eac5-40d1-ac02-35bb2f216687-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.499857 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c53114e3-eac5-40d1-ac02-35bb2f216687-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.867815 4895 generic.go:334] "Generic (PLEG): container finished" podID="7eaf0992-ddab-4e55-b3ad-c5b4da3c068f" containerID="8faea71d98114bbacff04cbbdb98ed91e42014ef4ff0995d26210a3d61622f7b" exitCode=0
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.867928 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jdtpw" event={"ID":"7eaf0992-ddab-4e55-b3ad-c5b4da3c068f","Type":"ContainerDied","Data":"8faea71d98114bbacff04cbbdb98ed91e42014ef4ff0995d26210a3d61622f7b"}
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.873643 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c84d5b777-ndnrf" event={"ID":"c53114e3-eac5-40d1-ac02-35bb2f216687","Type":"ContainerDied","Data":"ad04ceed1f1c1659b43821529dee97e428e9868ead7508d414a3770c5d397132"}
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.873707 4895 scope.go:117] "RemoveContainer" containerID="707815af6857c5edcdccbd514c03ea2a1818ad95742d1d3456c718828e4d8b2c"
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.873755 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c84d5b777-ndnrf"
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.928976 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c84d5b777-ndnrf"]
Dec 06 09:11:45 crc kubenswrapper[4895]: I1206 09:11:45.938901 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7c84d5b777-ndnrf"]
Dec 06 09:11:46 crc kubenswrapper[4895]: I1206 09:11:46.037014 4895 scope.go:117] "RemoveContainer" containerID="db40b8fe12626d3474b39b7fea3311d64f78121aa32ccf41d3544b50efdc0dcc"
Dec 06 09:11:46 crc kubenswrapper[4895]: I1206 09:11:46.061043 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c53114e3-eac5-40d1-ac02-35bb2f216687" path="/var/lib/kubelet/pods/c53114e3-eac5-40d1-ac02-35bb2f216687/volumes"
Dec 06 09:11:47 crc kubenswrapper[4895]: I1206 09:11:47.281243 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jdtpw"
Dec 06 09:11:47 crc kubenswrapper[4895]: I1206 09:11:47.342192 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eaf0992-ddab-4e55-b3ad-c5b4da3c068f-combined-ca-bundle\") pod \"7eaf0992-ddab-4e55-b3ad-c5b4da3c068f\" (UID: \"7eaf0992-ddab-4e55-b3ad-c5b4da3c068f\") "
Dec 06 09:11:47 crc kubenswrapper[4895]: I1206 09:11:47.342413 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrjqw\" (UniqueName: \"kubernetes.io/projected/7eaf0992-ddab-4e55-b3ad-c5b4da3c068f-kube-api-access-hrjqw\") pod \"7eaf0992-ddab-4e55-b3ad-c5b4da3c068f\" (UID: \"7eaf0992-ddab-4e55-b3ad-c5b4da3c068f\") "
Dec 06 09:11:47 crc kubenswrapper[4895]: I1206 09:11:47.344037 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eaf0992-ddab-4e55-b3ad-c5b4da3c068f-config-data\") pod \"7eaf0992-ddab-4e55-b3ad-c5b4da3c068f\" (UID: \"7eaf0992-ddab-4e55-b3ad-c5b4da3c068f\") "
Dec 06 09:11:47 crc kubenswrapper[4895]: I1206 09:11:47.351054 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eaf0992-ddab-4e55-b3ad-c5b4da3c068f-kube-api-access-hrjqw" (OuterVolumeSpecName: "kube-api-access-hrjqw") pod "7eaf0992-ddab-4e55-b3ad-c5b4da3c068f" (UID: "7eaf0992-ddab-4e55-b3ad-c5b4da3c068f"). InnerVolumeSpecName "kube-api-access-hrjqw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:11:47 crc kubenswrapper[4895]: I1206 09:11:47.377016 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eaf0992-ddab-4e55-b3ad-c5b4da3c068f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7eaf0992-ddab-4e55-b3ad-c5b4da3c068f" (UID: "7eaf0992-ddab-4e55-b3ad-c5b4da3c068f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:11:47 crc kubenswrapper[4895]: I1206 09:11:47.445980 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eaf0992-ddab-4e55-b3ad-c5b4da3c068f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 09:11:47 crc kubenswrapper[4895]: I1206 09:11:47.446020 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrjqw\" (UniqueName: \"kubernetes.io/projected/7eaf0992-ddab-4e55-b3ad-c5b4da3c068f-kube-api-access-hrjqw\") on node \"crc\" DevicePath \"\""
Dec 06 09:11:47 crc kubenswrapper[4895]: I1206 09:11:47.461436 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eaf0992-ddab-4e55-b3ad-c5b4da3c068f-config-data" (OuterVolumeSpecName: "config-data") pod "7eaf0992-ddab-4e55-b3ad-c5b4da3c068f" (UID: "7eaf0992-ddab-4e55-b3ad-c5b4da3c068f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:11:47 crc kubenswrapper[4895]: I1206 09:11:47.548090 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eaf0992-ddab-4e55-b3ad-c5b4da3c068f-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 09:11:47 crc kubenswrapper[4895]: I1206 09:11:47.925972 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jdtpw" event={"ID":"7eaf0992-ddab-4e55-b3ad-c5b4da3c068f","Type":"ContainerDied","Data":"937049ee65e5a83f47d88f5764707f1a4ff0aac8b91d0174764e732077e140ea"}
Dec 06 09:11:47 crc kubenswrapper[4895]: I1206 09:11:47.926019 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="937049ee65e5a83f47d88f5764707f1a4ff0aac8b91d0174764e732077e140ea"
Dec 06 09:11:47 crc kubenswrapper[4895]: I1206 09:11:47.926067 4895 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/heat-db-sync-jdtpw" Dec 06 09:11:48 crc kubenswrapper[4895]: I1206 09:11:48.059093 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef" Dec 06 09:11:48 crc kubenswrapper[4895]: E1206 09:11:48.059529 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.076893 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-d989c6c78-ls6jf"] Dec 06 09:11:49 crc kubenswrapper[4895]: E1206 09:11:49.077679 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eaf0992-ddab-4e55-b3ad-c5b4da3c068f" containerName="heat-db-sync" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.077698 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eaf0992-ddab-4e55-b3ad-c5b4da3c068f" containerName="heat-db-sync" Dec 06 09:11:49 crc kubenswrapper[4895]: E1206 09:11:49.077719 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c53114e3-eac5-40d1-ac02-35bb2f216687" containerName="horizon" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.077727 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c53114e3-eac5-40d1-ac02-35bb2f216687" containerName="horizon" Dec 06 09:11:49 crc kubenswrapper[4895]: E1206 09:11:49.077746 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c53114e3-eac5-40d1-ac02-35bb2f216687" containerName="horizon-log" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.077755 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c53114e3-eac5-40d1-ac02-35bb2f216687" containerName="horizon-log" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.077990 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c53114e3-eac5-40d1-ac02-35bb2f216687" containerName="horizon-log" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.078019 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c53114e3-eac5-40d1-ac02-35bb2f216687" containerName="horizon" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.078038 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eaf0992-ddab-4e55-b3ad-c5b4da3c068f" containerName="heat-db-sync" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.078747 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-d989c6c78-ls6jf" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.082606 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-nc6hb" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.082805 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.082957 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.162438 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-d989c6c78-ls6jf"] Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.180551 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b9ac127-3b4b-4c4e-a6be-57921f06f84b-config-data-custom\") pod \"heat-engine-d989c6c78-ls6jf\" (UID: \"7b9ac127-3b4b-4c4e-a6be-57921f06f84b\") " pod="openstack/heat-engine-d989c6c78-ls6jf" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.180677 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9ac127-3b4b-4c4e-a6be-57921f06f84b-combined-ca-bundle\") pod \"heat-engine-d989c6c78-ls6jf\" (UID: \"7b9ac127-3b4b-4c4e-a6be-57921f06f84b\") " pod="openstack/heat-engine-d989c6c78-ls6jf" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.180714 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9ac127-3b4b-4c4e-a6be-57921f06f84b-config-data\") pod \"heat-engine-d989c6c78-ls6jf\" (UID: \"7b9ac127-3b4b-4c4e-a6be-57921f06f84b\") " pod="openstack/heat-engine-d989c6c78-ls6jf" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.180766 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8mfx\" (UniqueName: \"kubernetes.io/projected/7b9ac127-3b4b-4c4e-a6be-57921f06f84b-kube-api-access-l8mfx\") pod \"heat-engine-d989c6c78-ls6jf\" (UID: \"7b9ac127-3b4b-4c4e-a6be-57921f06f84b\") " pod="openstack/heat-engine-d989c6c78-ls6jf" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.282658 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b9ac127-3b4b-4c4e-a6be-57921f06f84b-config-data-custom\") pod \"heat-engine-d989c6c78-ls6jf\" (UID: \"7b9ac127-3b4b-4c4e-a6be-57921f06f84b\") " pod="openstack/heat-engine-d989c6c78-ls6jf" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.282765 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9ac127-3b4b-4c4e-a6be-57921f06f84b-combined-ca-bundle\") pod \"heat-engine-d989c6c78-ls6jf\" (UID: \"7b9ac127-3b4b-4c4e-a6be-57921f06f84b\") " pod="openstack/heat-engine-d989c6c78-ls6jf" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.282792 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9ac127-3b4b-4c4e-a6be-57921f06f84b-config-data\") pod \"heat-engine-d989c6c78-ls6jf\" (UID: \"7b9ac127-3b4b-4c4e-a6be-57921f06f84b\") " pod="openstack/heat-engine-d989c6c78-ls6jf" Dec 06 09:11:49 crc 
kubenswrapper[4895]: I1206 09:11:49.282829 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8mfx\" (UniqueName: \"kubernetes.io/projected/7b9ac127-3b4b-4c4e-a6be-57921f06f84b-kube-api-access-l8mfx\") pod \"heat-engine-d989c6c78-ls6jf\" (UID: \"7b9ac127-3b4b-4c4e-a6be-57921f06f84b\") " pod="openstack/heat-engine-d989c6c78-ls6jf" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.288595 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9ac127-3b4b-4c4e-a6be-57921f06f84b-config-data\") pod \"heat-engine-d989c6c78-ls6jf\" (UID: \"7b9ac127-3b4b-4c4e-a6be-57921f06f84b\") " pod="openstack/heat-engine-d989c6c78-ls6jf" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.291339 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9ac127-3b4b-4c4e-a6be-57921f06f84b-combined-ca-bundle\") pod \"heat-engine-d989c6c78-ls6jf\" (UID: \"7b9ac127-3b4b-4c4e-a6be-57921f06f84b\") " pod="openstack/heat-engine-d989c6c78-ls6jf" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.306147 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b9ac127-3b4b-4c4e-a6be-57921f06f84b-config-data-custom\") pod \"heat-engine-d989c6c78-ls6jf\" (UID: \"7b9ac127-3b4b-4c4e-a6be-57921f06f84b\") " pod="openstack/heat-engine-d989c6c78-ls6jf" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.309388 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8mfx\" (UniqueName: \"kubernetes.io/projected/7b9ac127-3b4b-4c4e-a6be-57921f06f84b-kube-api-access-l8mfx\") pod \"heat-engine-d989c6c78-ls6jf\" (UID: \"7b9ac127-3b4b-4c4e-a6be-57921f06f84b\") " pod="openstack/heat-engine-d989c6c78-ls6jf" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.334591 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-878599567-v42s8"] Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.336269 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-878599567-v42s8" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.343778 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.357528 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-78fbc99ff7-r9c56"] Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.358894 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-78fbc99ff7-r9c56" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.361597 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.371083 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-878599567-v42s8"] Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.381893 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78fbc99ff7-r9c56"] Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.384531 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050c3eed-05e1-4aa1-b309-7ac6b68389e8-config-data\") pod \"heat-cfnapi-878599567-v42s8\" (UID: \"050c3eed-05e1-4aa1-b309-7ac6b68389e8\") " pod="openstack/heat-cfnapi-878599567-v42s8" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.384602 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tww4n\" (UniqueName: \"kubernetes.io/projected/32182461-265a-44a6-8003-c2bb7786b8a1-kube-api-access-tww4n\") pod \"heat-api-78fbc99ff7-r9c56\" (UID: \"32182461-265a-44a6-8003-c2bb7786b8a1\") " pod="openstack/heat-api-78fbc99ff7-r9c56" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.384747 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32182461-265a-44a6-8003-c2bb7786b8a1-combined-ca-bundle\") pod \"heat-api-78fbc99ff7-r9c56\" (UID: \"32182461-265a-44a6-8003-c2bb7786b8a1\") " pod="openstack/heat-api-78fbc99ff7-r9c56" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.384781 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050c3eed-05e1-4aa1-b309-7ac6b68389e8-combined-ca-bundle\") pod \"heat-cfnapi-878599567-v42s8\" (UID: \"050c3eed-05e1-4aa1-b309-7ac6b68389e8\") " pod="openstack/heat-cfnapi-878599567-v42s8" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.384839 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjjgb\" (UniqueName: \"kubernetes.io/projected/050c3eed-05e1-4aa1-b309-7ac6b68389e8-kube-api-access-hjjgb\") pod \"heat-cfnapi-878599567-v42s8\" (UID: \"050c3eed-05e1-4aa1-b309-7ac6b68389e8\") " pod="openstack/heat-cfnapi-878599567-v42s8" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.384961 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32182461-265a-44a6-8003-c2bb7786b8a1-config-data\") pod \"heat-api-78fbc99ff7-r9c56\" (UID: \"32182461-265a-44a6-8003-c2bb7786b8a1\") " pod="openstack/heat-api-78fbc99ff7-r9c56" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.385000 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/050c3eed-05e1-4aa1-b309-7ac6b68389e8-config-data-custom\") pod \"heat-cfnapi-878599567-v42s8\" (UID: \"050c3eed-05e1-4aa1-b309-7ac6b68389e8\") " pod="openstack/heat-cfnapi-878599567-v42s8" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.385078 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32182461-265a-44a6-8003-c2bb7786b8a1-config-data-custom\") pod \"heat-api-78fbc99ff7-r9c56\" (UID: \"32182461-265a-44a6-8003-c2bb7786b8a1\") " pod="openstack/heat-api-78fbc99ff7-r9c56" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.400737 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-d989c6c78-ls6jf" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.486378 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050c3eed-05e1-4aa1-b309-7ac6b68389e8-config-data\") pod \"heat-cfnapi-878599567-v42s8\" (UID: \"050c3eed-05e1-4aa1-b309-7ac6b68389e8\") " pod="openstack/heat-cfnapi-878599567-v42s8" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.486446 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tww4n\" (UniqueName: \"kubernetes.io/projected/32182461-265a-44a6-8003-c2bb7786b8a1-kube-api-access-tww4n\") pod \"heat-api-78fbc99ff7-r9c56\" (UID: \"32182461-265a-44a6-8003-c2bb7786b8a1\") " pod="openstack/heat-api-78fbc99ff7-r9c56" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.486491 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32182461-265a-44a6-8003-c2bb7786b8a1-combined-ca-bundle\") pod \"heat-api-78fbc99ff7-r9c56\" (UID: \"32182461-265a-44a6-8003-c2bb7786b8a1\") " pod="openstack/heat-api-78fbc99ff7-r9c56" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.486510 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050c3eed-05e1-4aa1-b309-7ac6b68389e8-combined-ca-bundle\") pod \"heat-cfnapi-878599567-v42s8\" (UID: \"050c3eed-05e1-4aa1-b309-7ac6b68389e8\") " pod="openstack/heat-cfnapi-878599567-v42s8" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.486535 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjjgb\" (UniqueName: \"kubernetes.io/projected/050c3eed-05e1-4aa1-b309-7ac6b68389e8-kube-api-access-hjjgb\") pod \"heat-cfnapi-878599567-v42s8\" (UID: \"050c3eed-05e1-4aa1-b309-7ac6b68389e8\") " pod="openstack/heat-cfnapi-878599567-v42s8" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.486560 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32182461-265a-44a6-8003-c2bb7786b8a1-config-data\") pod \"heat-api-78fbc99ff7-r9c56\" (UID: \"32182461-265a-44a6-8003-c2bb7786b8a1\") " pod="openstack/heat-api-78fbc99ff7-r9c56" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.486587 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/050c3eed-05e1-4aa1-b309-7ac6b68389e8-config-data-custom\") pod \"heat-cfnapi-878599567-v42s8\" (UID: \"050c3eed-05e1-4aa1-b309-7ac6b68389e8\") " pod="openstack/heat-cfnapi-878599567-v42s8" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.486626 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32182461-265a-44a6-8003-c2bb7786b8a1-config-data-custom\") pod \"heat-api-78fbc99ff7-r9c56\" (UID: \"32182461-265a-44a6-8003-c2bb7786b8a1\") " 
pod="openstack/heat-api-78fbc99ff7-r9c56" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.492709 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32182461-265a-44a6-8003-c2bb7786b8a1-config-data\") pod \"heat-api-78fbc99ff7-r9c56\" (UID: \"32182461-265a-44a6-8003-c2bb7786b8a1\") " pod="openstack/heat-api-78fbc99ff7-r9c56" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.494607 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32182461-265a-44a6-8003-c2bb7786b8a1-config-data-custom\") pod \"heat-api-78fbc99ff7-r9c56\" (UID: \"32182461-265a-44a6-8003-c2bb7786b8a1\") " pod="openstack/heat-api-78fbc99ff7-r9c56" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.494671 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32182461-265a-44a6-8003-c2bb7786b8a1-combined-ca-bundle\") pod \"heat-api-78fbc99ff7-r9c56\" (UID: \"32182461-265a-44a6-8003-c2bb7786b8a1\") " pod="openstack/heat-api-78fbc99ff7-r9c56" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.495166 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050c3eed-05e1-4aa1-b309-7ac6b68389e8-combined-ca-bundle\") pod \"heat-cfnapi-878599567-v42s8\" (UID: \"050c3eed-05e1-4aa1-b309-7ac6b68389e8\") " pod="openstack/heat-cfnapi-878599567-v42s8" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.496749 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050c3eed-05e1-4aa1-b309-7ac6b68389e8-config-data\") pod \"heat-cfnapi-878599567-v42s8\" (UID: \"050c3eed-05e1-4aa1-b309-7ac6b68389e8\") " pod="openstack/heat-cfnapi-878599567-v42s8" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.500822 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/050c3eed-05e1-4aa1-b309-7ac6b68389e8-config-data-custom\") pod \"heat-cfnapi-878599567-v42s8\" (UID: \"050c3eed-05e1-4aa1-b309-7ac6b68389e8\") " pod="openstack/heat-cfnapi-878599567-v42s8" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.510271 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjjgb\" (UniqueName: \"kubernetes.io/projected/050c3eed-05e1-4aa1-b309-7ac6b68389e8-kube-api-access-hjjgb\") pod \"heat-cfnapi-878599567-v42s8\" (UID: \"050c3eed-05e1-4aa1-b309-7ac6b68389e8\") " pod="openstack/heat-cfnapi-878599567-v42s8" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.514315 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tww4n\" (UniqueName: \"kubernetes.io/projected/32182461-265a-44a6-8003-c2bb7786b8a1-kube-api-access-tww4n\") pod \"heat-api-78fbc99ff7-r9c56\" (UID: \"32182461-265a-44a6-8003-c2bb7786b8a1\") " pod="openstack/heat-api-78fbc99ff7-r9c56" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.690287 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-878599567-v42s8" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.698026 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-78fbc99ff7-r9c56" Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.864203 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-d989c6c78-ls6jf"] Dec 06 09:11:49 crc kubenswrapper[4895]: I1206 09:11:49.951943 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d989c6c78-ls6jf" event={"ID":"7b9ac127-3b4b-4c4e-a6be-57921f06f84b","Type":"ContainerStarted","Data":"4955c73e40441922a62eaf77a501d568f8b6174e4083cf7b1b778a84fdfe51c1"} Dec 06 09:11:50 crc kubenswrapper[4895]: I1206 09:11:50.209897 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-878599567-v42s8"] Dec 06 09:11:50 crc kubenswrapper[4895]: W1206 09:11:50.211798 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod050c3eed_05e1_4aa1_b309_7ac6b68389e8.slice/crio-6842b57793e7f034b9e55e7579a6d7b3a627fa93606ab88ca2562ffbbc35a4a3 WatchSource:0}: Error finding container 6842b57793e7f034b9e55e7579a6d7b3a627fa93606ab88ca2562ffbbc35a4a3: Status 404 returned error can't find the container with id 6842b57793e7f034b9e55e7579a6d7b3a627fa93606ab88ca2562ffbbc35a4a3 Dec 06 09:11:50 crc kubenswrapper[4895]: I1206 09:11:50.326977 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78fbc99ff7-r9c56"] Dec 06 09:11:50 crc kubenswrapper[4895]: W1206 09:11:50.327442 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32182461_265a_44a6_8003_c2bb7786b8a1.slice/crio-2036f13e1e069a75f2767449042a9fdf04306e083b4fd3ae58bab0345257e031 WatchSource:0}: Error finding container 2036f13e1e069a75f2767449042a9fdf04306e083b4fd3ae58bab0345257e031: Status 404 returned error can't find the container with id 2036f13e1e069a75f2767449042a9fdf04306e083b4fd3ae58bab0345257e031 Dec 06 09:11:50 crc kubenswrapper[4895]: I1206 09:11:50.974681 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-878599567-v42s8" event={"ID":"050c3eed-05e1-4aa1-b309-7ac6b68389e8","Type":"ContainerStarted","Data":"6842b57793e7f034b9e55e7579a6d7b3a627fa93606ab88ca2562ffbbc35a4a3"} Dec 06 09:11:50 crc kubenswrapper[4895]: I1206 09:11:50.978556 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d989c6c78-ls6jf" event={"ID":"7b9ac127-3b4b-4c4e-a6be-57921f06f84b","Type":"ContainerStarted","Data":"1dbe2d089955436b4f43596bc8811bddb12ee43164149fbcbdec0c990ebd3b3b"} Dec 06 09:11:50 crc kubenswrapper[4895]: I1206 09:11:50.978765 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-d989c6c78-ls6jf" Dec 06 09:11:50 crc kubenswrapper[4895]: I1206 09:11:50.982910 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78fbc99ff7-r9c56" event={"ID":"32182461-265a-44a6-8003-c2bb7786b8a1","Type":"ContainerStarted","Data":"2036f13e1e069a75f2767449042a9fdf04306e083b4fd3ae58bab0345257e031"} Dec 06 09:11:50 crc kubenswrapper[4895]: I1206 09:11:50.994669 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-d989c6c78-ls6jf" podStartSLOduration=1.994652605 podStartE2EDuration="1.994652605s" podCreationTimestamp="2025-12-06 09:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:11:50.99262279 +0000 UTC m=+8073.394011670" 
watchObservedRunningTime="2025-12-06 09:11:50.994652605 +0000 UTC m=+8073.396041475" Dec 06 09:11:53 crc kubenswrapper[4895]: I1206 09:11:53.011733 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78fbc99ff7-r9c56" event={"ID":"32182461-265a-44a6-8003-c2bb7786b8a1","Type":"ContainerStarted","Data":"7f8b24bb40bfb3c0b3dc8966ec80f71fc04ad01dcc9166e99e65c1700e999b44"} Dec 06 09:11:53 crc kubenswrapper[4895]: I1206 09:11:53.012328 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-78fbc99ff7-r9c56" Dec 06 09:11:53 crc kubenswrapper[4895]: I1206 09:11:53.013351 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-878599567-v42s8" event={"ID":"050c3eed-05e1-4aa1-b309-7ac6b68389e8","Type":"ContainerStarted","Data":"e13d3f20f5efd5138f6e473b7980e16c7b32c200fb41dc1050c556cd5912b9ae"} Dec 06 09:11:53 crc kubenswrapper[4895]: I1206 09:11:53.013573 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-878599567-v42s8" Dec 06 09:11:53 crc kubenswrapper[4895]: I1206 09:11:53.030159 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-78fbc99ff7-r9c56" podStartSLOduration=2.173640724 podStartE2EDuration="4.030142421s" podCreationTimestamp="2025-12-06 09:11:49 +0000 UTC" firstStartedPulling="2025-12-06 09:11:50.330364518 +0000 UTC m=+8072.731753378" lastFinishedPulling="2025-12-06 09:11:52.186866185 +0000 UTC m=+8074.588255075" observedRunningTime="2025-12-06 09:11:53.02602293 +0000 UTC m=+8075.427411800" watchObservedRunningTime="2025-12-06 09:11:53.030142421 +0000 UTC m=+8075.431531281" Dec 06 09:11:53 crc kubenswrapper[4895]: I1206 09:11:53.052313 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-878599567-v42s8" podStartSLOduration=2.084806977 podStartE2EDuration="4.052291336s" podCreationTimestamp="2025-12-06 09:11:49 +0000 UTC" firstStartedPulling="2025-12-06 09:11:50.215384049 +0000 UTC m=+8072.616772919" lastFinishedPulling="2025-12-06 09:11:52.182868408 +0000 UTC m=+8074.584257278" observedRunningTime="2025-12-06 09:11:53.046647185 +0000 UTC m=+8075.448036055" watchObservedRunningTime="2025-12-06 09:11:53.052291336 +0000 UTC m=+8075.453680206" Dec 06 09:11:56 crc kubenswrapper[4895]: I1206 09:11:56.254673 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7c89759895-d7j8d" Dec 06 09:11:57 crc kubenswrapper[4895]: I1206 09:11:57.974317 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7c89759895-d7j8d" Dec 06 09:11:58 crc kubenswrapper[4895]: I1206 09:11:58.066814 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57b6458b8f-mj9hj"] Dec 06 09:11:58 crc kubenswrapper[4895]: I1206 09:11:58.067105 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-57b6458b8f-mj9hj" podUID="e9e07b07-5f94-4787-88fb-2daf876f572c" containerName="horizon-log" containerID="cri-o://2c1e2855057785f4adb79b59abc1e8177e81c638b3c7826cb7b13626674bfe16" gracePeriod=30 Dec 06 09:11:58 crc kubenswrapper[4895]: I1206 09:11:58.067247 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-57b6458b8f-mj9hj" podUID="e9e07b07-5f94-4787-88fb-2daf876f572c" containerName="horizon" containerID="cri-o://cca43bf0d3e36cab245952ebee5fa85a6d032cc75c467d766338ca9bf240ed77" gracePeriod=30 Dec 06 09:12:00 crc 
kubenswrapper[4895]: I1206 09:12:00.050884 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef" Dec 06 09:12:00 crc kubenswrapper[4895]: E1206 09:12:00.051218 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:12:01 crc kubenswrapper[4895]: I1206 09:12:01.060329 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-78fbc99ff7-r9c56" Dec 06 09:12:01 crc kubenswrapper[4895]: I1206 09:12:01.158959 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-878599567-v42s8" Dec 06 09:12:01 crc kubenswrapper[4895]: I1206 09:12:01.486586 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-57b6458b8f-mj9hj" podUID="e9e07b07-5f94-4787-88fb-2daf876f572c" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.112:8080/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:45930->10.217.1.112:8080: read: connection reset by peer" Dec 06 09:12:02 crc kubenswrapper[4895]: I1206 09:12:02.107528 4895 generic.go:334] "Generic (PLEG): container finished" podID="e9e07b07-5f94-4787-88fb-2daf876f572c" containerID="cca43bf0d3e36cab245952ebee5fa85a6d032cc75c467d766338ca9bf240ed77" exitCode=0 Dec 06 09:12:02 crc kubenswrapper[4895]: I1206 09:12:02.107575 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57b6458b8f-mj9hj" event={"ID":"e9e07b07-5f94-4787-88fb-2daf876f572c","Type":"ContainerDied","Data":"cca43bf0d3e36cab245952ebee5fa85a6d032cc75c467d766338ca9bf240ed77"} Dec 06 09:12:09 crc kubenswrapper[4895]: I1206 09:12:09.442725 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-d989c6c78-ls6jf" Dec 06 09:12:10 crc kubenswrapper[4895]: I1206 09:12:10.847144 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-57b6458b8f-mj9hj" podUID="e9e07b07-5f94-4787-88fb-2daf876f572c" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.112:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.112:8080: connect: connection refused" Dec 06 09:12:12 crc kubenswrapper[4895]: I1206 09:12:12.045056 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-sqzjf"] Dec 06 09:12:12 crc kubenswrapper[4895]: I1206 09:12:12.062264 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-0bd3-account-create-update-cngw7"] Dec 06 09:12:12 crc kubenswrapper[4895]: I1206 09:12:12.077171 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0bd3-account-create-update-cngw7"] Dec 06 09:12:12 crc kubenswrapper[4895]: I1206 09:12:12.088511 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-sqzjf"] Dec 06 09:12:14 crc kubenswrapper[4895]: I1206 09:12:14.062575 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5060d9e6-9916-4fe2-a34d-63c9c4e21f9e" path="/var/lib/kubelet/pods/5060d9e6-9916-4fe2-a34d-63c9c4e21f9e/volumes" Dec 06 09:12:14 crc kubenswrapper[4895]: I1206 09:12:14.063301 4895 
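The "Cleaned up orphaned pod volumes dir" entries nearby are kubelet housekeeping for pods already removed from the API whose state directories remained on disk: once every volume is unmounted, the per-pod directory under /var/lib/kubelet/pods/<uid>/volumes is deleted. A hypothetical spot-check, run as root on the node; the default state path is an assumption:

```python
from pathlib import Path

KUBELET_PODS = Path("/var/lib/kubelet/pods")  # assumed default kubelet root dir

# Pod UIDs listed here but absent from the API server are cleanup candidates,
# the same dirs the kubelet_volumes.go:163 entries above report removing.
for pod_dir in sorted(KUBELET_PODS.iterdir()):
    vols = pod_dir / "volumes"
    n_entries = sum(1 for _ in vols.rglob("*")) if vols.exists() else 0
    print(f"{pod_dir.name}: {n_entries} volume entries")
```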
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eaadb1f-b708-4d09-8b99-d5ca878579ce" path="/var/lib/kubelet/pods/9eaadb1f-b708-4d09-8b99-d5ca878579ce/volumes" Dec 06 09:12:15 crc kubenswrapper[4895]: I1206 09:12:15.052161 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef" Dec 06 09:12:15 crc kubenswrapper[4895]: E1206 09:12:15.053191 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:12:17 crc kubenswrapper[4895]: I1206 09:12:17.197016 4895 scope.go:117] "RemoveContainer" containerID="72b57c8d198d858ed107418aa3c02ac1fc85c8871c55781a74c0e7be87dab263" Dec 06 09:12:17 crc kubenswrapper[4895]: I1206 09:12:17.260667 4895 scope.go:117] "RemoveContainer" containerID="c0ddd84e7a74098af020d1c96a4557d49c9f216d84d3fdaa132edcef2494a512" Dec 06 09:12:17 crc kubenswrapper[4895]: I1206 09:12:17.312259 4895 scope.go:117] "RemoveContainer" containerID="4d7d71eb7bdd226b39e25e4d2f213c727938957b8b4e99120105b3c76e0e7f2f" Dec 06 09:12:17 crc kubenswrapper[4895]: I1206 09:12:17.345950 4895 scope.go:117] "RemoveContainer" containerID="44e7a66b43ca519e7b7dbe47399fdbeea08ae232a1aac81419dde5d48e60c6f5" Dec 06 09:12:18 crc kubenswrapper[4895]: I1206 09:12:18.148030 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf"] Dec 06 09:12:18 crc kubenswrapper[4895]: I1206 09:12:18.150225 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf" Dec 06 09:12:18 crc kubenswrapper[4895]: I1206 09:12:18.157158 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 06 09:12:18 crc kubenswrapper[4895]: I1206 09:12:18.157168 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf"] Dec 06 09:12:18 crc kubenswrapper[4895]: I1206 09:12:18.220526 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbxjt\" (UniqueName: \"kubernetes.io/projected/53a6fc52-024b-4b34-9bdb-da4207dd83d6-kube-api-access-qbxjt\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf\" (UID: \"53a6fc52-024b-4b34-9bdb-da4207dd83d6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf" Dec 06 09:12:18 crc kubenswrapper[4895]: I1206 09:12:18.221628 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53a6fc52-024b-4b34-9bdb-da4207dd83d6-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf\" (UID: \"53a6fc52-024b-4b34-9bdb-da4207dd83d6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf" Dec 06 09:12:18 crc kubenswrapper[4895]: I1206 09:12:18.221941 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53a6fc52-024b-4b34-9bdb-da4207dd83d6-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf\" (UID: \"53a6fc52-024b-4b34-9bdb-da4207dd83d6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf" Dec 06 09:12:18 crc kubenswrapper[4895]: I1206 09:12:18.323816 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53a6fc52-024b-4b34-9bdb-da4207dd83d6-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf\" (UID: \"53a6fc52-024b-4b34-9bdb-da4207dd83d6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf" Dec 06 09:12:18 crc kubenswrapper[4895]: I1206 09:12:18.323977 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbxjt\" (UniqueName: \"kubernetes.io/projected/53a6fc52-024b-4b34-9bdb-da4207dd83d6-kube-api-access-qbxjt\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf\" (UID: \"53a6fc52-024b-4b34-9bdb-da4207dd83d6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf" Dec 06 09:12:18 crc kubenswrapper[4895]: I1206 09:12:18.324023 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53a6fc52-024b-4b34-9bdb-da4207dd83d6-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf\" (UID: \"53a6fc52-024b-4b34-9bdb-da4207dd83d6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf" Dec 06 09:12:18 crc kubenswrapper[4895]: I1206 09:12:18.324562 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/53a6fc52-024b-4b34-9bdb-da4207dd83d6-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf\" (UID: \"53a6fc52-024b-4b34-9bdb-da4207dd83d6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf" Dec 06 09:12:18 crc kubenswrapper[4895]: I1206 09:12:18.324435 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53a6fc52-024b-4b34-9bdb-da4207dd83d6-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf\" (UID: \"53a6fc52-024b-4b34-9bdb-da4207dd83d6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf" Dec 06 09:12:18 crc kubenswrapper[4895]: I1206 09:12:18.355545 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbxjt\" (UniqueName: \"kubernetes.io/projected/53a6fc52-024b-4b34-9bdb-da4207dd83d6-kube-api-access-qbxjt\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf\" (UID: \"53a6fc52-024b-4b34-9bdb-da4207dd83d6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf" Dec 06 09:12:18 crc kubenswrapper[4895]: I1206 09:12:18.480344 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf" Dec 06 09:12:19 crc kubenswrapper[4895]: I1206 09:12:19.107626 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf"] Dec 06 09:12:19 crc kubenswrapper[4895]: W1206 09:12:19.109746 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53a6fc52_024b_4b34_9bdb_da4207dd83d6.slice/crio-7d65b31388a9d7f03b651691fcf26c54162071fa695f512dc560da329359ecf3 WatchSource:0}: Error finding container 7d65b31388a9d7f03b651691fcf26c54162071fa695f512dc560da329359ecf3: Status 404 returned error can't find the container with id 7d65b31388a9d7f03b651691fcf26c54162071fa695f512dc560da329359ecf3 Dec 06 09:12:19 crc kubenswrapper[4895]: I1206 09:12:19.304695 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf" event={"ID":"53a6fc52-024b-4b34-9bdb-da4207dd83d6","Type":"ContainerStarted","Data":"5fcf8a03f6341224c36d92d2460e833eb5a1dba182ae58e8bbf432a5d7a8ae92"} Dec 06 09:12:19 crc kubenswrapper[4895]: I1206 09:12:19.304744 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf" event={"ID":"53a6fc52-024b-4b34-9bdb-da4207dd83d6","Type":"ContainerStarted","Data":"7d65b31388a9d7f03b651691fcf26c54162071fa695f512dc560da329359ecf3"} Dec 06 09:12:20 crc kubenswrapper[4895]: I1206 09:12:20.319062 4895 generic.go:334] "Generic (PLEG): container finished" podID="53a6fc52-024b-4b34-9bdb-da4207dd83d6" containerID="5fcf8a03f6341224c36d92d2460e833eb5a1dba182ae58e8bbf432a5d7a8ae92" exitCode=0 Dec 06 09:12:20 crc kubenswrapper[4895]: I1206 09:12:20.319206 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf" event={"ID":"53a6fc52-024b-4b34-9bdb-da4207dd83d6","Type":"ContainerDied","Data":"5fcf8a03f6341224c36d92d2460e833eb5a1dba182ae58e8bbf432a5d7a8ae92"} Dec 06 09:12:20 crc 
kubenswrapper[4895]: I1206 09:12:20.847615 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-57b6458b8f-mj9hj" podUID="e9e07b07-5f94-4787-88fb-2daf876f572c" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.112:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.112:8080: connect: connection refused" Dec 06 09:12:20 crc kubenswrapper[4895]: I1206 09:12:20.847839 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:12:22 crc kubenswrapper[4895]: I1206 09:12:22.029259 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-h7rnz"] Dec 06 09:12:22 crc kubenswrapper[4895]: I1206 09:12:22.036334 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-h7rnz"] Dec 06 09:12:22 crc kubenswrapper[4895]: I1206 09:12:22.067899 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af" path="/var/lib/kubelet/pods/3e9ed8b7-c8f7-4f57-952a-1ebccb0db6af/volumes" Dec 06 09:12:22 crc kubenswrapper[4895]: I1206 09:12:22.357914 4895 generic.go:334] "Generic (PLEG): container finished" podID="53a6fc52-024b-4b34-9bdb-da4207dd83d6" containerID="d57d362f6bbb7d7a6d1f3ebc4be8be7fa2dfe82d3acbec004ad28a5860e23547" exitCode=0 Dec 06 09:12:22 crc kubenswrapper[4895]: I1206 09:12:22.357975 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf" event={"ID":"53a6fc52-024b-4b34-9bdb-da4207dd83d6","Type":"ContainerDied","Data":"d57d362f6bbb7d7a6d1f3ebc4be8be7fa2dfe82d3acbec004ad28a5860e23547"} Dec 06 09:12:23 crc kubenswrapper[4895]: I1206 09:12:23.371001 4895 generic.go:334] "Generic (PLEG): container finished" podID="53a6fc52-024b-4b34-9bdb-da4207dd83d6" containerID="f87f579dc5c8a86497650925c0f8b74680b75d7f0a4eb9200bbca499966e7c7e" exitCode=0 Dec 06 09:12:23 crc kubenswrapper[4895]: I1206 09:12:23.371105 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf" event={"ID":"53a6fc52-024b-4b34-9bdb-da4207dd83d6","Type":"ContainerDied","Data":"f87f579dc5c8a86497650925c0f8b74680b75d7f0a4eb9200bbca499966e7c7e"} Dec 06 09:12:24 crc kubenswrapper[4895]: I1206 09:12:24.818026 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf" Dec 06 09:12:24 crc kubenswrapper[4895]: I1206 09:12:24.948245 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbxjt\" (UniqueName: \"kubernetes.io/projected/53a6fc52-024b-4b34-9bdb-da4207dd83d6-kube-api-access-qbxjt\") pod \"53a6fc52-024b-4b34-9bdb-da4207dd83d6\" (UID: \"53a6fc52-024b-4b34-9bdb-da4207dd83d6\") " Dec 06 09:12:24 crc kubenswrapper[4895]: I1206 09:12:24.948356 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53a6fc52-024b-4b34-9bdb-da4207dd83d6-util\") pod \"53a6fc52-024b-4b34-9bdb-da4207dd83d6\" (UID: \"53a6fc52-024b-4b34-9bdb-da4207dd83d6\") " Dec 06 09:12:24 crc kubenswrapper[4895]: I1206 09:12:24.948514 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53a6fc52-024b-4b34-9bdb-da4207dd83d6-bundle\") pod \"53a6fc52-024b-4b34-9bdb-da4207dd83d6\" (UID: \"53a6fc52-024b-4b34-9bdb-da4207dd83d6\") " Dec 06 09:12:24 crc kubenswrapper[4895]: I1206 09:12:24.950246 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53a6fc52-024b-4b34-9bdb-da4207dd83d6-bundle" (OuterVolumeSpecName: "bundle") pod "53a6fc52-024b-4b34-9bdb-da4207dd83d6" (UID: "53a6fc52-024b-4b34-9bdb-da4207dd83d6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:12:24 crc kubenswrapper[4895]: I1206 09:12:24.960812 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53a6fc52-024b-4b34-9bdb-da4207dd83d6-kube-api-access-qbxjt" (OuterVolumeSpecName: "kube-api-access-qbxjt") pod "53a6fc52-024b-4b34-9bdb-da4207dd83d6" (UID: "53a6fc52-024b-4b34-9bdb-da4207dd83d6"). InnerVolumeSpecName "kube-api-access-qbxjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:12:24 crc kubenswrapper[4895]: I1206 09:12:24.963576 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53a6fc52-024b-4b34-9bdb-da4207dd83d6-util" (OuterVolumeSpecName: "util") pod "53a6fc52-024b-4b34-9bdb-da4207dd83d6" (UID: "53a6fc52-024b-4b34-9bdb-da4207dd83d6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:12:25 crc kubenswrapper[4895]: I1206 09:12:25.050895 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbxjt\" (UniqueName: \"kubernetes.io/projected/53a6fc52-024b-4b34-9bdb-da4207dd83d6-kube-api-access-qbxjt\") on node \"crc\" DevicePath \"\"" Dec 06 09:12:25 crc kubenswrapper[4895]: I1206 09:12:25.050933 4895 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53a6fc52-024b-4b34-9bdb-da4207dd83d6-util\") on node \"crc\" DevicePath \"\"" Dec 06 09:12:25 crc kubenswrapper[4895]: I1206 09:12:25.050943 4895 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53a6fc52-024b-4b34-9bdb-da4207dd83d6-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:12:25 crc kubenswrapper[4895]: I1206 09:12:25.396181 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf" event={"ID":"53a6fc52-024b-4b34-9bdb-da4207dd83d6","Type":"ContainerDied","Data":"7d65b31388a9d7f03b651691fcf26c54162071fa695f512dc560da329359ecf3"} Dec 06 09:12:25 crc kubenswrapper[4895]: I1206 09:12:25.396228 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf" Dec 06 09:12:25 crc kubenswrapper[4895]: I1206 09:12:25.396240 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d65b31388a9d7f03b651691fcf26c54162071fa695f512dc560da329359ecf3" Dec 06 09:12:28 crc kubenswrapper[4895]: I1206 09:12:28.436398 4895 generic.go:334] "Generic (PLEG): container finished" podID="e9e07b07-5f94-4787-88fb-2daf876f572c" containerID="2c1e2855057785f4adb79b59abc1e8177e81c638b3c7826cb7b13626674bfe16" exitCode=137 Dec 06 09:12:28 crc kubenswrapper[4895]: I1206 09:12:28.436903 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57b6458b8f-mj9hj" event={"ID":"e9e07b07-5f94-4787-88fb-2daf876f572c","Type":"ContainerDied","Data":"2c1e2855057785f4adb79b59abc1e8177e81c638b3c7826cb7b13626674bfe16"} Dec 06 09:12:28 crc kubenswrapper[4895]: I1206 09:12:28.572775 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:12:28 crc kubenswrapper[4895]: I1206 09:12:28.729763 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9e07b07-5f94-4787-88fb-2daf876f572c-config-data\") pod \"e9e07b07-5f94-4787-88fb-2daf876f572c\" (UID: \"e9e07b07-5f94-4787-88fb-2daf876f572c\") " Dec 06 09:12:28 crc kubenswrapper[4895]: I1206 09:12:28.729901 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9l2s\" (UniqueName: \"kubernetes.io/projected/e9e07b07-5f94-4787-88fb-2daf876f572c-kube-api-access-p9l2s\") pod \"e9e07b07-5f94-4787-88fb-2daf876f572c\" (UID: \"e9e07b07-5f94-4787-88fb-2daf876f572c\") " Dec 06 09:12:28 crc kubenswrapper[4895]: I1206 09:12:28.729925 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e9e07b07-5f94-4787-88fb-2daf876f572c-horizon-secret-key\") pod \"e9e07b07-5f94-4787-88fb-2daf876f572c\" (UID: \"e9e07b07-5f94-4787-88fb-2daf876f572c\") " Dec 06 09:12:28 crc kubenswrapper[4895]: I1206 09:12:28.729981 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9e07b07-5f94-4787-88fb-2daf876f572c-scripts\") pod \"e9e07b07-5f94-4787-88fb-2daf876f572c\" (UID: \"e9e07b07-5f94-4787-88fb-2daf876f572c\") " Dec 06 09:12:28 crc kubenswrapper[4895]: I1206 09:12:28.730174 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9e07b07-5f94-4787-88fb-2daf876f572c-logs\") pod \"e9e07b07-5f94-4787-88fb-2daf876f572c\" (UID: \"e9e07b07-5f94-4787-88fb-2daf876f572c\") " Dec 06 09:12:28 crc kubenswrapper[4895]: I1206 09:12:28.730838 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9e07b07-5f94-4787-88fb-2daf876f572c-logs" (OuterVolumeSpecName: "logs") pod "e9e07b07-5f94-4787-88fb-2daf876f572c" (UID: "e9e07b07-5f94-4787-88fb-2daf876f572c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:12:28 crc kubenswrapper[4895]: I1206 09:12:28.735092 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9e07b07-5f94-4787-88fb-2daf876f572c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e9e07b07-5f94-4787-88fb-2daf876f572c" (UID: "e9e07b07-5f94-4787-88fb-2daf876f572c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:12:28 crc kubenswrapper[4895]: I1206 09:12:28.735408 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9e07b07-5f94-4787-88fb-2daf876f572c-kube-api-access-p9l2s" (OuterVolumeSpecName: "kube-api-access-p9l2s") pod "e9e07b07-5f94-4787-88fb-2daf876f572c" (UID: "e9e07b07-5f94-4787-88fb-2daf876f572c"). InnerVolumeSpecName "kube-api-access-p9l2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:12:28 crc kubenswrapper[4895]: I1206 09:12:28.754600 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9e07b07-5f94-4787-88fb-2daf876f572c-config-data" (OuterVolumeSpecName: "config-data") pod "e9e07b07-5f94-4787-88fb-2daf876f572c" (UID: "e9e07b07-5f94-4787-88fb-2daf876f572c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:12:28 crc kubenswrapper[4895]: I1206 09:12:28.769444 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9e07b07-5f94-4787-88fb-2daf876f572c-scripts" (OuterVolumeSpecName: "scripts") pod "e9e07b07-5f94-4787-88fb-2daf876f572c" (UID: "e9e07b07-5f94-4787-88fb-2daf876f572c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:12:28 crc kubenswrapper[4895]: I1206 09:12:28.832363 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9l2s\" (UniqueName: \"kubernetes.io/projected/e9e07b07-5f94-4787-88fb-2daf876f572c-kube-api-access-p9l2s\") on node \"crc\" DevicePath \"\"" Dec 06 09:12:28 crc kubenswrapper[4895]: I1206 09:12:28.832410 4895 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e9e07b07-5f94-4787-88fb-2daf876f572c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:12:28 crc kubenswrapper[4895]: I1206 09:12:28.832427 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9e07b07-5f94-4787-88fb-2daf876f572c-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:12:28 crc kubenswrapper[4895]: I1206 09:12:28.832442 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9e07b07-5f94-4787-88fb-2daf876f572c-logs\") on node \"crc\" DevicePath \"\"" Dec 06 09:12:28 crc kubenswrapper[4895]: I1206 09:12:28.832458 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9e07b07-5f94-4787-88fb-2daf876f572c-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:12:29 crc kubenswrapper[4895]: I1206 09:12:29.051649 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef" Dec 06 09:12:29 crc kubenswrapper[4895]: E1206 09:12:29.052124 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:12:29 crc kubenswrapper[4895]: I1206 09:12:29.454854 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57b6458b8f-mj9hj" event={"ID":"e9e07b07-5f94-4787-88fb-2daf876f572c","Type":"ContainerDied","Data":"fe21735b8543cf53f899d42bf7f92e032b7c08f8efebf3a838d18d393703a188"} Dec 06 09:12:29 crc kubenswrapper[4895]: I1206 09:12:29.454918 4895 scope.go:117] "RemoveContainer" containerID="cca43bf0d3e36cab245952ebee5fa85a6d032cc75c467d766338ca9bf240ed77" Dec 06 09:12:29 crc kubenswrapper[4895]: I1206 09:12:29.455008 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57b6458b8f-mj9hj" Dec 06 09:12:29 crc kubenswrapper[4895]: I1206 09:12:29.515644 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57b6458b8f-mj9hj"] Dec 06 09:12:29 crc kubenswrapper[4895]: I1206 09:12:29.524688 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-57b6458b8f-mj9hj"] Dec 06 09:12:29 crc kubenswrapper[4895]: I1206 09:12:29.658815 4895 scope.go:117] "RemoveContainer" containerID="2c1e2855057785f4adb79b59abc1e8177e81c638b3c7826cb7b13626674bfe16" Dec 06 09:12:30 crc kubenswrapper[4895]: I1206 09:12:30.065764 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9e07b07-5f94-4787-88fb-2daf876f572c" path="/var/lib/kubelet/pods/e9e07b07-5f94-4787-88fb-2daf876f572c/volumes" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.079918 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-8ffvz"] Dec 06 09:12:36 crc kubenswrapper[4895]: E1206 09:12:36.080915 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a6fc52-024b-4b34-9bdb-da4207dd83d6" containerName="extract" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.080932 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a6fc52-024b-4b34-9bdb-da4207dd83d6" containerName="extract" Dec 06 09:12:36 crc kubenswrapper[4895]: E1206 09:12:36.080955 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9e07b07-5f94-4787-88fb-2daf876f572c" containerName="horizon" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.080961 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9e07b07-5f94-4787-88fb-2daf876f572c" containerName="horizon" Dec 06 09:12:36 crc kubenswrapper[4895]: E1206 09:12:36.080972 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9e07b07-5f94-4787-88fb-2daf876f572c" containerName="horizon-log" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.080978 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9e07b07-5f94-4787-88fb-2daf876f572c" containerName="horizon-log" Dec 06 09:12:36 crc kubenswrapper[4895]: E1206 09:12:36.080990 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a6fc52-024b-4b34-9bdb-da4207dd83d6" containerName="pull" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.080997 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a6fc52-024b-4b34-9bdb-da4207dd83d6" containerName="pull" Dec 06 09:12:36 crc kubenswrapper[4895]: E1206 09:12:36.081013 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a6fc52-024b-4b34-9bdb-da4207dd83d6" containerName="util" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.081021 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a6fc52-024b-4b34-9bdb-da4207dd83d6" containerName="util" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.081225 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9e07b07-5f94-4787-88fb-2daf876f572c" containerName="horizon-log" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.081249 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9e07b07-5f94-4787-88fb-2daf876f572c" containerName="horizon" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.081260 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="53a6fc52-024b-4b34-9bdb-da4207dd83d6" containerName="extract" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.083459 
4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-8ffvz" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.085457 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-fkgml" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.086952 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.087306 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.092040 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-8ffvz"] Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.180074 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7rvs\" (UniqueName: \"kubernetes.io/projected/6a06d981-e38f-4b27-b597-271076759c4b-kube-api-access-r7rvs\") pod \"obo-prometheus-operator-668cf9dfbb-8ffvz\" (UID: \"6a06d981-e38f-4b27-b597-271076759c4b\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-8ffvz" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.214217 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-z5j98"] Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.215673 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-z5j98" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.218762 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.231954 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-z5j98"] Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.232005 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-6szsx"] Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.233196 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-6szsx" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.233937 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-4cdf8" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.251826 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-6szsx"] Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.281777 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/542a8fcd-d1af-493c-ae31-c370a4f5d38c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7789df6f88-z5j98\" (UID: \"542a8fcd-d1af-493c-ae31-c370a4f5d38c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-z5j98" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.282005 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7d8afee3-d9f0-42ac-b2e9-89472dfec610-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7789df6f88-6szsx\" (UID: \"7d8afee3-d9f0-42ac-b2e9-89472dfec610\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-6szsx" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.282048 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/542a8fcd-d1af-493c-ae31-c370a4f5d38c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7789df6f88-z5j98\" (UID: \"542a8fcd-d1af-493c-ae31-c370a4f5d38c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-z5j98" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.282128 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7d8afee3-d9f0-42ac-b2e9-89472dfec610-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7789df6f88-6szsx\" (UID: \"7d8afee3-d9f0-42ac-b2e9-89472dfec610\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-6szsx" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.282272 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7rvs\" (UniqueName: \"kubernetes.io/projected/6a06d981-e38f-4b27-b597-271076759c4b-kube-api-access-r7rvs\") pod \"obo-prometheus-operator-668cf9dfbb-8ffvz\" (UID: \"6a06d981-e38f-4b27-b597-271076759c4b\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-8ffvz" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.331412 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7rvs\" (UniqueName: \"kubernetes.io/projected/6a06d981-e38f-4b27-b597-271076759c4b-kube-api-access-r7rvs\") pod \"obo-prometheus-operator-668cf9dfbb-8ffvz\" (UID: \"6a06d981-e38f-4b27-b597-271076759c4b\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-8ffvz" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.383904 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/542a8fcd-d1af-493c-ae31-c370a4f5d38c-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-7789df6f88-z5j98\" (UID: \"542a8fcd-d1af-493c-ae31-c370a4f5d38c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-z5j98" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.384273 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7d8afee3-d9f0-42ac-b2e9-89472dfec610-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7789df6f88-6szsx\" (UID: \"7d8afee3-d9f0-42ac-b2e9-89472dfec610\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-6szsx" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.384309 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/542a8fcd-d1af-493c-ae31-c370a4f5d38c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7789df6f88-z5j98\" (UID: \"542a8fcd-d1af-493c-ae31-c370a4f5d38c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-z5j98" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.384351 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7d8afee3-d9f0-42ac-b2e9-89472dfec610-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7789df6f88-6szsx\" (UID: \"7d8afee3-d9f0-42ac-b2e9-89472dfec610\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-6szsx" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.389123 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7d8afee3-d9f0-42ac-b2e9-89472dfec610-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7789df6f88-6szsx\" (UID: \"7d8afee3-d9f0-42ac-b2e9-89472dfec610\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-6szsx" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.391437 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/542a8fcd-d1af-493c-ae31-c370a4f5d38c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7789df6f88-z5j98\" (UID: \"542a8fcd-d1af-493c-ae31-c370a4f5d38c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-z5j98" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.394105 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7d8afee3-d9f0-42ac-b2e9-89472dfec610-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7789df6f88-6szsx\" (UID: \"7d8afee3-d9f0-42ac-b2e9-89472dfec610\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-6szsx" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.398289 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/542a8fcd-d1af-493c-ae31-c370a4f5d38c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7789df6f88-z5j98\" (UID: \"542a8fcd-d1af-493c-ae31-c370a4f5d38c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-z5j98" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.406974 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-8ffvz" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.438369 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-fmg6b"] Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.441596 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-fmg6b" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.444052 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-mtqpr" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.444289 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.481496 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-fmg6b"] Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.488650 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3cdf03d-0a52-43df-a589-7312ba4056ed-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-fmg6b\" (UID: \"b3cdf03d-0a52-43df-a589-7312ba4056ed\") " pod="openshift-operators/observability-operator-d8bb48f5d-fmg6b" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.488822 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjntj\" (UniqueName: \"kubernetes.io/projected/b3cdf03d-0a52-43df-a589-7312ba4056ed-kube-api-access-rjntj\") pod \"observability-operator-d8bb48f5d-fmg6b\" (UID: \"b3cdf03d-0a52-43df-a589-7312ba4056ed\") " pod="openshift-operators/observability-operator-d8bb48f5d-fmg6b" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.554344 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-z5j98" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.570004 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-6szsx" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.591608 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3cdf03d-0a52-43df-a589-7312ba4056ed-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-fmg6b\" (UID: \"b3cdf03d-0a52-43df-a589-7312ba4056ed\") " pod="openshift-operators/observability-operator-d8bb48f5d-fmg6b" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.598682 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjntj\" (UniqueName: \"kubernetes.io/projected/b3cdf03d-0a52-43df-a589-7312ba4056ed-kube-api-access-rjntj\") pod \"observability-operator-d8bb48f5d-fmg6b\" (UID: \"b3cdf03d-0a52-43df-a589-7312ba4056ed\") " pod="openshift-operators/observability-operator-d8bb48f5d-fmg6b" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.656634 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3cdf03d-0a52-43df-a589-7312ba4056ed-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-fmg6b\" (UID: \"b3cdf03d-0a52-43df-a589-7312ba4056ed\") " pod="openshift-operators/observability-operator-d8bb48f5d-fmg6b" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.660283 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjntj\" (UniqueName: \"kubernetes.io/projected/b3cdf03d-0a52-43df-a589-7312ba4056ed-kube-api-access-rjntj\") pod \"observability-operator-d8bb48f5d-fmg6b\" (UID: \"b3cdf03d-0a52-43df-a589-7312ba4056ed\") " pod="openshift-operators/observability-operator-d8bb48f5d-fmg6b" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.665735 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-btk6w"] Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.667027 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-btk6w" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.670378 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-f6mhd" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.682714 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-btk6w"] Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.703308 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c9b65d4-80d1-423e-8a9f-0786e18d0b00-openshift-service-ca\") pod \"perses-operator-5446b9c989-btk6w\" (UID: \"7c9b65d4-80d1-423e-8a9f-0786e18d0b00\") " pod="openshift-operators/perses-operator-5446b9c989-btk6w" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.703396 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txx2n\" (UniqueName: \"kubernetes.io/projected/7c9b65d4-80d1-423e-8a9f-0786e18d0b00-kube-api-access-txx2n\") pod \"perses-operator-5446b9c989-btk6w\" (UID: \"7c9b65d4-80d1-423e-8a9f-0786e18d0b00\") " pod="openshift-operators/perses-operator-5446b9c989-btk6w" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.805691 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c9b65d4-80d1-423e-8a9f-0786e18d0b00-openshift-service-ca\") pod \"perses-operator-5446b9c989-btk6w\" (UID: \"7c9b65d4-80d1-423e-8a9f-0786e18d0b00\") " pod="openshift-operators/perses-operator-5446b9c989-btk6w" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.805749 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txx2n\" (UniqueName: \"kubernetes.io/projected/7c9b65d4-80d1-423e-8a9f-0786e18d0b00-kube-api-access-txx2n\") pod \"perses-operator-5446b9c989-btk6w\" (UID: \"7c9b65d4-80d1-423e-8a9f-0786e18d0b00\") " pod="openshift-operators/perses-operator-5446b9c989-btk6w" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.808780 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c9b65d4-80d1-423e-8a9f-0786e18d0b00-openshift-service-ca\") pod \"perses-operator-5446b9c989-btk6w\" (UID: \"7c9b65d4-80d1-423e-8a9f-0786e18d0b00\") " pod="openshift-operators/perses-operator-5446b9c989-btk6w" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.828403 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txx2n\" (UniqueName: \"kubernetes.io/projected/7c9b65d4-80d1-423e-8a9f-0786e18d0b00-kube-api-access-txx2n\") pod \"perses-operator-5446b9c989-btk6w\" (UID: \"7c9b65d4-80d1-423e-8a9f-0786e18d0b00\") " pod="openshift-operators/perses-operator-5446b9c989-btk6w" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.881512 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-fmg6b" Dec 06 09:12:36 crc kubenswrapper[4895]: I1206 09:12:36.994212 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-btk6w" Dec 06 09:12:37 crc kubenswrapper[4895]: I1206 09:12:37.031937 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-8ffvz"] Dec 06 09:12:37 crc kubenswrapper[4895]: I1206 09:12:37.071084 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 09:12:37 crc kubenswrapper[4895]: I1206 09:12:37.229955 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-z5j98"] Dec 06 09:12:37 crc kubenswrapper[4895]: W1206 09:12:37.231586 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod542a8fcd_d1af_493c_ae31_c370a4f5d38c.slice/crio-71120c29bfe336b8ec47107ada880780810f1b50b6c3fb34f128c4cb4204b7d3 WatchSource:0}: Error finding container 71120c29bfe336b8ec47107ada880780810f1b50b6c3fb34f128c4cb4204b7d3: Status 404 returned error can't find the container with id 71120c29bfe336b8ec47107ada880780810f1b50b6c3fb34f128c4cb4204b7d3 Dec 06 09:12:37 crc kubenswrapper[4895]: I1206 09:12:37.392356 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-6szsx"] Dec 06 09:12:37 crc kubenswrapper[4895]: W1206 09:12:37.400175 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d8afee3_d9f0_42ac_b2e9_89472dfec610.slice/crio-ef6ac3ad1646ce3a1694134b37ab46d7c56c2bd8edd6b6e2e6386c37c9889d91 WatchSource:0}: Error finding container ef6ac3ad1646ce3a1694134b37ab46d7c56c2bd8edd6b6e2e6386c37c9889d91: Status 404 returned error can't find the container with id ef6ac3ad1646ce3a1694134b37ab46d7c56c2bd8edd6b6e2e6386c37c9889d91 Dec 06 09:12:37 crc kubenswrapper[4895]: I1206 09:12:37.494895 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-fmg6b"] Dec 06 09:12:37 crc kubenswrapper[4895]: W1206 09:12:37.503832 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3cdf03d_0a52_43df_a589_7312ba4056ed.slice/crio-6c300e859410aae7e350f52a31a4f224796939513d5259ec65d0da6b111fef3b WatchSource:0}: Error finding container 6c300e859410aae7e350f52a31a4f224796939513d5259ec65d0da6b111fef3b: Status 404 returned error can't find the container with id 6c300e859410aae7e350f52a31a4f224796939513d5259ec65d0da6b111fef3b Dec 06 09:12:37 crc kubenswrapper[4895]: I1206 09:12:37.593004 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-z5j98" event={"ID":"542a8fcd-d1af-493c-ae31-c370a4f5d38c","Type":"ContainerStarted","Data":"71120c29bfe336b8ec47107ada880780810f1b50b6c3fb34f128c4cb4204b7d3"} Dec 06 09:12:37 crc kubenswrapper[4895]: I1206 09:12:37.602959 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-6szsx" event={"ID":"7d8afee3-d9f0-42ac-b2e9-89472dfec610","Type":"ContainerStarted","Data":"ef6ac3ad1646ce3a1694134b37ab46d7c56c2bd8edd6b6e2e6386c37c9889d91"} Dec 06 09:12:37 crc kubenswrapper[4895]: I1206 09:12:37.617177 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-8ffvz" 
event={"ID":"6a06d981-e38f-4b27-b597-271076759c4b","Type":"ContainerStarted","Data":"0cc85676703e1f57bfd9801b48180bfc1309234553a431df1e261cc4dd78aa76"} Dec 06 09:12:37 crc kubenswrapper[4895]: I1206 09:12:37.624167 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-fmg6b" event={"ID":"b3cdf03d-0a52-43df-a589-7312ba4056ed","Type":"ContainerStarted","Data":"6c300e859410aae7e350f52a31a4f224796939513d5259ec65d0da6b111fef3b"} Dec 06 09:12:37 crc kubenswrapper[4895]: I1206 09:12:37.665946 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-btk6w"] Dec 06 09:12:37 crc kubenswrapper[4895]: W1206 09:12:37.684886 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c9b65d4_80d1_423e_8a9f_0786e18d0b00.slice/crio-eff5e09532ba9b0c3ade1607b4719ed9607768a254c5c06a2be302bd43ad8ea2 WatchSource:0}: Error finding container eff5e09532ba9b0c3ade1607b4719ed9607768a254c5c06a2be302bd43ad8ea2: Status 404 returned error can't find the container with id eff5e09532ba9b0c3ade1607b4719ed9607768a254c5c06a2be302bd43ad8ea2 Dec 06 09:12:38 crc kubenswrapper[4895]: I1206 09:12:38.636753 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-btk6w" event={"ID":"7c9b65d4-80d1-423e-8a9f-0786e18d0b00","Type":"ContainerStarted","Data":"eff5e09532ba9b0c3ade1607b4719ed9607768a254c5c06a2be302bd43ad8ea2"} Dec 06 09:12:44 crc kubenswrapper[4895]: I1206 09:12:44.051977 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef" Dec 06 09:12:44 crc kubenswrapper[4895]: E1206 09:12:44.052852 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:12:46 crc kubenswrapper[4895]: I1206 09:12:46.046415 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-k9fbb"] Dec 06 09:12:46 crc kubenswrapper[4895]: I1206 09:12:46.069204 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2752-account-create-update-hcfbv"] Dec 06 09:12:46 crc kubenswrapper[4895]: I1206 09:12:46.075562 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-k9fbb"] Dec 06 09:12:46 crc kubenswrapper[4895]: I1206 09:12:46.083551 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-2752-account-create-update-hcfbv"] Dec 06 09:12:46 crc kubenswrapper[4895]: I1206 09:12:46.770510 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-6szsx" event={"ID":"7d8afee3-d9f0-42ac-b2e9-89472dfec610","Type":"ContainerStarted","Data":"4f8c02fc1b36ca5ed43e894035da18de8e63d519b34e6a68545978b94997aaad"} Dec 06 09:12:46 crc kubenswrapper[4895]: I1206 09:12:46.774950 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-8ffvz" 
event={"ID":"6a06d981-e38f-4b27-b597-271076759c4b","Type":"ContainerStarted","Data":"f9071e706d8306edc805639386f14fb8b440fa360ece6d7c727d93329681181c"} Dec 06 09:12:46 crc kubenswrapper[4895]: I1206 09:12:46.784855 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-btk6w" event={"ID":"7c9b65d4-80d1-423e-8a9f-0786e18d0b00","Type":"ContainerStarted","Data":"c67313e4d85ca261b035ae4cc78924cdc4261f50f08069f87e394152a48b7d02"} Dec 06 09:12:46 crc kubenswrapper[4895]: I1206 09:12:46.785854 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-btk6w" Dec 06 09:12:46 crc kubenswrapper[4895]: I1206 09:12:46.794966 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-fmg6b" event={"ID":"b3cdf03d-0a52-43df-a589-7312ba4056ed","Type":"ContainerStarted","Data":"09a94ea06292b3702ec893bb136367e84b3015dfa1568978824f882baf1ba799"} Dec 06 09:12:46 crc kubenswrapper[4895]: I1206 09:12:46.796454 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-fmg6b" Dec 06 09:12:46 crc kubenswrapper[4895]: I1206 09:12:46.801576 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-fmg6b" Dec 06 09:12:46 crc kubenswrapper[4895]: I1206 09:12:46.810280 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-z5j98" event={"ID":"542a8fcd-d1af-493c-ae31-c370a4f5d38c","Type":"ContainerStarted","Data":"396d51d210bc39dfb26742f1993d22bde26b627866c1a2b9a00409e73a1ef6c9"} Dec 06 09:12:46 crc kubenswrapper[4895]: I1206 09:12:46.817155 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-6szsx" podStartSLOduration=2.796201638 podStartE2EDuration="10.817129281s" podCreationTimestamp="2025-12-06 09:12:36 +0000 UTC" firstStartedPulling="2025-12-06 09:12:37.403315566 +0000 UTC m=+8119.804704436" lastFinishedPulling="2025-12-06 09:12:45.424243209 +0000 UTC m=+8127.825632079" observedRunningTime="2025-12-06 09:12:46.796932398 +0000 UTC m=+8129.198321288" watchObservedRunningTime="2025-12-06 09:12:46.817129281 +0000 UTC m=+8129.218518151" Dec 06 09:12:46 crc kubenswrapper[4895]: I1206 09:12:46.859645 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-btk6w" podStartSLOduration=3.14318683 podStartE2EDuration="10.859627093s" podCreationTimestamp="2025-12-06 09:12:36 +0000 UTC" firstStartedPulling="2025-12-06 09:12:37.710070487 +0000 UTC m=+8120.111459357" lastFinishedPulling="2025-12-06 09:12:45.42651072 +0000 UTC m=+8127.827899620" observedRunningTime="2025-12-06 09:12:46.833864341 +0000 UTC m=+8129.235253221" watchObservedRunningTime="2025-12-06 09:12:46.859627093 +0000 UTC m=+8129.261015963" Dec 06 09:12:46 crc kubenswrapper[4895]: I1206 09:12:46.879702 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-8ffvz" podStartSLOduration=2.475587724 podStartE2EDuration="10.879679601s" podCreationTimestamp="2025-12-06 09:12:36 +0000 UTC" firstStartedPulling="2025-12-06 09:12:37.070804863 +0000 UTC m=+8119.472193733" lastFinishedPulling="2025-12-06 09:12:45.47489675 +0000 UTC m=+8127.876285610" 
observedRunningTime="2025-12-06 09:12:46.85504404 +0000 UTC m=+8129.256432910" watchObservedRunningTime="2025-12-06 09:12:46.879679601 +0000 UTC m=+8129.281068461" Dec 06 09:12:46 crc kubenswrapper[4895]: I1206 09:12:46.892918 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-fmg6b" podStartSLOduration=2.888341614 podStartE2EDuration="10.892900147s" podCreationTimestamp="2025-12-06 09:12:36 +0000 UTC" firstStartedPulling="2025-12-06 09:12:37.507044263 +0000 UTC m=+8119.908433133" lastFinishedPulling="2025-12-06 09:12:45.511602796 +0000 UTC m=+8127.912991666" observedRunningTime="2025-12-06 09:12:46.892703232 +0000 UTC m=+8129.294092102" watchObservedRunningTime="2025-12-06 09:12:46.892900147 +0000 UTC m=+8129.294289017" Dec 06 09:12:46 crc kubenswrapper[4895]: I1206 09:12:46.944976 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7789df6f88-z5j98" podStartSLOduration=2.753863071 podStartE2EDuration="10.944952195s" podCreationTimestamp="2025-12-06 09:12:36 +0000 UTC" firstStartedPulling="2025-12-06 09:12:37.233685609 +0000 UTC m=+8119.635074479" lastFinishedPulling="2025-12-06 09:12:45.424774733 +0000 UTC m=+8127.826163603" observedRunningTime="2025-12-06 09:12:46.917431866 +0000 UTC m=+8129.318820736" watchObservedRunningTime="2025-12-06 09:12:46.944952195 +0000 UTC m=+8129.346341065" Dec 06 09:12:48 crc kubenswrapper[4895]: I1206 09:12:48.071747 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="630be3a5-6987-49ca-85a2-025e93a1ae43" path="/var/lib/kubelet/pods/630be3a5-6987-49ca-85a2-025e93a1ae43/volumes" Dec 06 09:12:48 crc kubenswrapper[4895]: I1206 09:12:48.074006 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66c6a462-5b00-4390-a172-1cabe4d68a37" path="/var/lib/kubelet/pods/66c6a462-5b00-4390-a172-1cabe4d68a37/volumes" Dec 06 09:12:55 crc kubenswrapper[4895]: I1206 09:12:55.034054 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-42mmj"] Dec 06 09:12:55 crc kubenswrapper[4895]: I1206 09:12:55.044590 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-42mmj"] Dec 06 09:12:56 crc kubenswrapper[4895]: I1206 09:12:56.065552 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba49232f-1681-4743-8d42-d264a39df476" path="/var/lib/kubelet/pods/ba49232f-1681-4743-8d42-d264a39df476/volumes" Dec 06 09:12:57 crc kubenswrapper[4895]: I1206 09:12:57.015433 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-btk6w" Dec 06 09:12:57 crc kubenswrapper[4895]: I1206 09:12:57.051255 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef" Dec 06 09:12:57 crc kubenswrapper[4895]: E1206 09:12:57.051510 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.539854 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/openstackclient"] Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.541138 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="c202be77-0d5f-48a3-82dd-119617062782" containerName="openstackclient" containerID="cri-o://279409f0fb76568c28cfc02adc9cd75122a69f1847a444b00a49e1bcadb31e66" gracePeriod=2 Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.547521 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.627382 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 06 09:12:59 crc kubenswrapper[4895]: E1206 09:12:59.627890 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c202be77-0d5f-48a3-82dd-119617062782" containerName="openstackclient" Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.627909 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c202be77-0d5f-48a3-82dd-119617062782" containerName="openstackclient" Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.628098 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c202be77-0d5f-48a3-82dd-119617062782" containerName="openstackclient" Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.628791 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.656338 4895 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="c202be77-0d5f-48a3-82dd-119617062782" podUID="22156626-bc35-4e1c-969c-7cdc2a169cb9" Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.657599 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.770002 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdm8s\" (UniqueName: \"kubernetes.io/projected/22156626-bc35-4e1c-969c-7cdc2a169cb9-kube-api-access-bdm8s\") pod \"openstackclient\" (UID: \"22156626-bc35-4e1c-969c-7cdc2a169cb9\") " pod="openstack/openstackclient" Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.770102 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/22156626-bc35-4e1c-969c-7cdc2a169cb9-openstack-config\") pod \"openstackclient\" (UID: \"22156626-bc35-4e1c-969c-7cdc2a169cb9\") " pod="openstack/openstackclient" Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.770204 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/22156626-bc35-4e1c-969c-7cdc2a169cb9-openstack-config-secret\") pod \"openstackclient\" (UID: \"22156626-bc35-4e1c-969c-7cdc2a169cb9\") " pod="openstack/openstackclient" Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.790956 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.792205 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.797189 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-wl826" Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.816704 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.871856 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwrb6\" (UniqueName: \"kubernetes.io/projected/80d4dc9d-aaf8-4759-9d8e-67539ecf21f2-kube-api-access-mwrb6\") pod \"kube-state-metrics-0\" (UID: \"80d4dc9d-aaf8-4759-9d8e-67539ecf21f2\") " pod="openstack/kube-state-metrics-0" Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.871946 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/22156626-bc35-4e1c-969c-7cdc2a169cb9-openstack-config-secret\") pod \"openstackclient\" (UID: \"22156626-bc35-4e1c-969c-7cdc2a169cb9\") " pod="openstack/openstackclient" Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.872054 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdm8s\" (UniqueName: \"kubernetes.io/projected/22156626-bc35-4e1c-969c-7cdc2a169cb9-kube-api-access-bdm8s\") pod \"openstackclient\" (UID: \"22156626-bc35-4e1c-969c-7cdc2a169cb9\") " pod="openstack/openstackclient" Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.872103 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/22156626-bc35-4e1c-969c-7cdc2a169cb9-openstack-config\") pod \"openstackclient\" (UID: \"22156626-bc35-4e1c-969c-7cdc2a169cb9\") " pod="openstack/openstackclient" Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.873140 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/22156626-bc35-4e1c-969c-7cdc2a169cb9-openstack-config\") pod \"openstackclient\" (UID: \"22156626-bc35-4e1c-969c-7cdc2a169cb9\") " pod="openstack/openstackclient" Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.879154 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/22156626-bc35-4e1c-969c-7cdc2a169cb9-openstack-config-secret\") pod \"openstackclient\" (UID: \"22156626-bc35-4e1c-969c-7cdc2a169cb9\") " pod="openstack/openstackclient" Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.901134 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdm8s\" (UniqueName: \"kubernetes.io/projected/22156626-bc35-4e1c-969c-7cdc2a169cb9-kube-api-access-bdm8s\") pod \"openstackclient\" (UID: \"22156626-bc35-4e1c-969c-7cdc2a169cb9\") " pod="openstack/openstackclient" Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.949259 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 06 09:12:59 crc kubenswrapper[4895]: I1206 09:12:59.977718 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwrb6\" (UniqueName: \"kubernetes.io/projected/80d4dc9d-aaf8-4759-9d8e-67539ecf21f2-kube-api-access-mwrb6\") pod \"kube-state-metrics-0\" (UID: \"80d4dc9d-aaf8-4759-9d8e-67539ecf21f2\") " pod="openstack/kube-state-metrics-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.047853 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwrb6\" (UniqueName: \"kubernetes.io/projected/80d4dc9d-aaf8-4759-9d8e-67539ecf21f2-kube-api-access-mwrb6\") pod \"kube-state-metrics-0\" (UID: \"80d4dc9d-aaf8-4759-9d8e-67539ecf21f2\") " pod="openstack/kube-state-metrics-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.115371 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.600181 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.603435 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.624090 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.624504 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-wnhp4" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.632319 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.632593 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.634189 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.664413 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.694622 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q544f\" (UniqueName: \"kubernetes.io/projected/092f092f-4678-4f00-ab6a-162eed935527-kube-api-access-q544f\") pod \"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.694672 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/092f092f-4678-4f00-ab6a-162eed935527-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.694694 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/092f092f-4678-4f00-ab6a-162eed935527-tls-assets\") pod 
\"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.694719 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/092f092f-4678-4f00-ab6a-162eed935527-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.694770 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/092f092f-4678-4f00-ab6a-162eed935527-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.694844 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/092f092f-4678-4f00-ab6a-162eed935527-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.694891 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/092f092f-4678-4f00-ab6a-162eed935527-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.795928 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.797018 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/092f092f-4678-4f00-ab6a-162eed935527-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.797080 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q544f\" (UniqueName: \"kubernetes.io/projected/092f092f-4678-4f00-ab6a-162eed935527-kube-api-access-q544f\") pod \"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.797107 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/092f092f-4678-4f00-ab6a-162eed935527-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.797125 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/092f092f-4678-4f00-ab6a-162eed935527-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 
09:13:00.797146 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/092f092f-4678-4f00-ab6a-162eed935527-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.797192 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/092f092f-4678-4f00-ab6a-162eed935527-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.797254 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/092f092f-4678-4f00-ab6a-162eed935527-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.800996 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/092f092f-4678-4f00-ab6a-162eed935527-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.809009 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/092f092f-4678-4f00-ab6a-162eed935527-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.813977 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/092f092f-4678-4f00-ab6a-162eed935527-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.815630 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/092f092f-4678-4f00-ab6a-162eed935527-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.821831 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/092f092f-4678-4f00-ab6a-162eed935527-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.830244 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q544f\" (UniqueName: \"kubernetes.io/projected/092f092f-4678-4f00-ab6a-162eed935527-kube-api-access-q544f\") pod \"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0" Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.830802 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/092f092f-4678-4f00-ab6a-162eed935527-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"092f092f-4678-4f00-ab6a-162eed935527\") " pod="openstack/alertmanager-metric-storage-0"
Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.960107 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"80d4dc9d-aaf8-4759-9d8e-67539ecf21f2","Type":"ContainerStarted","Data":"05ac14eec7e451884041604a34f5bd5d8ed896e330eddac7399360fc579b61c1"}
Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.963234 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Dec 06 09:13:00 crc kubenswrapper[4895]: I1206 09:13:00.988932 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.209970 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.222259 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.242030 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.242245 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.242348 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.242446 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.242645 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-jqf2r"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.248021 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.277083 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.305920 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1e7ec94a-bc4d-4169-934b-aa220febfa97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1e7ec94a-bc4d-4169-934b-aa220febfa97\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.306031 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/57edb652-7803-4ccb-8c17-68623b1b3e6f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.306074 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/57edb652-7803-4ccb-8c17-68623b1b3e6f-config\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.310838 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/57edb652-7803-4ccb-8c17-68623b1b3e6f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.310891 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/57edb652-7803-4ccb-8c17-68623b1b3e6f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.310911 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l256q\" (UniqueName: \"kubernetes.io/projected/57edb652-7803-4ccb-8c17-68623b1b3e6f-kube-api-access-l256q\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.310949 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/57edb652-7803-4ccb-8c17-68623b1b3e6f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.311007 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/57edb652-7803-4ccb-8c17-68623b1b3e6f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.413821 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/57edb652-7803-4ccb-8c17-68623b1b3e6f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.413953 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1e7ec94a-bc4d-4169-934b-aa220febfa97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1e7ec94a-bc4d-4169-934b-aa220febfa97\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.414017 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/57edb652-7803-4ccb-8c17-68623b1b3e6f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.414059 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/57edb652-7803-4ccb-8c17-68623b1b3e6f-config\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.414111 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/57edb652-7803-4ccb-8c17-68623b1b3e6f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.414162 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/57edb652-7803-4ccb-8c17-68623b1b3e6f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.414199 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l256q\" (UniqueName: \"kubernetes.io/projected/57edb652-7803-4ccb-8c17-68623b1b3e6f-kube-api-access-l256q\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.414228 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/57edb652-7803-4ccb-8c17-68623b1b3e6f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.421287 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/57edb652-7803-4ccb-8c17-68623b1b3e6f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.421845 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/57edb652-7803-4ccb-8c17-68623b1b3e6f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.435528 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/57edb652-7803-4ccb-8c17-68623b1b3e6f-config\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.439679 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/57edb652-7803-4ccb-8c17-68623b1b3e6f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0"
\"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0" Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.445383 4895 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.445424 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1e7ec94a-bc4d-4169-934b-aa220febfa97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1e7ec94a-bc4d-4169-934b-aa220febfa97\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c1481d4bf2cab3f098c1433eba44ebe22c6a36fa1ec4ae68a840253044805907/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.503552 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/57edb652-7803-4ccb-8c17-68623b1b3e6f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0" Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.505221 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/57edb652-7803-4ccb-8c17-68623b1b3e6f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0" Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.512780 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l256q\" (UniqueName: \"kubernetes.io/projected/57edb652-7803-4ccb-8c17-68623b1b3e6f-kube-api-access-l256q\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0" Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.823578 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1e7ec94a-bc4d-4169-934b-aa220febfa97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1e7ec94a-bc4d-4169-934b-aa220febfa97\") pod \"prometheus-metric-storage-0\" (UID: \"57edb652-7803-4ccb-8c17-68623b1b3e6f\") " pod="openstack/prometheus-metric-storage-0" Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.878946 4895 util.go:30] "No sandbox for pod can be found. 
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.883668 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Dec 06 09:13:01 crc kubenswrapper[4895]: I1206 09:13:01.994815 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"092f092f-4678-4f00-ab6a-162eed935527","Type":"ContainerStarted","Data":"8c951dd14a5cfa7972e6dcc1e67b1ba104b290ce6a67c69757257a0c1235d129"}
Dec 06 09:13:02 crc kubenswrapper[4895]: I1206 09:13:02.013731 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"22156626-bc35-4e1c-969c-7cdc2a169cb9","Type":"ContainerStarted","Data":"4f8423b59c8f964eb690cfc032a05f9ca9f0b793b10c14b414c273e8e62e15d4"}
Dec 06 09:13:02 crc kubenswrapper[4895]: I1206 09:13:02.017440 4895 generic.go:334] "Generic (PLEG): container finished" podID="c202be77-0d5f-48a3-82dd-119617062782" containerID="279409f0fb76568c28cfc02adc9cd75122a69f1847a444b00a49e1bcadb31e66" exitCode=137
Dec 06 09:13:02 crc kubenswrapper[4895]: I1206 09:13:02.151581 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Dec 06 09:13:02 crc kubenswrapper[4895]: I1206 09:13:02.278717 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc4vb\" (UniqueName: \"kubernetes.io/projected/c202be77-0d5f-48a3-82dd-119617062782-kube-api-access-gc4vb\") pod \"c202be77-0d5f-48a3-82dd-119617062782\" (UID: \"c202be77-0d5f-48a3-82dd-119617062782\") "
Dec 06 09:13:02 crc kubenswrapper[4895]: I1206 09:13:02.278763 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c202be77-0d5f-48a3-82dd-119617062782-openstack-config\") pod \"c202be77-0d5f-48a3-82dd-119617062782\" (UID: \"c202be77-0d5f-48a3-82dd-119617062782\") "
Dec 06 09:13:02 crc kubenswrapper[4895]: I1206 09:13:02.278859 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c202be77-0d5f-48a3-82dd-119617062782-openstack-config-secret\") pod \"c202be77-0d5f-48a3-82dd-119617062782\" (UID: \"c202be77-0d5f-48a3-82dd-119617062782\") "
Dec 06 09:13:02 crc kubenswrapper[4895]: I1206 09:13:02.284632 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c202be77-0d5f-48a3-82dd-119617062782-kube-api-access-gc4vb" (OuterVolumeSpecName: "kube-api-access-gc4vb") pod "c202be77-0d5f-48a3-82dd-119617062782" (UID: "c202be77-0d5f-48a3-82dd-119617062782"). InnerVolumeSpecName "kube-api-access-gc4vb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:13:02 crc kubenswrapper[4895]: I1206 09:13:02.320161 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c202be77-0d5f-48a3-82dd-119617062782-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "c202be77-0d5f-48a3-82dd-119617062782" (UID: "c202be77-0d5f-48a3-82dd-119617062782"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:13:02 crc kubenswrapper[4895]: I1206 09:13:02.336895 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c202be77-0d5f-48a3-82dd-119617062782-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "c202be77-0d5f-48a3-82dd-119617062782" (UID: "c202be77-0d5f-48a3-82dd-119617062782"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:13:02 crc kubenswrapper[4895]: I1206 09:13:02.381000 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c202be77-0d5f-48a3-82dd-119617062782-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Dec 06 09:13:02 crc kubenswrapper[4895]: I1206 09:13:02.381039 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc4vb\" (UniqueName: \"kubernetes.io/projected/c202be77-0d5f-48a3-82dd-119617062782-kube-api-access-gc4vb\") on node \"crc\" DevicePath \"\""
Dec 06 09:13:02 crc kubenswrapper[4895]: I1206 09:13:02.381052 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c202be77-0d5f-48a3-82dd-119617062782-openstack-config\") on node \"crc\" DevicePath \"\""
Dec 06 09:13:02 crc kubenswrapper[4895]: I1206 09:13:02.502450 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Dec 06 09:13:02 crc kubenswrapper[4895]: W1206 09:13:02.502830 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57edb652_7803_4ccb_8c17_68623b1b3e6f.slice/crio-6fbfdcfa2ef9ebac9a16bab0487b5f8c08aace64e5f935888f23b86a32d17ec2 WatchSource:0}: Error finding container 6fbfdcfa2ef9ebac9a16bab0487b5f8c08aace64e5f935888f23b86a32d17ec2: Status 404 returned error can't find the container with id 6fbfdcfa2ef9ebac9a16bab0487b5f8c08aace64e5f935888f23b86a32d17ec2
Dec 06 09:13:03 crc kubenswrapper[4895]: I1206 09:13:03.029257 4895 scope.go:117] "RemoveContainer" containerID="279409f0fb76568c28cfc02adc9cd75122a69f1847a444b00a49e1bcadb31e66"
Dec 06 09:13:03 crc kubenswrapper[4895]: I1206 09:13:03.029273 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Dec 06 09:13:03 crc kubenswrapper[4895]: I1206 09:13:03.031054 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"22156626-bc35-4e1c-969c-7cdc2a169cb9","Type":"ContainerStarted","Data":"789e7a75dde2b68b115e1c4824b4e0d787553f9d3301b7f117806b06eb2c6a60"}
Dec 06 09:13:03 crc kubenswrapper[4895]: I1206 09:13:03.033121 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"57edb652-7803-4ccb-8c17-68623b1b3e6f","Type":"ContainerStarted","Data":"6fbfdcfa2ef9ebac9a16bab0487b5f8c08aace64e5f935888f23b86a32d17ec2"}
Dec 06 09:13:03 crc kubenswrapper[4895]: I1206 09:13:03.034708 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"80d4dc9d-aaf8-4759-9d8e-67539ecf21f2","Type":"ContainerStarted","Data":"7d1c5e5c612ac21f1f036a3a783496bc19da8618dc0387e490926c7fa07713e2"}
Dec 06 09:13:03 crc kubenswrapper[4895]: I1206 09:13:03.035464 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Dec 06 09:13:03 crc kubenswrapper[4895]: I1206 09:13:03.056245 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.056222508 podStartE2EDuration="4.056222508s" podCreationTimestamp="2025-12-06 09:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:13:03.049137648 +0000 UTC m=+8145.450526508" watchObservedRunningTime="2025-12-06 09:13:03.056222508 +0000 UTC m=+8145.457611378"
Dec 06 09:13:03 crc kubenswrapper[4895]: I1206 09:13:03.069099 4895 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="c202be77-0d5f-48a3-82dd-119617062782" podUID="22156626-bc35-4e1c-969c-7cdc2a169cb9"
Dec 06 09:13:03 crc kubenswrapper[4895]: I1206 09:13:03.075312 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.371638994 podStartE2EDuration="4.075283789s" podCreationTimestamp="2025-12-06 09:12:59 +0000 UTC" firstStartedPulling="2025-12-06 09:13:00.842136802 +0000 UTC m=+8143.243525672" lastFinishedPulling="2025-12-06 09:13:01.545781597 +0000 UTC m=+8143.947170467" observedRunningTime="2025-12-06 09:13:03.065586299 +0000 UTC m=+8145.466975169" watchObservedRunningTime="2025-12-06 09:13:03.075283789 +0000 UTC m=+8145.476672659"
Dec 06 09:13:04 crc kubenswrapper[4895]: I1206 09:13:04.061432 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c202be77-0d5f-48a3-82dd-119617062782" path="/var/lib/kubelet/pods/c202be77-0d5f-48a3-82dd-119617062782/volumes"
Dec 06 09:13:09 crc kubenswrapper[4895]: I1206 09:13:09.051573 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef"
Dec 06 09:13:09 crc kubenswrapper[4895]: E1206 09:13:09.052367 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
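[Note: the pod_workers.go:1301 error above recurs throughout this log for machine-config-daemon-6k7r2 (again at 09:13:23, 09:13:37, 09:13:49, 09:14:01). "back-off 5m0s" means the kubelet's capped exponential restart backoff for this container has already reached its 5-minute ceiling; each log line is just a sync attempt bouncing off that window. A sketch of the capped backoff using client-go's flowcontrol.Backoff, which is the utility the kubelet builds on; the 10s initial / 5m cap matches the kubelet defaults, and the key format is illustrative.]

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/util/flowcontrol"
    )

    func main() {
    	backoff := flowcontrol.NewBackOff(10*time.Second, 5*time.Minute)
    	key := "machine-config-daemon-6k7r2/machine-config-daemon" // illustrative key
    	now := time.Now()
    	for i := 1; i <= 7; i++ {
    		backoff.Next(key, now) // record another failed restart
    		fmt.Printf("failure %d: next restart delayed %v\n", i, backoff.Get(key))
    	}
    	// Prints 10s, 20s, 40s, 1m20s, 2m40s, then stays clamped at 5m0s.
    }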
Dec 06 09:13:09 crc kubenswrapper[4895]: I1206 09:13:09.104303 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"57edb652-7803-4ccb-8c17-68623b1b3e6f","Type":"ContainerStarted","Data":"df16c864fdeba42b462ada4336fd49ef1d71685c667d44b7f569ddb30b4e2e41"}
Dec 06 09:13:09 crc kubenswrapper[4895]: I1206 09:13:09.107844 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"092f092f-4678-4f00-ab6a-162eed935527","Type":"ContainerStarted","Data":"3a0e4d3b50842986ec7637935925af2ab844b351be736c968a72f34ba1653329"}
Dec 06 09:13:10 crc kubenswrapper[4895]: I1206 09:13:10.120680 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Dec 06 09:13:17 crc kubenswrapper[4895]: I1206 09:13:17.514573 4895 scope.go:117] "RemoveContainer" containerID="fa6d3d4bffd3fbcaf2e48c5b44064182a30cc9d8321b9a4458ab05b317f356ed"
Dec 06 09:13:17 crc kubenswrapper[4895]: I1206 09:13:17.580435 4895 scope.go:117] "RemoveContainer" containerID="701f968e6ac3b6b45211497296591fb20e81ff0da28d2d55a0296adf8e8dc0e4"
Dec 06 09:13:17 crc kubenswrapper[4895]: I1206 09:13:17.622869 4895 scope.go:117] "RemoveContainer" containerID="e959fcfefc47930b241d68d8923f20ec3c09b8e0d6f6805ca8e8988710941866"
Dec 06 09:13:17 crc kubenswrapper[4895]: I1206 09:13:17.679023 4895 scope.go:117] "RemoveContainer" containerID="c1d1c0631c35156bf85b18747ad5cb8ee807314b7ab4e8eb4d5cf4a73b8fb981"
Dec 06 09:13:18 crc kubenswrapper[4895]: I1206 09:13:18.215375 4895 generic.go:334] "Generic (PLEG): container finished" podID="57edb652-7803-4ccb-8c17-68623b1b3e6f" containerID="df16c864fdeba42b462ada4336fd49ef1d71685c667d44b7f569ddb30b4e2e41" exitCode=0
Dec 06 09:13:18 crc kubenswrapper[4895]: I1206 09:13:18.215449 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"57edb652-7803-4ccb-8c17-68623b1b3e6f","Type":"ContainerDied","Data":"df16c864fdeba42b462ada4336fd49ef1d71685c667d44b7f569ddb30b4e2e41"}
Dec 06 09:13:18 crc kubenswrapper[4895]: I1206 09:13:18.219790 4895 generic.go:334] "Generic (PLEG): container finished" podID="092f092f-4678-4f00-ab6a-162eed935527" containerID="3a0e4d3b50842986ec7637935925af2ab844b351be736c968a72f34ba1653329" exitCode=0
Dec 06 09:13:18 crc kubenswrapper[4895]: I1206 09:13:18.219859 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"092f092f-4678-4f00-ab6a-162eed935527","Type":"ContainerDied","Data":"3a0e4d3b50842986ec7637935925af2ab844b351be736c968a72f34ba1653329"}
Dec 06 09:13:23 crc kubenswrapper[4895]: I1206 09:13:23.050383 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef"
Dec 06 09:13:23 crc kubenswrapper[4895]: E1206 09:13:23.052070 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:13:25 crc kubenswrapper[4895]: I1206 09:13:25.340663 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"57edb652-7803-4ccb-8c17-68623b1b3e6f","Type":"ContainerStarted","Data":"84d50cbc63c2f4e7ccb82086cd731acc717e2c94277cf8493797030653badf56"}
Dec 06 09:13:25 crc kubenswrapper[4895]: I1206 09:13:25.343615 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"092f092f-4678-4f00-ab6a-162eed935527","Type":"ContainerStarted","Data":"cc675e8268c448cc6b2104aa9fdf228fe8fd8e7602e19a6665edad99aab3b245"}
Dec 06 09:13:29 crc kubenswrapper[4895]: I1206 09:13:29.396119 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"57edb652-7803-4ccb-8c17-68623b1b3e6f","Type":"ContainerStarted","Data":"778bef82f4709fb6335ebc89de176a1ddd4771d255c40aa50d3e632417964d9a"}
Dec 06 09:13:29 crc kubenswrapper[4895]: I1206 09:13:29.398851 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"092f092f-4678-4f00-ab6a-162eed935527","Type":"ContainerStarted","Data":"7d84600a00082f316c84924d5bad1973bb4c7f0aafef99a2ecb772209f8392e7"}
Dec 06 09:13:29 crc kubenswrapper[4895]: I1206 09:13:29.399424 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0"
Dec 06 09:13:29 crc kubenswrapper[4895]: I1206 09:13:29.402577 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0"
Dec 06 09:13:29 crc kubenswrapper[4895]: I1206 09:13:29.442644 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=7.069708115 podStartE2EDuration="29.442616406s" podCreationTimestamp="2025-12-06 09:13:00 +0000 UTC" firstStartedPulling="2025-12-06 09:13:01.935860528 +0000 UTC m=+8144.337249398" lastFinishedPulling="2025-12-06 09:13:24.308768809 +0000 UTC m=+8166.710157689" observedRunningTime="2025-12-06 09:13:29.426074511 +0000 UTC m=+8171.827463391" watchObservedRunningTime="2025-12-06 09:13:29.442616406 +0000 UTC m=+8171.844005316"
Dec 06 09:13:32 crc kubenswrapper[4895]: I1206 09:13:32.445864 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"57edb652-7803-4ccb-8c17-68623b1b3e6f","Type":"ContainerStarted","Data":"c57b0fb35068769c76d9777adefc553e6dec24aef2d045907d39e77ed3d5f04a"}
Dec 06 09:13:32 crc kubenswrapper[4895]: I1206 09:13:32.509161 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.2955379369999998 podStartE2EDuration="32.509132982s" podCreationTimestamp="2025-12-06 09:13:00 +0000 UTC" firstStartedPulling="2025-12-06 09:13:02.50505861 +0000 UTC m=+8144.906447480" lastFinishedPulling="2025-12-06 09:13:31.718653655 +0000 UTC m=+8174.120042525" observedRunningTime="2025-12-06 09:13:32.484258275 +0000 UTC m=+8174.885647195" watchObservedRunningTime="2025-12-06 09:13:32.509132982 +0000 UTC m=+8174.910521872"
Dec 06 09:13:36 crc kubenswrapper[4895]: I1206 09:13:36.880136 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Dec 06 09:13:37 crc kubenswrapper[4895]: I1206 09:13:37.052133 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef"
Dec 06 09:13:37 crc kubenswrapper[4895]: E1206 09:13:37.052421 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:13:37 crc kubenswrapper[4895]: I1206 09:13:37.054935 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-75f1-account-create-update-zz5tm"] Dec 06 09:13:37 crc kubenswrapper[4895]: I1206 09:13:37.064144 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-8mbl9"] Dec 06 09:13:37 crc kubenswrapper[4895]: I1206 09:13:37.073220 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-8mbl9"] Dec 06 09:13:37 crc kubenswrapper[4895]: I1206 09:13:37.081186 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-75f1-account-create-update-zz5tm"] Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.070335 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52454b44-71f8-41a4-ac75-90812c004863" path="/var/lib/kubelet/pods/52454b44-71f8-41a4-ac75-90812c004863/volumes" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.071015 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5be403b2-d441-469e-9b7c-3180922cf7df" path="/var/lib/kubelet/pods/5be403b2-d441-469e-9b7c-3180922cf7df/volumes" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.581338 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.584595 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.587703 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.587952 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.595016 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.733687 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.733755 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.733776 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fad235e5-556f-4bf4-93c7-f89fb4452401-log-httpd\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.734099 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-scripts\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.734159 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fad235e5-556f-4bf4-93c7-f89fb4452401-run-httpd\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.734233 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-config-data\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.734309 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jgrh\" (UniqueName: \"kubernetes.io/projected/fad235e5-556f-4bf4-93c7-f89fb4452401-kube-api-access-7jgrh\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.836041 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fad235e5-556f-4bf4-93c7-f89fb4452401-run-httpd\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.836126 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-config-data\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.836174 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jgrh\" (UniqueName: \"kubernetes.io/projected/fad235e5-556f-4bf4-93c7-f89fb4452401-kube-api-access-7jgrh\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.836247 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.836285 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.836309 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fad235e5-556f-4bf4-93c7-f89fb4452401-log-httpd\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc 
kubenswrapper[4895]: I1206 09:13:38.836437 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-scripts\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.837183 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fad235e5-556f-4bf4-93c7-f89fb4452401-run-httpd\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.837352 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fad235e5-556f-4bf4-93c7-f89fb4452401-log-httpd\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.843238 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-scripts\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.843570 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.845143 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-config-data\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.849987 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.858162 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jgrh\" (UniqueName: \"kubernetes.io/projected/fad235e5-556f-4bf4-93c7-f89fb4452401-kube-api-access-7jgrh\") pod \"ceilometer-0\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " pod="openstack/ceilometer-0" Dec 06 09:13:38 crc kubenswrapper[4895]: I1206 09:13:38.908242 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 09:13:39 crc kubenswrapper[4895]: I1206 09:13:39.482220 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 09:13:39 crc kubenswrapper[4895]: W1206 09:13:39.483049 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfad235e5_556f_4bf4_93c7_f89fb4452401.slice/crio-aaa40bcf5c074521eef3c4c1875d4b3841479b908afbb97fb56a06cf97a02e8e WatchSource:0}: Error finding container aaa40bcf5c074521eef3c4c1875d4b3841479b908afbb97fb56a06cf97a02e8e: Status 404 returned error can't find the container with id aaa40bcf5c074521eef3c4c1875d4b3841479b908afbb97fb56a06cf97a02e8e Dec 06 09:13:39 crc kubenswrapper[4895]: I1206 09:13:39.556279 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fad235e5-556f-4bf4-93c7-f89fb4452401","Type":"ContainerStarted","Data":"aaa40bcf5c074521eef3c4c1875d4b3841479b908afbb97fb56a06cf97a02e8e"} Dec 06 09:13:44 crc kubenswrapper[4895]: I1206 09:13:44.620323 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fad235e5-556f-4bf4-93c7-f89fb4452401","Type":"ContainerStarted","Data":"996cf5c9c0a4184b5537ad8c894dd7bc3429f7725792b6338edfe041ddd5ab1e"} Dec 06 09:13:45 crc kubenswrapper[4895]: I1206 09:13:45.632246 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fad235e5-556f-4bf4-93c7-f89fb4452401","Type":"ContainerStarted","Data":"7a6e8a3a7ae0738f0b5ac259fc3d13f63b265a324550b6602c60c2828d980269"} Dec 06 09:13:46 crc kubenswrapper[4895]: I1206 09:13:46.648336 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fad235e5-556f-4bf4-93c7-f89fb4452401","Type":"ContainerStarted","Data":"92ea6cf7ab4b72a699ab9749566c7025feb6ff68b593a0f5377a5ce06c5a9bd7"} Dec 06 09:13:46 crc kubenswrapper[4895]: I1206 09:13:46.880324 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 06 09:13:46 crc kubenswrapper[4895]: I1206 09:13:46.882814 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 06 09:13:47 crc kubenswrapper[4895]: I1206 09:13:47.660088 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 06 09:13:48 crc kubenswrapper[4895]: I1206 09:13:48.706730 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fad235e5-556f-4bf4-93c7-f89fb4452401","Type":"ContainerStarted","Data":"2e2ef09cc20172de03739a6327e0a7402b3f96c660f082692bdf4261fb28cbbd"} Dec 06 09:13:48 crc kubenswrapper[4895]: I1206 09:13:48.707108 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 09:13:48 crc kubenswrapper[4895]: I1206 09:13:48.742332 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.437756835 podStartE2EDuration="10.742294728s" podCreationTimestamp="2025-12-06 09:13:38 +0000 UTC" firstStartedPulling="2025-12-06 09:13:39.486585451 +0000 UTC m=+8181.887974321" lastFinishedPulling="2025-12-06 09:13:47.791123334 +0000 UTC m=+8190.192512214" observedRunningTime="2025-12-06 09:13:48.731256391 +0000 UTC m=+8191.132645261" watchObservedRunningTime="2025-12-06 09:13:48.742294728 +0000 UTC m=+8191.143683598" 
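[Note: the W-level manager.go:1169 warnings ("Status 404 ... can't find the container") recur for each new sandbox in this log (prometheus, ceilometer, and the aodh pods below). They look like a startup race: the crio-<id> cgroup becomes visible to the watcher before the runtime has registered the container, and the very next PLEG ContainerStarted event for the same ID shows it resolved on its own. A generic retry-on-not-found sketch of that pattern follows; lookup() is a stand-in, not a cAdvisor or CRI-O API.]

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    var errNotFound = errors.New("status 404: container not found")

    // withRetry retries a lookup that transiently reports "not found",
    // the benign race visible in the 404 warnings above.
    func withRetry(lookup func() error, attempts int, wait time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = lookup(); !errors.Is(err, errNotFound) {
    			return err
    		}
    		time.Sleep(wait)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := withRetry(func() error {
    		calls++
    		if calls < 3 {
    			return errNotFound // container not registered yet
    		}
    		return nil
    	}, 5, 10*time.Millisecond)
    	fmt.Println("resolved after", calls, "calls, err =", err)
    }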
Dec 06 09:13:49 crc kubenswrapper[4895]: I1206 09:13:49.052965 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef"
Dec 06 09:13:49 crc kubenswrapper[4895]: E1206 09:13:49.053563 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:13:55 crc kubenswrapper[4895]: I1206 09:13:55.854229 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-wnxmc"]
Dec 06 09:13:55 crc kubenswrapper[4895]: I1206 09:13:55.856010 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-wnxmc"
Dec 06 09:13:55 crc kubenswrapper[4895]: I1206 09:13:55.861876 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-wnxmc"]
Dec 06 09:13:55 crc kubenswrapper[4895]: I1206 09:13:55.948668 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-7610-account-create-update-lg6rx"]
Dec 06 09:13:55 crc kubenswrapper[4895]: I1206 09:13:55.950050 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-7610-account-create-update-lg6rx"
Dec 06 09:13:55 crc kubenswrapper[4895]: I1206 09:13:55.953248 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret"
Dec 06 09:13:55 crc kubenswrapper[4895]: I1206 09:13:55.958075 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-7610-account-create-update-lg6rx"]
Dec 06 09:13:56 crc kubenswrapper[4895]: I1206 09:13:56.025732 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj8sc\" (UniqueName: \"kubernetes.io/projected/51aabd79-fd16-4e6d-b565-4815c6538cad-kube-api-access-dj8sc\") pod \"aodh-db-create-wnxmc\" (UID: \"51aabd79-fd16-4e6d-b565-4815c6538cad\") " pod="openstack/aodh-db-create-wnxmc"
Dec 06 09:13:56 crc kubenswrapper[4895]: I1206 09:13:56.026133 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51aabd79-fd16-4e6d-b565-4815c6538cad-operator-scripts\") pod \"aodh-db-create-wnxmc\" (UID: \"51aabd79-fd16-4e6d-b565-4815c6538cad\") " pod="openstack/aodh-db-create-wnxmc"
Dec 06 09:13:56 crc kubenswrapper[4895]: I1206 09:13:56.127771 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j87cw\" (UniqueName: \"kubernetes.io/projected/a3b0577e-9827-434b-9d5d-2ecba8296ac7-kube-api-access-j87cw\") pod \"aodh-7610-account-create-update-lg6rx\" (UID: \"a3b0577e-9827-434b-9d5d-2ecba8296ac7\") " pod="openstack/aodh-7610-account-create-update-lg6rx"
Dec 06 09:13:56 crc kubenswrapper[4895]: I1206 09:13:56.127825 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51aabd79-fd16-4e6d-b565-4815c6538cad-operator-scripts\") pod \"aodh-db-create-wnxmc\" (UID: \"51aabd79-fd16-4e6d-b565-4815c6538cad\") " pod="openstack/aodh-db-create-wnxmc"
Dec 06 09:13:56 crc kubenswrapper[4895]: I1206 09:13:56.127909 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3b0577e-9827-434b-9d5d-2ecba8296ac7-operator-scripts\") pod \"aodh-7610-account-create-update-lg6rx\" (UID: \"a3b0577e-9827-434b-9d5d-2ecba8296ac7\") " pod="openstack/aodh-7610-account-create-update-lg6rx"
Dec 06 09:13:56 crc kubenswrapper[4895]: I1206 09:13:56.127948 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj8sc\" (UniqueName: \"kubernetes.io/projected/51aabd79-fd16-4e6d-b565-4815c6538cad-kube-api-access-dj8sc\") pod \"aodh-db-create-wnxmc\" (UID: \"51aabd79-fd16-4e6d-b565-4815c6538cad\") " pod="openstack/aodh-db-create-wnxmc"
Dec 06 09:13:56 crc kubenswrapper[4895]: I1206 09:13:56.129274 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51aabd79-fd16-4e6d-b565-4815c6538cad-operator-scripts\") pod \"aodh-db-create-wnxmc\" (UID: \"51aabd79-fd16-4e6d-b565-4815c6538cad\") " pod="openstack/aodh-db-create-wnxmc"
Dec 06 09:13:56 crc kubenswrapper[4895]: I1206 09:13:56.172847 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj8sc\" (UniqueName: \"kubernetes.io/projected/51aabd79-fd16-4e6d-b565-4815c6538cad-kube-api-access-dj8sc\") pod \"aodh-db-create-wnxmc\" (UID: \"51aabd79-fd16-4e6d-b565-4815c6538cad\") " pod="openstack/aodh-db-create-wnxmc"
Dec 06 09:13:56 crc kubenswrapper[4895]: I1206 09:13:56.176031 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-wnxmc"
Dec 06 09:13:56 crc kubenswrapper[4895]: I1206 09:13:56.229547 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j87cw\" (UniqueName: \"kubernetes.io/projected/a3b0577e-9827-434b-9d5d-2ecba8296ac7-kube-api-access-j87cw\") pod \"aodh-7610-account-create-update-lg6rx\" (UID: \"a3b0577e-9827-434b-9d5d-2ecba8296ac7\") " pod="openstack/aodh-7610-account-create-update-lg6rx"
Dec 06 09:13:56 crc kubenswrapper[4895]: I1206 09:13:56.229847 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3b0577e-9827-434b-9d5d-2ecba8296ac7-operator-scripts\") pod \"aodh-7610-account-create-update-lg6rx\" (UID: \"a3b0577e-9827-434b-9d5d-2ecba8296ac7\") " pod="openstack/aodh-7610-account-create-update-lg6rx"
Dec 06 09:13:56 crc kubenswrapper[4895]: I1206 09:13:56.230539 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3b0577e-9827-434b-9d5d-2ecba8296ac7-operator-scripts\") pod \"aodh-7610-account-create-update-lg6rx\" (UID: \"a3b0577e-9827-434b-9d5d-2ecba8296ac7\") " pod="openstack/aodh-7610-account-create-update-lg6rx"
Dec 06 09:13:56 crc kubenswrapper[4895]: I1206 09:13:56.266122 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j87cw\" (UniqueName: \"kubernetes.io/projected/a3b0577e-9827-434b-9d5d-2ecba8296ac7-kube-api-access-j87cw\") pod \"aodh-7610-account-create-update-lg6rx\" (UID: \"a3b0577e-9827-434b-9d5d-2ecba8296ac7\") " pod="openstack/aodh-7610-account-create-update-lg6rx"
Dec 06 09:13:56 crc kubenswrapper[4895]: I1206 09:13:56.268286 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-7610-account-create-update-lg6rx"
Dec 06 09:13:56 crc kubenswrapper[4895]: I1206 09:13:56.860971 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-wnxmc"]
Dec 06 09:13:56 crc kubenswrapper[4895]: I1206 09:13:56.931253 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-7610-account-create-update-lg6rx"]
Dec 06 09:13:56 crc kubenswrapper[4895]: W1206 09:13:56.945689 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3b0577e_9827_434b_9d5d_2ecba8296ac7.slice/crio-2883d52f393e8aee172a044be0c1a21c435322de5c80533bad526774ff61b6a3 WatchSource:0}: Error finding container 2883d52f393e8aee172a044be0c1a21c435322de5c80533bad526774ff61b6a3: Status 404 returned error can't find the container with id 2883d52f393e8aee172a044be0c1a21c435322de5c80533bad526774ff61b6a3
Dec 06 09:13:57 crc kubenswrapper[4895]: I1206 09:13:57.803464 4895 generic.go:334] "Generic (PLEG): container finished" podID="a3b0577e-9827-434b-9d5d-2ecba8296ac7" containerID="de9a8f96313486a571565d42e67a3492ff87ef3cf137c54b3b8c9d9bd984325c" exitCode=0
Dec 06 09:13:57 crc kubenswrapper[4895]: I1206 09:13:57.803620 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-7610-account-create-update-lg6rx" event={"ID":"a3b0577e-9827-434b-9d5d-2ecba8296ac7","Type":"ContainerDied","Data":"de9a8f96313486a571565d42e67a3492ff87ef3cf137c54b3b8c9d9bd984325c"}
Dec 06 09:13:57 crc kubenswrapper[4895]: I1206 09:13:57.803680 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-7610-account-create-update-lg6rx" event={"ID":"a3b0577e-9827-434b-9d5d-2ecba8296ac7","Type":"ContainerStarted","Data":"2883d52f393e8aee172a044be0c1a21c435322de5c80533bad526774ff61b6a3"}
Dec 06 09:13:57 crc kubenswrapper[4895]: I1206 09:13:57.807776 4895 generic.go:334] "Generic (PLEG): container finished" podID="51aabd79-fd16-4e6d-b565-4815c6538cad" containerID="561f6e3e8c0e5e1aa622d74727a07ee1c4de8d2e4ae91632b5639f7bfca4976d" exitCode=0
Dec 06 09:13:57 crc kubenswrapper[4895]: I1206 09:13:57.807835 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-wnxmc" event={"ID":"51aabd79-fd16-4e6d-b565-4815c6538cad","Type":"ContainerDied","Data":"561f6e3e8c0e5e1aa622d74727a07ee1c4de8d2e4ae91632b5639f7bfca4976d"}
Dec 06 09:13:57 crc kubenswrapper[4895]: I1206 09:13:57.807878 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-wnxmc" event={"ID":"51aabd79-fd16-4e6d-b565-4815c6538cad","Type":"ContainerStarted","Data":"eba9cf11f55a36596e88ffb19e106d84a3b7138c24f32d91ff19e7127c143803"}
Dec 06 09:13:59 crc kubenswrapper[4895]: I1206 09:13:59.376459 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-7610-account-create-update-lg6rx"
Dec 06 09:13:59 crc kubenswrapper[4895]: I1206 09:13:59.386339 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-wnxmc"
Dec 06 09:13:59 crc kubenswrapper[4895]: I1206 09:13:59.501376 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3b0577e-9827-434b-9d5d-2ecba8296ac7-operator-scripts\") pod \"a3b0577e-9827-434b-9d5d-2ecba8296ac7\" (UID: \"a3b0577e-9827-434b-9d5d-2ecba8296ac7\") "
Dec 06 09:13:59 crc kubenswrapper[4895]: I1206 09:13:59.501448 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj8sc\" (UniqueName: \"kubernetes.io/projected/51aabd79-fd16-4e6d-b565-4815c6538cad-kube-api-access-dj8sc\") pod \"51aabd79-fd16-4e6d-b565-4815c6538cad\" (UID: \"51aabd79-fd16-4e6d-b565-4815c6538cad\") "
Dec 06 09:13:59 crc kubenswrapper[4895]: I1206 09:13:59.501506 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j87cw\" (UniqueName: \"kubernetes.io/projected/a3b0577e-9827-434b-9d5d-2ecba8296ac7-kube-api-access-j87cw\") pod \"a3b0577e-9827-434b-9d5d-2ecba8296ac7\" (UID: \"a3b0577e-9827-434b-9d5d-2ecba8296ac7\") "
Dec 06 09:13:59 crc kubenswrapper[4895]: I1206 09:13:59.501681 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51aabd79-fd16-4e6d-b565-4815c6538cad-operator-scripts\") pod \"51aabd79-fd16-4e6d-b565-4815c6538cad\" (UID: \"51aabd79-fd16-4e6d-b565-4815c6538cad\") "
Dec 06 09:13:59 crc kubenswrapper[4895]: I1206 09:13:59.502911 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51aabd79-fd16-4e6d-b565-4815c6538cad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51aabd79-fd16-4e6d-b565-4815c6538cad" (UID: "51aabd79-fd16-4e6d-b565-4815c6538cad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:13:59 crc kubenswrapper[4895]: I1206 09:13:59.502925 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3b0577e-9827-434b-9d5d-2ecba8296ac7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a3b0577e-9827-434b-9d5d-2ecba8296ac7" (UID: "a3b0577e-9827-434b-9d5d-2ecba8296ac7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:13:59 crc kubenswrapper[4895]: I1206 09:13:59.509743 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3b0577e-9827-434b-9d5d-2ecba8296ac7-kube-api-access-j87cw" (OuterVolumeSpecName: "kube-api-access-j87cw") pod "a3b0577e-9827-434b-9d5d-2ecba8296ac7" (UID: "a3b0577e-9827-434b-9d5d-2ecba8296ac7"). InnerVolumeSpecName "kube-api-access-j87cw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:13:59 crc kubenswrapper[4895]: I1206 09:13:59.509816 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51aabd79-fd16-4e6d-b565-4815c6538cad-kube-api-access-dj8sc" (OuterVolumeSpecName: "kube-api-access-dj8sc") pod "51aabd79-fd16-4e6d-b565-4815c6538cad" (UID: "51aabd79-fd16-4e6d-b565-4815c6538cad"). InnerVolumeSpecName "kube-api-access-dj8sc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:13:59 crc kubenswrapper[4895]: I1206 09:13:59.604063 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3b0577e-9827-434b-9d5d-2ecba8296ac7-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 09:13:59 crc kubenswrapper[4895]: I1206 09:13:59.604103 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj8sc\" (UniqueName: \"kubernetes.io/projected/51aabd79-fd16-4e6d-b565-4815c6538cad-kube-api-access-dj8sc\") on node \"crc\" DevicePath \"\""
Dec 06 09:13:59 crc kubenswrapper[4895]: I1206 09:13:59.604117 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j87cw\" (UniqueName: \"kubernetes.io/projected/a3b0577e-9827-434b-9d5d-2ecba8296ac7-kube-api-access-j87cw\") on node \"crc\" DevicePath \"\""
Dec 06 09:13:59 crc kubenswrapper[4895]: I1206 09:13:59.604131 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51aabd79-fd16-4e6d-b565-4815c6538cad-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 09:13:59 crc kubenswrapper[4895]: I1206 09:13:59.835202 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-7610-account-create-update-lg6rx" event={"ID":"a3b0577e-9827-434b-9d5d-2ecba8296ac7","Type":"ContainerDied","Data":"2883d52f393e8aee172a044be0c1a21c435322de5c80533bad526774ff61b6a3"}
Dec 06 09:13:59 crc kubenswrapper[4895]: I1206 09:13:59.835252 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2883d52f393e8aee172a044be0c1a21c435322de5c80533bad526774ff61b6a3"
Dec 06 09:13:59 crc kubenswrapper[4895]: I1206 09:13:59.835262 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-7610-account-create-update-lg6rx"
Dec 06 09:13:59 crc kubenswrapper[4895]: I1206 09:13:59.837871 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-wnxmc" event={"ID":"51aabd79-fd16-4e6d-b565-4815c6538cad","Type":"ContainerDied","Data":"eba9cf11f55a36596e88ffb19e106d84a3b7138c24f32d91ff19e7127c143803"}
Dec 06 09:13:59 crc kubenswrapper[4895]: I1206 09:13:59.837899 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eba9cf11f55a36596e88ffb19e106d84a3b7138c24f32d91ff19e7127c143803"
Dec 06 09:13:59 crc kubenswrapper[4895]: I1206 09:13:59.837941 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-wnxmc"
Need to start a new one" pod="openstack/aodh-db-create-wnxmc" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.045705 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-wtfgv"] Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.050773 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef" Dec 06 09:14:01 crc kubenswrapper[4895]: E1206 09:14:01.051111 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.054773 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-wtfgv"] Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.345934 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-pnqmr"] Dec 06 09:14:01 crc kubenswrapper[4895]: E1206 09:14:01.346459 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51aabd79-fd16-4e6d-b565-4815c6538cad" containerName="mariadb-database-create" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.346502 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="51aabd79-fd16-4e6d-b565-4815c6538cad" containerName="mariadb-database-create" Dec 06 09:14:01 crc kubenswrapper[4895]: E1206 09:14:01.346530 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3b0577e-9827-434b-9d5d-2ecba8296ac7" containerName="mariadb-account-create-update" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.346538 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b0577e-9827-434b-9d5d-2ecba8296ac7" containerName="mariadb-account-create-update" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.346797 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="51aabd79-fd16-4e6d-b565-4815c6538cad" containerName="mariadb-database-create" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.346829 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3b0577e-9827-434b-9d5d-2ecba8296ac7" containerName="mariadb-account-create-update" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.347696 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-pnqmr" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.353673 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-8crbd" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.353731 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.354061 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.355035 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.360591 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-pnqmr"] Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.440588 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd15f303-24c8-4075-b3c0-38a27527397f-config-data\") pod \"aodh-db-sync-pnqmr\" (UID: \"bd15f303-24c8-4075-b3c0-38a27527397f\") " pod="openstack/aodh-db-sync-pnqmr" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.440831 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd15f303-24c8-4075-b3c0-38a27527397f-combined-ca-bundle\") pod \"aodh-db-sync-pnqmr\" (UID: \"bd15f303-24c8-4075-b3c0-38a27527397f\") " pod="openstack/aodh-db-sync-pnqmr" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.440890 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg76v\" (UniqueName: \"kubernetes.io/projected/bd15f303-24c8-4075-b3c0-38a27527397f-kube-api-access-tg76v\") pod \"aodh-db-sync-pnqmr\" (UID: \"bd15f303-24c8-4075-b3c0-38a27527397f\") " pod="openstack/aodh-db-sync-pnqmr" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.441049 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd15f303-24c8-4075-b3c0-38a27527397f-scripts\") pod \"aodh-db-sync-pnqmr\" (UID: \"bd15f303-24c8-4075-b3c0-38a27527397f\") " pod="openstack/aodh-db-sync-pnqmr" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.543693 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd15f303-24c8-4075-b3c0-38a27527397f-scripts\") pod \"aodh-db-sync-pnqmr\" (UID: \"bd15f303-24c8-4075-b3c0-38a27527397f\") " pod="openstack/aodh-db-sync-pnqmr" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.544366 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd15f303-24c8-4075-b3c0-38a27527397f-config-data\") pod \"aodh-db-sync-pnqmr\" (UID: \"bd15f303-24c8-4075-b3c0-38a27527397f\") " pod="openstack/aodh-db-sync-pnqmr" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.544494 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd15f303-24c8-4075-b3c0-38a27527397f-combined-ca-bundle\") pod \"aodh-db-sync-pnqmr\" (UID: \"bd15f303-24c8-4075-b3c0-38a27527397f\") " pod="openstack/aodh-db-sync-pnqmr" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 
09:14:01.544548 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg76v\" (UniqueName: \"kubernetes.io/projected/bd15f303-24c8-4075-b3c0-38a27527397f-kube-api-access-tg76v\") pod \"aodh-db-sync-pnqmr\" (UID: \"bd15f303-24c8-4075-b3c0-38a27527397f\") " pod="openstack/aodh-db-sync-pnqmr" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.568462 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd15f303-24c8-4075-b3c0-38a27527397f-scripts\") pod \"aodh-db-sync-pnqmr\" (UID: \"bd15f303-24c8-4075-b3c0-38a27527397f\") " pod="openstack/aodh-db-sync-pnqmr" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.568585 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg76v\" (UniqueName: \"kubernetes.io/projected/bd15f303-24c8-4075-b3c0-38a27527397f-kube-api-access-tg76v\") pod \"aodh-db-sync-pnqmr\" (UID: \"bd15f303-24c8-4075-b3c0-38a27527397f\") " pod="openstack/aodh-db-sync-pnqmr" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.569020 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd15f303-24c8-4075-b3c0-38a27527397f-config-data\") pod \"aodh-db-sync-pnqmr\" (UID: \"bd15f303-24c8-4075-b3c0-38a27527397f\") " pod="openstack/aodh-db-sync-pnqmr" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.574757 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd15f303-24c8-4075-b3c0-38a27527397f-combined-ca-bundle\") pod \"aodh-db-sync-pnqmr\" (UID: \"bd15f303-24c8-4075-b3c0-38a27527397f\") " pod="openstack/aodh-db-sync-pnqmr" Dec 06 09:14:01 crc kubenswrapper[4895]: I1206 09:14:01.667746 4895 util.go:30] "No sandbox for pod can be found. 
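
Each of the four aodh-db-sync-pnqmr volumes above walks the same reconciler ladder: VerifyControllerAttachedVolume started, then MountVolume started, then MountVolume.SetUp succeeded (the teardown at 09:14:12 below mirrors it with UnmountVolume, TearDown succeeded, and Volume detached). A minimal sketch, with hypothetical names and assuming one journal entry per line as journalctl emits them, of recovering that per-volume progression from an excerpt like this one:

    import re

    # Hypothetical parser: record the latest reconciler stage seen for each
    # volume of one pod, keyed on the volume name quoted in the messages above.
    STAGES = [
        "VerifyControllerAttachedVolume started",
        "MountVolume started",
        "MountVolume.SetUp succeeded",
    ]
    PATTERN = re.compile(
        "(" + "|".join(map(re.escape, STAGES)) + r') for volume \\?"([\w-]+)\\?"'
    )

    def volume_stages(log_text, pod="aodh-db-sync-pnqmr"):
        state = {}
        for line in log_text.splitlines():
            if pod not in line:
                continue
            for m in PATTERN.finditer(line):
                stage, volume = m.groups()
                state[volume] = stage  # later stages overwrite earlier ones
        return state

    # On the excerpt above, every volume (scripts, config-data,
    # combined-ca-bundle, kube-api-access-tg76v) ends at
    # "MountVolume.SetUp succeeded".
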
Need to start a new one" pod="openstack/aodh-db-sync-pnqmr" Dec 06 09:14:02 crc kubenswrapper[4895]: I1206 09:14:02.063205 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9d9d645-1f6b-4f06-b0a3-7226d05a0199" path="/var/lib/kubelet/pods/e9d9d645-1f6b-4f06-b0a3-7226d05a0199/volumes" Dec 06 09:14:02 crc kubenswrapper[4895]: I1206 09:14:02.148928 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-pnqmr"] Dec 06 09:14:02 crc kubenswrapper[4895]: W1206 09:14:02.151033 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd15f303_24c8_4075_b3c0_38a27527397f.slice/crio-f6583fab403afb3fa15d87f483978fc65d2f22bf6cbddd5aaf3cef61a0ea0487 WatchSource:0}: Error finding container f6583fab403afb3fa15d87f483978fc65d2f22bf6cbddd5aaf3cef61a0ea0487: Status 404 returned error can't find the container with id f6583fab403afb3fa15d87f483978fc65d2f22bf6cbddd5aaf3cef61a0ea0487 Dec 06 09:14:02 crc kubenswrapper[4895]: I1206 09:14:02.879698 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pnqmr" event={"ID":"bd15f303-24c8-4075-b3c0-38a27527397f","Type":"ContainerStarted","Data":"f6583fab403afb3fa15d87f483978fc65d2f22bf6cbddd5aaf3cef61a0ea0487"} Dec 06 09:14:08 crc kubenswrapper[4895]: I1206 09:14:08.918573 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 06 09:14:08 crc kubenswrapper[4895]: I1206 09:14:08.966985 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pnqmr" event={"ID":"bd15f303-24c8-4075-b3c0-38a27527397f","Type":"ContainerStarted","Data":"ff6ecae779a5c935f9fa65af26272e41a2521e758d952dddbb389b74110f13eb"} Dec 06 09:14:09 crc kubenswrapper[4895]: I1206 09:14:09.014007 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-pnqmr" podStartSLOduration=2.295488848 podStartE2EDuration="8.013973924s" podCreationTimestamp="2025-12-06 09:14:01 +0000 UTC" firstStartedPulling="2025-12-06 09:14:02.154422712 +0000 UTC m=+8204.555811582" lastFinishedPulling="2025-12-06 09:14:07.872907788 +0000 UTC m=+8210.274296658" observedRunningTime="2025-12-06 09:14:08.992752343 +0000 UTC m=+8211.394141243" watchObservedRunningTime="2025-12-06 09:14:09.013973924 +0000 UTC m=+8211.415362814" Dec 06 09:14:10 crc kubenswrapper[4895]: I1206 09:14:10.989394 4895 generic.go:334] "Generic (PLEG): container finished" podID="bd15f303-24c8-4075-b3c0-38a27527397f" containerID="ff6ecae779a5c935f9fa65af26272e41a2521e758d952dddbb389b74110f13eb" exitCode=0 Dec 06 09:14:10 crc kubenswrapper[4895]: I1206 09:14:10.989759 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pnqmr" event={"ID":"bd15f303-24c8-4075-b3c0-38a27527397f","Type":"ContainerDied","Data":"ff6ecae779a5c935f9fa65af26272e41a2521e758d952dddbb389b74110f13eb"} Dec 06 09:14:12 crc kubenswrapper[4895]: I1206 09:14:12.479446 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-pnqmr" Dec 06 09:14:12 crc kubenswrapper[4895]: I1206 09:14:12.630981 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd15f303-24c8-4075-b3c0-38a27527397f-combined-ca-bundle\") pod \"bd15f303-24c8-4075-b3c0-38a27527397f\" (UID: \"bd15f303-24c8-4075-b3c0-38a27527397f\") " Dec 06 09:14:12 crc kubenswrapper[4895]: I1206 09:14:12.631571 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd15f303-24c8-4075-b3c0-38a27527397f-config-data\") pod \"bd15f303-24c8-4075-b3c0-38a27527397f\" (UID: \"bd15f303-24c8-4075-b3c0-38a27527397f\") " Dec 06 09:14:12 crc kubenswrapper[4895]: I1206 09:14:12.631710 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd15f303-24c8-4075-b3c0-38a27527397f-scripts\") pod \"bd15f303-24c8-4075-b3c0-38a27527397f\" (UID: \"bd15f303-24c8-4075-b3c0-38a27527397f\") " Dec 06 09:14:12 crc kubenswrapper[4895]: I1206 09:14:12.631791 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg76v\" (UniqueName: \"kubernetes.io/projected/bd15f303-24c8-4075-b3c0-38a27527397f-kube-api-access-tg76v\") pod \"bd15f303-24c8-4075-b3c0-38a27527397f\" (UID: \"bd15f303-24c8-4075-b3c0-38a27527397f\") " Dec 06 09:14:12 crc kubenswrapper[4895]: I1206 09:14:12.637525 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd15f303-24c8-4075-b3c0-38a27527397f-kube-api-access-tg76v" (OuterVolumeSpecName: "kube-api-access-tg76v") pod "bd15f303-24c8-4075-b3c0-38a27527397f" (UID: "bd15f303-24c8-4075-b3c0-38a27527397f"). InnerVolumeSpecName "kube-api-access-tg76v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:14:12 crc kubenswrapper[4895]: I1206 09:14:12.639212 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd15f303-24c8-4075-b3c0-38a27527397f-scripts" (OuterVolumeSpecName: "scripts") pod "bd15f303-24c8-4075-b3c0-38a27527397f" (UID: "bd15f303-24c8-4075-b3c0-38a27527397f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:14:12 crc kubenswrapper[4895]: I1206 09:14:12.675337 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd15f303-24c8-4075-b3c0-38a27527397f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd15f303-24c8-4075-b3c0-38a27527397f" (UID: "bd15f303-24c8-4075-b3c0-38a27527397f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:14:12 crc kubenswrapper[4895]: I1206 09:14:12.676661 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd15f303-24c8-4075-b3c0-38a27527397f-config-data" (OuterVolumeSpecName: "config-data") pod "bd15f303-24c8-4075-b3c0-38a27527397f" (UID: "bd15f303-24c8-4075-b3c0-38a27527397f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:14:12 crc kubenswrapper[4895]: I1206 09:14:12.734744 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd15f303-24c8-4075-b3c0-38a27527397f-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:12 crc kubenswrapper[4895]: I1206 09:14:12.734794 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg76v\" (UniqueName: \"kubernetes.io/projected/bd15f303-24c8-4075-b3c0-38a27527397f-kube-api-access-tg76v\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:12 crc kubenswrapper[4895]: I1206 09:14:12.734810 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd15f303-24c8-4075-b3c0-38a27527397f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:12 crc kubenswrapper[4895]: I1206 09:14:12.734822 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd15f303-24c8-4075-b3c0-38a27527397f-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:13 crc kubenswrapper[4895]: I1206 09:14:13.014269 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pnqmr" event={"ID":"bd15f303-24c8-4075-b3c0-38a27527397f","Type":"ContainerDied","Data":"f6583fab403afb3fa15d87f483978fc65d2f22bf6cbddd5aaf3cef61a0ea0487"} Dec 06 09:14:13 crc kubenswrapper[4895]: I1206 09:14:13.014331 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-pnqmr" Dec 06 09:14:13 crc kubenswrapper[4895]: I1206 09:14:13.014362 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6583fab403afb3fa15d87f483978fc65d2f22bf6cbddd5aaf3cef61a0ea0487" Dec 06 09:14:15 crc kubenswrapper[4895]: I1206 09:14:15.051565 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef" Dec 06 09:14:15 crc kubenswrapper[4895]: E1206 09:14:15.052125 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:14:15 crc kubenswrapper[4895]: I1206 09:14:15.918528 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Dec 06 09:14:15 crc kubenswrapper[4895]: E1206 09:14:15.918968 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd15f303-24c8-4075-b3c0-38a27527397f" containerName="aodh-db-sync" Dec 06 09:14:15 crc kubenswrapper[4895]: I1206 09:14:15.918986 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd15f303-24c8-4075-b3c0-38a27527397f" containerName="aodh-db-sync" Dec 06 09:14:15 crc kubenswrapper[4895]: I1206 09:14:15.919190 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd15f303-24c8-4075-b3c0-38a27527397f" containerName="aodh-db-sync" Dec 06 09:14:15 crc kubenswrapper[4895]: I1206 09:14:15.921106 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Dec 06 09:14:15 crc kubenswrapper[4895]: I1206 09:14:15.929778 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Dec 06 09:14:15 crc kubenswrapper[4895]: I1206 09:14:15.930038 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-8crbd" Dec 06 09:14:15 crc kubenswrapper[4895]: I1206 09:14:15.930258 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Dec 06 09:14:15 crc kubenswrapper[4895]: I1206 09:14:15.947776 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Dec 06 09:14:16 crc kubenswrapper[4895]: I1206 09:14:16.003024 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93-scripts\") pod \"aodh-0\" (UID: \"d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93\") " pod="openstack/aodh-0" Dec 06 09:14:16 crc kubenswrapper[4895]: I1206 09:14:16.003223 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93\") " pod="openstack/aodh-0" Dec 06 09:14:16 crc kubenswrapper[4895]: I1206 09:14:16.003258 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93-config-data\") pod \"aodh-0\" (UID: \"d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93\") " pod="openstack/aodh-0" Dec 06 09:14:16 crc kubenswrapper[4895]: I1206 09:14:16.003339 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45ft9\" (UniqueName: \"kubernetes.io/projected/d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93-kube-api-access-45ft9\") pod \"aodh-0\" (UID: \"d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93\") " pod="openstack/aodh-0" Dec 06 09:14:16 crc kubenswrapper[4895]: I1206 09:14:16.105192 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93\") " pod="openstack/aodh-0" Dec 06 09:14:16 crc kubenswrapper[4895]: I1206 09:14:16.106042 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93-config-data\") pod \"aodh-0\" (UID: \"d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93\") " pod="openstack/aodh-0" Dec 06 09:14:16 crc kubenswrapper[4895]: I1206 09:14:16.106299 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45ft9\" (UniqueName: \"kubernetes.io/projected/d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93-kube-api-access-45ft9\") pod \"aodh-0\" (UID: \"d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93\") " pod="openstack/aodh-0" Dec 06 09:14:16 crc kubenswrapper[4895]: I1206 09:14:16.106563 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93-scripts\") pod \"aodh-0\" (UID: \"d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93\") " pod="openstack/aodh-0" Dec 06 09:14:16 crc kubenswrapper[4895]: 
I1206 09:14:16.110684 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93-scripts\") pod \"aodh-0\" (UID: \"d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93\") " pod="openstack/aodh-0" Dec 06 09:14:16 crc kubenswrapper[4895]: I1206 09:14:16.117454 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93-config-data\") pod \"aodh-0\" (UID: \"d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93\") " pod="openstack/aodh-0" Dec 06 09:14:16 crc kubenswrapper[4895]: I1206 09:14:16.122798 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93\") " pod="openstack/aodh-0" Dec 06 09:14:16 crc kubenswrapper[4895]: I1206 09:14:16.126947 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45ft9\" (UniqueName: \"kubernetes.io/projected/d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93-kube-api-access-45ft9\") pod \"aodh-0\" (UID: \"d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93\") " pod="openstack/aodh-0" Dec 06 09:14:16 crc kubenswrapper[4895]: I1206 09:14:16.260052 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Dec 06 09:14:16 crc kubenswrapper[4895]: I1206 09:14:16.749781 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Dec 06 09:14:17 crc kubenswrapper[4895]: I1206 09:14:17.048737 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93","Type":"ContainerStarted","Data":"dc9c34c0d5a2568eb26f3389eac797d382622df261880a963b75ec9702d7cb2e"} Dec 06 09:14:17 crc kubenswrapper[4895]: I1206 09:14:17.877085 4895 scope.go:117] "RemoveContainer" containerID="efd4a6f1ede2cf836e3b4205d1ae6124b3c4bf7a6e49d7d2c407a1a4b7e35da4" Dec 06 09:14:18 crc kubenswrapper[4895]: I1206 09:14:18.066680 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93","Type":"ContainerStarted","Data":"b6b0cd4b725e930f2576b12a463f52bdc36fe20c5fe72d259edbf7eea341b79a"} Dec 06 09:14:18 crc kubenswrapper[4895]: I1206 09:14:18.188812 4895 scope.go:117] "RemoveContainer" containerID="9a1115e734e1a5f186f349de8872cb6bee7f02841eb75120d5e8f3c3797cc168" Dec 06 09:14:18 crc kubenswrapper[4895]: I1206 09:14:18.246275 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 09:14:18 crc kubenswrapper[4895]: I1206 09:14:18.246709 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fad235e5-556f-4bf4-93c7-f89fb4452401" containerName="ceilometer-central-agent" containerID="cri-o://996cf5c9c0a4184b5537ad8c894dd7bc3429f7725792b6338edfe041ddd5ab1e" gracePeriod=30 Dec 06 09:14:18 crc kubenswrapper[4895]: I1206 09:14:18.246750 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fad235e5-556f-4bf4-93c7-f89fb4452401" containerName="proxy-httpd" containerID="cri-o://2e2ef09cc20172de03739a6327e0a7402b3f96c660f082692bdf4261fb28cbbd" gracePeriod=30 Dec 06 09:14:18 crc kubenswrapper[4895]: I1206 09:14:18.246822 4895 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="fad235e5-556f-4bf4-93c7-f89fb4452401" containerName="sg-core" containerID="cri-o://92ea6cf7ab4b72a699ab9749566c7025feb6ff68b593a0f5377a5ce06c5a9bd7" gracePeriod=30 Dec 06 09:14:18 crc kubenswrapper[4895]: I1206 09:14:18.246863 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fad235e5-556f-4bf4-93c7-f89fb4452401" containerName="ceilometer-notification-agent" containerID="cri-o://7a6e8a3a7ae0738f0b5ac259fc3d13f63b265a324550b6602c60c2828d980269" gracePeriod=30 Dec 06 09:14:18 crc kubenswrapper[4895]: I1206 09:14:18.268740 4895 scope.go:117] "RemoveContainer" containerID="9b23613cd560f5a8925dc5a93974914a500cff1de24176756529f87b7f6401d3" Dec 06 09:14:19 crc kubenswrapper[4895]: I1206 09:14:19.078130 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93","Type":"ContainerStarted","Data":"0b90d7ab1eb7806d2eb189db775ddf49b8e73602f28c419776fb4a7f9ba13e02"} Dec 06 09:14:19 crc kubenswrapper[4895]: I1206 09:14:19.083347 4895 generic.go:334] "Generic (PLEG): container finished" podID="fad235e5-556f-4bf4-93c7-f89fb4452401" containerID="2e2ef09cc20172de03739a6327e0a7402b3f96c660f082692bdf4261fb28cbbd" exitCode=0 Dec 06 09:14:19 crc kubenswrapper[4895]: I1206 09:14:19.083690 4895 generic.go:334] "Generic (PLEG): container finished" podID="fad235e5-556f-4bf4-93c7-f89fb4452401" containerID="92ea6cf7ab4b72a699ab9749566c7025feb6ff68b593a0f5377a5ce06c5a9bd7" exitCode=2 Dec 06 09:14:19 crc kubenswrapper[4895]: I1206 09:14:19.083704 4895 generic.go:334] "Generic (PLEG): container finished" podID="fad235e5-556f-4bf4-93c7-f89fb4452401" containerID="996cf5c9c0a4184b5537ad8c894dd7bc3429f7725792b6338edfe041ddd5ab1e" exitCode=0 Dec 06 09:14:19 crc kubenswrapper[4895]: I1206 09:14:19.083503 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fad235e5-556f-4bf4-93c7-f89fb4452401","Type":"ContainerDied","Data":"2e2ef09cc20172de03739a6327e0a7402b3f96c660f082692bdf4261fb28cbbd"} Dec 06 09:14:19 crc kubenswrapper[4895]: I1206 09:14:19.083764 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fad235e5-556f-4bf4-93c7-f89fb4452401","Type":"ContainerDied","Data":"92ea6cf7ab4b72a699ab9749566c7025feb6ff68b593a0f5377a5ce06c5a9bd7"} Dec 06 09:14:19 crc kubenswrapper[4895]: I1206 09:14:19.083787 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fad235e5-556f-4bf4-93c7-f89fb4452401","Type":"ContainerDied","Data":"996cf5c9c0a4184b5537ad8c894dd7bc3429f7725792b6338edfe041ddd5ab1e"} Dec 06 09:14:20 crc kubenswrapper[4895]: I1206 09:14:20.101253 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93","Type":"ContainerStarted","Data":"b33bf78031a058805d728af3f661f20df2973bdf0ab66ccb8003b90edb3c865d"} Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.047086 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.122420 4895 generic.go:334] "Generic (PLEG): container finished" podID="fad235e5-556f-4bf4-93c7-f89fb4452401" containerID="7a6e8a3a7ae0738f0b5ac259fc3d13f63b265a324550b6602c60c2828d980269" exitCode=0 Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.122508 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.122530 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fad235e5-556f-4bf4-93c7-f89fb4452401","Type":"ContainerDied","Data":"7a6e8a3a7ae0738f0b5ac259fc3d13f63b265a324550b6602c60c2828d980269"} Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.123336 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fad235e5-556f-4bf4-93c7-f89fb4452401","Type":"ContainerDied","Data":"aaa40bcf5c074521eef3c4c1875d4b3841479b908afbb97fb56a06cf97a02e8e"} Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.123361 4895 scope.go:117] "RemoveContainer" containerID="2e2ef09cc20172de03739a6327e0a7402b3f96c660f082692bdf4261fb28cbbd" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.123383 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-scripts\") pod \"fad235e5-556f-4bf4-93c7-f89fb4452401\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.123440 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-config-data\") pod \"fad235e5-556f-4bf4-93c7-f89fb4452401\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.123485 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-sg-core-conf-yaml\") pod \"fad235e5-556f-4bf4-93c7-f89fb4452401\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.123544 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jgrh\" (UniqueName: \"kubernetes.io/projected/fad235e5-556f-4bf4-93c7-f89fb4452401-kube-api-access-7jgrh\") pod \"fad235e5-556f-4bf4-93c7-f89fb4452401\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.123586 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-combined-ca-bundle\") pod \"fad235e5-556f-4bf4-93c7-f89fb4452401\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.123606 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fad235e5-556f-4bf4-93c7-f89fb4452401-run-httpd\") pod \"fad235e5-556f-4bf4-93c7-f89fb4452401\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.123651 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fad235e5-556f-4bf4-93c7-f89fb4452401-log-httpd\") pod \"fad235e5-556f-4bf4-93c7-f89fb4452401\" (UID: \"fad235e5-556f-4bf4-93c7-f89fb4452401\") " Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.125043 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fad235e5-556f-4bf4-93c7-f89fb4452401-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"fad235e5-556f-4bf4-93c7-f89fb4452401" (UID: "fad235e5-556f-4bf4-93c7-f89fb4452401"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.130654 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-scripts" (OuterVolumeSpecName: "scripts") pod "fad235e5-556f-4bf4-93c7-f89fb4452401" (UID: "fad235e5-556f-4bf4-93c7-f89fb4452401"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.132691 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fad235e5-556f-4bf4-93c7-f89fb4452401-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fad235e5-556f-4bf4-93c7-f89fb4452401" (UID: "fad235e5-556f-4bf4-93c7-f89fb4452401"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.142727 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93","Type":"ContainerStarted","Data":"4625d69aa68312a28f04f0d1c495feb7ce2913dab8eb6f3750d7adae7c31db6f"} Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.149340 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fad235e5-556f-4bf4-93c7-f89fb4452401-kube-api-access-7jgrh" (OuterVolumeSpecName: "kube-api-access-7jgrh") pod "fad235e5-556f-4bf4-93c7-f89fb4452401" (UID: "fad235e5-556f-4bf4-93c7-f89fb4452401"). InnerVolumeSpecName "kube-api-access-7jgrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.163600 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fad235e5-556f-4bf4-93c7-f89fb4452401" (UID: "fad235e5-556f-4bf4-93c7-f89fb4452401"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.180309 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.075259958 podStartE2EDuration="7.180286446s" podCreationTimestamp="2025-12-06 09:14:15 +0000 UTC" firstStartedPulling="2025-12-06 09:14:16.767538675 +0000 UTC m=+8219.168927545" lastFinishedPulling="2025-12-06 09:14:20.872565163 +0000 UTC m=+8223.273954033" observedRunningTime="2025-12-06 09:14:22.170060212 +0000 UTC m=+8224.571449082" watchObservedRunningTime="2025-12-06 09:14:22.180286446 +0000 UTC m=+8224.581675316" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.225951 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fad235e5-556f-4bf4-93c7-f89fb4452401-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.225993 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fad235e5-556f-4bf4-93c7-f89fb4452401-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.226009 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.226025 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.226039 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jgrh\" (UniqueName: \"kubernetes.io/projected/fad235e5-556f-4bf4-93c7-f89fb4452401-kube-api-access-7jgrh\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.240840 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fad235e5-556f-4bf4-93c7-f89fb4452401" (UID: "fad235e5-556f-4bf4-93c7-f89fb4452401"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.292525 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-config-data" (OuterVolumeSpecName: "config-data") pod "fad235e5-556f-4bf4-93c7-f89fb4452401" (UID: "fad235e5-556f-4bf4-93c7-f89fb4452401"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.322607 4895 scope.go:117] "RemoveContainer" containerID="92ea6cf7ab4b72a699ab9749566c7025feb6ff68b593a0f5377a5ce06c5a9bd7" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.333095 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.333139 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad235e5-556f-4bf4-93c7-f89fb4452401-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.374276 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-556hd"] Dec 06 09:14:22 crc kubenswrapper[4895]: E1206 09:14:22.374769 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad235e5-556f-4bf4-93c7-f89fb4452401" containerName="proxy-httpd" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.374789 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad235e5-556f-4bf4-93c7-f89fb4452401" containerName="proxy-httpd" Dec 06 09:14:22 crc kubenswrapper[4895]: E1206 09:14:22.374807 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad235e5-556f-4bf4-93c7-f89fb4452401" containerName="sg-core" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.374815 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad235e5-556f-4bf4-93c7-f89fb4452401" containerName="sg-core" Dec 06 09:14:22 crc kubenswrapper[4895]: E1206 09:14:22.374850 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad235e5-556f-4bf4-93c7-f89fb4452401" containerName="ceilometer-notification-agent" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.374857 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad235e5-556f-4bf4-93c7-f89fb4452401" containerName="ceilometer-notification-agent" Dec 06 09:14:22 crc kubenswrapper[4895]: E1206 09:14:22.374871 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad235e5-556f-4bf4-93c7-f89fb4452401" containerName="ceilometer-central-agent" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.374877 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad235e5-556f-4bf4-93c7-f89fb4452401" containerName="ceilometer-central-agent" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.375071 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad235e5-556f-4bf4-93c7-f89fb4452401" containerName="ceilometer-central-agent" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.375093 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad235e5-556f-4bf4-93c7-f89fb4452401" containerName="proxy-httpd" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.375102 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad235e5-556f-4bf4-93c7-f89fb4452401" containerName="sg-core" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.375116 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad235e5-556f-4bf4-93c7-f89fb4452401" containerName="ceilometer-notification-agent" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.378131 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-556hd" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.397522 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-556hd"] Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.399054 4895 scope.go:117] "RemoveContainer" containerID="7a6e8a3a7ae0738f0b5ac259fc3d13f63b265a324550b6602c60c2828d980269" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.452251 4895 scope.go:117] "RemoveContainer" containerID="996cf5c9c0a4184b5537ad8c894dd7bc3429f7725792b6338edfe041ddd5ab1e" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.484320 4895 scope.go:117] "RemoveContainer" containerID="2e2ef09cc20172de03739a6327e0a7402b3f96c660f082692bdf4261fb28cbbd" Dec 06 09:14:22 crc kubenswrapper[4895]: E1206 09:14:22.485946 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e2ef09cc20172de03739a6327e0a7402b3f96c660f082692bdf4261fb28cbbd\": container with ID starting with 2e2ef09cc20172de03739a6327e0a7402b3f96c660f082692bdf4261fb28cbbd not found: ID does not exist" containerID="2e2ef09cc20172de03739a6327e0a7402b3f96c660f082692bdf4261fb28cbbd" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.486064 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e2ef09cc20172de03739a6327e0a7402b3f96c660f082692bdf4261fb28cbbd"} err="failed to get container status \"2e2ef09cc20172de03739a6327e0a7402b3f96c660f082692bdf4261fb28cbbd\": rpc error: code = NotFound desc = could not find container \"2e2ef09cc20172de03739a6327e0a7402b3f96c660f082692bdf4261fb28cbbd\": container with ID starting with 2e2ef09cc20172de03739a6327e0a7402b3f96c660f082692bdf4261fb28cbbd not found: ID does not exist" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.486144 4895 scope.go:117] "RemoveContainer" containerID="92ea6cf7ab4b72a699ab9749566c7025feb6ff68b593a0f5377a5ce06c5a9bd7" Dec 06 09:14:22 crc kubenswrapper[4895]: E1206 09:14:22.486606 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92ea6cf7ab4b72a699ab9749566c7025feb6ff68b593a0f5377a5ce06c5a9bd7\": container with ID starting with 92ea6cf7ab4b72a699ab9749566c7025feb6ff68b593a0f5377a5ce06c5a9bd7 not found: ID does not exist" containerID="92ea6cf7ab4b72a699ab9749566c7025feb6ff68b593a0f5377a5ce06c5a9bd7" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.486719 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92ea6cf7ab4b72a699ab9749566c7025feb6ff68b593a0f5377a5ce06c5a9bd7"} err="failed to get container status \"92ea6cf7ab4b72a699ab9749566c7025feb6ff68b593a0f5377a5ce06c5a9bd7\": rpc error: code = NotFound desc = could not find container \"92ea6cf7ab4b72a699ab9749566c7025feb6ff68b593a0f5377a5ce06c5a9bd7\": container with ID starting with 92ea6cf7ab4b72a699ab9749566c7025feb6ff68b593a0f5377a5ce06c5a9bd7 not found: ID does not exist" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.486811 4895 scope.go:117] "RemoveContainer" containerID="7a6e8a3a7ae0738f0b5ac259fc3d13f63b265a324550b6602c60c2828d980269" Dec 06 09:14:22 crc kubenswrapper[4895]: E1206 09:14:22.487270 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a6e8a3a7ae0738f0b5ac259fc3d13f63b265a324550b6602c60c2828d980269\": container with ID starting with 
7a6e8a3a7ae0738f0b5ac259fc3d13f63b265a324550b6602c60c2828d980269 not found: ID does not exist" containerID="7a6e8a3a7ae0738f0b5ac259fc3d13f63b265a324550b6602c60c2828d980269" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.487397 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a6e8a3a7ae0738f0b5ac259fc3d13f63b265a324550b6602c60c2828d980269"} err="failed to get container status \"7a6e8a3a7ae0738f0b5ac259fc3d13f63b265a324550b6602c60c2828d980269\": rpc error: code = NotFound desc = could not find container \"7a6e8a3a7ae0738f0b5ac259fc3d13f63b265a324550b6602c60c2828d980269\": container with ID starting with 7a6e8a3a7ae0738f0b5ac259fc3d13f63b265a324550b6602c60c2828d980269 not found: ID does not exist" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.487525 4895 scope.go:117] "RemoveContainer" containerID="996cf5c9c0a4184b5537ad8c894dd7bc3429f7725792b6338edfe041ddd5ab1e" Dec 06 09:14:22 crc kubenswrapper[4895]: E1206 09:14:22.487988 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"996cf5c9c0a4184b5537ad8c894dd7bc3429f7725792b6338edfe041ddd5ab1e\": container with ID starting with 996cf5c9c0a4184b5537ad8c894dd7bc3429f7725792b6338edfe041ddd5ab1e not found: ID does not exist" containerID="996cf5c9c0a4184b5537ad8c894dd7bc3429f7725792b6338edfe041ddd5ab1e" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.488114 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"996cf5c9c0a4184b5537ad8c894dd7bc3429f7725792b6338edfe041ddd5ab1e"} err="failed to get container status \"996cf5c9c0a4184b5537ad8c894dd7bc3429f7725792b6338edfe041ddd5ab1e\": rpc error: code = NotFound desc = could not find container \"996cf5c9c0a4184b5537ad8c894dd7bc3429f7725792b6338edfe041ddd5ab1e\": container with ID starting with 996cf5c9c0a4184b5537ad8c894dd7bc3429f7725792b6338edfe041ddd5ab1e not found: ID does not exist" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.509502 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.534940 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.535640 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.538493 4895 util.go:30] "No sandbox for pod can be found. 
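
The RemoveContainer / "DeleteContainer returned error" pairs above are the benign double-delete case: by the time the kubelet asks the runtime for the status of each old ceilometer container, the container is already gone, so the runtime answers gRPC NotFound and the kubelet logs it and moves on. The usual pattern is to treat NotFound as success when deleting; a minimal sketch with a hypothetical runtime client:

    class NotFoundError(Exception):
        """Stand-in for the gRPC NotFound status in the entries above."""

    def remove_container(runtime, container_id):
        # Idempotent delete: a container that is already gone counts as
        # removed, which is why the kubelet above proceeds past the errors.
        try:
            runtime.remove(container_id)
        except NotFoundError:
            pass  # already deleted; nothing left to do
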
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.539918 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc31459-5954-4e3c-9369-2b4ee6b374ef-utilities\") pod \"certified-operators-556hd\" (UID: \"7dc31459-5954-4e3c-9369-2b4ee6b374ef\") " pod="openshift-marketplace/certified-operators-556hd" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.539966 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc31459-5954-4e3c-9369-2b4ee6b374ef-catalog-content\") pod \"certified-operators-556hd\" (UID: \"7dc31459-5954-4e3c-9369-2b4ee6b374ef\") " pod="openshift-marketplace/certified-operators-556hd" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.540055 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlcg9\" (UniqueName: \"kubernetes.io/projected/7dc31459-5954-4e3c-9369-2b4ee6b374ef-kube-api-access-dlcg9\") pod \"certified-operators-556hd\" (UID: \"7dc31459-5954-4e3c-9369-2b4ee6b374ef\") " pod="openshift-marketplace/certified-operators-556hd" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.541619 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.541864 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.569260 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.641595 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bd046f7-9ee3-42ef-96e1-89706429d498-run-httpd\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.642053 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlcg9\" (UniqueName: \"kubernetes.io/projected/7dc31459-5954-4e3c-9369-2b4ee6b374ef-kube-api-access-dlcg9\") pod \"certified-operators-556hd\" (UID: \"7dc31459-5954-4e3c-9369-2b4ee6b374ef\") " pod="openshift-marketplace/certified-operators-556hd" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.642142 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-config-data\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.642170 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwzk5\" (UniqueName: \"kubernetes.io/projected/2bd046f7-9ee3-42ef-96e1-89706429d498-kube-api-access-dwzk5\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.642258 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7dc31459-5954-4e3c-9369-2b4ee6b374ef-utilities\") pod \"certified-operators-556hd\" (UID: \"7dc31459-5954-4e3c-9369-2b4ee6b374ef\") " pod="openshift-marketplace/certified-operators-556hd" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.642304 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc31459-5954-4e3c-9369-2b4ee6b374ef-catalog-content\") pod \"certified-operators-556hd\" (UID: \"7dc31459-5954-4e3c-9369-2b4ee6b374ef\") " pod="openshift-marketplace/certified-operators-556hd" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.642376 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.642399 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.642434 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bd046f7-9ee3-42ef-96e1-89706429d498-log-httpd\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.642497 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-scripts\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.643458 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc31459-5954-4e3c-9369-2b4ee6b374ef-utilities\") pod \"certified-operators-556hd\" (UID: \"7dc31459-5954-4e3c-9369-2b4ee6b374ef\") " pod="openshift-marketplace/certified-operators-556hd" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.643679 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc31459-5954-4e3c-9369-2b4ee6b374ef-catalog-content\") pod \"certified-operators-556hd\" (UID: \"7dc31459-5954-4e3c-9369-2b4ee6b374ef\") " pod="openshift-marketplace/certified-operators-556hd" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.664602 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlcg9\" (UniqueName: \"kubernetes.io/projected/7dc31459-5954-4e3c-9369-2b4ee6b374ef-kube-api-access-dlcg9\") pod \"certified-operators-556hd\" (UID: \"7dc31459-5954-4e3c-9369-2b4ee6b374ef\") " pod="openshift-marketplace/certified-operators-556hd" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.722948 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-556hd" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.744592 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.744632 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.744650 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bd046f7-9ee3-42ef-96e1-89706429d498-log-httpd\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.744683 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-scripts\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.744731 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bd046f7-9ee3-42ef-96e1-89706429d498-run-httpd\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.744779 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-config-data\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.744797 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwzk5\" (UniqueName: \"kubernetes.io/projected/2bd046f7-9ee3-42ef-96e1-89706429d498-kube-api-access-dwzk5\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.745963 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bd046f7-9ee3-42ef-96e1-89706429d498-log-httpd\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.745994 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bd046f7-9ee3-42ef-96e1-89706429d498-run-httpd\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.747735 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " 
pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.748119 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.748954 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-scripts\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.751224 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-config-data\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.760060 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwzk5\" (UniqueName: \"kubernetes.io/projected/2bd046f7-9ee3-42ef-96e1-89706429d498-kube-api-access-dwzk5\") pod \"ceilometer-0\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " pod="openstack/ceilometer-0" Dec 06 09:14:22 crc kubenswrapper[4895]: I1206 09:14:22.872603 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 09:14:23 crc kubenswrapper[4895]: I1206 09:14:23.320018 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-556hd"] Dec 06 09:14:23 crc kubenswrapper[4895]: I1206 09:14:23.462805 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 09:14:23 crc kubenswrapper[4895]: W1206 09:14:23.469261 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bd046f7_9ee3_42ef_96e1_89706429d498.slice/crio-834d0e6c3bef4c70cc438aa75826fc4308ed9abfbdb1ba11e61c210e7472bdfb WatchSource:0}: Error finding container 834d0e6c3bef4c70cc438aa75826fc4308ed9abfbdb1ba11e61c210e7472bdfb: Status 404 returned error can't find the container with id 834d0e6c3bef4c70cc438aa75826fc4308ed9abfbdb1ba11e61c210e7472bdfb Dec 06 09:14:24 crc kubenswrapper[4895]: I1206 09:14:24.075863 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fad235e5-556f-4bf4-93c7-f89fb4452401" path="/var/lib/kubelet/pods/fad235e5-556f-4bf4-93c7-f89fb4452401/volumes" Dec 06 09:14:24 crc kubenswrapper[4895]: I1206 09:14:24.175908 4895 generic.go:334] "Generic (PLEG): container finished" podID="7dc31459-5954-4e3c-9369-2b4ee6b374ef" containerID="bfeb86598824a158ab5c52462f9b51476a4b9bd875fd247c1924e332d77b002d" exitCode=0 Dec 06 09:14:24 crc kubenswrapper[4895]: I1206 09:14:24.175988 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-556hd" event={"ID":"7dc31459-5954-4e3c-9369-2b4ee6b374ef","Type":"ContainerDied","Data":"bfeb86598824a158ab5c52462f9b51476a4b9bd875fd247c1924e332d77b002d"} Dec 06 09:14:24 crc kubenswrapper[4895]: I1206 09:14:24.176029 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-556hd" 
event={"ID":"7dc31459-5954-4e3c-9369-2b4ee6b374ef","Type":"ContainerStarted","Data":"546d2fdc65cd5bfae245da1bc7511bcdd7ac979a7979e1f5708aa1b937a0619b"} Dec 06 09:14:24 crc kubenswrapper[4895]: I1206 09:14:24.180088 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bd046f7-9ee3-42ef-96e1-89706429d498","Type":"ContainerStarted","Data":"751a5f64bc7efb3ac753735e19e7c595901c975f3f8c03293c2cbe0908d56c85"} Dec 06 09:14:24 crc kubenswrapper[4895]: I1206 09:14:24.180128 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bd046f7-9ee3-42ef-96e1-89706429d498","Type":"ContainerStarted","Data":"834d0e6c3bef4c70cc438aa75826fc4308ed9abfbdb1ba11e61c210e7472bdfb"} Dec 06 09:14:25 crc kubenswrapper[4895]: I1206 09:14:25.205847 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bd046f7-9ee3-42ef-96e1-89706429d498","Type":"ContainerStarted","Data":"766edb8ad69579d531f8921aa26e9ceeac2a4f85daf561f32697f66f6995eed8"} Dec 06 09:14:26 crc kubenswrapper[4895]: I1206 09:14:26.230785 4895 generic.go:334] "Generic (PLEG): container finished" podID="7dc31459-5954-4e3c-9369-2b4ee6b374ef" containerID="045e37438a1c87e864735f2d1df648e5372e810229b559d0da973757ed28d0c6" exitCode=0 Dec 06 09:14:26 crc kubenswrapper[4895]: I1206 09:14:26.230977 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-556hd" event={"ID":"7dc31459-5954-4e3c-9369-2b4ee6b374ef","Type":"ContainerDied","Data":"045e37438a1c87e864735f2d1df648e5372e810229b559d0da973757ed28d0c6"} Dec 06 09:14:26 crc kubenswrapper[4895]: I1206 09:14:26.235259 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bd046f7-9ee3-42ef-96e1-89706429d498","Type":"ContainerStarted","Data":"2b7abb2d4a890c24a7e26bb3fd968532a18b9cf9db28a6be089a4ce35d5990ad"} Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.247847 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-556hd" event={"ID":"7dc31459-5954-4e3c-9369-2b4ee6b374ef","Type":"ContainerStarted","Data":"9a1b7bf57d1a741252a390060cdeba6d633229c1d45984e78cba2b43c0b24327"} Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.251195 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bd046f7-9ee3-42ef-96e1-89706429d498","Type":"ContainerStarted","Data":"8fd41e9d358c6ca38b90739453f7c279685db6bfbb6bec9249ea215fd57401af"} Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.251304 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.274379 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-556hd" podStartSLOduration=2.791008727 podStartE2EDuration="5.274359255s" podCreationTimestamp="2025-12-06 09:14:22 +0000 UTC" firstStartedPulling="2025-12-06 09:14:24.177501655 +0000 UTC m=+8226.578890525" lastFinishedPulling="2025-12-06 09:14:26.660852193 +0000 UTC m=+8229.062241053" observedRunningTime="2025-12-06 09:14:27.268834947 +0000 UTC m=+8229.670223817" watchObservedRunningTime="2025-12-06 09:14:27.274359255 +0000 UTC m=+8229.675748125" Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.356570 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.386097828 
podStartE2EDuration="5.356548123s" podCreationTimestamp="2025-12-06 09:14:22 +0000 UTC" firstStartedPulling="2025-12-06 09:14:23.473561572 +0000 UTC m=+8225.874950432" lastFinishedPulling="2025-12-06 09:14:26.444011857 +0000 UTC m=+8228.845400727" observedRunningTime="2025-12-06 09:14:27.301388382 +0000 UTC m=+8229.702777262" watchObservedRunningTime="2025-12-06 09:14:27.356548123 +0000 UTC m=+8229.757936993" Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.359272 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-bjkbd"] Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.360560 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-bjkbd" Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.374228 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-bjkbd"] Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.455814 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/613023d1-135a-4468-8886-71659c103c60-operator-scripts\") pod \"manila-db-create-bjkbd\" (UID: \"613023d1-135a-4468-8886-71659c103c60\") " pod="openstack/manila-db-create-bjkbd" Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.455879 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhhzz\" (UniqueName: \"kubernetes.io/projected/613023d1-135a-4468-8886-71659c103c60-kube-api-access-xhhzz\") pod \"manila-db-create-bjkbd\" (UID: \"613023d1-135a-4468-8886-71659c103c60\") " pod="openstack/manila-db-create-bjkbd" Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.466046 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-3e90-account-create-update-wsvc8"] Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.481280 4895 util.go:30] "No sandbox for pod can be found. 
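[Editor's note] The two "Observed pod startup duration" records above carry enough detail to reproduce the arithmetic: podStartE2EDuration is the wall-clock span from podCreationTimestamp to watchObservedRunningTime, and podStartSLOduration subtracts the image-pull window, taken on the monotonic m=+... clock (lastFinishedPulling minus firstStartedPulling). A minimal sketch that checks the ceilometer-0 numbers; the constant names are ours, not kubelet's:

```go
package main

import "fmt"

// Monotonic m=+... offsets and durations copied from the ceilometer-0
// record above; names are illustrative only.
const (
	firstStartedPulling = 8225.874950432 // m=+ offset, seconds
	lastFinishedPulling = 8228.845400727
	podStartE2E         = 5.356548123 // watchObservedRunningTime - podCreationTimestamp
)

func main() {
	pull := lastFinishedPulling - firstStartedPulling // image-pull window excluded from the SLO figure
	fmt.Printf("pull time: %.9fs\n", pull)                       // 2.970450295s
	fmt.Printf("podStartSLOduration: %.9fs\n", podStartE2E-pull) // 2.386097828s, as logged
}
```

The certified-operators-556hd record obeys the same relation: 5.274359255s end-to-end minus 2.483350528s of pulling gives the logged 2.791008727.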
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.359272 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-bjkbd"]
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.360560 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-bjkbd"
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.374228 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-bjkbd"]
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.455814 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/613023d1-135a-4468-8886-71659c103c60-operator-scripts\") pod \"manila-db-create-bjkbd\" (UID: \"613023d1-135a-4468-8886-71659c103c60\") " pod="openstack/manila-db-create-bjkbd"
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.455879 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhhzz\" (UniqueName: \"kubernetes.io/projected/613023d1-135a-4468-8886-71659c103c60-kube-api-access-xhhzz\") pod \"manila-db-create-bjkbd\" (UID: \"613023d1-135a-4468-8886-71659c103c60\") " pod="openstack/manila-db-create-bjkbd"
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.466046 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-3e90-account-create-update-wsvc8"]
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.481280 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-3e90-account-create-update-wsvc8"
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.485299 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret"
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.519576 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-3e90-account-create-update-wsvc8"]
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.559845 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/613023d1-135a-4468-8886-71659c103c60-operator-scripts\") pod \"manila-db-create-bjkbd\" (UID: \"613023d1-135a-4468-8886-71659c103c60\") " pod="openstack/manila-db-create-bjkbd"
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.559936 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhhzz\" (UniqueName: \"kubernetes.io/projected/613023d1-135a-4468-8886-71659c103c60-kube-api-access-xhhzz\") pod \"manila-db-create-bjkbd\" (UID: \"613023d1-135a-4468-8886-71659c103c60\") " pod="openstack/manila-db-create-bjkbd"
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.560794 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/613023d1-135a-4468-8886-71659c103c60-operator-scripts\") pod \"manila-db-create-bjkbd\" (UID: \"613023d1-135a-4468-8886-71659c103c60\") " pod="openstack/manila-db-create-bjkbd"
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.584300 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhhzz\" (UniqueName: \"kubernetes.io/projected/613023d1-135a-4468-8886-71659c103c60-kube-api-access-xhhzz\") pod \"manila-db-create-bjkbd\" (UID: \"613023d1-135a-4468-8886-71659c103c60\") " pod="openstack/manila-db-create-bjkbd"
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.661708 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3de09b98-9134-4897-b57e-f726979ac670-operator-scripts\") pod \"manila-3e90-account-create-update-wsvc8\" (UID: \"3de09b98-9134-4897-b57e-f726979ac670\") " pod="openstack/manila-3e90-account-create-update-wsvc8"
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.662144 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkpwh\" (UniqueName: \"kubernetes.io/projected/3de09b98-9134-4897-b57e-f726979ac670-kube-api-access-xkpwh\") pod \"manila-3e90-account-create-update-wsvc8\" (UID: \"3de09b98-9134-4897-b57e-f726979ac670\") " pod="openstack/manila-3e90-account-create-update-wsvc8"
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.676072 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-bjkbd"
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.764075 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkpwh\" (UniqueName: \"kubernetes.io/projected/3de09b98-9134-4897-b57e-f726979ac670-kube-api-access-xkpwh\") pod \"manila-3e90-account-create-update-wsvc8\" (UID: \"3de09b98-9134-4897-b57e-f726979ac670\") " pod="openstack/manila-3e90-account-create-update-wsvc8"
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.764225 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3de09b98-9134-4897-b57e-f726979ac670-operator-scripts\") pod \"manila-3e90-account-create-update-wsvc8\" (UID: \"3de09b98-9134-4897-b57e-f726979ac670\") " pod="openstack/manila-3e90-account-create-update-wsvc8"
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.765237 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3de09b98-9134-4897-b57e-f726979ac670-operator-scripts\") pod \"manila-3e90-account-create-update-wsvc8\" (UID: \"3de09b98-9134-4897-b57e-f726979ac670\") " pod="openstack/manila-3e90-account-create-update-wsvc8"
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.789436 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkpwh\" (UniqueName: \"kubernetes.io/projected/3de09b98-9134-4897-b57e-f726979ac670-kube-api-access-xkpwh\") pod \"manila-3e90-account-create-update-wsvc8\" (UID: \"3de09b98-9134-4897-b57e-f726979ac670\") " pod="openstack/manila-3e90-account-create-update-wsvc8"
Dec 06 09:14:27 crc kubenswrapper[4895]: I1206 09:14:27.815113 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-3e90-account-create-update-wsvc8"
Dec 06 09:14:28 crc kubenswrapper[4895]: I1206 09:14:28.200025 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-bjkbd"]
Dec 06 09:14:28 crc kubenswrapper[4895]: W1206 09:14:28.205568 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod613023d1_135a_4468_8886_71659c103c60.slice/crio-aa29090f9f7fcfe6fd8b853272d9fa5dadd0037c8484095bc22e4ce39c86fc7a WatchSource:0}: Error finding container aa29090f9f7fcfe6fd8b853272d9fa5dadd0037c8484095bc22e4ce39c86fc7a: Status 404 returned error can't find the container with id aa29090f9f7fcfe6fd8b853272d9fa5dadd0037c8484095bc22e4ce39c86fc7a
Dec 06 09:14:28 crc kubenswrapper[4895]: I1206 09:14:28.274625 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-bjkbd" event={"ID":"613023d1-135a-4468-8886-71659c103c60","Type":"ContainerStarted","Data":"aa29090f9f7fcfe6fd8b853272d9fa5dadd0037c8484095bc22e4ce39c86fc7a"}
Dec 06 09:14:28 crc kubenswrapper[4895]: I1206 09:14:28.479819 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-3e90-account-create-update-wsvc8"]
Dec 06 09:14:29 crc kubenswrapper[4895]: I1206 09:14:29.050844 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef"
Dec 06 09:14:29 crc kubenswrapper[4895]: E1206 09:14:29.051368 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:14:29 crc kubenswrapper[4895]: I1206 09:14:29.284270 4895 generic.go:334] "Generic (PLEG): container finished" podID="613023d1-135a-4468-8886-71659c103c60" containerID="9f8fe9a4209f618704bc615bc1a72ff886f2037aa438113b8c4afbb4f00fd73b" exitCode=0
Dec 06 09:14:29 crc kubenswrapper[4895]: I1206 09:14:29.284379 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-bjkbd" event={"ID":"613023d1-135a-4468-8886-71659c103c60","Type":"ContainerDied","Data":"9f8fe9a4209f618704bc615bc1a72ff886f2037aa438113b8c4afbb4f00fd73b"}
Dec 06 09:14:29 crc kubenswrapper[4895]: I1206 09:14:29.286572 4895 generic.go:334] "Generic (PLEG): container finished" podID="3de09b98-9134-4897-b57e-f726979ac670" containerID="151687d148543088c98ac569bd4bed6f23a765c7af43108b1b702526deab1724" exitCode=0
Dec 06 09:14:29 crc kubenswrapper[4895]: I1206 09:14:29.286609 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-3e90-account-create-update-wsvc8" event={"ID":"3de09b98-9134-4897-b57e-f726979ac670","Type":"ContainerDied","Data":"151687d148543088c98ac569bd4bed6f23a765c7af43108b1b702526deab1724"}
Dec 06 09:14:29 crc kubenswrapper[4895]: I1206 09:14:29.286629 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-3e90-account-create-update-wsvc8" event={"ID":"3de09b98-9134-4897-b57e-f726979ac670","Type":"ContainerStarted","Data":"703d24a72f357a49fd8206c7410772698e13b3afbeea97e6ebbbe56385d40c0d"}
Dec 06 09:14:30 crc kubenswrapper[4895]: I1206 09:14:30.877002 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-bjkbd"
Dec 06 09:14:30 crc kubenswrapper[4895]: I1206 09:14:30.887756 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-3e90-account-create-update-wsvc8"
Dec 06 09:14:30 crc kubenswrapper[4895]: I1206 09:14:30.954802 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhhzz\" (UniqueName: \"kubernetes.io/projected/613023d1-135a-4468-8886-71659c103c60-kube-api-access-xhhzz\") pod \"613023d1-135a-4468-8886-71659c103c60\" (UID: \"613023d1-135a-4468-8886-71659c103c60\") "
Dec 06 09:14:30 crc kubenswrapper[4895]: I1206 09:14:30.955032 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/613023d1-135a-4468-8886-71659c103c60-operator-scripts\") pod \"613023d1-135a-4468-8886-71659c103c60\" (UID: \"613023d1-135a-4468-8886-71659c103c60\") "
Dec 06 09:14:30 crc kubenswrapper[4895]: I1206 09:14:30.955055 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3de09b98-9134-4897-b57e-f726979ac670-operator-scripts\") pod \"3de09b98-9134-4897-b57e-f726979ac670\" (UID: \"3de09b98-9134-4897-b57e-f726979ac670\") "
Dec 06 09:14:30 crc kubenswrapper[4895]: I1206 09:14:30.955294 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkpwh\" (UniqueName: \"kubernetes.io/projected/3de09b98-9134-4897-b57e-f726979ac670-kube-api-access-xkpwh\") pod \"3de09b98-9134-4897-b57e-f726979ac670\" (UID: \"3de09b98-9134-4897-b57e-f726979ac670\") "
Dec 06 09:14:30 crc kubenswrapper[4895]: I1206 09:14:30.955902 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3de09b98-9134-4897-b57e-f726979ac670-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3de09b98-9134-4897-b57e-f726979ac670" (UID: "3de09b98-9134-4897-b57e-f726979ac670"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:14:30 crc kubenswrapper[4895]: I1206 09:14:30.955901 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/613023d1-135a-4468-8886-71659c103c60-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "613023d1-135a-4468-8886-71659c103c60" (UID: "613023d1-135a-4468-8886-71659c103c60"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:14:30 crc kubenswrapper[4895]: I1206 09:14:30.961729 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/613023d1-135a-4468-8886-71659c103c60-kube-api-access-xhhzz" (OuterVolumeSpecName: "kube-api-access-xhhzz") pod "613023d1-135a-4468-8886-71659c103c60" (UID: "613023d1-135a-4468-8886-71659c103c60"). InnerVolumeSpecName "kube-api-access-xhhzz". PluginName "kubernetes.io/projected", VolumeGidValue ""
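[Editor's note] The mount and unmount lines above come from the kubelet volume manager's desired-state/actual-state reconcile loop: volumes for a newly scheduled pod go VerifyControllerAttachedVolume, then MountVolume, then MountVolume.SetUp succeeded, and once the pod is deleted the same volumes come back out as UnmountVolume started, UnmountVolume.TearDown succeeded, and finally "Volume detached". A toy model of that loop, with invented types (the real one lives in kubelet's volume manager and is far more involved):

```go
package main

import "fmt"

// volume is our simplified stand-in for a pod volume entry.
type volume struct{ name, pod string }

// reconcile drives the actual state of the world toward the desired state,
// mirroring the mount/unmount ordering visible in the log above.
func reconcile(desired, actual map[string]volume) {
	// Volumes in the pod spec but not yet mounted: verify attach, then mount.
	for key, v := range desired {
		if _, ok := actual[key]; !ok {
			fmt.Printf("VerifyControllerAttachedVolume started for %q pod %q\n", v.name, v.pod)
			fmt.Printf("MountVolume started for %q\n", v.name)
			actual[key] = v // MountVolume.SetUp succeeded
		}
	}
	// Volumes still mounted after the pod is gone: unmount, then mark detached.
	for key, v := range actual {
		if _, ok := desired[key]; !ok {
			fmt.Printf("UnmountVolume started for %q\n", v.name)
			delete(actual, key) // UnmountVolume.TearDown succeeded -> "Volume detached"
		}
	}
}

func main() {
	desired := map[string]volume{"operator-scripts": {"operator-scripts", "manila-db-create-bjkbd"}}
	actual := map[string]volume{}
	reconcile(desired, actual)          // pod added: its volumes get mounted
	delete(desired, "operator-scripts") // pod deleted from the API
	reconcile(desired, actual)          // volumes are torn back down
}
```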
Dec 06 09:14:30 crc kubenswrapper[4895]: I1206 09:14:30.962260 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de09b98-9134-4897-b57e-f726979ac670-kube-api-access-xkpwh" (OuterVolumeSpecName: "kube-api-access-xkpwh") pod "3de09b98-9134-4897-b57e-f726979ac670" (UID: "3de09b98-9134-4897-b57e-f726979ac670"). InnerVolumeSpecName "kube-api-access-xkpwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:14:31 crc kubenswrapper[4895]: I1206 09:14:31.045727 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-5f9g5"]
Dec 06 09:14:31 crc kubenswrapper[4895]: I1206 09:14:31.057432 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkpwh\" (UniqueName: \"kubernetes.io/projected/3de09b98-9134-4897-b57e-f726979ac670-kube-api-access-xkpwh\") on node \"crc\" DevicePath \"\""
Dec 06 09:14:31 crc kubenswrapper[4895]: I1206 09:14:31.057464 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhhzz\" (UniqueName: \"kubernetes.io/projected/613023d1-135a-4468-8886-71659c103c60-kube-api-access-xhhzz\") on node \"crc\" DevicePath \"\""
Dec 06 09:14:31 crc kubenswrapper[4895]: I1206 09:14:31.057491 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/613023d1-135a-4468-8886-71659c103c60-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 09:14:31 crc kubenswrapper[4895]: I1206 09:14:31.057501 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3de09b98-9134-4897-b57e-f726979ac670-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 09:14:31 crc kubenswrapper[4895]: I1206 09:14:31.058359 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9c44-account-create-update-jkmz4"]
Dec 06 09:14:31 crc kubenswrapper[4895]: I1206 09:14:31.072554 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-5f9g5"]
Dec 06 09:14:31 crc kubenswrapper[4895]: I1206 09:14:31.080901 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-9c44-account-create-update-jkmz4"]
Dec 06 09:14:31 crc kubenswrapper[4895]: I1206 09:14:31.307358 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-bjkbd" event={"ID":"613023d1-135a-4468-8886-71659c103c60","Type":"ContainerDied","Data":"aa29090f9f7fcfe6fd8b853272d9fa5dadd0037c8484095bc22e4ce39c86fc7a"}
Dec 06 09:14:31 crc kubenswrapper[4895]: I1206 09:14:31.307413 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa29090f9f7fcfe6fd8b853272d9fa5dadd0037c8484095bc22e4ce39c86fc7a"
Dec 06 09:14:31 crc kubenswrapper[4895]: I1206 09:14:31.307380 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-bjkbd"
Dec 06 09:14:31 crc kubenswrapper[4895]: I1206 09:14:31.309027 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-3e90-account-create-update-wsvc8" event={"ID":"3de09b98-9134-4897-b57e-f726979ac670","Type":"ContainerDied","Data":"703d24a72f357a49fd8206c7410772698e13b3afbeea97e6ebbbe56385d40c0d"}
Dec 06 09:14:31 crc kubenswrapper[4895]: I1206 09:14:31.309068 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="703d24a72f357a49fd8206c7410772698e13b3afbeea97e6ebbbe56385d40c0d"
Dec 06 09:14:31 crc kubenswrapper[4895]: I1206 09:14:31.309104 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-3e90-account-create-update-wsvc8"
Dec 06 09:14:32 crc kubenswrapper[4895]: I1206 09:14:32.064561 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4976cbe1-e91d-49d0-9901-d998b029337c" path="/var/lib/kubelet/pods/4976cbe1-e91d-49d0-9901-d998b029337c/volumes"
Dec 06 09:14:32 crc kubenswrapper[4895]: I1206 09:14:32.065698 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92e82a30-45be-425a-ab4e-19491702a3d3" path="/var/lib/kubelet/pods/92e82a30-45be-425a-ab4e-19491702a3d3/volumes"
Dec 06 09:14:32 crc kubenswrapper[4895]: I1206 09:14:32.724258 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-556hd"
Dec 06 09:14:32 crc kubenswrapper[4895]: I1206 09:14:32.724310 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-556hd"
Dec 06 09:14:32 crc kubenswrapper[4895]: I1206 09:14:32.772994 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-556hd"
Dec 06 09:14:32 crc kubenswrapper[4895]: I1206 09:14:32.874120 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-9pczv"]
Dec 06 09:14:32 crc kubenswrapper[4895]: E1206 09:14:32.874737 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="613023d1-135a-4468-8886-71659c103c60" containerName="mariadb-database-create"
Dec 06 09:14:32 crc kubenswrapper[4895]: I1206 09:14:32.874801 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="613023d1-135a-4468-8886-71659c103c60" containerName="mariadb-database-create"
Dec 06 09:14:32 crc kubenswrapper[4895]: E1206 09:14:32.874907 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de09b98-9134-4897-b57e-f726979ac670" containerName="mariadb-account-create-update"
Dec 06 09:14:32 crc kubenswrapper[4895]: I1206 09:14:32.874963 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de09b98-9134-4897-b57e-f726979ac670" containerName="mariadb-account-create-update"
Dec 06 09:14:32 crc kubenswrapper[4895]: I1206 09:14:32.875197 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de09b98-9134-4897-b57e-f726979ac670" containerName="mariadb-account-create-update"
Dec 06 09:14:32 crc kubenswrapper[4895]: I1206 09:14:32.875261 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="613023d1-135a-4468-8886-71659c103c60" containerName="mariadb-database-create"
Dec 06 09:14:32 crc kubenswrapper[4895]: I1206 09:14:32.876001 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-9pczv"
Dec 06 09:14:32 crc kubenswrapper[4895]: I1206 09:14:32.878647 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Dec 06 09:14:32 crc kubenswrapper[4895]: I1206 09:14:32.879278 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-p8t59"
Dec 06 09:14:32 crc kubenswrapper[4895]: I1206 09:14:32.898589 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-9pczv"]
Dec 06 09:14:32 crc kubenswrapper[4895]: I1206 09:14:32.927068 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bba439-5a49-4c4c-919b-d190d062fe1e-combined-ca-bundle\") pod \"manila-db-sync-9pczv\" (UID: \"f2bba439-5a49-4c4c-919b-d190d062fe1e\") " pod="openstack/manila-db-sync-9pczv"
Dec 06 09:14:32 crc kubenswrapper[4895]: I1206 09:14:32.927110 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9zlm\" (UniqueName: \"kubernetes.io/projected/f2bba439-5a49-4c4c-919b-d190d062fe1e-kube-api-access-t9zlm\") pod \"manila-db-sync-9pczv\" (UID: \"f2bba439-5a49-4c4c-919b-d190d062fe1e\") " pod="openstack/manila-db-sync-9pczv"
Dec 06 09:14:32 crc kubenswrapper[4895]: I1206 09:14:32.927169 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f2bba439-5a49-4c4c-919b-d190d062fe1e-job-config-data\") pod \"manila-db-sync-9pczv\" (UID: \"f2bba439-5a49-4c4c-919b-d190d062fe1e\") " pod="openstack/manila-db-sync-9pczv"
Dec 06 09:14:32 crc kubenswrapper[4895]: I1206 09:14:32.927405 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bba439-5a49-4c4c-919b-d190d062fe1e-config-data\") pod \"manila-db-sync-9pczv\" (UID: \"f2bba439-5a49-4c4c-919b-d190d062fe1e\") " pod="openstack/manila-db-sync-9pczv"
Dec 06 09:14:33 crc kubenswrapper[4895]: I1206 09:14:33.029802 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bba439-5a49-4c4c-919b-d190d062fe1e-config-data\") pod \"manila-db-sync-9pczv\" (UID: \"f2bba439-5a49-4c4c-919b-d190d062fe1e\") " pod="openstack/manila-db-sync-9pczv"
Dec 06 09:14:33 crc kubenswrapper[4895]: I1206 09:14:33.029905 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bba439-5a49-4c4c-919b-d190d062fe1e-combined-ca-bundle\") pod \"manila-db-sync-9pczv\" (UID: \"f2bba439-5a49-4c4c-919b-d190d062fe1e\") " pod="openstack/manila-db-sync-9pczv"
Dec 06 09:14:33 crc kubenswrapper[4895]: I1206 09:14:33.029929 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9zlm\" (UniqueName: \"kubernetes.io/projected/f2bba439-5a49-4c4c-919b-d190d062fe1e-kube-api-access-t9zlm\") pod \"manila-db-sync-9pczv\" (UID: \"f2bba439-5a49-4c4c-919b-d190d062fe1e\") " pod="openstack/manila-db-sync-9pczv"
Dec 06 09:14:33 crc kubenswrapper[4895]: I1206 09:14:33.029958 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f2bba439-5a49-4c4c-919b-d190d062fe1e-job-config-data\") pod \"manila-db-sync-9pczv\" (UID: \"f2bba439-5a49-4c4c-919b-d190d062fe1e\") " pod="openstack/manila-db-sync-9pczv"
Dec 06 09:14:33 crc kubenswrapper[4895]: I1206 09:14:33.035262 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bba439-5a49-4c4c-919b-d190d062fe1e-combined-ca-bundle\") pod \"manila-db-sync-9pczv\" (UID: \"f2bba439-5a49-4c4c-919b-d190d062fe1e\") " pod="openstack/manila-db-sync-9pczv"
Dec 06 09:14:33 crc kubenswrapper[4895]: I1206 09:14:33.036067 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bba439-5a49-4c4c-919b-d190d062fe1e-config-data\") pod \"manila-db-sync-9pczv\" (UID: \"f2bba439-5a49-4c4c-919b-d190d062fe1e\") " pod="openstack/manila-db-sync-9pczv"
Dec 06 09:14:33 crc kubenswrapper[4895]: I1206 09:14:33.036945 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f2bba439-5a49-4c4c-919b-d190d062fe1e-job-config-data\") pod \"manila-db-sync-9pczv\" (UID: \"f2bba439-5a49-4c4c-919b-d190d062fe1e\") " pod="openstack/manila-db-sync-9pczv"
Dec 06 09:14:33 crc kubenswrapper[4895]: I1206 09:14:33.050718 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9zlm\" (UniqueName: \"kubernetes.io/projected/f2bba439-5a49-4c4c-919b-d190d062fe1e-kube-api-access-t9zlm\") pod \"manila-db-sync-9pczv\" (UID: \"f2bba439-5a49-4c4c-919b-d190d062fe1e\") " pod="openstack/manila-db-sync-9pczv"
Dec 06 09:14:33 crc kubenswrapper[4895]: I1206 09:14:33.197037 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-9pczv"
Dec 06 09:14:33 crc kubenswrapper[4895]: I1206 09:14:33.408239 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-556hd"
Dec 06 09:14:33 crc kubenswrapper[4895]: I1206 09:14:33.472223 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-556hd"]
Dec 06 09:14:33 crc kubenswrapper[4895]: I1206 09:14:33.796721 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-9pczv"]
Dec 06 09:14:34 crc kubenswrapper[4895]: I1206 09:14:34.355363 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-9pczv" event={"ID":"f2bba439-5a49-4c4c-919b-d190d062fe1e","Type":"ContainerStarted","Data":"ee94f9d037f210739ebfb5b6447099afaf19bb72033668f877f5587aa4cf7e66"}
Dec 06 09:14:35 crc kubenswrapper[4895]: I1206 09:14:35.365956 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-556hd" podUID="7dc31459-5954-4e3c-9369-2b4ee6b374ef" containerName="registry-server" containerID="cri-o://9a1b7bf57d1a741252a390060cdeba6d633229c1d45984e78cba2b43c0b24327" gracePeriod=2
Dec 06 09:14:36 crc kubenswrapper[4895]: I1206 09:14:36.382750 4895 generic.go:334] "Generic (PLEG): container finished" podID="7dc31459-5954-4e3c-9369-2b4ee6b374ef" containerID="9a1b7bf57d1a741252a390060cdeba6d633229c1d45984e78cba2b43c0b24327" exitCode=0
Dec 06 09:14:36 crc kubenswrapper[4895]: I1206 09:14:36.382973 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-556hd" event={"ID":"7dc31459-5954-4e3c-9369-2b4ee6b374ef","Type":"ContainerDied","Data":"9a1b7bf57d1a741252a390060cdeba6d633229c1d45984e78cba2b43c0b24327"}
Dec 06 09:14:38 crc kubenswrapper[4895]: I1206 09:14:38.077199 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-556hd"
Dec 06 09:14:38 crc kubenswrapper[4895]: I1206 09:14:38.142144 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc31459-5954-4e3c-9369-2b4ee6b374ef-catalog-content\") pod \"7dc31459-5954-4e3c-9369-2b4ee6b374ef\" (UID: \"7dc31459-5954-4e3c-9369-2b4ee6b374ef\") "
Dec 06 09:14:38 crc kubenswrapper[4895]: I1206 09:14:38.142202 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlcg9\" (UniqueName: \"kubernetes.io/projected/7dc31459-5954-4e3c-9369-2b4ee6b374ef-kube-api-access-dlcg9\") pod \"7dc31459-5954-4e3c-9369-2b4ee6b374ef\" (UID: \"7dc31459-5954-4e3c-9369-2b4ee6b374ef\") "
Dec 06 09:14:38 crc kubenswrapper[4895]: I1206 09:14:38.142298 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc31459-5954-4e3c-9369-2b4ee6b374ef-utilities\") pod \"7dc31459-5954-4e3c-9369-2b4ee6b374ef\" (UID: \"7dc31459-5954-4e3c-9369-2b4ee6b374ef\") "
Dec 06 09:14:38 crc kubenswrapper[4895]: I1206 09:14:38.144651 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dc31459-5954-4e3c-9369-2b4ee6b374ef-utilities" (OuterVolumeSpecName: "utilities") pod "7dc31459-5954-4e3c-9369-2b4ee6b374ef" (UID: "7dc31459-5954-4e3c-9369-2b4ee6b374ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 09:14:38 crc kubenswrapper[4895]: I1206 09:14:38.152809 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dc31459-5954-4e3c-9369-2b4ee6b374ef-kube-api-access-dlcg9" (OuterVolumeSpecName: "kube-api-access-dlcg9") pod "7dc31459-5954-4e3c-9369-2b4ee6b374ef" (UID: "7dc31459-5954-4e3c-9369-2b4ee6b374ef"). InnerVolumeSpecName "kube-api-access-dlcg9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:14:38 crc kubenswrapper[4895]: I1206 09:14:38.222804 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dc31459-5954-4e3c-9369-2b4ee6b374ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7dc31459-5954-4e3c-9369-2b4ee6b374ef" (UID: "7dc31459-5954-4e3c-9369-2b4ee6b374ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 09:14:38 crc kubenswrapper[4895]: I1206 09:14:38.244144 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlcg9\" (UniqueName: \"kubernetes.io/projected/7dc31459-5954-4e3c-9369-2b4ee6b374ef-kube-api-access-dlcg9\") on node \"crc\" DevicePath \"\""
Dec 06 09:14:38 crc kubenswrapper[4895]: I1206 09:14:38.244173 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc31459-5954-4e3c-9369-2b4ee6b374ef-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 09:14:38 crc kubenswrapper[4895]: I1206 09:14:38.244184 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc31459-5954-4e3c-9369-2b4ee6b374ef-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 09:14:38 crc kubenswrapper[4895]: I1206 09:14:38.406558 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-556hd" event={"ID":"7dc31459-5954-4e3c-9369-2b4ee6b374ef","Type":"ContainerDied","Data":"546d2fdc65cd5bfae245da1bc7511bcdd7ac979a7979e1f5708aa1b937a0619b"}
Dec 06 09:14:38 crc kubenswrapper[4895]: I1206 09:14:38.406617 4895 scope.go:117] "RemoveContainer" containerID="9a1b7bf57d1a741252a390060cdeba6d633229c1d45984e78cba2b43c0b24327"
Dec 06 09:14:38 crc kubenswrapper[4895]: I1206 09:14:38.406775 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-556hd"
Dec 06 09:14:38 crc kubenswrapper[4895]: I1206 09:14:38.444383 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-556hd"]
Dec 06 09:14:38 crc kubenswrapper[4895]: I1206 09:14:38.455971 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-556hd"]
Dec 06 09:14:38 crc kubenswrapper[4895]: I1206 09:14:38.529770 4895 scope.go:117] "RemoveContainer" containerID="045e37438a1c87e864735f2d1df648e5372e810229b559d0da973757ed28d0c6"
Dec 06 09:14:38 crc kubenswrapper[4895]: I1206 09:14:38.578873 4895 scope.go:117] "RemoveContainer" containerID="bfeb86598824a158ab5c52462f9b51476a4b9bd875fd247c1924e332d77b002d"
Dec 06 09:14:39 crc kubenswrapper[4895]: I1206 09:14:39.421440 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-9pczv" event={"ID":"f2bba439-5a49-4c4c-919b-d190d062fe1e","Type":"ContainerStarted","Data":"6604e7b5e8f3874be5f4b4dcb897b3208fc631da7abe023e674d36f2c6270ed8"}
Dec 06 09:14:39 crc kubenswrapper[4895]: I1206 09:14:39.447451 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-9pczv" podStartSLOduration=2.6599898509999997 podStartE2EDuration="7.447429562s" podCreationTimestamp="2025-12-06 09:14:32 +0000 UTC" firstStartedPulling="2025-12-06 09:14:33.810722704 +0000 UTC m=+8236.212111574" lastFinishedPulling="2025-12-06 09:14:38.598162415 +0000 UTC m=+8240.999551285" observedRunningTime="2025-12-06 09:14:39.442673603 +0000 UTC m=+8241.844062483" watchObservedRunningTime="2025-12-06 09:14:39.447429562 +0000 UTC m=+8241.848818432"
Dec 06 09:14:40 crc kubenswrapper[4895]: I1206 09:14:40.066668 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dc31459-5954-4e3c-9369-2b4ee6b374ef" path="/var/lib/kubelet/pods/7dc31459-5954-4e3c-9369-2b4ee6b374ef/volumes"
Dec 06 09:14:41 crc kubenswrapper[4895]: I1206 09:14:41.082423 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-g69zh"]
Dec 06 09:14:41 crc kubenswrapper[4895]: I1206 09:14:41.100835 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-g69zh"]
Dec 06 09:14:41 crc kubenswrapper[4895]: I1206 09:14:41.445933 4895 generic.go:334] "Generic (PLEG): container finished" podID="f2bba439-5a49-4c4c-919b-d190d062fe1e" containerID="6604e7b5e8f3874be5f4b4dcb897b3208fc631da7abe023e674d36f2c6270ed8" exitCode=0
Dec 06 09:14:41 crc kubenswrapper[4895]: I1206 09:14:41.445977 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-9pczv" event={"ID":"f2bba439-5a49-4c4c-919b-d190d062fe1e","Type":"ContainerDied","Data":"6604e7b5e8f3874be5f4b4dcb897b3208fc631da7abe023e674d36f2c6270ed8"}
Dec 06 09:14:42 crc kubenswrapper[4895]: I1206 09:14:42.064200 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75653761-0f58-43f8-a412-b84731fcb7d6" path="/var/lib/kubelet/pods/75653761-0f58-43f8-a412-b84731fcb7d6/volumes"
Dec 06 09:14:42 crc kubenswrapper[4895]: I1206 09:14:42.929206 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-9pczv"
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.051438 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9zlm\" (UniqueName: \"kubernetes.io/projected/f2bba439-5a49-4c4c-919b-d190d062fe1e-kube-api-access-t9zlm\") pod \"f2bba439-5a49-4c4c-919b-d190d062fe1e\" (UID: \"f2bba439-5a49-4c4c-919b-d190d062fe1e\") "
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.051530 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bba439-5a49-4c4c-919b-d190d062fe1e-config-data\") pod \"f2bba439-5a49-4c4c-919b-d190d062fe1e\" (UID: \"f2bba439-5a49-4c4c-919b-d190d062fe1e\") "
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.051714 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f2bba439-5a49-4c4c-919b-d190d062fe1e-job-config-data\") pod \"f2bba439-5a49-4c4c-919b-d190d062fe1e\" (UID: \"f2bba439-5a49-4c4c-919b-d190d062fe1e\") "
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.051834 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bba439-5a49-4c4c-919b-d190d062fe1e-combined-ca-bundle\") pod \"f2bba439-5a49-4c4c-919b-d190d062fe1e\" (UID: \"f2bba439-5a49-4c4c-919b-d190d062fe1e\") "
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.056837 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bba439-5a49-4c4c-919b-d190d062fe1e-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "f2bba439-5a49-4c4c-919b-d190d062fe1e" (UID: "f2bba439-5a49-4c4c-919b-d190d062fe1e"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.057788 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2bba439-5a49-4c4c-919b-d190d062fe1e-kube-api-access-t9zlm" (OuterVolumeSpecName: "kube-api-access-t9zlm") pod "f2bba439-5a49-4c4c-919b-d190d062fe1e" (UID: "f2bba439-5a49-4c4c-919b-d190d062fe1e"). InnerVolumeSpecName "kube-api-access-t9zlm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.060317 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bba439-5a49-4c4c-919b-d190d062fe1e-config-data" (OuterVolumeSpecName: "config-data") pod "f2bba439-5a49-4c4c-919b-d190d062fe1e" (UID: "f2bba439-5a49-4c4c-919b-d190d062fe1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.083955 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bba439-5a49-4c4c-919b-d190d062fe1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2bba439-5a49-4c4c-919b-d190d062fe1e" (UID: "f2bba439-5a49-4c4c-919b-d190d062fe1e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.154948 4895 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f2bba439-5a49-4c4c-919b-d190d062fe1e-job-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.154994 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bba439-5a49-4c4c-919b-d190d062fe1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.155008 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9zlm\" (UniqueName: \"kubernetes.io/projected/f2bba439-5a49-4c4c-919b-d190d062fe1e-kube-api-access-t9zlm\") on node \"crc\" DevicePath \"\""
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.155023 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bba439-5a49-4c4c-919b-d190d062fe1e-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.467512 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-9pczv" event={"ID":"f2bba439-5a49-4c4c-919b-d190d062fe1e","Type":"ContainerDied","Data":"ee94f9d037f210739ebfb5b6447099afaf19bb72033668f877f5587aa4cf7e66"}
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.469111 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee94f9d037f210739ebfb5b6447099afaf19bb72033668f877f5587aa4cf7e66"
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.467595 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-9pczv"
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.877718 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"]
Dec 06 09:14:43 crc kubenswrapper[4895]: E1206 09:14:43.878185 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc31459-5954-4e3c-9369-2b4ee6b374ef" containerName="extract-utilities"
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.878204 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc31459-5954-4e3c-9369-2b4ee6b374ef" containerName="extract-utilities"
Dec 06 09:14:43 crc kubenswrapper[4895]: E1206 09:14:43.878214 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc31459-5954-4e3c-9369-2b4ee6b374ef" containerName="extract-content"
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.878221 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc31459-5954-4e3c-9369-2b4ee6b374ef" containerName="extract-content"
Dec 06 09:14:43 crc kubenswrapper[4895]: E1206 09:14:43.878261 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bba439-5a49-4c4c-919b-d190d062fe1e" containerName="manila-db-sync"
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.878268 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bba439-5a49-4c4c-919b-d190d062fe1e" containerName="manila-db-sync"
Dec 06 09:14:43 crc kubenswrapper[4895]: E1206 09:14:43.878285 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc31459-5954-4e3c-9369-2b4ee6b374ef" containerName="registry-server"
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.878292 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc31459-5954-4e3c-9369-2b4ee6b374ef" containerName="registry-server"
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.878502 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bba439-5a49-4c4c-919b-d190d062fe1e" containerName="manila-db-sync"
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.878526 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dc31459-5954-4e3c-9369-2b4ee6b374ef" containerName="registry-server"
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.879587 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.885449 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-p8t59"
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.885645 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data"
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.885904 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.886059 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts"
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.895540 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.977416 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjnsh\" (UniqueName: \"kubernetes.io/projected/742d4a1b-1ac4-46ad-9cff-53f42c45e3f5-kube-api-access-vjnsh\") pod \"manila-scheduler-0\" (UID: \"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5\") " pod="openstack/manila-scheduler-0"
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.977768 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/742d4a1b-1ac4-46ad-9cff-53f42c45e3f5-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5\") " pod="openstack/manila-scheduler-0"
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.977844 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/742d4a1b-1ac4-46ad-9cff-53f42c45e3f5-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5\") " pod="openstack/manila-scheduler-0"
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.977932 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/742d4a1b-1ac4-46ad-9cff-53f42c45e3f5-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5\") " pod="openstack/manila-scheduler-0"
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.977973 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/742d4a1b-1ac4-46ad-9cff-53f42c45e3f5-config-data\") pod \"manila-scheduler-0\" (UID: \"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5\") " pod="openstack/manila-scheduler-0"
Dec 06 09:14:43 crc kubenswrapper[4895]: I1206 09:14:43.977998 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/742d4a1b-1ac4-46ad-9cff-53f42c45e3f5-scripts\") pod \"manila-scheduler-0\" (UID: \"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5\") " pod="openstack/manila-scheduler-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.027645 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"]
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.032038 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.038725 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.075946 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef"
Dec 06 09:14:44 crc kubenswrapper[4895]: E1206 09:14:44.076200 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.079281 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cf8e2a3-dacb-435a-8869-fcd5949b6299-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.079355 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/742d4a1b-1ac4-46ad-9cff-53f42c45e3f5-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5\") " pod="openstack/manila-scheduler-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.079376 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cf8e2a3-dacb-435a-8869-fcd5949b6299-scripts\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.079404 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/742d4a1b-1ac4-46ad-9cff-53f42c45e3f5-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5\") " pod="openstack/manila-scheduler-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.079444 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cf8e2a3-dacb-435a-8869-fcd5949b6299-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.079493 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/742d4a1b-1ac4-46ad-9cff-53f42c45e3f5-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5\") " pod="openstack/manila-scheduler-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.079516 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/742d4a1b-1ac4-46ad-9cff-53f42c45e3f5-config-data\") pod \"manila-scheduler-0\" (UID: \"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5\") " pod="openstack/manila-scheduler-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.079513 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/742d4a1b-1ac4-46ad-9cff-53f42c45e3f5-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5\") " pod="openstack/manila-scheduler-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.079531 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/742d4a1b-1ac4-46ad-9cff-53f42c45e3f5-scripts\") pod \"manila-scheduler-0\" (UID: \"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5\") " pod="openstack/manila-scheduler-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.079620 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llxzr\" (UniqueName: \"kubernetes.io/projected/7cf8e2a3-dacb-435a-8869-fcd5949b6299-kube-api-access-llxzr\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.079741 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf8e2a3-dacb-435a-8869-fcd5949b6299-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.079808 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjnsh\" (UniqueName: \"kubernetes.io/projected/742d4a1b-1ac4-46ad-9cff-53f42c45e3f5-kube-api-access-vjnsh\") pod \"manila-scheduler-0\" (UID: \"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5\") " pod="openstack/manila-scheduler-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.079959 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/7cf8e2a3-dacb-435a-8869-fcd5949b6299-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.080010 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7cf8e2a3-dacb-435a-8869-fcd5949b6299-ceph\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.080034 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf8e2a3-dacb-435a-8869-fcd5949b6299-config-data\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.090272 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
\"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5\") " pod="openstack/manila-scheduler-0" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.097342 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/742d4a1b-1ac4-46ad-9cff-53f42c45e3f5-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5\") " pod="openstack/manila-scheduler-0" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.099331 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/742d4a1b-1ac4-46ad-9cff-53f42c45e3f5-scripts\") pod \"manila-scheduler-0\" (UID: \"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5\") " pod="openstack/manila-scheduler-0" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.113699 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjnsh\" (UniqueName: \"kubernetes.io/projected/742d4a1b-1ac4-46ad-9cff-53f42c45e3f5-kube-api-access-vjnsh\") pod \"manila-scheduler-0\" (UID: \"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5\") " pod="openstack/manila-scheduler-0" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.113762 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c9f6fd977-kvrq4"] Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.114037 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/742d4a1b-1ac4-46ad-9cff-53f42c45e3f5-config-data\") pod \"manila-scheduler-0\" (UID: \"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5\") " pod="openstack/manila-scheduler-0" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.116258 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.156677 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c9f6fd977-kvrq4"] Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.183593 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4thtb\" (UniqueName: \"kubernetes.io/projected/9751d574-27fb-414c-bfc1-c3c22ddf675d-kube-api-access-4thtb\") pod \"dnsmasq-dns-6c9f6fd977-kvrq4\" (UID: \"9751d574-27fb-414c-bfc1-c3c22ddf675d\") " pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.183666 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf8e2a3-dacb-435a-8869-fcd5949b6299-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.183716 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9f6fd977-kvrq4\" (UID: \"9751d574-27fb-414c-bfc1-c3c22ddf675d\") " pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.183759 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-dns-svc\") pod \"dnsmasq-dns-6c9f6fd977-kvrq4\" (UID: \"9751d574-27fb-414c-bfc1-c3c22ddf675d\") " pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.183788 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-config\") pod \"dnsmasq-dns-6c9f6fd977-kvrq4\" (UID: \"9751d574-27fb-414c-bfc1-c3c22ddf675d\") " pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.183805 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/7cf8e2a3-dacb-435a-8869-fcd5949b6299-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.183832 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7cf8e2a3-dacb-435a-8869-fcd5949b6299-ceph\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.183852 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf8e2a3-dacb-435a-8869-fcd5949b6299-config-data\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.183882 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/7cf8e2a3-dacb-435a-8869-fcd5949b6299-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.183912 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cf8e2a3-dacb-435a-8869-fcd5949b6299-scripts\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.183930 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9f6fd977-kvrq4\" (UID: \"9751d574-27fb-414c-bfc1-c3c22ddf675d\") " pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.183960 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cf8e2a3-dacb-435a-8869-fcd5949b6299-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.183996 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llxzr\" (UniqueName: \"kubernetes.io/projected/7cf8e2a3-dacb-435a-8869-fcd5949b6299-kube-api-access-llxzr\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.184431 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/7cf8e2a3-dacb-435a-8869-fcd5949b6299-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.185754 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cf8e2a3-dacb-435a-8869-fcd5949b6299-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.188657 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cf8e2a3-dacb-435a-8869-fcd5949b6299-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.189003 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7cf8e2a3-dacb-435a-8869-fcd5949b6299-ceph\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.194165 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf8e2a3-dacb-435a-8869-fcd5949b6299-config-data\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0" Dec 06 
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.195570 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf8e2a3-dacb-435a-8869-fcd5949b6299-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.200916 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cf8e2a3-dacb-435a-8869-fcd5949b6299-scripts\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.208993 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.209796 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llxzr\" (UniqueName: \"kubernetes.io/projected/7cf8e2a3-dacb-435a-8869-fcd5949b6299-kube-api-access-llxzr\") pod \"manila-share-share1-0\" (UID: \"7cf8e2a3-dacb-435a-8869-fcd5949b6299\") " pod="openstack/manila-share-share1-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.252047 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"]
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.253991 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.255857 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.263612 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.286289 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7f19d971-38ba-4e49-a099-6b657324d62e-etc-machine-id\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.286354 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9f6fd977-kvrq4\" (UID: \"9751d574-27fb-414c-bfc1-c3c22ddf675d\") " pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.286374 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f19d971-38ba-4e49-a099-6b657324d62e-logs\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.286542 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f19d971-38ba-4e49-a099-6b657324d62e-scripts\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.286594 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhzfd\" (UniqueName: \"kubernetes.io/projected/7f19d971-38ba-4e49-a099-6b657324d62e-kube-api-access-jhzfd\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.286647 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4thtb\" (UniqueName: \"kubernetes.io/projected/9751d574-27fb-414c-bfc1-c3c22ddf675d-kube-api-access-4thtb\") pod \"dnsmasq-dns-6c9f6fd977-kvrq4\" (UID: \"9751d574-27fb-414c-bfc1-c3c22ddf675d\") " pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.286698 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f19d971-38ba-4e49-a099-6b657324d62e-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.286783 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f19d971-38ba-4e49-a099-6b657324d62e-config-data\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.286823 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9f6fd977-kvrq4\" (UID: \"9751d574-27fb-414c-bfc1-c3c22ddf675d\") " pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.286904 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-dns-svc\") pod \"dnsmasq-dns-6c9f6fd977-kvrq4\" (UID: \"9751d574-27fb-414c-bfc1-c3c22ddf675d\") " pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.286973 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-config\") pod \"dnsmasq-dns-6c9f6fd977-kvrq4\" (UID: \"9751d574-27fb-414c-bfc1-c3c22ddf675d\") " pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.287074 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f19d971-38ba-4e49-a099-6b657324d62e-config-data-custom\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.289034 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9f6fd977-kvrq4\" (UID: \"9751d574-27fb-414c-bfc1-c3c22ddf675d\") " pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.289051 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9f6fd977-kvrq4\" (UID: \"9751d574-27fb-414c-bfc1-c3c22ddf675d\") " pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.289811 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-config\") pod \"dnsmasq-dns-6c9f6fd977-kvrq4\" (UID: \"9751d574-27fb-414c-bfc1-c3c22ddf675d\") " pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.289827 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-dns-svc\") pod \"dnsmasq-dns-6c9f6fd977-kvrq4\" (UID: \"9751d574-27fb-414c-bfc1-c3c22ddf675d\") " pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.309287 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4thtb\" (UniqueName: \"kubernetes.io/projected/9751d574-27fb-414c-bfc1-c3c22ddf675d-kube-api-access-4thtb\") pod \"dnsmasq-dns-6c9f6fd977-kvrq4\" (UID: \"9751d574-27fb-414c-bfc1-c3c22ddf675d\") " pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.342262 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.381632 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.389732 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f19d971-38ba-4e49-a099-6b657324d62e-scripts\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.389790 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhzfd\" (UniqueName: \"kubernetes.io/projected/7f19d971-38ba-4e49-a099-6b657324d62e-kube-api-access-jhzfd\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.389841 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f19d971-38ba-4e49-a099-6b657324d62e-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.389885 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f19d971-38ba-4e49-a099-6b657324d62e-config-data\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.390013 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f19d971-38ba-4e49-a099-6b657324d62e-config-data-custom\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.390062 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7f19d971-38ba-4e49-a099-6b657324d62e-etc-machine-id\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.390118 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f19d971-38ba-4e49-a099-6b657324d62e-logs\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.390617 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f19d971-38ba-4e49-a099-6b657324d62e-logs\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.394279 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f19d971-38ba-4e49-a099-6b657324d62e-scripts\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.395956 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f19d971-38ba-4e49-a099-6b657324d62e-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.396024 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7f19d971-38ba-4e49-a099-6b657324d62e-etc-machine-id\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.401437 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f19d971-38ba-4e49-a099-6b657324d62e-config-data\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.406349 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f19d971-38ba-4e49-a099-6b657324d62e-config-data-custom\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.413115 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhzfd\" (UniqueName: \"kubernetes.io/projected/7f19d971-38ba-4e49-a099-6b657324d62e-kube-api-access-jhzfd\") pod \"manila-api-0\" (UID: \"7f19d971-38ba-4e49-a099-6b657324d62e\") " pod="openstack/manila-api-0"
Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.667271 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Need to start a new one" pod="openstack/manila-api-0" Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.739802 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Dec 06 09:14:44 crc kubenswrapper[4895]: I1206 09:14:44.971563 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c9f6fd977-kvrq4"] Dec 06 09:14:44 crc kubenswrapper[4895]: W1206 09:14:44.997283 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9751d574_27fb_414c_bfc1_c3c22ddf675d.slice/crio-1abcc609a2dcb39857d79c445b77c5981f60252cc2e7b96baf6be8c5c58662a0 WatchSource:0}: Error finding container 1abcc609a2dcb39857d79c445b77c5981f60252cc2e7b96baf6be8c5c58662a0: Status 404 returned error can't find the container with id 1abcc609a2dcb39857d79c445b77c5981f60252cc2e7b96baf6be8c5c58662a0 Dec 06 09:14:45 crc kubenswrapper[4895]: I1206 09:14:45.041593 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Dec 06 09:14:45 crc kubenswrapper[4895]: I1206 09:14:45.300906 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Dec 06 09:14:45 crc kubenswrapper[4895]: I1206 09:14:45.528588 4895 generic.go:334] "Generic (PLEG): container finished" podID="9751d574-27fb-414c-bfc1-c3c22ddf675d" containerID="4cb51266d9a8929c79ea732b30673cbc0fba6ca56f3c6180e0fe6a7ab2ef00c3" exitCode=0 Dec 06 09:14:45 crc kubenswrapper[4895]: I1206 09:14:45.528674 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4" event={"ID":"9751d574-27fb-414c-bfc1-c3c22ddf675d","Type":"ContainerDied","Data":"4cb51266d9a8929c79ea732b30673cbc0fba6ca56f3c6180e0fe6a7ab2ef00c3"} Dec 06 09:14:45 crc kubenswrapper[4895]: I1206 09:14:45.528708 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4" event={"ID":"9751d574-27fb-414c-bfc1-c3c22ddf675d","Type":"ContainerStarted","Data":"1abcc609a2dcb39857d79c445b77c5981f60252cc2e7b96baf6be8c5c58662a0"} Dec 06 09:14:45 crc kubenswrapper[4895]: I1206 09:14:45.530310 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"7f19d971-38ba-4e49-a099-6b657324d62e","Type":"ContainerStarted","Data":"9fd76394e7102c3fa31eaa1c9430dedc83bbd2fe01f42a502db5f42185b64621"} Dec 06 09:14:45 crc kubenswrapper[4895]: I1206 09:14:45.542724 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5","Type":"ContainerStarted","Data":"95ccff5654dd2a868b7fee366521531c56426a09c0088550de128d4d41fd4c51"} Dec 06 09:14:45 crc kubenswrapper[4895]: I1206 09:14:45.552601 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"7cf8e2a3-dacb-435a-8869-fcd5949b6299","Type":"ContainerStarted","Data":"6be7326ebbcc1c6b8f0c301d9466dd1f326ea0fcd978b216c34fd7f7ab9c885f"} Dec 06 09:14:46 crc kubenswrapper[4895]: I1206 09:14:46.573750 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5","Type":"ContainerStarted","Data":"fa066bb3f66886ea0e35964b6e9fc32a58532fa891187a17a2fd62e82a1b2f89"} Dec 06 09:14:46 crc kubenswrapper[4895]: I1206 09:14:46.574461 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"742d4a1b-1ac4-46ad-9cff-53f42c45e3f5","Type":"ContainerStarted","Data":"3c7ffe5d2b1e1f871e5b404f9121bdf8d0931a1fb4a2bdff6a32e94b7751bfc7"} Dec 06 09:14:46 crc kubenswrapper[4895]: I1206 09:14:46.581322 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4" event={"ID":"9751d574-27fb-414c-bfc1-c3c22ddf675d","Type":"ContainerStarted","Data":"64421d72f741bd029b2ff5caca76fa228a9395c65ffed4ede6d1c4654adfae83"} Dec 06 09:14:46 crc kubenswrapper[4895]: I1206 09:14:46.581521 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4" Dec 06 09:14:46 crc kubenswrapper[4895]: I1206 09:14:46.586378 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"7f19d971-38ba-4e49-a099-6b657324d62e","Type":"ContainerStarted","Data":"2acbc8c9063aa1aeea609433979bf760485098a07706d673c04f3913256913b6"} Dec 06 09:14:46 crc kubenswrapper[4895]: I1206 09:14:46.587425 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Dec 06 09:14:46 crc kubenswrapper[4895]: I1206 09:14:46.614461 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.33188906 podStartE2EDuration="3.614438842s" podCreationTimestamp="2025-12-06 09:14:43 +0000 UTC" firstStartedPulling="2025-12-06 09:14:44.763271868 +0000 UTC m=+8247.164660738" lastFinishedPulling="2025-12-06 09:14:45.04582164 +0000 UTC m=+8247.447210520" observedRunningTime="2025-12-06 09:14:46.592865093 +0000 UTC m=+8248.994253983" watchObservedRunningTime="2025-12-06 09:14:46.614438842 +0000 UTC m=+8249.015827722" Dec 06 09:14:46 crc kubenswrapper[4895]: I1206 09:14:46.643574 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4" podStartSLOduration=2.643549014 podStartE2EDuration="2.643549014s" podCreationTimestamp="2025-12-06 09:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:14:46.611454812 +0000 UTC m=+8249.012843702" watchObservedRunningTime="2025-12-06 09:14:46.643549014 +0000 UTC m=+8249.044937884" Dec 06 09:14:46 crc kubenswrapper[4895]: I1206 09:14:46.651298 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=2.6512732420000003 podStartE2EDuration="2.651273242s" podCreationTimestamp="2025-12-06 09:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:14:46.631656756 +0000 UTC m=+8249.033045646" watchObservedRunningTime="2025-12-06 09:14:46.651273242 +0000 UTC m=+8249.052662112" Dec 06 09:14:47 crc kubenswrapper[4895]: I1206 09:14:47.612993 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"7f19d971-38ba-4e49-a099-6b657324d62e","Type":"ContainerStarted","Data":"9dd639c1ff48552d5b90e6afc61d1e5cdbdcfb6a34d8a2c5fb0bb7f648f90f27"} Dec 06 09:14:49 crc kubenswrapper[4895]: I1206 09:14:49.004081 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 09:14:49 crc kubenswrapper[4895]: I1206 09:14:49.004900 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerName="sg-core" 
containerID="cri-o://2b7abb2d4a890c24a7e26bb3fd968532a18b9cf9db28a6be089a4ce35d5990ad" gracePeriod=30 Dec 06 09:14:49 crc kubenswrapper[4895]: I1206 09:14:49.004927 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerName="proxy-httpd" containerID="cri-o://8fd41e9d358c6ca38b90739453f7c279685db6bfbb6bec9249ea215fd57401af" gracePeriod=30 Dec 06 09:14:49 crc kubenswrapper[4895]: I1206 09:14:49.004960 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerName="ceilometer-notification-agent" containerID="cri-o://766edb8ad69579d531f8921aa26e9ceeac2a4f85daf561f32697f66f6995eed8" gracePeriod=30 Dec 06 09:14:49 crc kubenswrapper[4895]: I1206 09:14:49.004863 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerName="ceilometer-central-agent" containerID="cri-o://751a5f64bc7efb3ac753735e19e7c595901c975f3f8c03293c2cbe0908d56c85" gracePeriod=30 Dec 06 09:14:49 crc kubenswrapper[4895]: I1206 09:14:49.017840 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.1.138:3000/\": EOF" Dec 06 09:14:49 crc kubenswrapper[4895]: I1206 09:14:49.647157 4895 generic.go:334] "Generic (PLEG): container finished" podID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerID="8fd41e9d358c6ca38b90739453f7c279685db6bfbb6bec9249ea215fd57401af" exitCode=0 Dec 06 09:14:49 crc kubenswrapper[4895]: I1206 09:14:49.647552 4895 generic.go:334] "Generic (PLEG): container finished" podID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerID="2b7abb2d4a890c24a7e26bb3fd968532a18b9cf9db28a6be089a4ce35d5990ad" exitCode=2 Dec 06 09:14:49 crc kubenswrapper[4895]: I1206 09:14:49.647567 4895 generic.go:334] "Generic (PLEG): container finished" podID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerID="751a5f64bc7efb3ac753735e19e7c595901c975f3f8c03293c2cbe0908d56c85" exitCode=0 Dec 06 09:14:49 crc kubenswrapper[4895]: I1206 09:14:49.647242 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bd046f7-9ee3-42ef-96e1-89706429d498","Type":"ContainerDied","Data":"8fd41e9d358c6ca38b90739453f7c279685db6bfbb6bec9249ea215fd57401af"} Dec 06 09:14:49 crc kubenswrapper[4895]: I1206 09:14:49.647607 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bd046f7-9ee3-42ef-96e1-89706429d498","Type":"ContainerDied","Data":"2b7abb2d4a890c24a7e26bb3fd968532a18b9cf9db28a6be089a4ce35d5990ad"} Dec 06 09:14:49 crc kubenswrapper[4895]: I1206 09:14:49.647622 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bd046f7-9ee3-42ef-96e1-89706429d498","Type":"ContainerDied","Data":"751a5f64bc7efb3ac753735e19e7c595901c975f3f8c03293c2cbe0908d56c85"} Dec 06 09:14:52 crc kubenswrapper[4895]: I1206 09:14:52.681322 4895 generic.go:334] "Generic (PLEG): container finished" podID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerID="766edb8ad69579d531f8921aa26e9ceeac2a4f85daf561f32697f66f6995eed8" exitCode=0 Dec 06 09:14:52 crc kubenswrapper[4895]: I1206 09:14:52.681406 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2bd046f7-9ee3-42ef-96e1-89706429d498","Type":"ContainerDied","Data":"766edb8ad69579d531f8921aa26e9ceeac2a4f85daf561f32697f66f6995eed8"} Dec 06 09:14:52 crc kubenswrapper[4895]: I1206 09:14:52.824627 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 09:14:52 crc kubenswrapper[4895]: I1206 09:14:52.916107 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwzk5\" (UniqueName: \"kubernetes.io/projected/2bd046f7-9ee3-42ef-96e1-89706429d498-kube-api-access-dwzk5\") pod \"2bd046f7-9ee3-42ef-96e1-89706429d498\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " Dec 06 09:14:52 crc kubenswrapper[4895]: I1206 09:14:52.916191 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bd046f7-9ee3-42ef-96e1-89706429d498-run-httpd\") pod \"2bd046f7-9ee3-42ef-96e1-89706429d498\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " Dec 06 09:14:52 crc kubenswrapper[4895]: I1206 09:14:52.916251 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-combined-ca-bundle\") pod \"2bd046f7-9ee3-42ef-96e1-89706429d498\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " Dec 06 09:14:52 crc kubenswrapper[4895]: I1206 09:14:52.916298 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-config-data\") pod \"2bd046f7-9ee3-42ef-96e1-89706429d498\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " Dec 06 09:14:52 crc kubenswrapper[4895]: I1206 09:14:52.916554 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bd046f7-9ee3-42ef-96e1-89706429d498-log-httpd\") pod \"2bd046f7-9ee3-42ef-96e1-89706429d498\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " Dec 06 09:14:52 crc kubenswrapper[4895]: I1206 09:14:52.916590 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-scripts\") pod \"2bd046f7-9ee3-42ef-96e1-89706429d498\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " Dec 06 09:14:52 crc kubenswrapper[4895]: I1206 09:14:52.916623 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-sg-core-conf-yaml\") pod \"2bd046f7-9ee3-42ef-96e1-89706429d498\" (UID: \"2bd046f7-9ee3-42ef-96e1-89706429d498\") " Dec 06 09:14:52 crc kubenswrapper[4895]: I1206 09:14:52.918346 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bd046f7-9ee3-42ef-96e1-89706429d498-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2bd046f7-9ee3-42ef-96e1-89706429d498" (UID: "2bd046f7-9ee3-42ef-96e1-89706429d498"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:14:52 crc kubenswrapper[4895]: I1206 09:14:52.918868 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bd046f7-9ee3-42ef-96e1-89706429d498-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2bd046f7-9ee3-42ef-96e1-89706429d498" (UID: "2bd046f7-9ee3-42ef-96e1-89706429d498"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:14:52 crc kubenswrapper[4895]: I1206 09:14:52.924614 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-scripts" (OuterVolumeSpecName: "scripts") pod "2bd046f7-9ee3-42ef-96e1-89706429d498" (UID: "2bd046f7-9ee3-42ef-96e1-89706429d498"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:14:52 crc kubenswrapper[4895]: I1206 09:14:52.924778 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd046f7-9ee3-42ef-96e1-89706429d498-kube-api-access-dwzk5" (OuterVolumeSpecName: "kube-api-access-dwzk5") pod "2bd046f7-9ee3-42ef-96e1-89706429d498" (UID: "2bd046f7-9ee3-42ef-96e1-89706429d498"). InnerVolumeSpecName "kube-api-access-dwzk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:14:52 crc kubenswrapper[4895]: I1206 09:14:52.956037 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2bd046f7-9ee3-42ef-96e1-89706429d498" (UID: "2bd046f7-9ee3-42ef-96e1-89706429d498"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:14:52 crc kubenswrapper[4895]: I1206 09:14:52.991924 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2bd046f7-9ee3-42ef-96e1-89706429d498" (UID: "2bd046f7-9ee3-42ef-96e1-89706429d498"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.018871 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwzk5\" (UniqueName: \"kubernetes.io/projected/2bd046f7-9ee3-42ef-96e1-89706429d498-kube-api-access-dwzk5\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.018900 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bd046f7-9ee3-42ef-96e1-89706429d498-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.018913 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.018921 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bd046f7-9ee3-42ef-96e1-89706429d498-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.018930 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.018940 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.022822 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-config-data" (OuterVolumeSpecName: "config-data") pod "2bd046f7-9ee3-42ef-96e1-89706429d498" (UID: "2bd046f7-9ee3-42ef-96e1-89706429d498"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.121120 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd046f7-9ee3-42ef-96e1-89706429d498-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.694725 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bd046f7-9ee3-42ef-96e1-89706429d498","Type":"ContainerDied","Data":"834d0e6c3bef4c70cc438aa75826fc4308ed9abfbdb1ba11e61c210e7472bdfb"} Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.694750 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.695081 4895 scope.go:117] "RemoveContainer" containerID="8fd41e9d358c6ca38b90739453f7c279685db6bfbb6bec9249ea215fd57401af" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.698669 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"7cf8e2a3-dacb-435a-8869-fcd5949b6299","Type":"ContainerStarted","Data":"11d5beeec5b7ba0963557dae0aa6bf3f6b17a9cac63c90e8bd73e79bc6772987"} Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.698717 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"7cf8e2a3-dacb-435a-8869-fcd5949b6299","Type":"ContainerStarted","Data":"d12d6b2cf5cc61337cc15a4dbeab91ee2ac1c7e8035e937e0ebac031af1404e3"} Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.718994 4895 scope.go:117] "RemoveContainer" containerID="2b7abb2d4a890c24a7e26bb3fd968532a18b9cf9db28a6be089a4ce35d5990ad" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.731170 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.297066196 podStartE2EDuration="10.731147653s" podCreationTimestamp="2025-12-06 09:14:43 +0000 UTC" firstStartedPulling="2025-12-06 09:14:45.043348663 +0000 UTC m=+8247.444737533" lastFinishedPulling="2025-12-06 09:14:52.47743012 +0000 UTC m=+8254.878818990" observedRunningTime="2025-12-06 09:14:53.72545801 +0000 UTC m=+8256.126846900" watchObservedRunningTime="2025-12-06 09:14:53.731147653 +0000 UTC m=+8256.132536523" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.754328 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.758691 4895 scope.go:117] "RemoveContainer" containerID="766edb8ad69579d531f8921aa26e9ceeac2a4f85daf561f32697f66f6995eed8" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.777438 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.796214 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 09:14:53 crc kubenswrapper[4895]: E1206 09:14:53.796871 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerName="ceilometer-central-agent" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.796893 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerName="ceilometer-central-agent" Dec 06 09:14:53 crc kubenswrapper[4895]: E1206 09:14:53.796933 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerName="proxy-httpd" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.796941 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerName="proxy-httpd" Dec 06 09:14:53 crc kubenswrapper[4895]: E1206 09:14:53.796951 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerName="sg-core" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.796957 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerName="sg-core" Dec 06 09:14:53 crc kubenswrapper[4895]: E1206 09:14:53.796981 4895 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerName="ceilometer-notification-agent" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.796987 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerName="ceilometer-notification-agent" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.797196 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerName="ceilometer-central-agent" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.797220 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerName="sg-core" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.797233 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerName="proxy-httpd" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.797247 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd046f7-9ee3-42ef-96e1-89706429d498" containerName="ceilometer-notification-agent" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.801312 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.804956 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.805287 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.810268 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.820563 4895 scope.go:117] "RemoveContainer" containerID="751a5f64bc7efb3ac753735e19e7c595901c975f3f8c03293c2cbe0908d56c85" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.940155 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/190342c5-3981-449c-b18e-d6c50a550d0f-run-httpd\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.940228 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/190342c5-3981-449c-b18e-d6c50a550d0f-log-httpd\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.940271 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-scripts\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.940288 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-config-data\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.940305 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.940386 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdlhq\" (UniqueName: \"kubernetes.io/projected/190342c5-3981-449c-b18e-d6c50a550d0f-kube-api-access-fdlhq\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:53 crc kubenswrapper[4895]: I1206 09:14:53.940409 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.042040 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/190342c5-3981-449c-b18e-d6c50a550d0f-run-httpd\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.042115 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/190342c5-3981-449c-b18e-d6c50a550d0f-log-httpd\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.042149 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-scripts\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.042163 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-config-data\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.042184 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.042266 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdlhq\" (UniqueName: \"kubernetes.io/projected/190342c5-3981-449c-b18e-d6c50a550d0f-kube-api-access-fdlhq\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.042290 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:54 crc 
kubenswrapper[4895]: I1206 09:14:54.043290 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/190342c5-3981-449c-b18e-d6c50a550d0f-run-httpd\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.043543 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/190342c5-3981-449c-b18e-d6c50a550d0f-log-httpd\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.048355 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-scripts\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.049258 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.049350 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.058803 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-config-data\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.068644 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdlhq\" (UniqueName: \"kubernetes.io/projected/190342c5-3981-449c-b18e-d6c50a550d0f-kube-api-access-fdlhq\") pod \"ceilometer-0\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") " pod="openstack/ceilometer-0" Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.071754 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bd046f7-9ee3-42ef-96e1-89706429d498" path="/var/lib/kubelet/pods/2bd046f7-9ee3-42ef-96e1-89706429d498/volumes" Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.137989 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.210149 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.345669 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4" Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.382239 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.477631 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84ff7868ff-6874d"] Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.478001 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84ff7868ff-6874d" podUID="879b9282-5648-4b4d-b93c-0272225d0caa" containerName="dnsmasq-dns" containerID="cri-o://473cfd9ffe81fdb89920d89f828e46407fdf90ef17991c7ab3038fb3e0dd01a2" gracePeriod=10 Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.676126 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 09:14:54 crc kubenswrapper[4895]: W1206 09:14:54.688054 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod190342c5_3981_449c_b18e_d6c50a550d0f.slice/crio-94a60058e6b3e7173dcc6fedddbcdebf30b3e6eec195d068445f2c7363b4743b WatchSource:0}: Error finding container 94a60058e6b3e7173dcc6fedddbcdebf30b3e6eec195d068445f2c7363b4743b: Status 404 returned error can't find the container with id 94a60058e6b3e7173dcc6fedddbcdebf30b3e6eec195d068445f2c7363b4743b Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.712038 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"190342c5-3981-449c-b18e-d6c50a550d0f","Type":"ContainerStarted","Data":"94a60058e6b3e7173dcc6fedddbcdebf30b3e6eec195d068445f2c7363b4743b"} Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.720207 4895 generic.go:334] "Generic (PLEG): container finished" podID="879b9282-5648-4b4d-b93c-0272225d0caa" containerID="473cfd9ffe81fdb89920d89f828e46407fdf90ef17991c7ab3038fb3e0dd01a2" exitCode=0 Dec 06 09:14:54 crc kubenswrapper[4895]: I1206 09:14:54.720446 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84ff7868ff-6874d" event={"ID":"879b9282-5648-4b4d-b93c-0272225d0caa","Type":"ContainerDied","Data":"473cfd9ffe81fdb89920d89f828e46407fdf90ef17991c7ab3038fb3e0dd01a2"} Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.156186 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.269495 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-ovsdbserver-sb\") pod \"879b9282-5648-4b4d-b93c-0272225d0caa\" (UID: \"879b9282-5648-4b4d-b93c-0272225d0caa\") " Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.269552 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-ovsdbserver-nb\") pod \"879b9282-5648-4b4d-b93c-0272225d0caa\" (UID: \"879b9282-5648-4b4d-b93c-0272225d0caa\") " Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.269571 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-config\") pod \"879b9282-5648-4b4d-b93c-0272225d0caa\" (UID: \"879b9282-5648-4b4d-b93c-0272225d0caa\") " Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.269617 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-dns-svc\") pod \"879b9282-5648-4b4d-b93c-0272225d0caa\" (UID: \"879b9282-5648-4b4d-b93c-0272225d0caa\") " Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.270159 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdwtq\" (UniqueName: \"kubernetes.io/projected/879b9282-5648-4b4d-b93c-0272225d0caa-kube-api-access-tdwtq\") pod \"879b9282-5648-4b4d-b93c-0272225d0caa\" (UID: \"879b9282-5648-4b4d-b93c-0272225d0caa\") " Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.275000 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/879b9282-5648-4b4d-b93c-0272225d0caa-kube-api-access-tdwtq" (OuterVolumeSpecName: "kube-api-access-tdwtq") pod "879b9282-5648-4b4d-b93c-0272225d0caa" (UID: "879b9282-5648-4b4d-b93c-0272225d0caa"). InnerVolumeSpecName "kube-api-access-tdwtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.320571 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "879b9282-5648-4b4d-b93c-0272225d0caa" (UID: "879b9282-5648-4b4d-b93c-0272225d0caa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.326322 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "879b9282-5648-4b4d-b93c-0272225d0caa" (UID: "879b9282-5648-4b4d-b93c-0272225d0caa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.327698 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "879b9282-5648-4b4d-b93c-0272225d0caa" (UID: "879b9282-5648-4b4d-b93c-0272225d0caa"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.329744 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-config" (OuterVolumeSpecName: "config") pod "879b9282-5648-4b4d-b93c-0272225d0caa" (UID: "879b9282-5648-4b4d-b93c-0272225d0caa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.372641 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdwtq\" (UniqueName: \"kubernetes.io/projected/879b9282-5648-4b4d-b93c-0272225d0caa-kube-api-access-tdwtq\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.372706 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.372721 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.372735 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-config\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.372748 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/879b9282-5648-4b4d-b93c-0272225d0caa-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.730909 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"190342c5-3981-449c-b18e-d6c50a550d0f","Type":"ContainerStarted","Data":"e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae"} Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.731186 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"190342c5-3981-449c-b18e-d6c50a550d0f","Type":"ContainerStarted","Data":"c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f"} Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.733931 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84ff7868ff-6874d" Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.736804 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84ff7868ff-6874d" event={"ID":"879b9282-5648-4b4d-b93c-0272225d0caa","Type":"ContainerDied","Data":"0fdb4ff90c123fb72fd998192db8d13938b5ef13c5ab5655cfa845f5a0c45b71"} Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.736896 4895 scope.go:117] "RemoveContainer" containerID="473cfd9ffe81fdb89920d89f828e46407fdf90ef17991c7ab3038fb3e0dd01a2" Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.770707 4895 scope.go:117] "RemoveContainer" containerID="5d44e349a0455bffa90d3665da1db453746e3572c2ee3821e50ce8b4556eec14" Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.778682 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84ff7868ff-6874d"] Dec 06 09:14:55 crc kubenswrapper[4895]: I1206 09:14:55.790876 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84ff7868ff-6874d"] Dec 06 09:14:56 crc kubenswrapper[4895]: I1206 09:14:56.065814 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="879b9282-5648-4b4d-b93c-0272225d0caa" path="/var/lib/kubelet/pods/879b9282-5648-4b4d-b93c-0272225d0caa/volumes" Dec 06 09:14:56 crc kubenswrapper[4895]: I1206 09:14:56.751076 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"190342c5-3981-449c-b18e-d6c50a550d0f","Type":"ContainerStarted","Data":"c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d"} Dec 06 09:14:57 crc kubenswrapper[4895]: I1206 09:14:57.051969 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef" Dec 06 09:14:57 crc kubenswrapper[4895]: E1206 09:14:57.052196 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:14:58 crc kubenswrapper[4895]: I1206 09:14:58.193119 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 09:14:58 crc kubenswrapper[4895]: I1206 09:14:58.791563 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"190342c5-3981-449c-b18e-d6c50a550d0f","Type":"ContainerStarted","Data":"5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232"} Dec 06 09:14:58 crc kubenswrapper[4895]: I1206 09:14:58.792000 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 09:14:58 crc kubenswrapper[4895]: I1206 09:14:58.813781 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.018069315 podStartE2EDuration="5.813748404s" podCreationTimestamp="2025-12-06 09:14:53 +0000 UTC" firstStartedPulling="2025-12-06 09:14:54.691029522 +0000 UTC m=+8257.092418392" lastFinishedPulling="2025-12-06 09:14:57.486708611 +0000 UTC m=+8259.888097481" observedRunningTime="2025-12-06 09:14:58.807792284 +0000 UTC m=+8261.209181164" watchObservedRunningTime="2025-12-06 09:14:58.813748404 +0000 UTC m=+8261.215137274" Dec 06 09:14:59 crc kubenswrapper[4895]: I1206 
Dec 06 09:14:59 crc kubenswrapper[4895]: I1206 09:14:59.808356 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="190342c5-3981-449c-b18e-d6c50a550d0f" containerName="sg-core" containerID="cri-o://c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d" gracePeriod=30
Dec 06 09:14:59 crc kubenswrapper[4895]: I1206 09:14:59.808326 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="190342c5-3981-449c-b18e-d6c50a550d0f" containerName="proxy-httpd" containerID="cri-o://5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232" gracePeriod=30
Dec 06 09:14:59 crc kubenswrapper[4895]: I1206 09:14:59.808374 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="190342c5-3981-449c-b18e-d6c50a550d0f" containerName="ceilometer-notification-agent" containerID="cri-o://e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae" gracePeriod=30
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.163973 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g"]
Dec 06 09:15:00 crc kubenswrapper[4895]: E1206 09:15:00.165131 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879b9282-5648-4b4d-b93c-0272225d0caa" containerName="dnsmasq-dns"
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.165162 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="879b9282-5648-4b4d-b93c-0272225d0caa" containerName="dnsmasq-dns"
Dec 06 09:15:00 crc kubenswrapper[4895]: E1206 09:15:00.165216 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879b9282-5648-4b4d-b93c-0272225d0caa" containerName="init"
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.165231 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="879b9282-5648-4b4d-b93c-0272225d0caa" containerName="init"
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.165694 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="879b9282-5648-4b4d-b93c-0272225d0caa" containerName="dnsmasq-dns"
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.167040 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g"
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.175941 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.176195 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.205001 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g"]
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.305224 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c85da892-c08c-4b71-82fe-86dc94d0e837-config-volume\") pod \"collect-profiles-29416875-2j56g\" (UID: \"c85da892-c08c-4b71-82fe-86dc94d0e837\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g"
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.305269 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbts6\" (UniqueName: \"kubernetes.io/projected/c85da892-c08c-4b71-82fe-86dc94d0e837-kube-api-access-pbts6\") pod \"collect-profiles-29416875-2j56g\" (UID: \"c85da892-c08c-4b71-82fe-86dc94d0e837\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g"
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.305321 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c85da892-c08c-4b71-82fe-86dc94d0e837-secret-volume\") pod \"collect-profiles-29416875-2j56g\" (UID: \"c85da892-c08c-4b71-82fe-86dc94d0e837\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g"
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.407555 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c85da892-c08c-4b71-82fe-86dc94d0e837-config-volume\") pod \"collect-profiles-29416875-2j56g\" (UID: \"c85da892-c08c-4b71-82fe-86dc94d0e837\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g"
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.407618 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbts6\" (UniqueName: \"kubernetes.io/projected/c85da892-c08c-4b71-82fe-86dc94d0e837-kube-api-access-pbts6\") pod \"collect-profiles-29416875-2j56g\" (UID: \"c85da892-c08c-4b71-82fe-86dc94d0e837\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g"
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.407677 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c85da892-c08c-4b71-82fe-86dc94d0e837-secret-volume\") pod \"collect-profiles-29416875-2j56g\" (UID: \"c85da892-c08c-4b71-82fe-86dc94d0e837\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g"
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.410890 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c85da892-c08c-4b71-82fe-86dc94d0e837-config-volume\") pod \"collect-profiles-29416875-2j56g\" (UID: \"c85da892-c08c-4b71-82fe-86dc94d0e837\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g"
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.418309 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c85da892-c08c-4b71-82fe-86dc94d0e837-secret-volume\") pod \"collect-profiles-29416875-2j56g\" (UID: \"c85da892-c08c-4b71-82fe-86dc94d0e837\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g"
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.424289 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbts6\" (UniqueName: \"kubernetes.io/projected/c85da892-c08c-4b71-82fe-86dc94d0e837-kube-api-access-pbts6\") pod \"collect-profiles-29416875-2j56g\" (UID: \"c85da892-c08c-4b71-82fe-86dc94d0e837\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g"
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.516681 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g"
Dec 06 09:15:00 crc kubenswrapper[4895]: E1206 09:15:00.535191 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod190342c5_3981_449c_b18e_d6c50a550d0f.slice/crio-conmon-c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f.scope\": RecentStats: unable to find data in memory cache]"
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.748592 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.817712 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-sg-core-conf-yaml\") pod \"190342c5-3981-449c-b18e-d6c50a550d0f\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") "
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.817792 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/190342c5-3981-449c-b18e-d6c50a550d0f-log-httpd\") pod \"190342c5-3981-449c-b18e-d6c50a550d0f\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") "
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.817819 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdlhq\" (UniqueName: \"kubernetes.io/projected/190342c5-3981-449c-b18e-d6c50a550d0f-kube-api-access-fdlhq\") pod \"190342c5-3981-449c-b18e-d6c50a550d0f\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") "
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.817883 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-combined-ca-bundle\") pod \"190342c5-3981-449c-b18e-d6c50a550d0f\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") "
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.817934 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-config-data\") pod \"190342c5-3981-449c-b18e-d6c50a550d0f\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") "
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.817987 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/190342c5-3981-449c-b18e-d6c50a550d0f-run-httpd\") pod \"190342c5-3981-449c-b18e-d6c50a550d0f\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") "
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.818143 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-scripts\") pod \"190342c5-3981-449c-b18e-d6c50a550d0f\" (UID: \"190342c5-3981-449c-b18e-d6c50a550d0f\") "
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.820118 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/190342c5-3981-449c-b18e-d6c50a550d0f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "190342c5-3981-449c-b18e-d6c50a550d0f" (UID: "190342c5-3981-449c-b18e-d6c50a550d0f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.823966 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/190342c5-3981-449c-b18e-d6c50a550d0f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "190342c5-3981-449c-b18e-d6c50a550d0f" (UID: "190342c5-3981-449c-b18e-d6c50a550d0f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.824069 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-scripts" (OuterVolumeSpecName: "scripts") pod "190342c5-3981-449c-b18e-d6c50a550d0f" (UID: "190342c5-3981-449c-b18e-d6c50a550d0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.824177 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/190342c5-3981-449c-b18e-d6c50a550d0f-kube-api-access-fdlhq" (OuterVolumeSpecName: "kube-api-access-fdlhq") pod "190342c5-3981-449c-b18e-d6c50a550d0f" (UID: "190342c5-3981-449c-b18e-d6c50a550d0f"). InnerVolumeSpecName "kube-api-access-fdlhq". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.834163 4895 generic.go:334] "Generic (PLEG): container finished" podID="190342c5-3981-449c-b18e-d6c50a550d0f" containerID="5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232" exitCode=0 Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.834204 4895 generic.go:334] "Generic (PLEG): container finished" podID="190342c5-3981-449c-b18e-d6c50a550d0f" containerID="c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d" exitCode=2 Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.834215 4895 generic.go:334] "Generic (PLEG): container finished" podID="190342c5-3981-449c-b18e-d6c50a550d0f" containerID="e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae" exitCode=0 Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.834223 4895 generic.go:334] "Generic (PLEG): container finished" podID="190342c5-3981-449c-b18e-d6c50a550d0f" containerID="c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f" exitCode=0 Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.834245 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"190342c5-3981-449c-b18e-d6c50a550d0f","Type":"ContainerDied","Data":"5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232"} Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.834279 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"190342c5-3981-449c-b18e-d6c50a550d0f","Type":"ContainerDied","Data":"c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d"} Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.834293 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"190342c5-3981-449c-b18e-d6c50a550d0f","Type":"ContainerDied","Data":"e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae"} Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.834304 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"190342c5-3981-449c-b18e-d6c50a550d0f","Type":"ContainerDied","Data":"c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f"} Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.834315 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"190342c5-3981-449c-b18e-d6c50a550d0f","Type":"ContainerDied","Data":"94a60058e6b3e7173dcc6fedddbcdebf30b3e6eec195d068445f2c7363b4743b"} Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.834332 4895 scope.go:117] "RemoveContainer" containerID="5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.834511 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.856663 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "190342c5-3981-449c-b18e-d6c50a550d0f" (UID: "190342c5-3981-449c-b18e-d6c50a550d0f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.857164 4895 scope.go:117] "RemoveContainer" containerID="c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.876261 4895 scope.go:117] "RemoveContainer" containerID="e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.900200 4895 scope.go:117] "RemoveContainer" containerID="c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.904706 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "190342c5-3981-449c-b18e-d6c50a550d0f" (UID: "190342c5-3981-449c-b18e-d6c50a550d0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.920048 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.920088 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.920098 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/190342c5-3981-449c-b18e-d6c50a550d0f-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.920134 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdlhq\" (UniqueName: \"kubernetes.io/projected/190342c5-3981-449c-b18e-d6c50a550d0f-kube-api-access-fdlhq\") on node \"crc\" DevicePath \"\"" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.920143 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.920151 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/190342c5-3981-449c-b18e-d6c50a550d0f-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.921844 4895 scope.go:117] "RemoveContainer" containerID="5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232" Dec 06 09:15:00 crc kubenswrapper[4895]: E1206 09:15:00.924078 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232\": container with ID starting with 5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232 not found: ID does not exist" containerID="5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.924125 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232"} err="failed to get container status 
\"5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232\": rpc error: code = NotFound desc = could not find container \"5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232\": container with ID starting with 5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232 not found: ID does not exist" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.924155 4895 scope.go:117] "RemoveContainer" containerID="c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d" Dec 06 09:15:00 crc kubenswrapper[4895]: E1206 09:15:00.924660 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d\": container with ID starting with c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d not found: ID does not exist" containerID="c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.924689 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d"} err="failed to get container status \"c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d\": rpc error: code = NotFound desc = could not find container \"c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d\": container with ID starting with c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d not found: ID does not exist" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.924709 4895 scope.go:117] "RemoveContainer" containerID="e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae" Dec 06 09:15:00 crc kubenswrapper[4895]: E1206 09:15:00.925006 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae\": container with ID starting with e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae not found: ID does not exist" containerID="e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.925038 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae"} err="failed to get container status \"e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae\": rpc error: code = NotFound desc = could not find container \"e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae\": container with ID starting with e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae not found: ID does not exist" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.925054 4895 scope.go:117] "RemoveContainer" containerID="c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f" Dec 06 09:15:00 crc kubenswrapper[4895]: E1206 09:15:00.925289 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f\": container with ID starting with c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f not found: ID does not exist" containerID="c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.925305 4895 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f"} err="failed to get container status \"c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f\": rpc error: code = NotFound desc = could not find container \"c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f\": container with ID starting with c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f not found: ID does not exist" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.925319 4895 scope.go:117] "RemoveContainer" containerID="5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.925599 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232"} err="failed to get container status \"5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232\": rpc error: code = NotFound desc = could not find container \"5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232\": container with ID starting with 5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232 not found: ID does not exist" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.925616 4895 scope.go:117] "RemoveContainer" containerID="c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.925858 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d"} err="failed to get container status \"c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d\": rpc error: code = NotFound desc = could not find container \"c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d\": container with ID starting with c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d not found: ID does not exist" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.925879 4895 scope.go:117] "RemoveContainer" containerID="e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.926135 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae"} err="failed to get container status \"e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae\": rpc error: code = NotFound desc = could not find container \"e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae\": container with ID starting with e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae not found: ID does not exist" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.926158 4895 scope.go:117] "RemoveContainer" containerID="c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.926399 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f"} err="failed to get container status \"c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f\": rpc error: code = NotFound desc = could not find container \"c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f\": container with ID starting with c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f 
not found: ID does not exist" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.926420 4895 scope.go:117] "RemoveContainer" containerID="5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.926675 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232"} err="failed to get container status \"5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232\": rpc error: code = NotFound desc = could not find container \"5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232\": container with ID starting with 5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232 not found: ID does not exist" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.926711 4895 scope.go:117] "RemoveContainer" containerID="c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.926983 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d"} err="failed to get container status \"c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d\": rpc error: code = NotFound desc = could not find container \"c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d\": container with ID starting with c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d not found: ID does not exist" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.927005 4895 scope.go:117] "RemoveContainer" containerID="e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.927319 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae"} err="failed to get container status \"e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae\": rpc error: code = NotFound desc = could not find container \"e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae\": container with ID starting with e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae not found: ID does not exist" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.927346 4895 scope.go:117] "RemoveContainer" containerID="c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.927602 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f"} err="failed to get container status \"c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f\": rpc error: code = NotFound desc = could not find container \"c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f\": container with ID starting with c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f not found: ID does not exist" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.927617 4895 scope.go:117] "RemoveContainer" containerID="5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.927863 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232"} err="failed to get 
container status \"5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232\": rpc error: code = NotFound desc = could not find container \"5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232\": container with ID starting with 5f305fa0dfd27ee60ab51fbfd958a06644bd97562de566690b68ec3a3cf93232 not found: ID does not exist" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.927883 4895 scope.go:117] "RemoveContainer" containerID="c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.928157 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d"} err="failed to get container status \"c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d\": rpc error: code = NotFound desc = could not find container \"c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d\": container with ID starting with c067172eca499ce91e7882bd7936dcb56470731ff860bc0174f06fe538a7aa0d not found: ID does not exist" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.928180 4895 scope.go:117] "RemoveContainer" containerID="e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.928218 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-config-data" (OuterVolumeSpecName: "config-data") pod "190342c5-3981-449c-b18e-d6c50a550d0f" (UID: "190342c5-3981-449c-b18e-d6c50a550d0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.928389 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae"} err="failed to get container status \"e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae\": rpc error: code = NotFound desc = could not find container \"e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae\": container with ID starting with e170eae8094909c98c856a44b6d8ac7e18f523974841f9aa8f55a2eb8cd64aae not found: ID does not exist" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.928415 4895 scope.go:117] "RemoveContainer" containerID="c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f" Dec 06 09:15:00 crc kubenswrapper[4895]: I1206 09:15:00.928745 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f"} err="failed to get container status \"c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f\": rpc error: code = NotFound desc = could not find container \"c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f\": container with ID starting with c5ae7d20860bef60e780c01dee7212d633107a415c0df230200f2807a5daea3f not found: ID does not exist" Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.007380 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g"] Dec 06 09:15:01 crc kubenswrapper[4895]: W1206 09:15:01.018511 4895 manager.go:1169] Failed to process watch event {EventType:0 
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.021595 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/190342c5-3981-449c-b18e-d6c50a550d0f-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.178101 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.192350 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.208086 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 06 09:15:01 crc kubenswrapper[4895]: E1206 09:15:01.208597 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="190342c5-3981-449c-b18e-d6c50a550d0f" containerName="sg-core"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.208613 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="190342c5-3981-449c-b18e-d6c50a550d0f" containerName="sg-core"
Dec 06 09:15:01 crc kubenswrapper[4895]: E1206 09:15:01.208634 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="190342c5-3981-449c-b18e-d6c50a550d0f" containerName="ceilometer-central-agent"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.208640 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="190342c5-3981-449c-b18e-d6c50a550d0f" containerName="ceilometer-central-agent"
Dec 06 09:15:01 crc kubenswrapper[4895]: E1206 09:15:01.208680 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="190342c5-3981-449c-b18e-d6c50a550d0f" containerName="ceilometer-notification-agent"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.208686 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="190342c5-3981-449c-b18e-d6c50a550d0f" containerName="ceilometer-notification-agent"
Dec 06 09:15:01 crc kubenswrapper[4895]: E1206 09:15:01.208702 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="190342c5-3981-449c-b18e-d6c50a550d0f" containerName="proxy-httpd"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.208708 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="190342c5-3981-449c-b18e-d6c50a550d0f" containerName="proxy-httpd"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.208878 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="190342c5-3981-449c-b18e-d6c50a550d0f" containerName="proxy-httpd"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.208902 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="190342c5-3981-449c-b18e-d6c50a550d0f" containerName="ceilometer-central-agent"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.208911 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="190342c5-3981-449c-b18e-d6c50a550d0f" containerName="ceilometer-notification-agent"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.208923 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="190342c5-3981-449c-b18e-d6c50a550d0f" containerName="sg-core"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.210963 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.213797 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.214007 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.221225 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.326987 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61388f73-0de5-4805-8ba7-4b683db03bdb-run-httpd\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.327080 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61388f73-0de5-4805-8ba7-4b683db03bdb-scripts\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.327144 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61388f73-0de5-4805-8ba7-4b683db03bdb-config-data\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.327178 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61388f73-0de5-4805-8ba7-4b683db03bdb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.327364 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61388f73-0de5-4805-8ba7-4b683db03bdb-log-httpd\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.327421 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f87rb\" (UniqueName: \"kubernetes.io/projected/61388f73-0de5-4805-8ba7-4b683db03bdb-kube-api-access-f87rb\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.327705 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61388f73-0de5-4805-8ba7-4b683db03bdb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.430372 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61388f73-0de5-4805-8ba7-4b683db03bdb-scripts\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.430527 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61388f73-0de5-4805-8ba7-4b683db03bdb-config-data\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.430593 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61388f73-0de5-4805-8ba7-4b683db03bdb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.430666 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61388f73-0de5-4805-8ba7-4b683db03bdb-log-httpd\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.430695 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f87rb\" (UniqueName: \"kubernetes.io/projected/61388f73-0de5-4805-8ba7-4b683db03bdb-kube-api-access-f87rb\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.430921 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61388f73-0de5-4805-8ba7-4b683db03bdb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.431274 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61388f73-0de5-4805-8ba7-4b683db03bdb-run-httpd\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.433027 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61388f73-0de5-4805-8ba7-4b683db03bdb-log-httpd\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.434030 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61388f73-0de5-4805-8ba7-4b683db03bdb-run-httpd\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.439609 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61388f73-0de5-4805-8ba7-4b683db03bdb-scripts\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.449586 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61388f73-0de5-4805-8ba7-4b683db03bdb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.454198 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61388f73-0de5-4805-8ba7-4b683db03bdb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.455380 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61388f73-0de5-4805-8ba7-4b683db03bdb-config-data\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.463997 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f87rb\" (UniqueName: \"kubernetes.io/projected/61388f73-0de5-4805-8ba7-4b683db03bdb-kube-api-access-f87rb\") pod \"ceilometer-0\" (UID: \"61388f73-0de5-4805-8ba7-4b683db03bdb\") " pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.529521 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.847236 4895 generic.go:334] "Generic (PLEG): container finished" podID="c85da892-c08c-4b71-82fe-86dc94d0e837" containerID="67481ac33953021457144827293631ae4bfa12324f3209e9c0aa52ad5949bbdb" exitCode=0
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.847337 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g" event={"ID":"c85da892-c08c-4b71-82fe-86dc94d0e837","Type":"ContainerDied","Data":"67481ac33953021457144827293631ae4bfa12324f3209e9c0aa52ad5949bbdb"}
Dec 06 09:15:01 crc kubenswrapper[4895]: I1206 09:15:01.847589 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g" event={"ID":"c85da892-c08c-4b71-82fe-86dc94d0e837","Type":"ContainerStarted","Data":"687997547deafc342d93ded0f28eacafc81e4058ccb4e80c665d2d2fdda14d04"}
Dec 06 09:15:02 crc kubenswrapper[4895]: I1206 09:15:02.003959 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 09:15:02 crc kubenswrapper[4895]: I1206 09:15:02.064222 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="190342c5-3981-449c-b18e-d6c50a550d0f" path="/var/lib/kubelet/pods/190342c5-3981-449c-b18e-d6c50a550d0f/volumes"
Dec 06 09:15:02 crc kubenswrapper[4895]: I1206 09:15:02.860505 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61388f73-0de5-4805-8ba7-4b683db03bdb","Type":"ContainerStarted","Data":"bc73653a9057b8b48d6304370dd1bdce5bba67c76427e06a2116acb2158a3c1b"}
Dec 06 09:15:03 crc kubenswrapper[4895]: I1206 09:15:03.270991 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g" Dec 06 09:15:03 crc kubenswrapper[4895]: I1206 09:15:03.370877 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c85da892-c08c-4b71-82fe-86dc94d0e837-secret-volume\") pod \"c85da892-c08c-4b71-82fe-86dc94d0e837\" (UID: \"c85da892-c08c-4b71-82fe-86dc94d0e837\") " Dec 06 09:15:03 crc kubenswrapper[4895]: I1206 09:15:03.370946 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbts6\" (UniqueName: \"kubernetes.io/projected/c85da892-c08c-4b71-82fe-86dc94d0e837-kube-api-access-pbts6\") pod \"c85da892-c08c-4b71-82fe-86dc94d0e837\" (UID: \"c85da892-c08c-4b71-82fe-86dc94d0e837\") " Dec 06 09:15:03 crc kubenswrapper[4895]: I1206 09:15:03.371083 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c85da892-c08c-4b71-82fe-86dc94d0e837-config-volume\") pod \"c85da892-c08c-4b71-82fe-86dc94d0e837\" (UID: \"c85da892-c08c-4b71-82fe-86dc94d0e837\") " Dec 06 09:15:03 crc kubenswrapper[4895]: I1206 09:15:03.372547 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c85da892-c08c-4b71-82fe-86dc94d0e837-config-volume" (OuterVolumeSpecName: "config-volume") pod "c85da892-c08c-4b71-82fe-86dc94d0e837" (UID: "c85da892-c08c-4b71-82fe-86dc94d0e837"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:15:03 crc kubenswrapper[4895]: I1206 09:15:03.376193 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c85da892-c08c-4b71-82fe-86dc94d0e837-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c85da892-c08c-4b71-82fe-86dc94d0e837" (UID: "c85da892-c08c-4b71-82fe-86dc94d0e837"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:15:03 crc kubenswrapper[4895]: I1206 09:15:03.376375 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c85da892-c08c-4b71-82fe-86dc94d0e837-kube-api-access-pbts6" (OuterVolumeSpecName: "kube-api-access-pbts6") pod "c85da892-c08c-4b71-82fe-86dc94d0e837" (UID: "c85da892-c08c-4b71-82fe-86dc94d0e837"). InnerVolumeSpecName "kube-api-access-pbts6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:15:03 crc kubenswrapper[4895]: I1206 09:15:03.473704 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c85da892-c08c-4b71-82fe-86dc94d0e837-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 09:15:03 crc kubenswrapper[4895]: I1206 09:15:03.474019 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbts6\" (UniqueName: \"kubernetes.io/projected/c85da892-c08c-4b71-82fe-86dc94d0e837-kube-api-access-pbts6\") on node \"crc\" DevicePath \"\"" Dec 06 09:15:03 crc kubenswrapper[4895]: I1206 09:15:03.474103 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c85da892-c08c-4b71-82fe-86dc94d0e837-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 09:15:03 crc kubenswrapper[4895]: I1206 09:15:03.879255 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g" event={"ID":"c85da892-c08c-4b71-82fe-86dc94d0e837","Type":"ContainerDied","Data":"687997547deafc342d93ded0f28eacafc81e4058ccb4e80c665d2d2fdda14d04"} Dec 06 09:15:03 crc kubenswrapper[4895]: I1206 09:15:03.879337 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="687997547deafc342d93ded0f28eacafc81e4058ccb4e80c665d2d2fdda14d04" Dec 06 09:15:03 crc kubenswrapper[4895]: I1206 09:15:03.879555 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g" Dec 06 09:15:03 crc kubenswrapper[4895]: I1206 09:15:03.881240 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61388f73-0de5-4805-8ba7-4b683db03bdb","Type":"ContainerStarted","Data":"17d2179e12ef1f2d7e3af60916c868ca86bd3b18e444f18ea97f802da2dc67ad"} Dec 06 09:15:04 crc kubenswrapper[4895]: I1206 09:15:04.336819 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f"] Dec 06 09:15:04 crc kubenswrapper[4895]: I1206 09:15:04.350284 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416830-q8b4f"] Dec 06 09:15:04 crc kubenswrapper[4895]: I1206 09:15:04.897706 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61388f73-0de5-4805-8ba7-4b683db03bdb","Type":"ContainerStarted","Data":"803006196f2936b3effd2129c60f21b534a0de9012db0fc2c73202d5473f2b13"} Dec 06 09:15:04 crc kubenswrapper[4895]: I1206 09:15:04.898025 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61388f73-0de5-4805-8ba7-4b683db03bdb","Type":"ContainerStarted","Data":"1f0b3adf37235dba3de07f6aefbf8a1403a9325cdbaa2b56c5c123dcd566cd61"} Dec 06 09:15:05 crc kubenswrapper[4895]: I1206 09:15:05.865148 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Dec 06 09:15:06 crc kubenswrapper[4895]: I1206 09:15:06.310515 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="805647f2-c388-4398-bebd-1e8e86021eac" path="/var/lib/kubelet/pods/805647f2-c388-4398-bebd-1e8e86021eac/volumes" Dec 06 09:15:06 crc kubenswrapper[4895]: I1206 09:15:06.311594 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Dec 06 09:15:06 crc 
kubenswrapper[4895]: I1206 09:15:06.311644 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Dec 06 09:15:06 crc kubenswrapper[4895]: I1206 09:15:06.924374 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61388f73-0de5-4805-8ba7-4b683db03bdb","Type":"ContainerStarted","Data":"3e6444e7897039ff35d10306e2672883dd272c5572b9923771800247e814cef7"} Dec 06 09:15:06 crc kubenswrapper[4895]: I1206 09:15:06.924904 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 09:15:06 crc kubenswrapper[4895]: I1206 09:15:06.943537 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.5821446780000001 podStartE2EDuration="5.943519782s" podCreationTimestamp="2025-12-06 09:15:01 +0000 UTC" firstStartedPulling="2025-12-06 09:15:02.003338727 +0000 UTC m=+8264.404727597" lastFinishedPulling="2025-12-06 09:15:06.364713831 +0000 UTC m=+8268.766102701" observedRunningTime="2025-12-06 09:15:06.941449506 +0000 UTC m=+8269.342838366" watchObservedRunningTime="2025-12-06 09:15:06.943519782 +0000 UTC m=+8269.344908652" Dec 06 09:15:10 crc kubenswrapper[4895]: I1206 09:15:10.051247 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef" Dec 06 09:15:10 crc kubenswrapper[4895]: E1206 09:15:10.052029 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:15:18 crc kubenswrapper[4895]: I1206 09:15:18.503932 4895 scope.go:117] "RemoveContainer" containerID="1a5542954132af197a82ce5da3bd2449108a679c54c1eb449bdf81901571b593" Dec 06 09:15:18 crc kubenswrapper[4895]: I1206 09:15:18.545578 4895 scope.go:117] "RemoveContainer" containerID="23181971eca138bef2408c3aada42b7a2d1e37ace7b6682a55e1b6280fbf7394" Dec 06 09:15:18 crc kubenswrapper[4895]: I1206 09:15:18.600997 4895 scope.go:117] "RemoveContainer" containerID="3d790c8dce46674c9701bfa1c5117af3b38a03df3df8c56bc5ef1bc84e94d6a5" Dec 06 09:15:18 crc kubenswrapper[4895]: I1206 09:15:18.645668 4895 scope.go:117] "RemoveContainer" containerID="a9423150e014fd2226b8518a87c4ed9f218df17a85588a10e64848803b805f5c" Dec 06 09:15:23 crc kubenswrapper[4895]: I1206 09:15:23.051122 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef" Dec 06 09:15:23 crc kubenswrapper[4895]: E1206 09:15:23.052995 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:15:31 crc kubenswrapper[4895]: I1206 09:15:31.536236 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 06 09:15:34 crc kubenswrapper[4895]: I1206 09:15:34.050700 4895 scope.go:117] "RemoveContainer" 
containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef" Dec 06 09:15:34 crc kubenswrapper[4895]: I1206 09:15:34.278617 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"25a2f0f5587d8e1cb90d91095a50faeb855712cd694f679babd607bf56b409df"} Dec 06 09:15:41 crc kubenswrapper[4895]: I1206 09:15:41.043360 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-l49xr"] Dec 06 09:15:41 crc kubenswrapper[4895]: I1206 09:15:41.056662 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-09cb-account-create-update-fnzlp"] Dec 06 09:15:41 crc kubenswrapper[4895]: I1206 09:15:41.063083 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-n48s9"] Dec 06 09:15:41 crc kubenswrapper[4895]: I1206 09:15:41.075177 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1aad-account-create-update-ghbcg"] Dec 06 09:15:41 crc kubenswrapper[4895]: I1206 09:15:41.087569 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-2pdp2"] Dec 06 09:15:41 crc kubenswrapper[4895]: I1206 09:15:41.096165 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-373a-account-create-update-bfl5s"] Dec 06 09:15:41 crc kubenswrapper[4895]: I1206 09:15:41.104606 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1aad-account-create-update-ghbcg"] Dec 06 09:15:41 crc kubenswrapper[4895]: I1206 09:15:41.113108 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-373a-account-create-update-bfl5s"] Dec 06 09:15:41 crc kubenswrapper[4895]: I1206 09:15:41.121270 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-l49xr"] Dec 06 09:15:41 crc kubenswrapper[4895]: I1206 09:15:41.128932 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-n48s9"] Dec 06 09:15:41 crc kubenswrapper[4895]: I1206 09:15:41.137562 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-09cb-account-create-update-fnzlp"] Dec 06 09:15:41 crc kubenswrapper[4895]: I1206 09:15:41.147039 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-2pdp2"] Dec 06 09:15:42 crc kubenswrapper[4895]: I1206 09:15:42.062408 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34817151-f795-43b8-9cb9-649f029b2a3b" path="/var/lib/kubelet/pods/34817151-f795-43b8-9cb9-649f029b2a3b/volumes" Dec 06 09:15:42 crc kubenswrapper[4895]: I1206 09:15:42.063211 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4123333a-291c-44d8-9cdd-15c599ffadd6" path="/var/lib/kubelet/pods/4123333a-291c-44d8-9cdd-15c599ffadd6/volumes" Dec 06 09:15:42 crc kubenswrapper[4895]: I1206 09:15:42.063838 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6caea0b7-69b6-4654-bc0e-a97dda98981d" path="/var/lib/kubelet/pods/6caea0b7-69b6-4654-bc0e-a97dda98981d/volumes" Dec 06 09:15:42 crc kubenswrapper[4895]: I1206 09:15:42.064349 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b" path="/var/lib/kubelet/pods/6ddb8067-74d4-4f4a-9c8d-1dc9bc96440b/volumes" Dec 06 09:15:42 crc kubenswrapper[4895]: I1206 09:15:42.065397 4895 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="ab750e80-a339-4c38-89e3-a3595ecd1d09" path="/var/lib/kubelet/pods/ab750e80-a339-4c38-89e3-a3595ecd1d09/volumes" Dec 06 09:15:42 crc kubenswrapper[4895]: I1206 09:15:42.065907 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abdba00f-a4bf-4106-ace1-cc6b0dba6b42" path="/var/lib/kubelet/pods/abdba00f-a4bf-4106-ace1-cc6b0dba6b42/volumes" Dec 06 09:15:49 crc kubenswrapper[4895]: I1206 09:15:49.979538 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67d98bfd5-7pdzq"] Dec 06 09:15:49 crc kubenswrapper[4895]: E1206 09:15:49.980330 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85da892-c08c-4b71-82fe-86dc94d0e837" containerName="collect-profiles" Dec 06 09:15:49 crc kubenswrapper[4895]: I1206 09:15:49.980343 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85da892-c08c-4b71-82fe-86dc94d0e837" containerName="collect-profiles" Dec 06 09:15:49 crc kubenswrapper[4895]: I1206 09:15:49.980589 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c85da892-c08c-4b71-82fe-86dc94d0e837" containerName="collect-profiles" Dec 06 09:15:49 crc kubenswrapper[4895]: I1206 09:15:49.982035 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:49 crc kubenswrapper[4895]: I1206 09:15:49.984152 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.004267 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67d98bfd5-7pdzq"] Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.076941 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67d98bfd5-7pdzq"] Dec 06 09:15:50 crc kubenswrapper[4895]: E1206 09:15:50.077802 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-jlrhh openstack-cell1 ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[config dns-svc kube-api-access-jlrhh openstack-cell1 ovsdbserver-nb ovsdbserver-sb]: context canceled" pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" podUID="c8cfbd67-e0d3-4191-b1e7-3b9eea44e409" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.105400 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bdc4bcf9-2gjfd"] Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.115691 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.119386 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-networker" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.133448 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-ovsdbserver-nb\") pod \"dnsmasq-dns-67d98bfd5-7pdzq\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.133552 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlrhh\" (UniqueName: \"kubernetes.io/projected/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-kube-api-access-jlrhh\") pod \"dnsmasq-dns-67d98bfd5-7pdzq\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.133608 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-config\") pod \"dnsmasq-dns-67d98bfd5-7pdzq\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.133655 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-ovsdbserver-sb\") pod \"dnsmasq-dns-67d98bfd5-7pdzq\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.133733 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-openstack-cell1\") pod \"dnsmasq-dns-67d98bfd5-7pdzq\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.133860 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-dns-svc\") pod \"dnsmasq-dns-67d98bfd5-7pdzq\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.134696 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bdc4bcf9-2gjfd"] Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.235626 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-ovsdbserver-nb\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.235685 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-ovsdbserver-nb\") pod \"dnsmasq-dns-67d98bfd5-7pdzq\" (UID: 
\"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.235805 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-config\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.235862 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xms7d\" (UniqueName: \"kubernetes.io/projected/39edd877-f997-45cd-9033-59e390656ef9-kube-api-access-xms7d\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.236000 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlrhh\" (UniqueName: \"kubernetes.io/projected/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-kube-api-access-jlrhh\") pod \"dnsmasq-dns-67d98bfd5-7pdzq\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.236108 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-ovsdbserver-sb\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.236159 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-openstack-networker\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.236197 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-config\") pod \"dnsmasq-dns-67d98bfd5-7pdzq\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.236218 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-openstack-cell1\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.236306 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-dns-svc\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.236356 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-ovsdbserver-sb\") pod 
\"dnsmasq-dns-67d98bfd5-7pdzq\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.236514 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-openstack-cell1\") pod \"dnsmasq-dns-67d98bfd5-7pdzq\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.236820 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-ovsdbserver-nb\") pod \"dnsmasq-dns-67d98bfd5-7pdzq\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.236985 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-dns-svc\") pod \"dnsmasq-dns-67d98bfd5-7pdzq\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.237069 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-config\") pod \"dnsmasq-dns-67d98bfd5-7pdzq\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.237413 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-ovsdbserver-sb\") pod \"dnsmasq-dns-67d98bfd5-7pdzq\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.237532 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-openstack-cell1\") pod \"dnsmasq-dns-67d98bfd5-7pdzq\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.237636 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-dns-svc\") pod \"dnsmasq-dns-67d98bfd5-7pdzq\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.262066 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlrhh\" (UniqueName: \"kubernetes.io/projected/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-kube-api-access-jlrhh\") pod \"dnsmasq-dns-67d98bfd5-7pdzq\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.339074 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-openstack-cell1\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc 
kubenswrapper[4895]: I1206 09:15:50.339175 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-dns-svc\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.339346 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-ovsdbserver-nb\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.339389 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-config\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.339418 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xms7d\" (UniqueName: \"kubernetes.io/projected/39edd877-f997-45cd-9033-59e390656ef9-kube-api-access-xms7d\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.339450 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-ovsdbserver-sb\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.339503 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-openstack-networker\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.340051 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-openstack-cell1\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.340367 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-openstack-networker\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.340707 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-config\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.341774 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-ovsdbserver-nb\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.342041 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-dns-svc\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.342255 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-ovsdbserver-sb\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.360397 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xms7d\" (UniqueName: \"kubernetes.io/projected/39edd877-f997-45cd-9033-59e390656ef9-kube-api-access-xms7d\") pod \"dnsmasq-dns-bdc4bcf9-2gjfd\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.440788 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.498360 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.569629 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.649283 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-ovsdbserver-sb\") pod \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.649371 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlrhh\" (UniqueName: \"kubernetes.io/projected/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-kube-api-access-jlrhh\") pod \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.649456 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-openstack-cell1\") pod \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.649533 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-dns-svc\") pod \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.649725 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-ovsdbserver-nb\") pod \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.649773 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-config\") pod \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\" (UID: \"c8cfbd67-e0d3-4191-b1e7-3b9eea44e409\") " Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.650881 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c8cfbd67-e0d3-4191-b1e7-3b9eea44e409" (UID: "c8cfbd67-e0d3-4191-b1e7-3b9eea44e409"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.650927 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-config" (OuterVolumeSpecName: "config") pod "c8cfbd67-e0d3-4191-b1e7-3b9eea44e409" (UID: "c8cfbd67-e0d3-4191-b1e7-3b9eea44e409"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.650995 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "c8cfbd67-e0d3-4191-b1e7-3b9eea44e409" (UID: "c8cfbd67-e0d3-4191-b1e7-3b9eea44e409"). InnerVolumeSpecName "openstack-cell1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.651248 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c8cfbd67-e0d3-4191-b1e7-3b9eea44e409" (UID: "c8cfbd67-e0d3-4191-b1e7-3b9eea44e409"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.651681 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c8cfbd67-e0d3-4191-b1e7-3b9eea44e409" (UID: "c8cfbd67-e0d3-4191-b1e7-3b9eea44e409"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.674781 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-kube-api-access-jlrhh" (OuterVolumeSpecName: "kube-api-access-jlrhh") pod "c8cfbd67-e0d3-4191-b1e7-3b9eea44e409" (UID: "c8cfbd67-e0d3-4191-b1e7-3b9eea44e409"). InnerVolumeSpecName "kube-api-access-jlrhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.751982 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.752019 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-config\") on node \"crc\" DevicePath \"\"" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.752028 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.752039 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlrhh\" (UniqueName: \"kubernetes.io/projected/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-kube-api-access-jlrhh\") on node \"crc\" DevicePath \"\"" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.752049 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-openstack-cell1\") on node \"crc\" DevicePath \"\"" Dec 06 09:15:50 crc kubenswrapper[4895]: I1206 09:15:50.752057 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 09:15:51 crc kubenswrapper[4895]: I1206 09:15:51.021110 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bdc4bcf9-2gjfd"] Dec 06 09:15:51 crc kubenswrapper[4895]: I1206 09:15:51.509703 4895 generic.go:334] "Generic (PLEG): container finished" podID="39edd877-f997-45cd-9033-59e390656ef9" containerID="791c0c2cd4da1f92190750172fcfc4c3b7dae3548a1984facfe6231b921dad32" exitCode=0 Dec 06 09:15:51 crc kubenswrapper[4895]: I1206 09:15:51.509776 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" 
event={"ID":"39edd877-f997-45cd-9033-59e390656ef9","Type":"ContainerDied","Data":"791c0c2cd4da1f92190750172fcfc4c3b7dae3548a1984facfe6231b921dad32"} Dec 06 09:15:51 crc kubenswrapper[4895]: I1206 09:15:51.509943 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67d98bfd5-7pdzq" Dec 06 09:15:51 crc kubenswrapper[4895]: I1206 09:15:51.509997 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" event={"ID":"39edd877-f997-45cd-9033-59e390656ef9","Type":"ContainerStarted","Data":"382e24111b22785652965269a323d02b5adee0ed84b87e9a80bba46f3472acda"} Dec 06 09:15:51 crc kubenswrapper[4895]: I1206 09:15:51.747126 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67d98bfd5-7pdzq"] Dec 06 09:15:51 crc kubenswrapper[4895]: I1206 09:15:51.758116 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67d98bfd5-7pdzq"] Dec 06 09:15:52 crc kubenswrapper[4895]: I1206 09:15:52.067225 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8cfbd67-e0d3-4191-b1e7-3b9eea44e409" path="/var/lib/kubelet/pods/c8cfbd67-e0d3-4191-b1e7-3b9eea44e409/volumes" Dec 06 09:15:52 crc kubenswrapper[4895]: I1206 09:15:52.521557 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" event={"ID":"39edd877-f997-45cd-9033-59e390656ef9","Type":"ContainerStarted","Data":"edf5e39571b1db843221af8ea89fce816b0edc70027522c6c6162a18c0b30eed"} Dec 06 09:15:52 crc kubenswrapper[4895]: I1206 09:15:52.521993 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:15:52 crc kubenswrapper[4895]: I1206 09:15:52.544503 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" podStartSLOduration=2.544463354 podStartE2EDuration="2.544463354s" podCreationTimestamp="2025-12-06 09:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:15:52.541284869 +0000 UTC m=+8314.942673729" watchObservedRunningTime="2025-12-06 09:15:52.544463354 +0000 UTC m=+8314.945852224" Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.442785 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.530352 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9f6fd977-kvrq4"] Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.530586 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4" podUID="9751d574-27fb-414c-bfc1-c3c22ddf675d" containerName="dnsmasq-dns" containerID="cri-o://64421d72f741bd029b2ff5caca76fa228a9395c65ffed4ede6d1c4654adfae83" gracePeriod=10 Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.797763 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d956c9f-d22gw"] Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.800756 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d956c9f-d22gw" Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.816061 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d956c9f-d22gw"] Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.872155 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jwj6\" (UniqueName: \"kubernetes.io/projected/b23b8a93-0090-41d5-920d-adffc40e524f-kube-api-access-5jwj6\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw" Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.872611 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-openstack-cell1\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw" Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.872640 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-dns-svc\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw" Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.872662 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-ovsdbserver-sb\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw" Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.872782 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-ovsdbserver-nb\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw" Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.872821 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-openstack-networker\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw" Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.872838 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-config\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw" Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.920406 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d956c9f-d22gw"] Dec 06 09:16:00 crc kubenswrapper[4895]: E1206 09:16:00.922124 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-5jwj6 openstack-cell1 openstack-networker ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: 
context canceled" pod="openstack/dnsmasq-dns-568d956c9f-d22gw" podUID="b23b8a93-0090-41d5-920d-adffc40e524f" Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.973670 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8db9b89b7-w5g8t"] Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.975874 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t" Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.978680 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-openstack-cell1\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw" Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.978732 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-dns-svc\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw" Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.978763 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-ovsdbserver-sb\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw" Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.978923 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-ovsdbserver-nb\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw" Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.978982 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-openstack-networker\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw" Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.979006 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-config\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw" Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.979064 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jwj6\" (UniqueName: \"kubernetes.io/projected/b23b8a93-0090-41d5-920d-adffc40e524f-kube-api-access-5jwj6\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw" Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.979636 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-openstack-cell1\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw" 
Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.979952 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-dns-svc\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw"
Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.980551 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-ovsdbserver-nb\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw"
Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.982730 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db9b89b7-w5g8t"]
Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.985313 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-ovsdbserver-sb\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw"
Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.985507 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-config\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw"
Dec 06 09:16:00 crc kubenswrapper[4895]: I1206 09:16:00.986355 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-openstack-networker\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.014285 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jwj6\" (UniqueName: \"kubernetes.io/projected/b23b8a93-0090-41d5-920d-adffc40e524f-kube-api-access-5jwj6\") pod \"dnsmasq-dns-568d956c9f-d22gw\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") " pod="openstack/dnsmasq-dns-568d956c9f-d22gw"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.036335 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bhwhq"]
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.045730 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bhwhq"]
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.080788 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/7e30aff9-56a4-49d8-84f6-f3a22994eff5-openstack-networker\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.080852 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp475\" (UniqueName: \"kubernetes.io/projected/7e30aff9-56a4-49d8-84f6-f3a22994eff5-kube-api-access-mp475\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.080877 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e30aff9-56a4-49d8-84f6-f3a22994eff5-ovsdbserver-sb\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.081056 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e30aff9-56a4-49d8-84f6-f3a22994eff5-ovsdbserver-nb\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.081123 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/7e30aff9-56a4-49d8-84f6-f3a22994eff5-openstack-cell1\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.081239 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e30aff9-56a4-49d8-84f6-f3a22994eff5-dns-svc\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.081305 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e30aff9-56a4-49d8-84f6-f3a22994eff5-config\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.185345 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/7e30aff9-56a4-49d8-84f6-f3a22994eff5-openstack-networker\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.185446 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp475\" (UniqueName: \"kubernetes.io/projected/7e30aff9-56a4-49d8-84f6-f3a22994eff5-kube-api-access-mp475\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.185493 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e30aff9-56a4-49d8-84f6-f3a22994eff5-ovsdbserver-sb\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.185541 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e30aff9-56a4-49d8-84f6-f3a22994eff5-ovsdbserver-nb\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.185574 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/7e30aff9-56a4-49d8-84f6-f3a22994eff5-openstack-cell1\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.185658 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e30aff9-56a4-49d8-84f6-f3a22994eff5-dns-svc\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.185708 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e30aff9-56a4-49d8-84f6-f3a22994eff5-config\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.187565 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e30aff9-56a4-49d8-84f6-f3a22994eff5-dns-svc\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.188522 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e30aff9-56a4-49d8-84f6-f3a22994eff5-config\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.188835 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e30aff9-56a4-49d8-84f6-f3a22994eff5-ovsdbserver-sb\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.189877 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/7e30aff9-56a4-49d8-84f6-f3a22994eff5-openstack-cell1\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.190011 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/7e30aff9-56a4-49d8-84f6-f3a22994eff5-openstack-networker\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.190163 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e30aff9-56a4-49d8-84f6-f3a22994eff5-ovsdbserver-nb\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.215041 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp475\" (UniqueName: \"kubernetes.io/projected/7e30aff9-56a4-49d8-84f6-f3a22994eff5-kube-api-access-mp475\") pod \"dnsmasq-dns-8db9b89b7-w5g8t\" (UID: \"7e30aff9-56a4-49d8-84f6-f3a22994eff5\") " pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.277744 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.300892 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.389372 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-config\") pod \"9751d574-27fb-414c-bfc1-c3c22ddf675d\" (UID: \"9751d574-27fb-414c-bfc1-c3c22ddf675d\") "
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.389789 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4thtb\" (UniqueName: \"kubernetes.io/projected/9751d574-27fb-414c-bfc1-c3c22ddf675d-kube-api-access-4thtb\") pod \"9751d574-27fb-414c-bfc1-c3c22ddf675d\" (UID: \"9751d574-27fb-414c-bfc1-c3c22ddf675d\") "
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.390003 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-ovsdbserver-nb\") pod \"9751d574-27fb-414c-bfc1-c3c22ddf675d\" (UID: \"9751d574-27fb-414c-bfc1-c3c22ddf675d\") "
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.390169 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-dns-svc\") pod \"9751d574-27fb-414c-bfc1-c3c22ddf675d\" (UID: \"9751d574-27fb-414c-bfc1-c3c22ddf675d\") "
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.390291 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-ovsdbserver-sb\") pod \"9751d574-27fb-414c-bfc1-c3c22ddf675d\" (UID: \"9751d574-27fb-414c-bfc1-c3c22ddf675d\") "
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.394878 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9751d574-27fb-414c-bfc1-c3c22ddf675d-kube-api-access-4thtb" (OuterVolumeSpecName: "kube-api-access-4thtb") pod "9751d574-27fb-414c-bfc1-c3c22ddf675d" (UID: "9751d574-27fb-414c-bfc1-c3c22ddf675d"). InnerVolumeSpecName "kube-api-access-4thtb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.470533 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9751d574-27fb-414c-bfc1-c3c22ddf675d" (UID: "9751d574-27fb-414c-bfc1-c3c22ddf675d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.480963 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-config" (OuterVolumeSpecName: "config") pod "9751d574-27fb-414c-bfc1-c3c22ddf675d" (UID: "9751d574-27fb-414c-bfc1-c3c22ddf675d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.487343 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9751d574-27fb-414c-bfc1-c3c22ddf675d" (UID: "9751d574-27fb-414c-bfc1-c3c22ddf675d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.494914 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-config\") on node \"crc\" DevicePath \"\""
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.494945 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4thtb\" (UniqueName: \"kubernetes.io/projected/9751d574-27fb-414c-bfc1-c3c22ddf675d-kube-api-access-4thtb\") on node \"crc\" DevicePath \"\""
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.494973 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.494981 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.498643 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9751d574-27fb-414c-bfc1-c3c22ddf675d" (UID: "9751d574-27fb-414c-bfc1-c3c22ddf675d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.597251 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9751d574-27fb-414c-bfc1-c3c22ddf675d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.628111 4895 generic.go:334] "Generic (PLEG): container finished" podID="9751d574-27fb-414c-bfc1-c3c22ddf675d" containerID="64421d72f741bd029b2ff5caca76fa228a9395c65ffed4ede6d1c4654adfae83" exitCode=0
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.628211 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d956c9f-d22gw"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.628871 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.636322 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4" event={"ID":"9751d574-27fb-414c-bfc1-c3c22ddf675d","Type":"ContainerDied","Data":"64421d72f741bd029b2ff5caca76fa228a9395c65ffed4ede6d1c4654adfae83"}
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.636423 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9f6fd977-kvrq4" event={"ID":"9751d574-27fb-414c-bfc1-c3c22ddf675d","Type":"ContainerDied","Data":"1abcc609a2dcb39857d79c445b77c5981f60252cc2e7b96baf6be8c5c58662a0"}
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.636444 4895 scope.go:117] "RemoveContainer" containerID="64421d72f741bd029b2ff5caca76fa228a9395c65ffed4ede6d1c4654adfae83"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.702306 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d956c9f-d22gw"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.721374 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9f6fd977-kvrq4"]
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.724188 4895 scope.go:117] "RemoveContainer" containerID="4cb51266d9a8929c79ea732b30673cbc0fba6ca56f3c6180e0fe6a7ab2ef00c3"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.729684 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c9f6fd977-kvrq4"]
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.744876 4895 scope.go:117] "RemoveContainer" containerID="64421d72f741bd029b2ff5caca76fa228a9395c65ffed4ede6d1c4654adfae83"
Dec 06 09:16:01 crc kubenswrapper[4895]: E1206 09:16:01.745494 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64421d72f741bd029b2ff5caca76fa228a9395c65ffed4ede6d1c4654adfae83\": container with ID starting with 64421d72f741bd029b2ff5caca76fa228a9395c65ffed4ede6d1c4654adfae83 not found: ID does not exist" containerID="64421d72f741bd029b2ff5caca76fa228a9395c65ffed4ede6d1c4654adfae83"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.745527 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64421d72f741bd029b2ff5caca76fa228a9395c65ffed4ede6d1c4654adfae83"} err="failed to get container status \"64421d72f741bd029b2ff5caca76fa228a9395c65ffed4ede6d1c4654adfae83\": rpc error: code = NotFound desc = could not find container \"64421d72f741bd029b2ff5caca76fa228a9395c65ffed4ede6d1c4654adfae83\": container with ID starting with 64421d72f741bd029b2ff5caca76fa228a9395c65ffed4ede6d1c4654adfae83 not found: ID does not exist"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.745547 4895 scope.go:117] "RemoveContainer" containerID="4cb51266d9a8929c79ea732b30673cbc0fba6ca56f3c6180e0fe6a7ab2ef00c3"
Dec 06 09:16:01 crc kubenswrapper[4895]: E1206 09:16:01.745949 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cb51266d9a8929c79ea732b30673cbc0fba6ca56f3c6180e0fe6a7ab2ef00c3\": container with ID starting with 4cb51266d9a8929c79ea732b30673cbc0fba6ca56f3c6180e0fe6a7ab2ef00c3 not found: ID does not exist" containerID="4cb51266d9a8929c79ea732b30673cbc0fba6ca56f3c6180e0fe6a7ab2ef00c3"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.745974 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cb51266d9a8929c79ea732b30673cbc0fba6ca56f3c6180e0fe6a7ab2ef00c3"} err="failed to get container status \"4cb51266d9a8929c79ea732b30673cbc0fba6ca56f3c6180e0fe6a7ab2ef00c3\": rpc error: code = NotFound desc = could not find container \"4cb51266d9a8929c79ea732b30673cbc0fba6ca56f3c6180e0fe6a7ab2ef00c3\": container with ID starting with 4cb51266d9a8929c79ea732b30673cbc0fba6ca56f3c6180e0fe6a7ab2ef00c3 not found: ID does not exist"
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.801734 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-openstack-cell1\") pod \"b23b8a93-0090-41d5-920d-adffc40e524f\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") "
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.801790 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-config\") pod \"b23b8a93-0090-41d5-920d-adffc40e524f\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") "
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.801827 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-openstack-networker\") pod \"b23b8a93-0090-41d5-920d-adffc40e524f\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") "
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.801929 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-ovsdbserver-sb\") pod \"b23b8a93-0090-41d5-920d-adffc40e524f\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") "
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.802047 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-dns-svc\") pod \"b23b8a93-0090-41d5-920d-adffc40e524f\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") "
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.802114 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jwj6\" (UniqueName: \"kubernetes.io/projected/b23b8a93-0090-41d5-920d-adffc40e524f-kube-api-access-5jwj6\") pod \"b23b8a93-0090-41d5-920d-adffc40e524f\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") "
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.802146 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-ovsdbserver-nb\") pod \"b23b8a93-0090-41d5-920d-adffc40e524f\" (UID: \"b23b8a93-0090-41d5-920d-adffc40e524f\") "
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.802345 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "b23b8a93-0090-41d5-920d-adffc40e524f" (UID: "b23b8a93-0090-41d5-920d-adffc40e524f"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.802712 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-openstack-cell1\") on node \"crc\" DevicePath \"\""
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.802713 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b23b8a93-0090-41d5-920d-adffc40e524f" (UID: "b23b8a93-0090-41d5-920d-adffc40e524f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.803065 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-config" (OuterVolumeSpecName: "config") pod "b23b8a93-0090-41d5-920d-adffc40e524f" (UID: "b23b8a93-0090-41d5-920d-adffc40e524f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.803243 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b23b8a93-0090-41d5-920d-adffc40e524f" (UID: "b23b8a93-0090-41d5-920d-adffc40e524f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.803532 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-openstack-networker" (OuterVolumeSpecName: "openstack-networker") pod "b23b8a93-0090-41d5-920d-adffc40e524f" (UID: "b23b8a93-0090-41d5-920d-adffc40e524f"). InnerVolumeSpecName "openstack-networker". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.803928 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b23b8a93-0090-41d5-920d-adffc40e524f" (UID: "b23b8a93-0090-41d5-920d-adffc40e524f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.818305 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b23b8a93-0090-41d5-920d-adffc40e524f-kube-api-access-5jwj6" (OuterVolumeSpecName: "kube-api-access-5jwj6") pod "b23b8a93-0090-41d5-920d-adffc40e524f" (UID: "b23b8a93-0090-41d5-920d-adffc40e524f"). InnerVolumeSpecName "kube-api-access-5jwj6".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.820585 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db9b89b7-w5g8t"] Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.905086 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-config\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.905136 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-openstack-networker\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.905149 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.905160 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.905172 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jwj6\" (UniqueName: \"kubernetes.io/projected/b23b8a93-0090-41d5-920d-adffc40e524f-kube-api-access-5jwj6\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:01 crc kubenswrapper[4895]: I1206 09:16:01.905184 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b23b8a93-0090-41d5-920d-adffc40e524f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:02 crc kubenswrapper[4895]: I1206 09:16:02.061774 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9751d574-27fb-414c-bfc1-c3c22ddf675d" path="/var/lib/kubelet/pods/9751d574-27fb-414c-bfc1-c3c22ddf675d/volumes" Dec 06 09:16:02 crc kubenswrapper[4895]: I1206 09:16:02.062805 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7d34db9-637c-4d97-a22b-0853b943a309" path="/var/lib/kubelet/pods/f7d34db9-637c-4d97-a22b-0853b943a309/volumes" Dec 06 09:16:02 crc kubenswrapper[4895]: E1206 09:16:02.373274 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb23b8a93_0090_41d5_920d_adffc40e524f.slice\": RecentStats: unable to find data in memory cache]" Dec 06 09:16:02 crc kubenswrapper[4895]: I1206 09:16:02.638005 4895 generic.go:334] "Generic (PLEG): container finished" podID="7e30aff9-56a4-49d8-84f6-f3a22994eff5" containerID="b57c2b1a8b491b88101adfdd63424eab34677578b74633548c4717dd831dda0a" exitCode=0 Dec 06 09:16:02 crc kubenswrapper[4895]: I1206 09:16:02.638131 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t" event={"ID":"7e30aff9-56a4-49d8-84f6-f3a22994eff5","Type":"ContainerDied","Data":"b57c2b1a8b491b88101adfdd63424eab34677578b74633548c4717dd831dda0a"} Dec 06 09:16:02 crc kubenswrapper[4895]: I1206 09:16:02.638407 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t" event={"ID":"7e30aff9-56a4-49d8-84f6-f3a22994eff5","Type":"ContainerStarted","Data":"5750dad2cc9e71eac698414a6c14f72cbe61d370ee21d78a570b4ed950a52aa0"} Dec 06 
09:16:02 crc kubenswrapper[4895]: I1206 09:16:02.651709 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d956c9f-d22gw" Dec 06 09:16:02 crc kubenswrapper[4895]: I1206 09:16:02.855573 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d956c9f-d22gw"] Dec 06 09:16:02 crc kubenswrapper[4895]: I1206 09:16:02.862642 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d956c9f-d22gw"] Dec 06 09:16:03 crc kubenswrapper[4895]: I1206 09:16:03.678893 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t" event={"ID":"7e30aff9-56a4-49d8-84f6-f3a22994eff5","Type":"ContainerStarted","Data":"3e31a99fb7ce0a40433cd4b70dbf703e182429f72d6ab35bdf5f2a4fdf0d63eb"} Dec 06 09:16:03 crc kubenswrapper[4895]: I1206 09:16:03.679629 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t" Dec 06 09:16:03 crc kubenswrapper[4895]: I1206 09:16:03.711690 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t" podStartSLOduration=3.7116665859999998 podStartE2EDuration="3.711666586s" podCreationTimestamp="2025-12-06 09:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:16:03.702019207 +0000 UTC m=+8326.103408077" watchObservedRunningTime="2025-12-06 09:16:03.711666586 +0000 UTC m=+8326.113055456" Dec 06 09:16:04 crc kubenswrapper[4895]: I1206 09:16:04.062698 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b23b8a93-0090-41d5-920d-adffc40e524f" path="/var/lib/kubelet/pods/b23b8a93-0090-41d5-920d-adffc40e524f/volumes" Dec 06 09:16:07 crc kubenswrapper[4895]: I1206 09:16:07.424023 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8db9b89b7-w5g8t" Dec 06 09:16:07 crc kubenswrapper[4895]: I1206 09:16:07.537754 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bdc4bcf9-2gjfd"] Dec 06 09:16:07 crc kubenswrapper[4895]: I1206 09:16:07.538264 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" podUID="39edd877-f997-45cd-9033-59e390656ef9" containerName="dnsmasq-dns" containerID="cri-o://edf5e39571b1db843221af8ea89fce816b0edc70027522c6c6162a18c0b30eed" gracePeriod=10 Dec 06 09:16:07 crc kubenswrapper[4895]: I1206 09:16:07.783285 4895 generic.go:334] "Generic (PLEG): container finished" podID="39edd877-f997-45cd-9033-59e390656ef9" containerID="edf5e39571b1db843221af8ea89fce816b0edc70027522c6c6162a18c0b30eed" exitCode=0 Dec 06 09:16:07 crc kubenswrapper[4895]: I1206 09:16:07.783327 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" event={"ID":"39edd877-f997-45cd-9033-59e390656ef9","Type":"ContainerDied","Data":"edf5e39571b1db843221af8ea89fce816b0edc70027522c6c6162a18c0b30eed"} Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.226489 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.334127 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-ovsdbserver-sb\") pod \"39edd877-f997-45cd-9033-59e390656ef9\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.334173 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-openstack-cell1\") pod \"39edd877-f997-45cd-9033-59e390656ef9\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.334230 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-dns-svc\") pod \"39edd877-f997-45cd-9033-59e390656ef9\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.334255 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-config\") pod \"39edd877-f997-45cd-9033-59e390656ef9\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.334284 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-openstack-networker\") pod \"39edd877-f997-45cd-9033-59e390656ef9\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.334347 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-ovsdbserver-nb\") pod \"39edd877-f997-45cd-9033-59e390656ef9\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.334435 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xms7d\" (UniqueName: \"kubernetes.io/projected/39edd877-f997-45cd-9033-59e390656ef9-kube-api-access-xms7d\") pod \"39edd877-f997-45cd-9033-59e390656ef9\" (UID: \"39edd877-f997-45cd-9033-59e390656ef9\") " Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.365764 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39edd877-f997-45cd-9033-59e390656ef9-kube-api-access-xms7d" (OuterVolumeSpecName: "kube-api-access-xms7d") pod "39edd877-f997-45cd-9033-59e390656ef9" (UID: "39edd877-f997-45cd-9033-59e390656ef9"). InnerVolumeSpecName "kube-api-access-xms7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.409116 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "39edd877-f997-45cd-9033-59e390656ef9" (UID: "39edd877-f997-45cd-9033-59e390656ef9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.409926 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-config" (OuterVolumeSpecName: "config") pod "39edd877-f997-45cd-9033-59e390656ef9" (UID: "39edd877-f997-45cd-9033-59e390656ef9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.417741 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-openstack-networker" (OuterVolumeSpecName: "openstack-networker") pod "39edd877-f997-45cd-9033-59e390656ef9" (UID: "39edd877-f997-45cd-9033-59e390656ef9"). InnerVolumeSpecName "openstack-networker". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.420127 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "39edd877-f997-45cd-9033-59e390656ef9" (UID: "39edd877-f997-45cd-9033-59e390656ef9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.424651 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "39edd877-f997-45cd-9033-59e390656ef9" (UID: "39edd877-f997-45cd-9033-59e390656ef9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.434389 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "39edd877-f997-45cd-9033-59e390656ef9" (UID: "39edd877-f997-45cd-9033-59e390656ef9"). InnerVolumeSpecName "openstack-cell1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.437547 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xms7d\" (UniqueName: \"kubernetes.io/projected/39edd877-f997-45cd-9033-59e390656ef9-kube-api-access-xms7d\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.437573 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.437581 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-openstack-cell1\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.437591 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.437600 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-config\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.437608 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-openstack-networker\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.437616 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39edd877-f997-45cd-9033-59e390656ef9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.794126 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" event={"ID":"39edd877-f997-45cd-9033-59e390656ef9","Type":"ContainerDied","Data":"382e24111b22785652965269a323d02b5adee0ed84b87e9a80bba46f3472acda"} Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.794179 4895 scope.go:117] "RemoveContainer" containerID="edf5e39571b1db843221af8ea89fce816b0edc70027522c6c6162a18c0b30eed" Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.794179 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bdc4bcf9-2gjfd" Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.826854 4895 scope.go:117] "RemoveContainer" containerID="791c0c2cd4da1f92190750172fcfc4c3b7dae3548a1984facfe6231b921dad32" Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.827755 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bdc4bcf9-2gjfd"] Dec 06 09:16:08 crc kubenswrapper[4895]: I1206 09:16:08.836395 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bdc4bcf9-2gjfd"] Dec 06 09:16:10 crc kubenswrapper[4895]: I1206 09:16:10.062972 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39edd877-f997-45cd-9033-59e390656ef9" path="/var/lib/kubelet/pods/39edd877-f997-45cd-9033-59e390656ef9/volumes" Dec 06 09:16:18 crc kubenswrapper[4895]: I1206 09:16:18.849320 4895 scope.go:117] "RemoveContainer" containerID="4b85317d9cd87b7c550451ad40f3db5b296dc62ebe5071eae958fca013602a35" Dec 06 09:16:18 crc kubenswrapper[4895]: I1206 09:16:18.884102 4895 scope.go:117] "RemoveContainer" containerID="9b7462671cfcd74edba7ca2390b3db2e7054326163f59bc53573ccbc61bf0ba5" Dec 06 09:16:18 crc kubenswrapper[4895]: I1206 09:16:18.970752 4895 scope.go:117] "RemoveContainer" containerID="90fc01dbdea20cc6705f742250a58cd8fef3ba87d88a3ea51fb37a64f6765854" Dec 06 09:16:19 crc kubenswrapper[4895]: I1206 09:16:19.007143 4895 scope.go:117] "RemoveContainer" containerID="e5723c7657c3008289be5672fa31c24d62b9455a03113581c116048fe40c2f4c" Dec 06 09:16:19 crc kubenswrapper[4895]: I1206 09:16:19.044294 4895 scope.go:117] "RemoveContainer" containerID="5ed21c0b6445c91be94ac09f40e6f7c3248b494cbfe247c3730c1d96363ed040" Dec 06 09:16:19 crc kubenswrapper[4895]: I1206 09:16:19.096676 4895 scope.go:117] "RemoveContainer" containerID="fafa6883662a6b3edb0e2c43c2b4b75c411533d75c27c1988946ebd3311440e9" Dec 06 09:16:19 crc kubenswrapper[4895]: I1206 09:16:19.134275 4895 scope.go:117] "RemoveContainer" containerID="714a7ca166d4072815eb288e46f512ddc94e2a7a3b34758973cd84dcab073eb0" Dec 06 09:16:21 crc kubenswrapper[4895]: I1206 09:16:21.061205 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-5m5mq"] Dec 06 09:16:21 crc kubenswrapper[4895]: I1206 09:16:21.075985 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-5m5mq"] Dec 06 09:16:21 crc kubenswrapper[4895]: I1206 09:16:21.902833 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj"] Dec 06 09:16:21 crc kubenswrapper[4895]: E1206 09:16:21.903370 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39edd877-f997-45cd-9033-59e390656ef9" containerName="dnsmasq-dns" Dec 06 09:16:21 crc kubenswrapper[4895]: I1206 09:16:21.903392 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="39edd877-f997-45cd-9033-59e390656ef9" containerName="dnsmasq-dns" Dec 06 09:16:21 crc kubenswrapper[4895]: E1206 09:16:21.903411 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39edd877-f997-45cd-9033-59e390656ef9" containerName="init" Dec 06 09:16:21 crc kubenswrapper[4895]: I1206 09:16:21.903419 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="39edd877-f997-45cd-9033-59e390656ef9" containerName="init" Dec 06 09:16:21 crc kubenswrapper[4895]: E1206 09:16:21.903454 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9751d574-27fb-414c-bfc1-c3c22ddf675d" 
containerName="dnsmasq-dns" Dec 06 09:16:21 crc kubenswrapper[4895]: I1206 09:16:21.903462 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9751d574-27fb-414c-bfc1-c3c22ddf675d" containerName="dnsmasq-dns" Dec 06 09:16:21 crc kubenswrapper[4895]: E1206 09:16:21.903498 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9751d574-27fb-414c-bfc1-c3c22ddf675d" containerName="init" Dec 06 09:16:21 crc kubenswrapper[4895]: I1206 09:16:21.903507 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9751d574-27fb-414c-bfc1-c3c22ddf675d" containerName="init" Dec 06 09:16:21 crc kubenswrapper[4895]: I1206 09:16:21.903809 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="39edd877-f997-45cd-9033-59e390656ef9" containerName="dnsmasq-dns" Dec 06 09:16:21 crc kubenswrapper[4895]: I1206 09:16:21.903843 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="9751d574-27fb-414c-bfc1-c3c22ddf675d" containerName="dnsmasq-dns" Dec 06 09:16:21 crc kubenswrapper[4895]: I1206 09:16:21.904805 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" Dec 06 09:16:21 crc kubenswrapper[4895]: I1206 09:16:21.906763 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-wfk68" Dec 06 09:16:21 crc kubenswrapper[4895]: I1206 09:16:21.906992 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 06 09:16:21 crc kubenswrapper[4895]: I1206 09:16:21.907075 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 09:16:21 crc kubenswrapper[4895]: I1206 09:16:21.911631 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 06 09:16:21 crc kubenswrapper[4895]: I1206 09:16:21.914557 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m"] Dec 06 09:16:21 crc kubenswrapper[4895]: I1206 09:16:21.916215 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" Dec 06 09:16:21 crc kubenswrapper[4895]: I1206 09:16:21.918367 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Dec 06 09:16:21 crc kubenswrapper[4895]: I1206 09:16:21.918380 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-vvkpz" Dec 06 09:16:21 crc kubenswrapper[4895]: I1206 09:16:21.944762 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m"] Dec 06 09:16:21 crc kubenswrapper[4895]: I1206 09:16:21.954886 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj"] Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.016890 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfd809c7-a514-4771-bd5c-1e327cddfd8a-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m\" (UID: \"cfd809c7-a514-4771-bd5c-1e327cddfd8a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.017040 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfd809c7-a514-4771-bd5c-1e327cddfd8a-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m\" (UID: \"cfd809c7-a514-4771-bd5c-1e327cddfd8a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.017299 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj\" (UID: \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.017365 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj\" (UID: \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.017542 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj\" (UID: \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.017624 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjqsj\" (UniqueName: 
\"kubernetes.io/projected/cfd809c7-a514-4771-bd5c-1e327cddfd8a-kube-api-access-zjqsj\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m\" (UID: \"cfd809c7-a514-4771-bd5c-1e327cddfd8a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.017679 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqz9x\" (UniqueName: \"kubernetes.io/projected/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-kube-api-access-sqz9x\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj\" (UID: \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.017740 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfd809c7-a514-4771-bd5c-1e327cddfd8a-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m\" (UID: \"cfd809c7-a514-4771-bd5c-1e327cddfd8a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.017854 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj\" (UID: \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.032690 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mrqc6"] Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.041095 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mrqc6"] Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.064231 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58a0d047-940d-4594-aff1-4a8e67fe8fdc" path="/var/lib/kubelet/pods/58a0d047-940d-4594-aff1-4a8e67fe8fdc/volumes" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.065576 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edbfe07-8c96-4516-9eb4-e499e9a060f2" path="/var/lib/kubelet/pods/6edbfe07-8c96-4516-9eb4-e499e9a060f2/volumes" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.120076 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfd809c7-a514-4771-bd5c-1e327cddfd8a-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m\" (UID: \"cfd809c7-a514-4771-bd5c-1e327cddfd8a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.120162 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj\" (UID: \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.120363 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfd809c7-a514-4771-bd5c-1e327cddfd8a-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m\" (UID: \"cfd809c7-a514-4771-bd5c-1e327cddfd8a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.120410 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfd809c7-a514-4771-bd5c-1e327cddfd8a-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m\" (UID: \"cfd809c7-a514-4771-bd5c-1e327cddfd8a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.120502 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj\" (UID: \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.120543 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj\" (UID: \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.120603 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj\" (UID: \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.120667 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjqsj\" (UniqueName: \"kubernetes.io/projected/cfd809c7-a514-4771-bd5c-1e327cddfd8a-kube-api-access-zjqsj\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m\" (UID: \"cfd809c7-a514-4771-bd5c-1e327cddfd8a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.120708 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqz9x\" (UniqueName: \"kubernetes.io/projected/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-kube-api-access-sqz9x\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj\" (UID: \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.126452 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj\" (UID: \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\") " 
pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.126909 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj\" (UID: \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.127015 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfd809c7-a514-4771-bd5c-1e327cddfd8a-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m\" (UID: \"cfd809c7-a514-4771-bd5c-1e327cddfd8a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.127602 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfd809c7-a514-4771-bd5c-1e327cddfd8a-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m\" (UID: \"cfd809c7-a514-4771-bd5c-1e327cddfd8a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.140560 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj\" (UID: \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.141337 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfd809c7-a514-4771-bd5c-1e327cddfd8a-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m\" (UID: \"cfd809c7-a514-4771-bd5c-1e327cddfd8a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.141570 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj\" (UID: \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.151138 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjqsj\" (UniqueName: \"kubernetes.io/projected/cfd809c7-a514-4771-bd5c-1e327cddfd8a-kube-api-access-zjqsj\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m\" (UID: \"cfd809c7-a514-4771-bd5c-1e327cddfd8a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.155719 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqz9x\" (UniqueName: \"kubernetes.io/projected/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-kube-api-access-sqz9x\") 
pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj\" (UID: \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.225891 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.241815 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.894992 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m"] Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.957253 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" event={"ID":"cfd809c7-a514-4771-bd5c-1e327cddfd8a","Type":"ContainerStarted","Data":"dceeb92c974c69d713774effeab93388841813b2520c253c6cdae4ba6d18740f"} Dec 06 09:16:22 crc kubenswrapper[4895]: I1206 09:16:22.977853 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj"] Dec 06 09:16:23 crc kubenswrapper[4895]: I1206 09:16:23.970889 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" event={"ID":"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8","Type":"ContainerStarted","Data":"d32ff691fb3e5c24be2eaf72989356ff961d695fa7299e7952c25d7d3c39992e"} Dec 06 09:16:37 crc kubenswrapper[4895]: I1206 09:16:37.334907 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" event={"ID":"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8","Type":"ContainerStarted","Data":"0e434e66a2a12e7eae7b80537b5319d689e42a1212cee7a5e924e09dd4b9194b"} Dec 06 09:16:37 crc kubenswrapper[4895]: I1206 09:16:37.338285 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" event={"ID":"cfd809c7-a514-4771-bd5c-1e327cddfd8a","Type":"ContainerStarted","Data":"1664f5835e34b3d6aa3e10d94c6845177bdf1d6054097c3b37ee12639c934c12"} Dec 06 09:16:37 crc kubenswrapper[4895]: I1206 09:16:37.380809 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" podStartSLOduration=3.7977517929999998 podStartE2EDuration="16.380787883s" podCreationTimestamp="2025-12-06 09:16:21 +0000 UTC" firstStartedPulling="2025-12-06 09:16:22.906070131 +0000 UTC m=+8345.307459011" lastFinishedPulling="2025-12-06 09:16:35.489106231 +0000 UTC m=+8357.890495101" observedRunningTime="2025-12-06 09:16:37.379034526 +0000 UTC m=+8359.780423396" watchObservedRunningTime="2025-12-06 09:16:37.380787883 +0000 UTC m=+8359.782176753" Dec 06 09:16:37 crc kubenswrapper[4895]: I1206 09:16:37.382440 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" podStartSLOduration=3.80987419 podStartE2EDuration="16.382431557s" podCreationTimestamp="2025-12-06 09:16:21 +0000 UTC" firstStartedPulling="2025-12-06 09:16:22.982836024 +0000 UTC m=+8345.384224904" lastFinishedPulling="2025-12-06 
09:16:35.555393401 +0000 UTC m=+8357.956782271" observedRunningTime="2025-12-06 09:16:37.358549926 +0000 UTC m=+8359.759938796" watchObservedRunningTime="2025-12-06 09:16:37.382431557 +0000 UTC m=+8359.783820427" Dec 06 09:16:41 crc kubenswrapper[4895]: I1206 09:16:41.058769 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-xh7zd"] Dec 06 09:16:41 crc kubenswrapper[4895]: I1206 09:16:41.069289 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-xh7zd"] Dec 06 09:16:42 crc kubenswrapper[4895]: I1206 09:16:42.065070 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cb73be7-9428-40ee-826b-f1af8a0c1838" path="/var/lib/kubelet/pods/1cb73be7-9428-40ee-826b-f1af8a0c1838/volumes" Dec 06 09:16:47 crc kubenswrapper[4895]: I1206 09:16:47.431587 4895 generic.go:334] "Generic (PLEG): container finished" podID="cfd809c7-a514-4771-bd5c-1e327cddfd8a" containerID="1664f5835e34b3d6aa3e10d94c6845177bdf1d6054097c3b37ee12639c934c12" exitCode=0 Dec 06 09:16:47 crc kubenswrapper[4895]: I1206 09:16:47.431708 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" event={"ID":"cfd809c7-a514-4771-bd5c-1e327cddfd8a","Type":"ContainerDied","Data":"1664f5835e34b3d6aa3e10d94c6845177bdf1d6054097c3b37ee12639c934c12"} Dec 06 09:16:47 crc kubenswrapper[4895]: I1206 09:16:47.434520 4895 generic.go:334] "Generic (PLEG): container finished" podID="0ccc31a6-65f8-4e38-b5c2-6d817a8508f8" containerID="0e434e66a2a12e7eae7b80537b5319d689e42a1212cee7a5e924e09dd4b9194b" exitCode=0 Dec 06 09:16:47 crc kubenswrapper[4895]: I1206 09:16:47.434566 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" event={"ID":"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8","Type":"ContainerDied","Data":"0e434e66a2a12e7eae7b80537b5319d689e42a1212cee7a5e924e09dd4b9194b"} Dec 06 09:16:48 crc kubenswrapper[4895]: I1206 09:16:48.916767 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.024685 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-ssh-key\") pod \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\" (UID: \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\") " Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.024870 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-pre-adoption-validation-combined-ca-bundle\") pod \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\" (UID: \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\") " Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.024951 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqz9x\" (UniqueName: \"kubernetes.io/projected/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-kube-api-access-sqz9x\") pod \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\" (UID: \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\") " Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.025014 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-inventory\") pod \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\" (UID: \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\") " Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.025072 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-ceph\") pod \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\" (UID: \"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8\") " Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.030121 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-ceph" (OuterVolumeSpecName: "ceph") pod "0ccc31a6-65f8-4e38-b5c2-6d817a8508f8" (UID: "0ccc31a6-65f8-4e38-b5c2-6d817a8508f8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.030347 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-kube-api-access-sqz9x" (OuterVolumeSpecName: "kube-api-access-sqz9x") pod "0ccc31a6-65f8-4e38-b5c2-6d817a8508f8" (UID: "0ccc31a6-65f8-4e38-b5c2-6d817a8508f8"). InnerVolumeSpecName "kube-api-access-sqz9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.032049 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "0ccc31a6-65f8-4e38-b5c2-6d817a8508f8" (UID: "0ccc31a6-65f8-4e38-b5c2-6d817a8508f8"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.052552 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0ccc31a6-65f8-4e38-b5c2-6d817a8508f8" (UID: "0ccc31a6-65f8-4e38-b5c2-6d817a8508f8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.053004 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-inventory" (OuterVolumeSpecName: "inventory") pod "0ccc31a6-65f8-4e38-b5c2-6d817a8508f8" (UID: "0ccc31a6-65f8-4e38-b5c2-6d817a8508f8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.127667 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqz9x\" (UniqueName: \"kubernetes.io/projected/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-kube-api-access-sqz9x\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.127699 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.127710 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.127720 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.127731 4895 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ccc31a6-65f8-4e38-b5c2-6d817a8508f8-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.454409 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.454411 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj" event={"ID":"0ccc31a6-65f8-4e38-b5c2-6d817a8508f8","Type":"ContainerDied","Data":"d32ff691fb3e5c24be2eaf72989356ff961d695fa7299e7952c25d7d3c39992e"} Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.454469 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d32ff691fb3e5c24be2eaf72989356ff961d695fa7299e7952c25d7d3c39992e" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.457173 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" event={"ID":"cfd809c7-a514-4771-bd5c-1e327cddfd8a","Type":"ContainerDied","Data":"dceeb92c974c69d713774effeab93388841813b2520c253c6cdae4ba6d18740f"} Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.457223 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dceeb92c974c69d713774effeab93388841813b2520c253c6cdae4ba6d18740f" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.458651 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.535887 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfd809c7-a514-4771-bd5c-1e327cddfd8a-pre-adoption-validation-combined-ca-bundle\") pod \"cfd809c7-a514-4771-bd5c-1e327cddfd8a\" (UID: \"cfd809c7-a514-4771-bd5c-1e327cddfd8a\") " Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.536993 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjqsj\" (UniqueName: \"kubernetes.io/projected/cfd809c7-a514-4771-bd5c-1e327cddfd8a-kube-api-access-zjqsj\") pod \"cfd809c7-a514-4771-bd5c-1e327cddfd8a\" (UID: \"cfd809c7-a514-4771-bd5c-1e327cddfd8a\") " Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.537159 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfd809c7-a514-4771-bd5c-1e327cddfd8a-inventory\") pod \"cfd809c7-a514-4771-bd5c-1e327cddfd8a\" (UID: \"cfd809c7-a514-4771-bd5c-1e327cddfd8a\") " Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.537306 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfd809c7-a514-4771-bd5c-1e327cddfd8a-ssh-key\") pod \"cfd809c7-a514-4771-bd5c-1e327cddfd8a\" (UID: \"cfd809c7-a514-4771-bd5c-1e327cddfd8a\") " Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.539261 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfd809c7-a514-4771-bd5c-1e327cddfd8a-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "cfd809c7-a514-4771-bd5c-1e327cddfd8a" (UID: "cfd809c7-a514-4771-bd5c-1e327cddfd8a"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.545571 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfd809c7-a514-4771-bd5c-1e327cddfd8a-kube-api-access-zjqsj" (OuterVolumeSpecName: "kube-api-access-zjqsj") pod "cfd809c7-a514-4771-bd5c-1e327cddfd8a" (UID: "cfd809c7-a514-4771-bd5c-1e327cddfd8a"). InnerVolumeSpecName "kube-api-access-zjqsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.566163 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfd809c7-a514-4771-bd5c-1e327cddfd8a-inventory" (OuterVolumeSpecName: "inventory") pod "cfd809c7-a514-4771-bd5c-1e327cddfd8a" (UID: "cfd809c7-a514-4771-bd5c-1e327cddfd8a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.582146 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfd809c7-a514-4771-bd5c-1e327cddfd8a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cfd809c7-a514-4771-bd5c-1e327cddfd8a" (UID: "cfd809c7-a514-4771-bd5c-1e327cddfd8a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.641005 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfd809c7-a514-4771-bd5c-1e327cddfd8a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.641049 4895 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfd809c7-a514-4771-bd5c-1e327cddfd8a-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.641061 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjqsj\" (UniqueName: \"kubernetes.io/projected/cfd809c7-a514-4771-bd5c-1e327cddfd8a-kube-api-access-zjqsj\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:49 crc kubenswrapper[4895]: I1206 09:16:49.641071 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfd809c7-a514-4771-bd5c-1e327cddfd8a-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:16:50 crc kubenswrapper[4895]: I1206 09:16:50.469016 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.822556 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv"] Dec 06 09:16:54 crc kubenswrapper[4895]: E1206 09:16:54.823675 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ccc31a6-65f8-4e38-b5c2-6d817a8508f8" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.823695 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ccc31a6-65f8-4e38-b5c2-6d817a8508f8" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Dec 06 09:16:54 crc kubenswrapper[4895]: E1206 09:16:54.823726 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd809c7-a514-4771-bd5c-1e327cddfd8a" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-networ" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.823735 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd809c7-a514-4771-bd5c-1e327cddfd8a" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-networ" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.823996 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ccc31a6-65f8-4e38-b5c2-6d817a8508f8" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.824039 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfd809c7-a514-4771-bd5c-1e327cddfd8a" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-networ" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.825039 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.829238 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.829493 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.829662 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-wfk68" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.829798 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.838786 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv"] Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.840319 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.843397 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.843619 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-vvkpz" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.875734 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv"] Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.886509 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv"] Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.970425 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7108ac74-5da3-451e-811b-384e786863ec-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv\" (UID: \"7108ac74-5da3-451e-811b-384e786863ec\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.970496 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8rml\" (UniqueName: \"kubernetes.io/projected/710bdda9-c040-4731-b0cf-dce648cb6c9e-kube-api-access-z8rml\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv\" (UID: \"710bdda9-c040-4731-b0cf-dce648cb6c9e\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.970562 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7108ac74-5da3-451e-811b-384e786863ec-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv\" (UID: \"7108ac74-5da3-451e-811b-384e786863ec\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.970626 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv\" (UID: \"710bdda9-c040-4731-b0cf-dce648cb6c9e\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.970658 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrtkn\" (UniqueName: \"kubernetes.io/projected/7108ac74-5da3-451e-811b-384e786863ec-kube-api-access-nrtkn\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv\" (UID: \"7108ac74-5da3-451e-811b-384e786863ec\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.970729 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-inventory\") pod 
\"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv\" (UID: \"710bdda9-c040-4731-b0cf-dce648cb6c9e\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.970757 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv\" (UID: \"710bdda9-c040-4731-b0cf-dce648cb6c9e\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.970910 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv\" (UID: \"710bdda9-c040-4731-b0cf-dce648cb6c9e\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" Dec 06 09:16:54 crc kubenswrapper[4895]: I1206 09:16:54.971063 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7108ac74-5da3-451e-811b-384e786863ec-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv\" (UID: \"7108ac74-5da3-451e-811b-384e786863ec\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.073288 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv\" (UID: \"710bdda9-c040-4731-b0cf-dce648cb6c9e\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.073345 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv\" (UID: \"710bdda9-c040-4731-b0cf-dce648cb6c9e\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.073371 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv\" (UID: \"710bdda9-c040-4731-b0cf-dce648cb6c9e\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.073403 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7108ac74-5da3-451e-811b-384e786863ec-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv\" (UID: \"7108ac74-5da3-451e-811b-384e786863ec\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.073460 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7108ac74-5da3-451e-811b-384e786863ec-tripleo-cleanup-combined-ca-bundle\") pod 
\"tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv\" (UID: \"7108ac74-5da3-451e-811b-384e786863ec\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.073504 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8rml\" (UniqueName: \"kubernetes.io/projected/710bdda9-c040-4731-b0cf-dce648cb6c9e-kube-api-access-z8rml\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv\" (UID: \"710bdda9-c040-4731-b0cf-dce648cb6c9e\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.073541 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7108ac74-5da3-451e-811b-384e786863ec-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv\" (UID: \"7108ac74-5da3-451e-811b-384e786863ec\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.073567 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv\" (UID: \"710bdda9-c040-4731-b0cf-dce648cb6c9e\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.073625 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrtkn\" (UniqueName: \"kubernetes.io/projected/7108ac74-5da3-451e-811b-384e786863ec-kube-api-access-nrtkn\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv\" (UID: \"7108ac74-5da3-451e-811b-384e786863ec\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.080983 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv\" (UID: \"710bdda9-c040-4731-b0cf-dce648cb6c9e\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.081433 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv\" (UID: \"710bdda9-c040-4731-b0cf-dce648cb6c9e\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.081819 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv\" (UID: \"710bdda9-c040-4731-b0cf-dce648cb6c9e\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.082328 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7108ac74-5da3-451e-811b-384e786863ec-inventory\") pod 
\"tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv\" (UID: \"7108ac74-5da3-451e-811b-384e786863ec\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.089998 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7108ac74-5da3-451e-811b-384e786863ec-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv\" (UID: \"7108ac74-5da3-451e-811b-384e786863ec\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.091161 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv\" (UID: \"710bdda9-c040-4731-b0cf-dce648cb6c9e\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.091860 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8rml\" (UniqueName: \"kubernetes.io/projected/710bdda9-c040-4731-b0cf-dce648cb6c9e-kube-api-access-z8rml\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv\" (UID: \"710bdda9-c040-4731-b0cf-dce648cb6c9e\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.092277 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7108ac74-5da3-451e-811b-384e786863ec-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv\" (UID: \"7108ac74-5da3-451e-811b-384e786863ec\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.096737 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrtkn\" (UniqueName: \"kubernetes.io/projected/7108ac74-5da3-451e-811b-384e786863ec-kube-api-access-nrtkn\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv\" (UID: \"7108ac74-5da3-451e-811b-384e786863ec\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.155890 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.170504 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.726333 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv"] Dec 06 09:16:55 crc kubenswrapper[4895]: I1206 09:16:55.809380 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv"] Dec 06 09:16:55 crc kubenswrapper[4895]: W1206 09:16:55.810958 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod710bdda9_c040_4731_b0cf_dce648cb6c9e.slice/crio-1a394951e641f08193473cf243a276f61212e7f319ecd7561f8faf59ef04a73e WatchSource:0}: Error finding container 1a394951e641f08193473cf243a276f61212e7f319ecd7561f8faf59ef04a73e: Status 404 returned error can't find the container with id 1a394951e641f08193473cf243a276f61212e7f319ecd7561f8faf59ef04a73e Dec 06 09:16:56 crc kubenswrapper[4895]: I1206 09:16:56.533509 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" event={"ID":"710bdda9-c040-4731-b0cf-dce648cb6c9e","Type":"ContainerStarted","Data":"91a1a2d28f6084e8f6d67770b952675de730b747c57eed24ceb02be2bd4ae7e5"} Dec 06 09:16:56 crc kubenswrapper[4895]: I1206 09:16:56.533799 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" event={"ID":"710bdda9-c040-4731-b0cf-dce648cb6c9e","Type":"ContainerStarted","Data":"1a394951e641f08193473cf243a276f61212e7f319ecd7561f8faf59ef04a73e"} Dec 06 09:16:56 crc kubenswrapper[4895]: I1206 09:16:56.534952 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" event={"ID":"7108ac74-5da3-451e-811b-384e786863ec","Type":"ContainerStarted","Data":"18837d8027bf9702122ef14262bf36606983fd70e7957e8a5a4359ff9ab8bb56"} Dec 06 09:16:56 crc kubenswrapper[4895]: I1206 09:16:56.534978 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" event={"ID":"7108ac74-5da3-451e-811b-384e786863ec","Type":"ContainerStarted","Data":"1725995c0bffd8783f4321d5bc0629c126ef398d544de33378921ae4ce5fd37b"} Dec 06 09:16:56 crc kubenswrapper[4895]: I1206 09:16:56.549226 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" podStartSLOduration=2.105467057 podStartE2EDuration="2.549208579s" podCreationTimestamp="2025-12-06 09:16:54 +0000 UTC" firstStartedPulling="2025-12-06 09:16:55.815088076 +0000 UTC m=+8378.216476946" lastFinishedPulling="2025-12-06 09:16:56.258829598 +0000 UTC m=+8378.660218468" observedRunningTime="2025-12-06 09:16:56.54887273 +0000 UTC m=+8378.950261610" watchObservedRunningTime="2025-12-06 09:16:56.549208579 +0000 UTC m=+8378.950597449" Dec 06 09:16:56 crc kubenswrapper[4895]: I1206 09:16:56.583582 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" podStartSLOduration=2.186141395 podStartE2EDuration="2.583561102s" podCreationTimestamp="2025-12-06 09:16:54 +0000 UTC" firstStartedPulling="2025-12-06 09:16:55.745667061 +0000 UTC m=+8378.147055941" lastFinishedPulling="2025-12-06 09:16:56.143086738 +0000 UTC m=+8378.544475648" 
observedRunningTime="2025-12-06 09:16:56.575394043 +0000 UTC m=+8378.976782943" watchObservedRunningTime="2025-12-06 09:16:56.583561102 +0000 UTC m=+8378.984949972" Dec 06 09:17:19 crc kubenswrapper[4895]: I1206 09:17:19.337878 4895 scope.go:117] "RemoveContainer" containerID="ed4a8d5a4d75eab2c5dc35b2869a835da7cecba5dd0c664cd363e91c056df55f" Dec 06 09:17:19 crc kubenswrapper[4895]: I1206 09:17:19.409450 4895 scope.go:117] "RemoveContainer" containerID="bd19d9c1e03905df237163a2c9568fef61091a12b6e2f8ab72035c50018f8279" Dec 06 09:17:19 crc kubenswrapper[4895]: I1206 09:17:19.448598 4895 scope.go:117] "RemoveContainer" containerID="03170f751ebc5a56f7e888d174a47cbb75edd0936a2e70c214be74c650bbb650" Dec 06 09:17:24 crc kubenswrapper[4895]: I1206 09:17:24.092403 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-899b-account-create-update-67t5l"] Dec 06 09:17:24 crc kubenswrapper[4895]: I1206 09:17:24.095003 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-kjp4n"] Dec 06 09:17:24 crc kubenswrapper[4895]: I1206 09:17:24.109335 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-899b-account-create-update-67t5l"] Dec 06 09:17:24 crc kubenswrapper[4895]: I1206 09:17:24.124338 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-kjp4n"] Dec 06 09:17:26 crc kubenswrapper[4895]: I1206 09:17:26.068442 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19aaae59-6595-4164-8f28-f2bec39e3b96" path="/var/lib/kubelet/pods/19aaae59-6595-4164-8f28-f2bec39e3b96/volumes" Dec 06 09:17:26 crc kubenswrapper[4895]: I1206 09:17:26.069458 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27528cdd-37d6-487a-9987-37d2b13a199c" path="/var/lib/kubelet/pods/27528cdd-37d6-487a-9987-37d2b13a199c/volumes" Dec 06 09:17:59 crc kubenswrapper[4895]: I1206 09:17:59.695849 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:17:59 crc kubenswrapper[4895]: I1206 09:17:59.696395 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:18:05 crc kubenswrapper[4895]: I1206 09:18:05.055976 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-wmtqd"] Dec 06 09:18:05 crc kubenswrapper[4895]: I1206 09:18:05.068318 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-wmtqd"] Dec 06 09:18:06 crc kubenswrapper[4895]: I1206 09:18:06.073382 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b38c8f0e-1d97-46dd-bb5b-5468398bad0e" path="/var/lib/kubelet/pods/b38c8f0e-1d97-46dd-bb5b-5468398bad0e/volumes" Dec 06 09:18:19 crc kubenswrapper[4895]: I1206 09:18:19.574677 4895 scope.go:117] "RemoveContainer" containerID="605f5340b95cd9633471d379d4b58f982c22a3b9b26db2afb2e8bbffb105500e" Dec 06 09:18:19 crc kubenswrapper[4895]: I1206 09:18:19.602664 4895 scope.go:117] "RemoveContainer" containerID="375f325bc2758a9f3ccf06aec709f32be17ed32dc88720e8941e73a7af9f77b3" Dec 06 09:18:19 crc 
kubenswrapper[4895]: I1206 09:18:19.658315 4895 scope.go:117] "RemoveContainer" containerID="bf0f8147065ca93fc41e855a31bbe67377c7612cb12d605c0f53665c890bf45a" Dec 06 09:18:29 crc kubenswrapper[4895]: I1206 09:18:29.696302 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:18:29 crc kubenswrapper[4895]: I1206 09:18:29.696829 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:18:39 crc kubenswrapper[4895]: I1206 09:18:39.443572 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6pk72"] Dec 06 09:18:39 crc kubenswrapper[4895]: I1206 09:18:39.448074 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6pk72" Dec 06 09:18:39 crc kubenswrapper[4895]: I1206 09:18:39.463146 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6pk72"] Dec 06 09:18:39 crc kubenswrapper[4895]: I1206 09:18:39.504215 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwzft\" (UniqueName: \"kubernetes.io/projected/d6077fa1-54ec-4496-bb1b-5c32eec93684-kube-api-access-fwzft\") pod \"redhat-operators-6pk72\" (UID: \"d6077fa1-54ec-4496-bb1b-5c32eec93684\") " pod="openshift-marketplace/redhat-operators-6pk72" Dec 06 09:18:39 crc kubenswrapper[4895]: I1206 09:18:39.504421 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6077fa1-54ec-4496-bb1b-5c32eec93684-catalog-content\") pod \"redhat-operators-6pk72\" (UID: \"d6077fa1-54ec-4496-bb1b-5c32eec93684\") " pod="openshift-marketplace/redhat-operators-6pk72" Dec 06 09:18:39 crc kubenswrapper[4895]: I1206 09:18:39.504613 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6077fa1-54ec-4496-bb1b-5c32eec93684-utilities\") pod \"redhat-operators-6pk72\" (UID: \"d6077fa1-54ec-4496-bb1b-5c32eec93684\") " pod="openshift-marketplace/redhat-operators-6pk72" Dec 06 09:18:39 crc kubenswrapper[4895]: I1206 09:18:39.607194 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6077fa1-54ec-4496-bb1b-5c32eec93684-catalog-content\") pod \"redhat-operators-6pk72\" (UID: \"d6077fa1-54ec-4496-bb1b-5c32eec93684\") " pod="openshift-marketplace/redhat-operators-6pk72" Dec 06 09:18:39 crc kubenswrapper[4895]: I1206 09:18:39.607467 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6077fa1-54ec-4496-bb1b-5c32eec93684-utilities\") pod \"redhat-operators-6pk72\" (UID: \"d6077fa1-54ec-4496-bb1b-5c32eec93684\") " pod="openshift-marketplace/redhat-operators-6pk72" Dec 06 09:18:39 crc kubenswrapper[4895]: I1206 09:18:39.607644 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fwzft\" (UniqueName: \"kubernetes.io/projected/d6077fa1-54ec-4496-bb1b-5c32eec93684-kube-api-access-fwzft\") pod \"redhat-operators-6pk72\" (UID: \"d6077fa1-54ec-4496-bb1b-5c32eec93684\") " pod="openshift-marketplace/redhat-operators-6pk72" Dec 06 09:18:39 crc kubenswrapper[4895]: I1206 09:18:39.607789 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6077fa1-54ec-4496-bb1b-5c32eec93684-catalog-content\") pod \"redhat-operators-6pk72\" (UID: \"d6077fa1-54ec-4496-bb1b-5c32eec93684\") " pod="openshift-marketplace/redhat-operators-6pk72" Dec 06 09:18:39 crc kubenswrapper[4895]: I1206 09:18:39.607827 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6077fa1-54ec-4496-bb1b-5c32eec93684-utilities\") pod \"redhat-operators-6pk72\" (UID: \"d6077fa1-54ec-4496-bb1b-5c32eec93684\") " pod="openshift-marketplace/redhat-operators-6pk72" Dec 06 09:18:39 crc kubenswrapper[4895]: I1206 09:18:39.642803 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwzft\" (UniqueName: \"kubernetes.io/projected/d6077fa1-54ec-4496-bb1b-5c32eec93684-kube-api-access-fwzft\") pod \"redhat-operators-6pk72\" (UID: \"d6077fa1-54ec-4496-bb1b-5c32eec93684\") " pod="openshift-marketplace/redhat-operators-6pk72" Dec 06 09:18:39 crc kubenswrapper[4895]: I1206 09:18:39.802819 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6pk72" Dec 06 09:18:40 crc kubenswrapper[4895]: I1206 09:18:40.286856 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6pk72"] Dec 06 09:18:40 crc kubenswrapper[4895]: I1206 09:18:40.819642 4895 generic.go:334] "Generic (PLEG): container finished" podID="d6077fa1-54ec-4496-bb1b-5c32eec93684" containerID="339a75c213b114a74eef645e0b0f32fcd0fa3cda014501593e670ad4b2e930dc" exitCode=0 Dec 06 09:18:40 crc kubenswrapper[4895]: I1206 09:18:40.819711 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6pk72" event={"ID":"d6077fa1-54ec-4496-bb1b-5c32eec93684","Type":"ContainerDied","Data":"339a75c213b114a74eef645e0b0f32fcd0fa3cda014501593e670ad4b2e930dc"} Dec 06 09:18:40 crc kubenswrapper[4895]: I1206 09:18:40.820070 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6pk72" event={"ID":"d6077fa1-54ec-4496-bb1b-5c32eec93684","Type":"ContainerStarted","Data":"e719036e030fe01e227bad29bbe49ba053c5ffa17c8ea4ab78a33254f0ba08b1"} Dec 06 09:18:40 crc kubenswrapper[4895]: I1206 09:18:40.821947 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 09:18:41 crc kubenswrapper[4895]: I1206 09:18:41.830501 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6pk72" event={"ID":"d6077fa1-54ec-4496-bb1b-5c32eec93684","Type":"ContainerStarted","Data":"ac9dc0d7382cebf3461a9a15337b9add77c014cb426a95010a85898db6ea2c9c"} Dec 06 09:18:46 crc kubenswrapper[4895]: I1206 09:18:46.887077 4895 generic.go:334] "Generic (PLEG): container finished" podID="d6077fa1-54ec-4496-bb1b-5c32eec93684" containerID="ac9dc0d7382cebf3461a9a15337b9add77c014cb426a95010a85898db6ea2c9c" exitCode=0 Dec 06 09:18:46 crc kubenswrapper[4895]: I1206 09:18:46.887168 4895 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6pk72" event={"ID":"d6077fa1-54ec-4496-bb1b-5c32eec93684","Type":"ContainerDied","Data":"ac9dc0d7382cebf3461a9a15337b9add77c014cb426a95010a85898db6ea2c9c"} Dec 06 09:18:47 crc kubenswrapper[4895]: I1206 09:18:47.902154 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6pk72" event={"ID":"d6077fa1-54ec-4496-bb1b-5c32eec93684","Type":"ContainerStarted","Data":"4105dfbc4cba087f2260f887d423384f7f0261aa4227e51b6659a0c2f506a8b0"} Dec 06 09:18:47 crc kubenswrapper[4895]: I1206 09:18:47.922888 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6pk72" podStartSLOduration=2.445407792 podStartE2EDuration="8.922857789s" podCreationTimestamp="2025-12-06 09:18:39 +0000 UTC" firstStartedPulling="2025-12-06 09:18:40.821676954 +0000 UTC m=+8483.223065824" lastFinishedPulling="2025-12-06 09:18:47.299126951 +0000 UTC m=+8489.700515821" observedRunningTime="2025-12-06 09:18:47.920863185 +0000 UTC m=+8490.322252055" watchObservedRunningTime="2025-12-06 09:18:47.922857789 +0000 UTC m=+8490.324246659" Dec 06 09:18:49 crc kubenswrapper[4895]: I1206 09:18:49.803340 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6pk72" Dec 06 09:18:49 crc kubenswrapper[4895]: I1206 09:18:49.803893 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6pk72" Dec 06 09:18:50 crc kubenswrapper[4895]: I1206 09:18:50.856626 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6pk72" podUID="d6077fa1-54ec-4496-bb1b-5c32eec93684" containerName="registry-server" probeResult="failure" output=< Dec 06 09:18:50 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Dec 06 09:18:50 crc kubenswrapper[4895]: > Dec 06 09:18:59 crc kubenswrapper[4895]: I1206 09:18:59.695885 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:18:59 crc kubenswrapper[4895]: I1206 09:18:59.696443 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:18:59 crc kubenswrapper[4895]: I1206 09:18:59.696502 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 09:18:59 crc kubenswrapper[4895]: I1206 09:18:59.697328 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"25a2f0f5587d8e1cb90d91095a50faeb855712cd694f679babd607bf56b409df"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 09:18:59 crc kubenswrapper[4895]: I1206 09:18:59.697381 4895 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://25a2f0f5587d8e1cb90d91095a50faeb855712cd694f679babd607bf56b409df" gracePeriod=600 Dec 06 09:19:00 crc kubenswrapper[4895]: I1206 09:19:00.026334 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="25a2f0f5587d8e1cb90d91095a50faeb855712cd694f679babd607bf56b409df" exitCode=0 Dec 06 09:19:00 crc kubenswrapper[4895]: I1206 09:19:00.026415 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"25a2f0f5587d8e1cb90d91095a50faeb855712cd694f679babd607bf56b409df"} Dec 06 09:19:00 crc kubenswrapper[4895]: I1206 09:19:00.026779 4895 scope.go:117] "RemoveContainer" containerID="ca7a8f6209558d82a58263bbf10b784273ac777d943fe063ebb4202db222fbef" Dec 06 09:19:00 crc kubenswrapper[4895]: I1206 09:19:00.848350 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6pk72" podUID="d6077fa1-54ec-4496-bb1b-5c32eec93684" containerName="registry-server" probeResult="failure" output=< Dec 06 09:19:00 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Dec 06 09:19:00 crc kubenswrapper[4895]: > Dec 06 09:19:01 crc kubenswrapper[4895]: I1206 09:19:01.040395 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f"} Dec 06 09:19:09 crc kubenswrapper[4895]: I1206 09:19:09.855726 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6pk72" Dec 06 09:19:09 crc kubenswrapper[4895]: I1206 09:19:09.914258 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6pk72" Dec 06 09:19:10 crc kubenswrapper[4895]: I1206 09:19:10.651975 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6pk72"] Dec 06 09:19:11 crc kubenswrapper[4895]: I1206 09:19:11.136713 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6pk72" podUID="d6077fa1-54ec-4496-bb1b-5c32eec93684" containerName="registry-server" containerID="cri-o://4105dfbc4cba087f2260f887d423384f7f0261aa4227e51b6659a0c2f506a8b0" gracePeriod=2 Dec 06 09:19:11 crc kubenswrapper[4895]: I1206 09:19:11.660020 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6pk72" Dec 06 09:19:11 crc kubenswrapper[4895]: I1206 09:19:11.714515 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwzft\" (UniqueName: \"kubernetes.io/projected/d6077fa1-54ec-4496-bb1b-5c32eec93684-kube-api-access-fwzft\") pod \"d6077fa1-54ec-4496-bb1b-5c32eec93684\" (UID: \"d6077fa1-54ec-4496-bb1b-5c32eec93684\") " Dec 06 09:19:11 crc kubenswrapper[4895]: I1206 09:19:11.714775 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6077fa1-54ec-4496-bb1b-5c32eec93684-utilities\") pod \"d6077fa1-54ec-4496-bb1b-5c32eec93684\" (UID: \"d6077fa1-54ec-4496-bb1b-5c32eec93684\") " Dec 06 09:19:11 crc kubenswrapper[4895]: I1206 09:19:11.714874 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6077fa1-54ec-4496-bb1b-5c32eec93684-catalog-content\") pod \"d6077fa1-54ec-4496-bb1b-5c32eec93684\" (UID: \"d6077fa1-54ec-4496-bb1b-5c32eec93684\") " Dec 06 09:19:11 crc kubenswrapper[4895]: I1206 09:19:11.715816 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6077fa1-54ec-4496-bb1b-5c32eec93684-utilities" (OuterVolumeSpecName: "utilities") pod "d6077fa1-54ec-4496-bb1b-5c32eec93684" (UID: "d6077fa1-54ec-4496-bb1b-5c32eec93684"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:19:11 crc kubenswrapper[4895]: I1206 09:19:11.719836 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6077fa1-54ec-4496-bb1b-5c32eec93684-kube-api-access-fwzft" (OuterVolumeSpecName: "kube-api-access-fwzft") pod "d6077fa1-54ec-4496-bb1b-5c32eec93684" (UID: "d6077fa1-54ec-4496-bb1b-5c32eec93684"). InnerVolumeSpecName "kube-api-access-fwzft". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:19:11 crc kubenswrapper[4895]: I1206 09:19:11.725712 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwzft\" (UniqueName: \"kubernetes.io/projected/d6077fa1-54ec-4496-bb1b-5c32eec93684-kube-api-access-fwzft\") on node \"crc\" DevicePath \"\"" Dec 06 09:19:11 crc kubenswrapper[4895]: I1206 09:19:11.725758 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6077fa1-54ec-4496-bb1b-5c32eec93684-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 09:19:11 crc kubenswrapper[4895]: I1206 09:19:11.814994 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6077fa1-54ec-4496-bb1b-5c32eec93684-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6077fa1-54ec-4496-bb1b-5c32eec93684" (UID: "d6077fa1-54ec-4496-bb1b-5c32eec93684"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:19:11 crc kubenswrapper[4895]: I1206 09:19:11.828044 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6077fa1-54ec-4496-bb1b-5c32eec93684-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 09:19:12 crc kubenswrapper[4895]: I1206 09:19:12.147944 4895 generic.go:334] "Generic (PLEG): container finished" podID="d6077fa1-54ec-4496-bb1b-5c32eec93684" containerID="4105dfbc4cba087f2260f887d423384f7f0261aa4227e51b6659a0c2f506a8b0" exitCode=0 Dec 06 09:19:12 crc kubenswrapper[4895]: I1206 09:19:12.148028 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6pk72" event={"ID":"d6077fa1-54ec-4496-bb1b-5c32eec93684","Type":"ContainerDied","Data":"4105dfbc4cba087f2260f887d423384f7f0261aa4227e51b6659a0c2f506a8b0"} Dec 06 09:19:12 crc kubenswrapper[4895]: I1206 09:19:12.148315 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6pk72" event={"ID":"d6077fa1-54ec-4496-bb1b-5c32eec93684","Type":"ContainerDied","Data":"e719036e030fe01e227bad29bbe49ba053c5ffa17c8ea4ab78a33254f0ba08b1"} Dec 06 09:19:12 crc kubenswrapper[4895]: I1206 09:19:12.148341 4895 scope.go:117] "RemoveContainer" containerID="4105dfbc4cba087f2260f887d423384f7f0261aa4227e51b6659a0c2f506a8b0" Dec 06 09:19:12 crc kubenswrapper[4895]: I1206 09:19:12.148043 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6pk72" Dec 06 09:19:12 crc kubenswrapper[4895]: I1206 09:19:12.171796 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6pk72"] Dec 06 09:19:12 crc kubenswrapper[4895]: I1206 09:19:12.183270 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6pk72"] Dec 06 09:19:12 crc kubenswrapper[4895]: I1206 09:19:12.188634 4895 scope.go:117] "RemoveContainer" containerID="ac9dc0d7382cebf3461a9a15337b9add77c014cb426a95010a85898db6ea2c9c" Dec 06 09:19:12 crc kubenswrapper[4895]: I1206 09:19:12.225361 4895 scope.go:117] "RemoveContainer" containerID="339a75c213b114a74eef645e0b0f32fcd0fa3cda014501593e670ad4b2e930dc" Dec 06 09:19:12 crc kubenswrapper[4895]: I1206 09:19:12.275657 4895 scope.go:117] "RemoveContainer" containerID="4105dfbc4cba087f2260f887d423384f7f0261aa4227e51b6659a0c2f506a8b0" Dec 06 09:19:12 crc kubenswrapper[4895]: E1206 09:19:12.276202 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4105dfbc4cba087f2260f887d423384f7f0261aa4227e51b6659a0c2f506a8b0\": container with ID starting with 4105dfbc4cba087f2260f887d423384f7f0261aa4227e51b6659a0c2f506a8b0 not found: ID does not exist" containerID="4105dfbc4cba087f2260f887d423384f7f0261aa4227e51b6659a0c2f506a8b0" Dec 06 09:19:12 crc kubenswrapper[4895]: I1206 09:19:12.276381 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4105dfbc4cba087f2260f887d423384f7f0261aa4227e51b6659a0c2f506a8b0"} err="failed to get container status \"4105dfbc4cba087f2260f887d423384f7f0261aa4227e51b6659a0c2f506a8b0\": rpc error: code = NotFound desc = could not find container \"4105dfbc4cba087f2260f887d423384f7f0261aa4227e51b6659a0c2f506a8b0\": container with ID starting with 4105dfbc4cba087f2260f887d423384f7f0261aa4227e51b6659a0c2f506a8b0 not found: ID does not exist" Dec 06 09:19:12 crc 
kubenswrapper[4895]: I1206 09:19:12.276547 4895 scope.go:117] "RemoveContainer" containerID="ac9dc0d7382cebf3461a9a15337b9add77c014cb426a95010a85898db6ea2c9c" Dec 06 09:19:12 crc kubenswrapper[4895]: E1206 09:19:12.277096 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac9dc0d7382cebf3461a9a15337b9add77c014cb426a95010a85898db6ea2c9c\": container with ID starting with ac9dc0d7382cebf3461a9a15337b9add77c014cb426a95010a85898db6ea2c9c not found: ID does not exist" containerID="ac9dc0d7382cebf3461a9a15337b9add77c014cb426a95010a85898db6ea2c9c" Dec 06 09:19:12 crc kubenswrapper[4895]: I1206 09:19:12.277118 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac9dc0d7382cebf3461a9a15337b9add77c014cb426a95010a85898db6ea2c9c"} err="failed to get container status \"ac9dc0d7382cebf3461a9a15337b9add77c014cb426a95010a85898db6ea2c9c\": rpc error: code = NotFound desc = could not find container \"ac9dc0d7382cebf3461a9a15337b9add77c014cb426a95010a85898db6ea2c9c\": container with ID starting with ac9dc0d7382cebf3461a9a15337b9add77c014cb426a95010a85898db6ea2c9c not found: ID does not exist" Dec 06 09:19:12 crc kubenswrapper[4895]: I1206 09:19:12.277131 4895 scope.go:117] "RemoveContainer" containerID="339a75c213b114a74eef645e0b0f32fcd0fa3cda014501593e670ad4b2e930dc" Dec 06 09:19:12 crc kubenswrapper[4895]: E1206 09:19:12.277449 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"339a75c213b114a74eef645e0b0f32fcd0fa3cda014501593e670ad4b2e930dc\": container with ID starting with 339a75c213b114a74eef645e0b0f32fcd0fa3cda014501593e670ad4b2e930dc not found: ID does not exist" containerID="339a75c213b114a74eef645e0b0f32fcd0fa3cda014501593e670ad4b2e930dc" Dec 06 09:19:12 crc kubenswrapper[4895]: I1206 09:19:12.277565 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"339a75c213b114a74eef645e0b0f32fcd0fa3cda014501593e670ad4b2e930dc"} err="failed to get container status \"339a75c213b114a74eef645e0b0f32fcd0fa3cda014501593e670ad4b2e930dc\": rpc error: code = NotFound desc = could not find container \"339a75c213b114a74eef645e0b0f32fcd0fa3cda014501593e670ad4b2e930dc\": container with ID starting with 339a75c213b114a74eef645e0b0f32fcd0fa3cda014501593e670ad4b2e930dc not found: ID does not exist" Dec 06 09:19:14 crc kubenswrapper[4895]: I1206 09:19:14.072553 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6077fa1-54ec-4496-bb1b-5c32eec93684" path="/var/lib/kubelet/pods/d6077fa1-54ec-4496-bb1b-5c32eec93684/volumes" Dec 06 09:21:19 crc kubenswrapper[4895]: I1206 09:21:19.604367 4895 generic.go:334] "Generic (PLEG): container finished" podID="7108ac74-5da3-451e-811b-384e786863ec" containerID="18837d8027bf9702122ef14262bf36606983fd70e7957e8a5a4359ff9ab8bb56" exitCode=0 Dec 06 09:21:19 crc kubenswrapper[4895]: I1206 09:21:19.604532 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" event={"ID":"7108ac74-5da3-451e-811b-384e786863ec","Type":"ContainerDied","Data":"18837d8027bf9702122ef14262bf36606983fd70e7957e8a5a4359ff9ab8bb56"} Dec 06 09:21:21 crc kubenswrapper[4895]: I1206 09:21:21.104615 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" Dec 06 09:21:21 crc kubenswrapper[4895]: I1206 09:21:21.239289 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7108ac74-5da3-451e-811b-384e786863ec-ssh-key\") pod \"7108ac74-5da3-451e-811b-384e786863ec\" (UID: \"7108ac74-5da3-451e-811b-384e786863ec\") " Dec 06 09:21:21 crc kubenswrapper[4895]: I1206 09:21:21.239368 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7108ac74-5da3-451e-811b-384e786863ec-tripleo-cleanup-combined-ca-bundle\") pod \"7108ac74-5da3-451e-811b-384e786863ec\" (UID: \"7108ac74-5da3-451e-811b-384e786863ec\") " Dec 06 09:21:21 crc kubenswrapper[4895]: I1206 09:21:21.239404 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrtkn\" (UniqueName: \"kubernetes.io/projected/7108ac74-5da3-451e-811b-384e786863ec-kube-api-access-nrtkn\") pod \"7108ac74-5da3-451e-811b-384e786863ec\" (UID: \"7108ac74-5da3-451e-811b-384e786863ec\") " Dec 06 09:21:21 crc kubenswrapper[4895]: I1206 09:21:21.239601 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7108ac74-5da3-451e-811b-384e786863ec-inventory\") pod \"7108ac74-5da3-451e-811b-384e786863ec\" (UID: \"7108ac74-5da3-451e-811b-384e786863ec\") " Dec 06 09:21:21 crc kubenswrapper[4895]: I1206 09:21:21.264768 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7108ac74-5da3-451e-811b-384e786863ec-kube-api-access-nrtkn" (OuterVolumeSpecName: "kube-api-access-nrtkn") pod "7108ac74-5da3-451e-811b-384e786863ec" (UID: "7108ac74-5da3-451e-811b-384e786863ec"). InnerVolumeSpecName "kube-api-access-nrtkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:21:21 crc kubenswrapper[4895]: I1206 09:21:21.273597 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7108ac74-5da3-451e-811b-384e786863ec-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "7108ac74-5da3-451e-811b-384e786863ec" (UID: "7108ac74-5da3-451e-811b-384e786863ec"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:21:21 crc kubenswrapper[4895]: I1206 09:21:21.293833 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7108ac74-5da3-451e-811b-384e786863ec-inventory" (OuterVolumeSpecName: "inventory") pod "7108ac74-5da3-451e-811b-384e786863ec" (UID: "7108ac74-5da3-451e-811b-384e786863ec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:21:21 crc kubenswrapper[4895]: I1206 09:21:21.310863 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7108ac74-5da3-451e-811b-384e786863ec-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7108ac74-5da3-451e-811b-384e786863ec" (UID: "7108ac74-5da3-451e-811b-384e786863ec"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:21:21 crc kubenswrapper[4895]: I1206 09:21:21.344007 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7108ac74-5da3-451e-811b-384e786863ec-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:21:21 crc kubenswrapper[4895]: I1206 09:21:21.344053 4895 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7108ac74-5da3-451e-811b-384e786863ec-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:21:21 crc kubenswrapper[4895]: I1206 09:21:21.344069 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrtkn\" (UniqueName: \"kubernetes.io/projected/7108ac74-5da3-451e-811b-384e786863ec-kube-api-access-nrtkn\") on node \"crc\" DevicePath \"\"" Dec 06 09:21:21 crc kubenswrapper[4895]: I1206 09:21:21.344084 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7108ac74-5da3-451e-811b-384e786863ec-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:21:21 crc kubenswrapper[4895]: I1206 09:21:21.647561 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" event={"ID":"7108ac74-5da3-451e-811b-384e786863ec","Type":"ContainerDied","Data":"1725995c0bffd8783f4321d5bc0629c126ef398d544de33378921ae4ce5fd37b"} Dec 06 09:21:21 crc kubenswrapper[4895]: I1206 09:21:21.647720 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1725995c0bffd8783f4321d5bc0629c126ef398d544de33378921ae4ce5fd37b" Dec 06 09:21:21 crc kubenswrapper[4895]: I1206 09:21:21.647934 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv" Dec 06 09:21:29 crc kubenswrapper[4895]: I1206 09:21:29.043922 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-1a7e-account-create-update-lgpss"] Dec 06 09:21:29 crc kubenswrapper[4895]: I1206 09:21:29.053182 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-pmnxk"] Dec 06 09:21:29 crc kubenswrapper[4895]: I1206 09:21:29.062987 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-1a7e-account-create-update-lgpss"] Dec 06 09:21:29 crc kubenswrapper[4895]: I1206 09:21:29.071262 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-pmnxk"] Dec 06 09:21:29 crc kubenswrapper[4895]: I1206 09:21:29.695307 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:21:29 crc kubenswrapper[4895]: I1206 09:21:29.695713 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:21:30 crc kubenswrapper[4895]: I1206 09:21:30.071974 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc2415b2-e46f-4d36-8433-ffcc83f63db8" path="/var/lib/kubelet/pods/dc2415b2-e46f-4d36-8433-ffcc83f63db8/volumes" Dec 06 09:21:30 crc kubenswrapper[4895]: I1206 09:21:30.073744 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dddc29b8-1da4-40de-9bd3-076f4276f53d" path="/var/lib/kubelet/pods/dddc29b8-1da4-40de-9bd3-076f4276f53d/volumes" Dec 06 09:21:45 crc kubenswrapper[4895]: I1206 09:21:45.988274 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-znbqv"] Dec 06 09:21:45 crc kubenswrapper[4895]: E1206 09:21:45.989300 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6077fa1-54ec-4496-bb1b-5c32eec93684" containerName="extract-utilities" Dec 06 09:21:45 crc kubenswrapper[4895]: I1206 09:21:45.989318 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6077fa1-54ec-4496-bb1b-5c32eec93684" containerName="extract-utilities" Dec 06 09:21:45 crc kubenswrapper[4895]: E1206 09:21:45.989332 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6077fa1-54ec-4496-bb1b-5c32eec93684" containerName="registry-server" Dec 06 09:21:45 crc kubenswrapper[4895]: I1206 09:21:45.989338 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6077fa1-54ec-4496-bb1b-5c32eec93684" containerName="registry-server" Dec 06 09:21:45 crc kubenswrapper[4895]: E1206 09:21:45.989359 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6077fa1-54ec-4496-bb1b-5c32eec93684" containerName="extract-content" Dec 06 09:21:45 crc kubenswrapper[4895]: I1206 09:21:45.989365 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6077fa1-54ec-4496-bb1b-5c32eec93684" containerName="extract-content" Dec 06 09:21:45 crc kubenswrapper[4895]: E1206 09:21:45.989380 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7108ac74-5da3-451e-811b-384e786863ec" 
containerName="tripleo-cleanup-tripleo-cleanup-openstack-networker" Dec 06 09:21:45 crc kubenswrapper[4895]: I1206 09:21:45.989389 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7108ac74-5da3-451e-811b-384e786863ec" containerName="tripleo-cleanup-tripleo-cleanup-openstack-networker" Dec 06 09:21:45 crc kubenswrapper[4895]: I1206 09:21:45.993176 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7108ac74-5da3-451e-811b-384e786863ec" containerName="tripleo-cleanup-tripleo-cleanup-openstack-networker" Dec 06 09:21:45 crc kubenswrapper[4895]: I1206 09:21:45.993216 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6077fa1-54ec-4496-bb1b-5c32eec93684" containerName="registry-server" Dec 06 09:21:45 crc kubenswrapper[4895]: I1206 09:21:45.995092 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-znbqv" Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.012407 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-znbqv"] Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.156671 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3716e58f-1d30-424b-92dc-aeca70455b54-catalog-content\") pod \"community-operators-znbqv\" (UID: \"3716e58f-1d30-424b-92dc-aeca70455b54\") " pod="openshift-marketplace/community-operators-znbqv" Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.156886 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3716e58f-1d30-424b-92dc-aeca70455b54-utilities\") pod \"community-operators-znbqv\" (UID: \"3716e58f-1d30-424b-92dc-aeca70455b54\") " pod="openshift-marketplace/community-operators-znbqv" Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.157445 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldpg5\" (UniqueName: \"kubernetes.io/projected/3716e58f-1d30-424b-92dc-aeca70455b54-kube-api-access-ldpg5\") pod \"community-operators-znbqv\" (UID: \"3716e58f-1d30-424b-92dc-aeca70455b54\") " pod="openshift-marketplace/community-operators-znbqv" Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.259688 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldpg5\" (UniqueName: \"kubernetes.io/projected/3716e58f-1d30-424b-92dc-aeca70455b54-kube-api-access-ldpg5\") pod \"community-operators-znbqv\" (UID: \"3716e58f-1d30-424b-92dc-aeca70455b54\") " pod="openshift-marketplace/community-operators-znbqv" Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.259803 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3716e58f-1d30-424b-92dc-aeca70455b54-catalog-content\") pod \"community-operators-znbqv\" (UID: \"3716e58f-1d30-424b-92dc-aeca70455b54\") " pod="openshift-marketplace/community-operators-znbqv" Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.259841 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3716e58f-1d30-424b-92dc-aeca70455b54-utilities\") pod \"community-operators-znbqv\" (UID: \"3716e58f-1d30-424b-92dc-aeca70455b54\") " pod="openshift-marketplace/community-operators-znbqv" Dec 06 
09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.260556 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3716e58f-1d30-424b-92dc-aeca70455b54-utilities\") pod \"community-operators-znbqv\" (UID: \"3716e58f-1d30-424b-92dc-aeca70455b54\") " pod="openshift-marketplace/community-operators-znbqv" Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.261146 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3716e58f-1d30-424b-92dc-aeca70455b54-catalog-content\") pod \"community-operators-znbqv\" (UID: \"3716e58f-1d30-424b-92dc-aeca70455b54\") " pod="openshift-marketplace/community-operators-znbqv" Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.289020 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldpg5\" (UniqueName: \"kubernetes.io/projected/3716e58f-1d30-424b-92dc-aeca70455b54-kube-api-access-ldpg5\") pod \"community-operators-znbqv\" (UID: \"3716e58f-1d30-424b-92dc-aeca70455b54\") " pod="openshift-marketplace/community-operators-znbqv" Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.364099 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-znbqv" Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.615761 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dqngw"] Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.618458 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dqngw" Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.632835 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dqngw"] Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.768307 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7adb0-4215-4921-92b5-dd08cf3ce7cf-catalog-content\") pod \"redhat-marketplace-dqngw\" (UID: \"76d7adb0-4215-4921-92b5-dd08cf3ce7cf\") " pod="openshift-marketplace/redhat-marketplace-dqngw" Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.768422 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gt7h\" (UniqueName: \"kubernetes.io/projected/76d7adb0-4215-4921-92b5-dd08cf3ce7cf-kube-api-access-4gt7h\") pod \"redhat-marketplace-dqngw\" (UID: \"76d7adb0-4215-4921-92b5-dd08cf3ce7cf\") " pod="openshift-marketplace/redhat-marketplace-dqngw" Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.768628 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7adb0-4215-4921-92b5-dd08cf3ce7cf-utilities\") pod \"redhat-marketplace-dqngw\" (UID: \"76d7adb0-4215-4921-92b5-dd08cf3ce7cf\") " pod="openshift-marketplace/redhat-marketplace-dqngw" Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.870311 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gt7h\" (UniqueName: \"kubernetes.io/projected/76d7adb0-4215-4921-92b5-dd08cf3ce7cf-kube-api-access-4gt7h\") pod \"redhat-marketplace-dqngw\" (UID: \"76d7adb0-4215-4921-92b5-dd08cf3ce7cf\") " pod="openshift-marketplace/redhat-marketplace-dqngw" Dec 06 09:21:46 crc 
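
Editor's note: the volume lifecycle for both catalog pods above follows the reconciler's usual three-phase sequence per volume: "VerifyControllerAttachedVolume started", then "MountVolume started", then "MountVolume.SetUp succeeded", for each of catalog-content, utilities, and the projected kube-api-access volume. A minimal sketch (same stdin assumption as above; a hypothetical helper, not anything the kubelet ships) that tallies the phases per volume UniqueName and flags any volume that never reaches SetUp:

import re
import sys

# The three reconciler phases, in the order they appear in these logs.
PHASES = [
    "VerifyControllerAttachedVolume started",
    "MountVolume started",
    "MountVolume.SetUp succeeded",
]
# Volume UniqueName as printed in the log (quotes may be backslash-escaped).
UNIQ = re.compile(r'UniqueName: \\?"([^"\\]+)')

seen = {}
for line in sys.stdin:
    for phase in PHASES:
        if phase in line:
            m = UNIQ.search(line)
            if m:
                seen.setdefault(m.group(1), set()).add(phase)

for volume, phases in sorted(seen.items()):
    missing = [p for p in PHASES if p not in phases]
    print(f"{volume}: {'ok' if not missing else 'missing ' + ', '.join(missing)}")
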
kubenswrapper[4895]: I1206 09:21:46.870432 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7adb0-4215-4921-92b5-dd08cf3ce7cf-utilities\") pod \"redhat-marketplace-dqngw\" (UID: \"76d7adb0-4215-4921-92b5-dd08cf3ce7cf\") " pod="openshift-marketplace/redhat-marketplace-dqngw" Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.870525 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7adb0-4215-4921-92b5-dd08cf3ce7cf-catalog-content\") pod \"redhat-marketplace-dqngw\" (UID: \"76d7adb0-4215-4921-92b5-dd08cf3ce7cf\") " pod="openshift-marketplace/redhat-marketplace-dqngw" Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.871195 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7adb0-4215-4921-92b5-dd08cf3ce7cf-catalog-content\") pod \"redhat-marketplace-dqngw\" (UID: \"76d7adb0-4215-4921-92b5-dd08cf3ce7cf\") " pod="openshift-marketplace/redhat-marketplace-dqngw" Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.871236 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7adb0-4215-4921-92b5-dd08cf3ce7cf-utilities\") pod \"redhat-marketplace-dqngw\" (UID: \"76d7adb0-4215-4921-92b5-dd08cf3ce7cf\") " pod="openshift-marketplace/redhat-marketplace-dqngw" Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.888645 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gt7h\" (UniqueName: \"kubernetes.io/projected/76d7adb0-4215-4921-92b5-dd08cf3ce7cf-kube-api-access-4gt7h\") pod \"redhat-marketplace-dqngw\" (UID: \"76d7adb0-4215-4921-92b5-dd08cf3ce7cf\") " pod="openshift-marketplace/redhat-marketplace-dqngw" Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.983342 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-znbqv"] Dec 06 09:21:46 crc kubenswrapper[4895]: I1206 09:21:46.983776 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dqngw" Dec 06 09:21:47 crc kubenswrapper[4895]: I1206 09:21:47.061469 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-jdtpw"] Dec 06 09:21:47 crc kubenswrapper[4895]: I1206 09:21:47.071680 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-jdtpw"] Dec 06 09:21:47 crc kubenswrapper[4895]: I1206 09:21:47.568615 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dqngw"] Dec 06 09:21:47 crc kubenswrapper[4895]: W1206 09:21:47.587755 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76d7adb0_4215_4921_92b5_dd08cf3ce7cf.slice/crio-d6ad10e63c9b34613e7e12e34dd8f40ff2ab735dc12aca4d3319d95481e61bc7 WatchSource:0}: Error finding container d6ad10e63c9b34613e7e12e34dd8f40ff2ab735dc12aca4d3319d95481e61bc7: Status 404 returned error can't find the container with id d6ad10e63c9b34613e7e12e34dd8f40ff2ab735dc12aca4d3319d95481e61bc7 Dec 06 09:21:47 crc kubenswrapper[4895]: I1206 09:21:47.984640 4895 generic.go:334] "Generic (PLEG): container finished" podID="3716e58f-1d30-424b-92dc-aeca70455b54" containerID="8f898c6bf19f2127e0a63bdd37702faba0d9ed8952a5a469f4086ba500976ea8" exitCode=0 Dec 06 09:21:47 crc kubenswrapper[4895]: I1206 09:21:47.984703 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znbqv" event={"ID":"3716e58f-1d30-424b-92dc-aeca70455b54","Type":"ContainerDied","Data":"8f898c6bf19f2127e0a63bdd37702faba0d9ed8952a5a469f4086ba500976ea8"} Dec 06 09:21:47 crc kubenswrapper[4895]: I1206 09:21:47.984837 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znbqv" event={"ID":"3716e58f-1d30-424b-92dc-aeca70455b54","Type":"ContainerStarted","Data":"57c726600df5786c6545740c3a1753b512e235a5e793ca9c4bdf353f914d148e"} Dec 06 09:21:47 crc kubenswrapper[4895]: I1206 09:21:47.986834 4895 generic.go:334] "Generic (PLEG): container finished" podID="76d7adb0-4215-4921-92b5-dd08cf3ce7cf" containerID="10d5d05d836b599e0a6c4e026d6daa0736ce1e000af2d898e721d97e7933b15d" exitCode=0 Dec 06 09:21:47 crc kubenswrapper[4895]: I1206 09:21:47.986892 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dqngw" event={"ID":"76d7adb0-4215-4921-92b5-dd08cf3ce7cf","Type":"ContainerDied","Data":"10d5d05d836b599e0a6c4e026d6daa0736ce1e000af2d898e721d97e7933b15d"} Dec 06 09:21:47 crc kubenswrapper[4895]: I1206 09:21:47.986933 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dqngw" event={"ID":"76d7adb0-4215-4921-92b5-dd08cf3ce7cf","Type":"ContainerStarted","Data":"d6ad10e63c9b34613e7e12e34dd8f40ff2ab735dc12aca4d3319d95481e61bc7"} Dec 06 09:21:48 crc kubenswrapper[4895]: I1206 09:21:48.062339 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eaf0992-ddab-4e55-b3ad-c5b4da3c068f" path="/var/lib/kubelet/pods/7eaf0992-ddab-4e55-b3ad-c5b4da3c068f/volumes" Dec 06 09:21:49 crc kubenswrapper[4895]: I1206 09:21:49.000118 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znbqv" event={"ID":"3716e58f-1d30-424b-92dc-aeca70455b54","Type":"ContainerStarted","Data":"0cfe5ac419d866155a0265456691a04087dfc75a25a2076080d0c4f595c04f4d"} Dec 06 09:21:49 crc kubenswrapper[4895]: I1206 09:21:49.002455 4895 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dqngw" event={"ID":"76d7adb0-4215-4921-92b5-dd08cf3ce7cf","Type":"ContainerStarted","Data":"4746b1a7b114cb784566680551a0ed7932bd3018d451e89d8824aa3e813c5e2b"} Dec 06 09:21:50 crc kubenswrapper[4895]: I1206 09:21:50.013844 4895 generic.go:334] "Generic (PLEG): container finished" podID="76d7adb0-4215-4921-92b5-dd08cf3ce7cf" containerID="4746b1a7b114cb784566680551a0ed7932bd3018d451e89d8824aa3e813c5e2b" exitCode=0 Dec 06 09:21:50 crc kubenswrapper[4895]: I1206 09:21:50.013912 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dqngw" event={"ID":"76d7adb0-4215-4921-92b5-dd08cf3ce7cf","Type":"ContainerDied","Data":"4746b1a7b114cb784566680551a0ed7932bd3018d451e89d8824aa3e813c5e2b"} Dec 06 09:21:51 crc kubenswrapper[4895]: I1206 09:21:51.025201 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dqngw" event={"ID":"76d7adb0-4215-4921-92b5-dd08cf3ce7cf","Type":"ContainerStarted","Data":"068e6c895b0f8176e04b683b4568a286a15af25f2258409ee5f55c3c843be178"} Dec 06 09:21:51 crc kubenswrapper[4895]: I1206 09:21:51.027238 4895 generic.go:334] "Generic (PLEG): container finished" podID="3716e58f-1d30-424b-92dc-aeca70455b54" containerID="0cfe5ac419d866155a0265456691a04087dfc75a25a2076080d0c4f595c04f4d" exitCode=0 Dec 06 09:21:51 crc kubenswrapper[4895]: I1206 09:21:51.027408 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znbqv" event={"ID":"3716e58f-1d30-424b-92dc-aeca70455b54","Type":"ContainerDied","Data":"0cfe5ac419d866155a0265456691a04087dfc75a25a2076080d0c4f595c04f4d"} Dec 06 09:21:51 crc kubenswrapper[4895]: I1206 09:21:51.047313 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dqngw" podStartSLOduration=2.615594597 podStartE2EDuration="5.047277589s" podCreationTimestamp="2025-12-06 09:21:46 +0000 UTC" firstStartedPulling="2025-12-06 09:21:47.988460654 +0000 UTC m=+8670.389849524" lastFinishedPulling="2025-12-06 09:21:50.420143636 +0000 UTC m=+8672.821532516" observedRunningTime="2025-12-06 09:21:51.041802719 +0000 UTC m=+8673.443191589" watchObservedRunningTime="2025-12-06 09:21:51.047277589 +0000 UTC m=+8673.448666459" Dec 06 09:21:52 crc kubenswrapper[4895]: I1206 09:21:52.040459 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znbqv" event={"ID":"3716e58f-1d30-424b-92dc-aeca70455b54","Type":"ContainerStarted","Data":"5e41b0cbf28853f5ab20423718a7ea201c28fb83be7f94bbd6015bba3ec55615"} Dec 06 09:21:52 crc kubenswrapper[4895]: I1206 09:21:52.074466 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-znbqv" podStartSLOduration=3.601884702 podStartE2EDuration="7.074444611s" podCreationTimestamp="2025-12-06 09:21:45 +0000 UTC" firstStartedPulling="2025-12-06 09:21:47.987409255 +0000 UTC m=+8670.388798165" lastFinishedPulling="2025-12-06 09:21:51.459969204 +0000 UTC m=+8673.861358074" observedRunningTime="2025-12-06 09:21:52.066324269 +0000 UTC m=+8674.467713149" watchObservedRunningTime="2025-12-06 09:21:52.074444611 +0000 UTC m=+8674.475833491" Dec 06 09:21:56 crc kubenswrapper[4895]: I1206 09:21:56.364853 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-znbqv" Dec 06 09:21:56 crc kubenswrapper[4895]: 
I1206 09:21:56.365433 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-znbqv" Dec 06 09:21:56 crc kubenswrapper[4895]: I1206 09:21:56.435104 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-znbqv" Dec 06 09:21:56 crc kubenswrapper[4895]: I1206 09:21:56.983994 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dqngw" Dec 06 09:21:56 crc kubenswrapper[4895]: I1206 09:21:56.984064 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dqngw" Dec 06 09:21:57 crc kubenswrapper[4895]: I1206 09:21:57.070607 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dqngw" Dec 06 09:21:57 crc kubenswrapper[4895]: I1206 09:21:57.177713 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-znbqv" Dec 06 09:21:57 crc kubenswrapper[4895]: I1206 09:21:57.184867 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dqngw" Dec 06 09:21:58 crc kubenswrapper[4895]: I1206 09:21:58.586304 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dqngw"] Dec 06 09:21:59 crc kubenswrapper[4895]: I1206 09:21:59.104891 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dqngw" podUID="76d7adb0-4215-4921-92b5-dd08cf3ce7cf" containerName="registry-server" containerID="cri-o://068e6c895b0f8176e04b683b4568a286a15af25f2258409ee5f55c3c843be178" gracePeriod=2 Dec 06 09:21:59 crc kubenswrapper[4895]: I1206 09:21:59.657799 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dqngw" Dec 06 09:21:59 crc kubenswrapper[4895]: I1206 09:21:59.696435 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:21:59 crc kubenswrapper[4895]: I1206 09:21:59.696513 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:21:59 crc kubenswrapper[4895]: I1206 09:21:59.770887 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gt7h\" (UniqueName: \"kubernetes.io/projected/76d7adb0-4215-4921-92b5-dd08cf3ce7cf-kube-api-access-4gt7h\") pod \"76d7adb0-4215-4921-92b5-dd08cf3ce7cf\" (UID: \"76d7adb0-4215-4921-92b5-dd08cf3ce7cf\") " Dec 06 09:21:59 crc kubenswrapper[4895]: I1206 09:21:59.771074 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7adb0-4215-4921-92b5-dd08cf3ce7cf-utilities\") pod \"76d7adb0-4215-4921-92b5-dd08cf3ce7cf\" (UID: \"76d7adb0-4215-4921-92b5-dd08cf3ce7cf\") " Dec 06 09:21:59 crc kubenswrapper[4895]: I1206 09:21:59.771130 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7adb0-4215-4921-92b5-dd08cf3ce7cf-catalog-content\") pod \"76d7adb0-4215-4921-92b5-dd08cf3ce7cf\" (UID: \"76d7adb0-4215-4921-92b5-dd08cf3ce7cf\") " Dec 06 09:21:59 crc kubenswrapper[4895]: I1206 09:21:59.771936 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d7adb0-4215-4921-92b5-dd08cf3ce7cf-utilities" (OuterVolumeSpecName: "utilities") pod "76d7adb0-4215-4921-92b5-dd08cf3ce7cf" (UID: "76d7adb0-4215-4921-92b5-dd08cf3ce7cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:21:59 crc kubenswrapper[4895]: I1206 09:21:59.772228 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7adb0-4215-4921-92b5-dd08cf3ce7cf-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 09:21:59 crc kubenswrapper[4895]: I1206 09:21:59.776840 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d7adb0-4215-4921-92b5-dd08cf3ce7cf-kube-api-access-4gt7h" (OuterVolumeSpecName: "kube-api-access-4gt7h") pod "76d7adb0-4215-4921-92b5-dd08cf3ce7cf" (UID: "76d7adb0-4215-4921-92b5-dd08cf3ce7cf"). InnerVolumeSpecName "kube-api-access-4gt7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:21:59 crc kubenswrapper[4895]: I1206 09:21:59.806336 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d7adb0-4215-4921-92b5-dd08cf3ce7cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76d7adb0-4215-4921-92b5-dd08cf3ce7cf" (UID: "76d7adb0-4215-4921-92b5-dd08cf3ce7cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:21:59 crc kubenswrapper[4895]: I1206 09:21:59.874577 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gt7h\" (UniqueName: \"kubernetes.io/projected/76d7adb0-4215-4921-92b5-dd08cf3ce7cf-kube-api-access-4gt7h\") on node \"crc\" DevicePath \"\"" Dec 06 09:21:59 crc kubenswrapper[4895]: I1206 09:21:59.874609 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7adb0-4215-4921-92b5-dd08cf3ce7cf-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 09:22:00 crc kubenswrapper[4895]: I1206 09:22:00.115426 4895 generic.go:334] "Generic (PLEG): container finished" podID="76d7adb0-4215-4921-92b5-dd08cf3ce7cf" containerID="068e6c895b0f8176e04b683b4568a286a15af25f2258409ee5f55c3c843be178" exitCode=0 Dec 06 09:22:00 crc kubenswrapper[4895]: I1206 09:22:00.115500 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dqngw" event={"ID":"76d7adb0-4215-4921-92b5-dd08cf3ce7cf","Type":"ContainerDied","Data":"068e6c895b0f8176e04b683b4568a286a15af25f2258409ee5f55c3c843be178"} Dec 06 09:22:00 crc kubenswrapper[4895]: I1206 09:22:00.115534 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dqngw" event={"ID":"76d7adb0-4215-4921-92b5-dd08cf3ce7cf","Type":"ContainerDied","Data":"d6ad10e63c9b34613e7e12e34dd8f40ff2ab735dc12aca4d3319d95481e61bc7"} Dec 06 09:22:00 crc kubenswrapper[4895]: I1206 09:22:00.115557 4895 scope.go:117] "RemoveContainer" containerID="068e6c895b0f8176e04b683b4568a286a15af25f2258409ee5f55c3c843be178" Dec 06 09:22:00 crc kubenswrapper[4895]: I1206 09:22:00.115611 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dqngw" Dec 06 09:22:00 crc kubenswrapper[4895]: I1206 09:22:00.145413 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dqngw"] Dec 06 09:22:00 crc kubenswrapper[4895]: I1206 09:22:00.153861 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dqngw"] Dec 06 09:22:00 crc kubenswrapper[4895]: I1206 09:22:00.161029 4895 scope.go:117] "RemoveContainer" containerID="4746b1a7b114cb784566680551a0ed7932bd3018d451e89d8824aa3e813c5e2b" Dec 06 09:22:00 crc kubenswrapper[4895]: I1206 09:22:00.184522 4895 scope.go:117] "RemoveContainer" containerID="10d5d05d836b599e0a6c4e026d6daa0736ce1e000af2d898e721d97e7933b15d" Dec 06 09:22:00 crc kubenswrapper[4895]: I1206 09:22:00.234595 4895 scope.go:117] "RemoveContainer" containerID="068e6c895b0f8176e04b683b4568a286a15af25f2258409ee5f55c3c843be178" Dec 06 09:22:00 crc kubenswrapper[4895]: E1206 09:22:00.235048 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"068e6c895b0f8176e04b683b4568a286a15af25f2258409ee5f55c3c843be178\": container with ID starting with 068e6c895b0f8176e04b683b4568a286a15af25f2258409ee5f55c3c843be178 not found: ID does not exist" containerID="068e6c895b0f8176e04b683b4568a286a15af25f2258409ee5f55c3c843be178" Dec 06 09:22:00 crc kubenswrapper[4895]: I1206 09:22:00.235096 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"068e6c895b0f8176e04b683b4568a286a15af25f2258409ee5f55c3c843be178"} err="failed to get container status \"068e6c895b0f8176e04b683b4568a286a15af25f2258409ee5f55c3c843be178\": rpc error: code = NotFound desc = could not find container \"068e6c895b0f8176e04b683b4568a286a15af25f2258409ee5f55c3c843be178\": container with ID starting with 068e6c895b0f8176e04b683b4568a286a15af25f2258409ee5f55c3c843be178 not found: ID does not exist" Dec 06 09:22:00 crc kubenswrapper[4895]: I1206 09:22:00.235128 4895 scope.go:117] "RemoveContainer" containerID="4746b1a7b114cb784566680551a0ed7932bd3018d451e89d8824aa3e813c5e2b" Dec 06 09:22:00 crc kubenswrapper[4895]: E1206 09:22:00.235547 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4746b1a7b114cb784566680551a0ed7932bd3018d451e89d8824aa3e813c5e2b\": container with ID starting with 4746b1a7b114cb784566680551a0ed7932bd3018d451e89d8824aa3e813c5e2b not found: ID does not exist" containerID="4746b1a7b114cb784566680551a0ed7932bd3018d451e89d8824aa3e813c5e2b" Dec 06 09:22:00 crc kubenswrapper[4895]: I1206 09:22:00.235586 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4746b1a7b114cb784566680551a0ed7932bd3018d451e89d8824aa3e813c5e2b"} err="failed to get container status \"4746b1a7b114cb784566680551a0ed7932bd3018d451e89d8824aa3e813c5e2b\": rpc error: code = NotFound desc = could not find container \"4746b1a7b114cb784566680551a0ed7932bd3018d451e89d8824aa3e813c5e2b\": container with ID starting with 4746b1a7b114cb784566680551a0ed7932bd3018d451e89d8824aa3e813c5e2b not found: ID does not exist" Dec 06 09:22:00 crc kubenswrapper[4895]: I1206 09:22:00.235612 4895 scope.go:117] "RemoveContainer" containerID="10d5d05d836b599e0a6c4e026d6daa0736ce1e000af2d898e721d97e7933b15d" Dec 06 09:22:00 crc kubenswrapper[4895]: E1206 09:22:00.235933 4895 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"10d5d05d836b599e0a6c4e026d6daa0736ce1e000af2d898e721d97e7933b15d\": container with ID starting with 10d5d05d836b599e0a6c4e026d6daa0736ce1e000af2d898e721d97e7933b15d not found: ID does not exist" containerID="10d5d05d836b599e0a6c4e026d6daa0736ce1e000af2d898e721d97e7933b15d" Dec 06 09:22:00 crc kubenswrapper[4895]: I1206 09:22:00.235985 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10d5d05d836b599e0a6c4e026d6daa0736ce1e000af2d898e721d97e7933b15d"} err="failed to get container status \"10d5d05d836b599e0a6c4e026d6daa0736ce1e000af2d898e721d97e7933b15d\": rpc error: code = NotFound desc = could not find container \"10d5d05d836b599e0a6c4e026d6daa0736ce1e000af2d898e721d97e7933b15d\": container with ID starting with 10d5d05d836b599e0a6c4e026d6daa0736ce1e000af2d898e721d97e7933b15d not found: ID does not exist" Dec 06 09:22:01 crc kubenswrapper[4895]: I1206 09:22:01.176149 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-znbqv"] Dec 06 09:22:01 crc kubenswrapper[4895]: I1206 09:22:01.176374 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-znbqv" podUID="3716e58f-1d30-424b-92dc-aeca70455b54" containerName="registry-server" containerID="cri-o://5e41b0cbf28853f5ab20423718a7ea201c28fb83be7f94bbd6015bba3ec55615" gracePeriod=2 Dec 06 09:22:01 crc kubenswrapper[4895]: I1206 09:22:01.650163 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-znbqv" Dec 06 09:22:01 crc kubenswrapper[4895]: I1206 09:22:01.714078 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldpg5\" (UniqueName: \"kubernetes.io/projected/3716e58f-1d30-424b-92dc-aeca70455b54-kube-api-access-ldpg5\") pod \"3716e58f-1d30-424b-92dc-aeca70455b54\" (UID: \"3716e58f-1d30-424b-92dc-aeca70455b54\") " Dec 06 09:22:01 crc kubenswrapper[4895]: I1206 09:22:01.714145 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3716e58f-1d30-424b-92dc-aeca70455b54-catalog-content\") pod \"3716e58f-1d30-424b-92dc-aeca70455b54\" (UID: \"3716e58f-1d30-424b-92dc-aeca70455b54\") " Dec 06 09:22:01 crc kubenswrapper[4895]: I1206 09:22:01.714349 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3716e58f-1d30-424b-92dc-aeca70455b54-utilities\") pod \"3716e58f-1d30-424b-92dc-aeca70455b54\" (UID: \"3716e58f-1d30-424b-92dc-aeca70455b54\") " Dec 06 09:22:01 crc kubenswrapper[4895]: I1206 09:22:01.715111 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3716e58f-1d30-424b-92dc-aeca70455b54-utilities" (OuterVolumeSpecName: "utilities") pod "3716e58f-1d30-424b-92dc-aeca70455b54" (UID: "3716e58f-1d30-424b-92dc-aeca70455b54"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:22:01 crc kubenswrapper[4895]: I1206 09:22:01.720007 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3716e58f-1d30-424b-92dc-aeca70455b54-kube-api-access-ldpg5" (OuterVolumeSpecName: "kube-api-access-ldpg5") pod "3716e58f-1d30-424b-92dc-aeca70455b54" (UID: "3716e58f-1d30-424b-92dc-aeca70455b54"). 
InnerVolumeSpecName "kube-api-access-ldpg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:22:01 crc kubenswrapper[4895]: I1206 09:22:01.762708 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3716e58f-1d30-424b-92dc-aeca70455b54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3716e58f-1d30-424b-92dc-aeca70455b54" (UID: "3716e58f-1d30-424b-92dc-aeca70455b54"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:22:01 crc kubenswrapper[4895]: I1206 09:22:01.816396 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3716e58f-1d30-424b-92dc-aeca70455b54-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 09:22:01 crc kubenswrapper[4895]: I1206 09:22:01.816663 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3716e58f-1d30-424b-92dc-aeca70455b54-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 09:22:01 crc kubenswrapper[4895]: I1206 09:22:01.816746 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldpg5\" (UniqueName: \"kubernetes.io/projected/3716e58f-1d30-424b-92dc-aeca70455b54-kube-api-access-ldpg5\") on node \"crc\" DevicePath \"\"" Dec 06 09:22:02 crc kubenswrapper[4895]: I1206 09:22:02.095507 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76d7adb0-4215-4921-92b5-dd08cf3ce7cf" path="/var/lib/kubelet/pods/76d7adb0-4215-4921-92b5-dd08cf3ce7cf/volumes" Dec 06 09:22:02 crc kubenswrapper[4895]: I1206 09:22:02.139466 4895 generic.go:334] "Generic (PLEG): container finished" podID="3716e58f-1d30-424b-92dc-aeca70455b54" containerID="5e41b0cbf28853f5ab20423718a7ea201c28fb83be7f94bbd6015bba3ec55615" exitCode=0 Dec 06 09:22:02 crc kubenswrapper[4895]: I1206 09:22:02.139556 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znbqv" event={"ID":"3716e58f-1d30-424b-92dc-aeca70455b54","Type":"ContainerDied","Data":"5e41b0cbf28853f5ab20423718a7ea201c28fb83be7f94bbd6015bba3ec55615"} Dec 06 09:22:02 crc kubenswrapper[4895]: I1206 09:22:02.139593 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znbqv" event={"ID":"3716e58f-1d30-424b-92dc-aeca70455b54","Type":"ContainerDied","Data":"57c726600df5786c6545740c3a1753b512e235a5e793ca9c4bdf353f914d148e"} Dec 06 09:22:02 crc kubenswrapper[4895]: I1206 09:22:02.139616 4895 scope.go:117] "RemoveContainer" containerID="5e41b0cbf28853f5ab20423718a7ea201c28fb83be7f94bbd6015bba3ec55615" Dec 06 09:22:02 crc kubenswrapper[4895]: I1206 09:22:02.139790 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-znbqv" Dec 06 09:22:02 crc kubenswrapper[4895]: I1206 09:22:02.179543 4895 scope.go:117] "RemoveContainer" containerID="0cfe5ac419d866155a0265456691a04087dfc75a25a2076080d0c4f595c04f4d" Dec 06 09:22:02 crc kubenswrapper[4895]: I1206 09:22:02.181181 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-znbqv"] Dec 06 09:22:02 crc kubenswrapper[4895]: I1206 09:22:02.188009 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-znbqv"] Dec 06 09:22:02 crc kubenswrapper[4895]: I1206 09:22:02.210614 4895 scope.go:117] "RemoveContainer" containerID="8f898c6bf19f2127e0a63bdd37702faba0d9ed8952a5a469f4086ba500976ea8" Dec 06 09:22:02 crc kubenswrapper[4895]: I1206 09:22:02.268015 4895 scope.go:117] "RemoveContainer" containerID="5e41b0cbf28853f5ab20423718a7ea201c28fb83be7f94bbd6015bba3ec55615" Dec 06 09:22:02 crc kubenswrapper[4895]: E1206 09:22:02.268590 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e41b0cbf28853f5ab20423718a7ea201c28fb83be7f94bbd6015bba3ec55615\": container with ID starting with 5e41b0cbf28853f5ab20423718a7ea201c28fb83be7f94bbd6015bba3ec55615 not found: ID does not exist" containerID="5e41b0cbf28853f5ab20423718a7ea201c28fb83be7f94bbd6015bba3ec55615" Dec 06 09:22:02 crc kubenswrapper[4895]: I1206 09:22:02.268635 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e41b0cbf28853f5ab20423718a7ea201c28fb83be7f94bbd6015bba3ec55615"} err="failed to get container status \"5e41b0cbf28853f5ab20423718a7ea201c28fb83be7f94bbd6015bba3ec55615\": rpc error: code = NotFound desc = could not find container \"5e41b0cbf28853f5ab20423718a7ea201c28fb83be7f94bbd6015bba3ec55615\": container with ID starting with 5e41b0cbf28853f5ab20423718a7ea201c28fb83be7f94bbd6015bba3ec55615 not found: ID does not exist" Dec 06 09:22:02 crc kubenswrapper[4895]: I1206 09:22:02.268659 4895 scope.go:117] "RemoveContainer" containerID="0cfe5ac419d866155a0265456691a04087dfc75a25a2076080d0c4f595c04f4d" Dec 06 09:22:02 crc kubenswrapper[4895]: E1206 09:22:02.269073 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cfe5ac419d866155a0265456691a04087dfc75a25a2076080d0c4f595c04f4d\": container with ID starting with 0cfe5ac419d866155a0265456691a04087dfc75a25a2076080d0c4f595c04f4d not found: ID does not exist" containerID="0cfe5ac419d866155a0265456691a04087dfc75a25a2076080d0c4f595c04f4d" Dec 06 09:22:02 crc kubenswrapper[4895]: I1206 09:22:02.269100 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cfe5ac419d866155a0265456691a04087dfc75a25a2076080d0c4f595c04f4d"} err="failed to get container status \"0cfe5ac419d866155a0265456691a04087dfc75a25a2076080d0c4f595c04f4d\": rpc error: code = NotFound desc = could not find container \"0cfe5ac419d866155a0265456691a04087dfc75a25a2076080d0c4f595c04f4d\": container with ID starting with 0cfe5ac419d866155a0265456691a04087dfc75a25a2076080d0c4f595c04f4d not found: ID does not exist" Dec 06 09:22:02 crc kubenswrapper[4895]: I1206 09:22:02.269115 4895 scope.go:117] "RemoveContainer" containerID="8f898c6bf19f2127e0a63bdd37702faba0d9ed8952a5a469f4086ba500976ea8" Dec 06 09:22:02 crc kubenswrapper[4895]: E1206 09:22:02.269403 4895 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8f898c6bf19f2127e0a63bdd37702faba0d9ed8952a5a469f4086ba500976ea8\": container with ID starting with 8f898c6bf19f2127e0a63bdd37702faba0d9ed8952a5a469f4086ba500976ea8 not found: ID does not exist" containerID="8f898c6bf19f2127e0a63bdd37702faba0d9ed8952a5a469f4086ba500976ea8" Dec 06 09:22:02 crc kubenswrapper[4895]: I1206 09:22:02.269438 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f898c6bf19f2127e0a63bdd37702faba0d9ed8952a5a469f4086ba500976ea8"} err="failed to get container status \"8f898c6bf19f2127e0a63bdd37702faba0d9ed8952a5a469f4086ba500976ea8\": rpc error: code = NotFound desc = could not find container \"8f898c6bf19f2127e0a63bdd37702faba0d9ed8952a5a469f4086ba500976ea8\": container with ID starting with 8f898c6bf19f2127e0a63bdd37702faba0d9ed8952a5a469f4086ba500976ea8 not found: ID does not exist" Dec 06 09:22:04 crc kubenswrapper[4895]: I1206 09:22:04.067903 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3716e58f-1d30-424b-92dc-aeca70455b54" path="/var/lib/kubelet/pods/3716e58f-1d30-424b-92dc-aeca70455b54/volumes" Dec 06 09:22:19 crc kubenswrapper[4895]: I1206 09:22:19.862173 4895 scope.go:117] "RemoveContainer" containerID="858dc9a501d66b996412d4705025cf26dc8e4139ff22a267f7043ee04f2981e4" Dec 06 09:22:19 crc kubenswrapper[4895]: I1206 09:22:19.899332 4895 scope.go:117] "RemoveContainer" containerID="8faea71d98114bbacff04cbbdb98ed91e42014ef4ff0995d26210a3d61622f7b" Dec 06 09:22:19 crc kubenswrapper[4895]: I1206 09:22:19.975354 4895 scope.go:117] "RemoveContainer" containerID="8c8e93cd0d63ba25cade04f9c43799494720338467ee86832b55adfb0d0571ef" Dec 06 09:22:29 crc kubenswrapper[4895]: I1206 09:22:29.696209 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:22:29 crc kubenswrapper[4895]: I1206 09:22:29.696813 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:22:29 crc kubenswrapper[4895]: I1206 09:22:29.696854 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 09:22:29 crc kubenswrapper[4895]: I1206 09:22:29.697670 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 09:22:29 crc kubenswrapper[4895]: I1206 09:22:29.697723 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" gracePeriod=600 Dec 06 09:22:29 crc kubenswrapper[4895]: 
E1206 09:22:29.822905 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:22:30 crc kubenswrapper[4895]: I1206 09:22:30.499398 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" exitCode=0 Dec 06 09:22:30 crc kubenswrapper[4895]: I1206 09:22:30.499515 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f"} Dec 06 09:22:30 crc kubenswrapper[4895]: I1206 09:22:30.499769 4895 scope.go:117] "RemoveContainer" containerID="25a2f0f5587d8e1cb90d91095a50faeb855712cd694f679babd607bf56b409df" Dec 06 09:22:30 crc kubenswrapper[4895]: I1206 09:22:30.502978 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:22:30 crc kubenswrapper[4895]: E1206 09:22:30.504031 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:22:43 crc kubenswrapper[4895]: I1206 09:22:43.050568 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:22:43 crc kubenswrapper[4895]: E1206 09:22:43.051498 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:22:57 crc kubenswrapper[4895]: I1206 09:22:57.050600 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:22:57 crc kubenswrapper[4895]: E1206 09:22:57.052458 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:23:10 crc kubenswrapper[4895]: I1206 09:23:10.050985 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:23:10 crc kubenswrapper[4895]: E1206 09:23:10.051808 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:23:24 crc kubenswrapper[4895]: I1206 09:23:24.051666 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:23:24 crc kubenswrapper[4895]: E1206 09:23:24.052902 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:23:38 crc kubenswrapper[4895]: I1206 09:23:38.060836 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:23:38 crc kubenswrapper[4895]: E1206 09:23:38.061663 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:23:52 crc kubenswrapper[4895]: I1206 09:23:52.050906 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:23:52 crc kubenswrapper[4895]: E1206 09:23:52.051825 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:24:00 crc kubenswrapper[4895]: I1206 09:24:00.064691 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-wnxmc"] Dec 06 09:24:00 crc kubenswrapper[4895]: I1206 09:24:00.068849 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-7610-account-create-update-lg6rx"] Dec 06 09:24:00 crc kubenswrapper[4895]: I1206 09:24:00.079154 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-7610-account-create-update-lg6rx"] Dec 06 09:24:00 crc kubenswrapper[4895]: I1206 09:24:00.087889 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-wnxmc"] Dec 06 09:24:02 crc kubenswrapper[4895]: I1206 09:24:02.070755 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51aabd79-fd16-4e6d-b565-4815c6538cad" path="/var/lib/kubelet/pods/51aabd79-fd16-4e6d-b565-4815c6538cad/volumes" Dec 06 09:24:02 crc kubenswrapper[4895]: I1206 09:24:02.071729 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3b0577e-9827-434b-9d5d-2ecba8296ac7" path="/var/lib/kubelet/pods/a3b0577e-9827-434b-9d5d-2ecba8296ac7/volumes" Dec 06 09:24:06 crc 
kubenswrapper[4895]: I1206 09:24:06.050804 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:24:06 crc kubenswrapper[4895]: E1206 09:24:06.051575 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:24:13 crc kubenswrapper[4895]: I1206 09:24:13.043804 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-pnqmr"] Dec 06 09:24:13 crc kubenswrapper[4895]: I1206 09:24:13.054778 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-pnqmr"] Dec 06 09:24:14 crc kubenswrapper[4895]: I1206 09:24:14.064152 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd15f303-24c8-4075-b3c0-38a27527397f" path="/var/lib/kubelet/pods/bd15f303-24c8-4075-b3c0-38a27527397f/volumes" Dec 06 09:24:17 crc kubenswrapper[4895]: I1206 09:24:17.051292 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:24:17 crc kubenswrapper[4895]: E1206 09:24:17.051979 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:24:20 crc kubenswrapper[4895]: I1206 09:24:20.149700 4895 scope.go:117] "RemoveContainer" containerID="ff6ecae779a5c935f9fa65af26272e41a2521e758d952dddbb389b74110f13eb" Dec 06 09:24:20 crc kubenswrapper[4895]: I1206 09:24:20.184014 4895 scope.go:117] "RemoveContainer" containerID="de9a8f96313486a571565d42e67a3492ff87ef3cf137c54b3b8c9d9bd984325c" Dec 06 09:24:20 crc kubenswrapper[4895]: I1206 09:24:20.219374 4895 scope.go:117] "RemoveContainer" containerID="561f6e3e8c0e5e1aa622d74727a07ee1c4de8d2e4ae91632b5639f7bfca4976d" Dec 06 09:24:31 crc kubenswrapper[4895]: I1206 09:24:31.030718 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-bjkbd"] Dec 06 09:24:31 crc kubenswrapper[4895]: I1206 09:24:31.040946 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-3e90-account-create-update-wsvc8"] Dec 06 09:24:31 crc kubenswrapper[4895]: I1206 09:24:31.050394 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:24:31 crc kubenswrapper[4895]: E1206 09:24:31.050710 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:24:31 crc kubenswrapper[4895]: I1206 09:24:31.056884 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/manila-db-create-bjkbd"] Dec 06 09:24:31 crc kubenswrapper[4895]: I1206 09:24:31.066253 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-3e90-account-create-update-wsvc8"] Dec 06 09:24:32 crc kubenswrapper[4895]: I1206 09:24:32.063439 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3de09b98-9134-4897-b57e-f726979ac670" path="/var/lib/kubelet/pods/3de09b98-9134-4897-b57e-f726979ac670/volumes" Dec 06 09:24:32 crc kubenswrapper[4895]: I1206 09:24:32.065548 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="613023d1-135a-4468-8886-71659c103c60" path="/var/lib/kubelet/pods/613023d1-135a-4468-8886-71659c103c60/volumes" Dec 06 09:24:43 crc kubenswrapper[4895]: I1206 09:24:43.050533 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:24:43 crc kubenswrapper[4895]: I1206 09:24:43.050879 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-9pczv"] Dec 06 09:24:43 crc kubenswrapper[4895]: E1206 09:24:43.051451 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:24:43 crc kubenswrapper[4895]: I1206 09:24:43.061460 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-9pczv"] Dec 06 09:24:44 crc kubenswrapper[4895]: I1206 09:24:44.082778 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2bba439-5a49-4c4c-919b-d190d062fe1e" path="/var/lib/kubelet/pods/f2bba439-5a49-4c4c-919b-d190d062fe1e/volumes" Dec 06 09:24:51 crc kubenswrapper[4895]: I1206 09:24:51.902069 4895 generic.go:334] "Generic (PLEG): container finished" podID="710bdda9-c040-4731-b0cf-dce648cb6c9e" containerID="91a1a2d28f6084e8f6d67770b952675de730b747c57eed24ceb02be2bd4ae7e5" exitCode=0 Dec 06 09:24:51 crc kubenswrapper[4895]: I1206 09:24:51.902186 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" event={"ID":"710bdda9-c040-4731-b0cf-dce648cb6c9e","Type":"ContainerDied","Data":"91a1a2d28f6084e8f6d67770b952675de730b747c57eed24ceb02be2bd4ae7e5"} Dec 06 09:24:53 crc kubenswrapper[4895]: I1206 09:24:53.350502 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" Dec 06 09:24:53 crc kubenswrapper[4895]: I1206 09:24:53.542218 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-tripleo-cleanup-combined-ca-bundle\") pod \"710bdda9-c040-4731-b0cf-dce648cb6c9e\" (UID: \"710bdda9-c040-4731-b0cf-dce648cb6c9e\") " Dec 06 09:24:53 crc kubenswrapper[4895]: I1206 09:24:53.542326 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-inventory\") pod \"710bdda9-c040-4731-b0cf-dce648cb6c9e\" (UID: \"710bdda9-c040-4731-b0cf-dce648cb6c9e\") " Dec 06 09:24:53 crc kubenswrapper[4895]: I1206 09:24:53.542378 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8rml\" (UniqueName: \"kubernetes.io/projected/710bdda9-c040-4731-b0cf-dce648cb6c9e-kube-api-access-z8rml\") pod \"710bdda9-c040-4731-b0cf-dce648cb6c9e\" (UID: \"710bdda9-c040-4731-b0cf-dce648cb6c9e\") " Dec 06 09:24:53 crc kubenswrapper[4895]: I1206 09:24:53.542427 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-ceph\") pod \"710bdda9-c040-4731-b0cf-dce648cb6c9e\" (UID: \"710bdda9-c040-4731-b0cf-dce648cb6c9e\") " Dec 06 09:24:53 crc kubenswrapper[4895]: I1206 09:24:53.542750 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-ssh-key\") pod \"710bdda9-c040-4731-b0cf-dce648cb6c9e\" (UID: \"710bdda9-c040-4731-b0cf-dce648cb6c9e\") " Dec 06 09:24:53 crc kubenswrapper[4895]: I1206 09:24:53.548618 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-ceph" (OuterVolumeSpecName: "ceph") pod "710bdda9-c040-4731-b0cf-dce648cb6c9e" (UID: "710bdda9-c040-4731-b0cf-dce648cb6c9e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:24:53 crc kubenswrapper[4895]: I1206 09:24:53.548682 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/710bdda9-c040-4731-b0cf-dce648cb6c9e-kube-api-access-z8rml" (OuterVolumeSpecName: "kube-api-access-z8rml") pod "710bdda9-c040-4731-b0cf-dce648cb6c9e" (UID: "710bdda9-c040-4731-b0cf-dce648cb6c9e"). InnerVolumeSpecName "kube-api-access-z8rml". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:24:53 crc kubenswrapper[4895]: I1206 09:24:53.553931 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "710bdda9-c040-4731-b0cf-dce648cb6c9e" (UID: "710bdda9-c040-4731-b0cf-dce648cb6c9e"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:24:53 crc kubenswrapper[4895]: I1206 09:24:53.589034 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "710bdda9-c040-4731-b0cf-dce648cb6c9e" (UID: "710bdda9-c040-4731-b0cf-dce648cb6c9e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:24:53 crc kubenswrapper[4895]: I1206 09:24:53.601648 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-inventory" (OuterVolumeSpecName: "inventory") pod "710bdda9-c040-4731-b0cf-dce648cb6c9e" (UID: "710bdda9-c040-4731-b0cf-dce648cb6c9e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:24:53 crc kubenswrapper[4895]: I1206 09:24:53.644984 4895 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:24:53 crc kubenswrapper[4895]: I1206 09:24:53.645017 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:24:53 crc kubenswrapper[4895]: I1206 09:24:53.645030 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8rml\" (UniqueName: \"kubernetes.io/projected/710bdda9-c040-4731-b0cf-dce648cb6c9e-kube-api-access-z8rml\") on node \"crc\" DevicePath \"\"" Dec 06 09:24:53 crc kubenswrapper[4895]: I1206 09:24:53.645039 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 09:24:53 crc kubenswrapper[4895]: I1206 09:24:53.645047 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/710bdda9-c040-4731-b0cf-dce648cb6c9e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:24:53 crc kubenswrapper[4895]: I1206 09:24:53.928762 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" event={"ID":"710bdda9-c040-4731-b0cf-dce648cb6c9e","Type":"ContainerDied","Data":"1a394951e641f08193473cf243a276f61212e7f319ecd7561f8faf59ef04a73e"} Dec 06 09:24:53 crc kubenswrapper[4895]: I1206 09:24:53.928798 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a394951e641f08193473cf243a276f61212e7f319ecd7561f8faf59ef04a73e" Dec 06 09:24:53 crc kubenswrapper[4895]: I1206 09:24:53.928848 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv" Dec 06 09:24:57 crc kubenswrapper[4895]: I1206 09:24:57.051053 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:24:57 crc kubenswrapper[4895]: E1206 09:24:57.051817 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.352378 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-rj7g6"] Dec 06 09:24:58 crc kubenswrapper[4895]: E1206 09:24:58.352992 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3716e58f-1d30-424b-92dc-aeca70455b54" containerName="registry-server" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.353013 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3716e58f-1d30-424b-92dc-aeca70455b54" containerName="registry-server" Dec 06 09:24:58 crc kubenswrapper[4895]: E1206 09:24:58.353054 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3716e58f-1d30-424b-92dc-aeca70455b54" containerName="extract-utilities" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.353065 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3716e58f-1d30-424b-92dc-aeca70455b54" containerName="extract-utilities" Dec 06 09:24:58 crc kubenswrapper[4895]: E1206 09:24:58.353089 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d7adb0-4215-4921-92b5-dd08cf3ce7cf" containerName="extract-utilities" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.353100 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d7adb0-4215-4921-92b5-dd08cf3ce7cf" containerName="extract-utilities" Dec 06 09:24:58 crc kubenswrapper[4895]: E1206 09:24:58.353129 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="710bdda9-c040-4731-b0cf-dce648cb6c9e" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.353141 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="710bdda9-c040-4731-b0cf-dce648cb6c9e" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Dec 06 09:24:58 crc kubenswrapper[4895]: E1206 09:24:58.353163 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d7adb0-4215-4921-92b5-dd08cf3ce7cf" containerName="extract-content" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.353174 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d7adb0-4215-4921-92b5-dd08cf3ce7cf" containerName="extract-content" Dec 06 09:24:58 crc kubenswrapper[4895]: E1206 09:24:58.353189 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3716e58f-1d30-424b-92dc-aeca70455b54" containerName="extract-content" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.353200 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3716e58f-1d30-424b-92dc-aeca70455b54" containerName="extract-content" Dec 06 09:24:58 crc kubenswrapper[4895]: E1206 09:24:58.353248 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="76d7adb0-4215-4921-92b5-dd08cf3ce7cf" containerName="registry-server" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.353261 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d7adb0-4215-4921-92b5-dd08cf3ce7cf" containerName="registry-server" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.353599 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d7adb0-4215-4921-92b5-dd08cf3ce7cf" containerName="registry-server" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.353627 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="710bdda9-c040-4731-b0cf-dce648cb6c9e" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.353654 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="3716e58f-1d30-424b-92dc-aeca70455b54" containerName="registry-server" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.355015 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.365858 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-wfk68" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.366611 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.372800 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.386062 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-networker-m45sx"] Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.388198 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-networker-m45sx" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.390119 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.392764 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.393460 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-vvkpz" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.427550 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-rj7g6"] Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.443234 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-networker-m45sx"] Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.492139 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/079ac743-75e7-470d-84b6-f5d38ee111f9-ssh-key\") pod \"bootstrap-openstack-openstack-networker-m45sx\" (UID: \"079ac743-75e7-470d-84b6-f5d38ee111f9\") " pod="openstack/bootstrap-openstack-openstack-networker-m45sx" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.492402 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-rj7g6\" (UID: \"066e35d1-3c0e-481c-aa9b-40a41fd85835\") " pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.492534 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-inventory\") pod \"bootstrap-openstack-openstack-cell1-rj7g6\" (UID: \"066e35d1-3c0e-481c-aa9b-40a41fd85835\") " pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.492563 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs44p\" (UniqueName: \"kubernetes.io/projected/079ac743-75e7-470d-84b6-f5d38ee111f9-kube-api-access-gs44p\") pod \"bootstrap-openstack-openstack-networker-m45sx\" (UID: \"079ac743-75e7-470d-84b6-f5d38ee111f9\") " pod="openstack/bootstrap-openstack-openstack-networker-m45sx" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.492758 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-ceph\") pod \"bootstrap-openstack-openstack-cell1-rj7g6\" (UID: \"066e35d1-3c0e-481c-aa9b-40a41fd85835\") " pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.492858 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-rj7g6\" (UID: \"066e35d1-3c0e-481c-aa9b-40a41fd85835\") " 
pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.492967 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dshf8\" (UniqueName: \"kubernetes.io/projected/066e35d1-3c0e-481c-aa9b-40a41fd85835-kube-api-access-dshf8\") pod \"bootstrap-openstack-openstack-cell1-rj7g6\" (UID: \"066e35d1-3c0e-481c-aa9b-40a41fd85835\") " pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.493109 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079ac743-75e7-470d-84b6-f5d38ee111f9-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-networker-m45sx\" (UID: \"079ac743-75e7-470d-84b6-f5d38ee111f9\") " pod="openstack/bootstrap-openstack-openstack-networker-m45sx" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.493226 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/079ac743-75e7-470d-84b6-f5d38ee111f9-inventory\") pod \"bootstrap-openstack-openstack-networker-m45sx\" (UID: \"079ac743-75e7-470d-84b6-f5d38ee111f9\") " pod="openstack/bootstrap-openstack-openstack-networker-m45sx" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.597858 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/079ac743-75e7-470d-84b6-f5d38ee111f9-ssh-key\") pod \"bootstrap-openstack-openstack-networker-m45sx\" (UID: \"079ac743-75e7-470d-84b6-f5d38ee111f9\") " pod="openstack/bootstrap-openstack-openstack-networker-m45sx" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.597951 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-rj7g6\" (UID: \"066e35d1-3c0e-481c-aa9b-40a41fd85835\") " pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.597986 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-inventory\") pod \"bootstrap-openstack-openstack-cell1-rj7g6\" (UID: \"066e35d1-3c0e-481c-aa9b-40a41fd85835\") " pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.598005 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs44p\" (UniqueName: \"kubernetes.io/projected/079ac743-75e7-470d-84b6-f5d38ee111f9-kube-api-access-gs44p\") pod \"bootstrap-openstack-openstack-networker-m45sx\" (UID: \"079ac743-75e7-470d-84b6-f5d38ee111f9\") " pod="openstack/bootstrap-openstack-openstack-networker-m45sx" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.598249 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-ceph\") pod \"bootstrap-openstack-openstack-cell1-rj7g6\" (UID: \"066e35d1-3c0e-481c-aa9b-40a41fd85835\") " pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.598263 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-rj7g6\" (UID: \"066e35d1-3c0e-481c-aa9b-40a41fd85835\") " pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.598282 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dshf8\" (UniqueName: \"kubernetes.io/projected/066e35d1-3c0e-481c-aa9b-40a41fd85835-kube-api-access-dshf8\") pod \"bootstrap-openstack-openstack-cell1-rj7g6\" (UID: \"066e35d1-3c0e-481c-aa9b-40a41fd85835\") " pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.598309 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079ac743-75e7-470d-84b6-f5d38ee111f9-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-networker-m45sx\" (UID: \"079ac743-75e7-470d-84b6-f5d38ee111f9\") " pod="openstack/bootstrap-openstack-openstack-networker-m45sx" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.598330 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/079ac743-75e7-470d-84b6-f5d38ee111f9-inventory\") pod \"bootstrap-openstack-openstack-networker-m45sx\" (UID: \"079ac743-75e7-470d-84b6-f5d38ee111f9\") " pod="openstack/bootstrap-openstack-openstack-networker-m45sx" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.620105 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/079ac743-75e7-470d-84b6-f5d38ee111f9-inventory\") pod \"bootstrap-openstack-openstack-networker-m45sx\" (UID: \"079ac743-75e7-470d-84b6-f5d38ee111f9\") " pod="openstack/bootstrap-openstack-openstack-networker-m45sx" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.620733 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-rj7g6\" (UID: \"066e35d1-3c0e-481c-aa9b-40a41fd85835\") " pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.621093 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/079ac743-75e7-470d-84b6-f5d38ee111f9-ssh-key\") pod \"bootstrap-openstack-openstack-networker-m45sx\" (UID: \"079ac743-75e7-470d-84b6-f5d38ee111f9\") " pod="openstack/bootstrap-openstack-openstack-networker-m45sx" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.621328 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-rj7g6\" (UID: \"066e35d1-3c0e-481c-aa9b-40a41fd85835\") " pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.625166 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-ceph\") pod \"bootstrap-openstack-openstack-cell1-rj7g6\" (UID: 
\"066e35d1-3c0e-481c-aa9b-40a41fd85835\") " pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.626292 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs44p\" (UniqueName: \"kubernetes.io/projected/079ac743-75e7-470d-84b6-f5d38ee111f9-kube-api-access-gs44p\") pod \"bootstrap-openstack-openstack-networker-m45sx\" (UID: \"079ac743-75e7-470d-84b6-f5d38ee111f9\") " pod="openstack/bootstrap-openstack-openstack-networker-m45sx" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.627021 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079ac743-75e7-470d-84b6-f5d38ee111f9-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-networker-m45sx\" (UID: \"079ac743-75e7-470d-84b6-f5d38ee111f9\") " pod="openstack/bootstrap-openstack-openstack-networker-m45sx" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.643122 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-inventory\") pod \"bootstrap-openstack-openstack-cell1-rj7g6\" (UID: \"066e35d1-3c0e-481c-aa9b-40a41fd85835\") " pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.643271 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dshf8\" (UniqueName: \"kubernetes.io/projected/066e35d1-3c0e-481c-aa9b-40a41fd85835-kube-api-access-dshf8\") pod \"bootstrap-openstack-openstack-cell1-rj7g6\" (UID: \"066e35d1-3c0e-481c-aa9b-40a41fd85835\") " pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.689894 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" Dec 06 09:24:58 crc kubenswrapper[4895]: I1206 09:24:58.754382 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-networker-m45sx" Dec 06 09:24:59 crc kubenswrapper[4895]: I1206 09:24:59.302287 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-rj7g6"] Dec 06 09:24:59 crc kubenswrapper[4895]: I1206 09:24:59.403077 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 09:24:59 crc kubenswrapper[4895]: I1206 09:24:59.405069 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-networker-m45sx"] Dec 06 09:24:59 crc kubenswrapper[4895]: W1206 09:24:59.424656 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod079ac743_75e7_470d_84b6_f5d38ee111f9.slice/crio-c1446945e8beea6c1d5dc79656c6f32948b84c66bbb741e2446cfef93dd93865 WatchSource:0}: Error finding container c1446945e8beea6c1d5dc79656c6f32948b84c66bbb741e2446cfef93dd93865: Status 404 returned error can't find the container with id c1446945e8beea6c1d5dc79656c6f32948b84c66bbb741e2446cfef93dd93865 Dec 06 09:24:59 crc kubenswrapper[4895]: I1206 09:24:59.984934 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-networker-m45sx" event={"ID":"079ac743-75e7-470d-84b6-f5d38ee111f9","Type":"ContainerStarted","Data":"c1446945e8beea6c1d5dc79656c6f32948b84c66bbb741e2446cfef93dd93865"} Dec 06 09:24:59 crc kubenswrapper[4895]: I1206 09:24:59.985962 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" event={"ID":"066e35d1-3c0e-481c-aa9b-40a41fd85835","Type":"ContainerStarted","Data":"f7ad09476ef02dae93cd1cd8deb545a069965db5ac4470cb2145e9a4093de66e"} Dec 06 09:25:00 crc kubenswrapper[4895]: I1206 09:25:00.999353 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-networker-m45sx" event={"ID":"079ac743-75e7-470d-84b6-f5d38ee111f9","Type":"ContainerStarted","Data":"7dafb429b1a44d2b95afab2e0fb19369872974dc2ba6a8e94974d5436dc4c52f"} Dec 06 09:25:01 crc kubenswrapper[4895]: I1206 09:25:01.001496 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" event={"ID":"066e35d1-3c0e-481c-aa9b-40a41fd85835","Type":"ContainerStarted","Data":"1449b3b0905d677a101214f490e0d844b703ed33026aa793a033c91db9c2b061"} Dec 06 09:25:01 crc kubenswrapper[4895]: I1206 09:25:01.027145 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-networker-m45sx" podStartSLOduration=2.59508221 podStartE2EDuration="3.027122104s" podCreationTimestamp="2025-12-06 09:24:58 +0000 UTC" firstStartedPulling="2025-12-06 09:24:59.429819485 +0000 UTC m=+8861.831208365" lastFinishedPulling="2025-12-06 09:24:59.861859389 +0000 UTC m=+8862.263248259" observedRunningTime="2025-12-06 09:25:01.02294658 +0000 UTC m=+8863.424335460" watchObservedRunningTime="2025-12-06 09:25:01.027122104 +0000 UTC m=+8863.428510974" Dec 06 09:25:01 crc kubenswrapper[4895]: I1206 09:25:01.042150 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" podStartSLOduration=2.609264338 podStartE2EDuration="3.042129644s" podCreationTimestamp="2025-12-06 09:24:58 +0000 UTC" firstStartedPulling="2025-12-06 09:24:59.402641173 +0000 UTC m=+8861.804030043" lastFinishedPulling="2025-12-06 
09:24:59.835506479 +0000 UTC m=+8862.236895349" observedRunningTime="2025-12-06 09:25:01.041120956 +0000 UTC m=+8863.442509836" watchObservedRunningTime="2025-12-06 09:25:01.042129644 +0000 UTC m=+8863.443518514" Dec 06 09:25:11 crc kubenswrapper[4895]: I1206 09:25:11.050789 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:25:11 crc kubenswrapper[4895]: E1206 09:25:11.051975 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:25:20 crc kubenswrapper[4895]: I1206 09:25:20.335080 4895 scope.go:117] "RemoveContainer" containerID="9f8fe9a4209f618704bc615bc1a72ff886f2037aa438113b8c4afbb4f00fd73b" Dec 06 09:25:20 crc kubenswrapper[4895]: I1206 09:25:20.368175 4895 scope.go:117] "RemoveContainer" containerID="6604e7b5e8f3874be5f4b4dcb897b3208fc631da7abe023e674d36f2c6270ed8" Dec 06 09:25:20 crc kubenswrapper[4895]: I1206 09:25:20.457462 4895 scope.go:117] "RemoveContainer" containerID="151687d148543088c98ac569bd4bed6f23a765c7af43108b1b702526deab1724" Dec 06 09:25:25 crc kubenswrapper[4895]: I1206 09:25:25.050665 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:25:25 crc kubenswrapper[4895]: E1206 09:25:25.051577 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:25:39 crc kubenswrapper[4895]: I1206 09:25:39.059145 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:25:39 crc kubenswrapper[4895]: E1206 09:25:39.060015 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:25:54 crc kubenswrapper[4895]: I1206 09:25:54.050914 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:25:54 crc kubenswrapper[4895]: E1206 09:25:54.051745 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:26:09 crc kubenswrapper[4895]: I1206 09:26:09.051209 4895 scope.go:117] "RemoveContainer" 
containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:26:09 crc kubenswrapper[4895]: E1206 09:26:09.052434 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:26:21 crc kubenswrapper[4895]: I1206 09:26:21.051465 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:26:21 crc kubenswrapper[4895]: E1206 09:26:21.052299 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:26:34 crc kubenswrapper[4895]: I1206 09:26:34.057492 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:26:34 crc kubenswrapper[4895]: E1206 09:26:34.058288 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:26:49 crc kubenswrapper[4895]: I1206 09:26:49.051066 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:26:49 crc kubenswrapper[4895]: E1206 09:26:49.051827 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:27:00 crc kubenswrapper[4895]: I1206 09:27:00.050371 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:27:00 crc kubenswrapper[4895]: E1206 09:27:00.051216 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:27:15 crc kubenswrapper[4895]: I1206 09:27:15.051187 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:27:15 crc kubenswrapper[4895]: E1206 09:27:15.052024 4895 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:27:26 crc kubenswrapper[4895]: I1206 09:27:26.979017 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wgwqx"] Dec 06 09:27:26 crc kubenswrapper[4895]: I1206 09:27:26.983181 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wgwqx" Dec 06 09:27:26 crc kubenswrapper[4895]: I1206 09:27:26.990493 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wgwqx"] Dec 06 09:27:27 crc kubenswrapper[4895]: I1206 09:27:27.137919 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76ab4772-71ef-43c7-aebb-2c5d88fd0b89-catalog-content\") pod \"certified-operators-wgwqx\" (UID: \"76ab4772-71ef-43c7-aebb-2c5d88fd0b89\") " pod="openshift-marketplace/certified-operators-wgwqx" Dec 06 09:27:27 crc kubenswrapper[4895]: I1206 09:27:27.138077 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76ab4772-71ef-43c7-aebb-2c5d88fd0b89-utilities\") pod \"certified-operators-wgwqx\" (UID: \"76ab4772-71ef-43c7-aebb-2c5d88fd0b89\") " pod="openshift-marketplace/certified-operators-wgwqx" Dec 06 09:27:27 crc kubenswrapper[4895]: I1206 09:27:27.138144 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jptcv\" (UniqueName: \"kubernetes.io/projected/76ab4772-71ef-43c7-aebb-2c5d88fd0b89-kube-api-access-jptcv\") pod \"certified-operators-wgwqx\" (UID: \"76ab4772-71ef-43c7-aebb-2c5d88fd0b89\") " pod="openshift-marketplace/certified-operators-wgwqx" Dec 06 09:27:27 crc kubenswrapper[4895]: I1206 09:27:27.240535 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76ab4772-71ef-43c7-aebb-2c5d88fd0b89-catalog-content\") pod \"certified-operators-wgwqx\" (UID: \"76ab4772-71ef-43c7-aebb-2c5d88fd0b89\") " pod="openshift-marketplace/certified-operators-wgwqx" Dec 06 09:27:27 crc kubenswrapper[4895]: I1206 09:27:27.240691 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76ab4772-71ef-43c7-aebb-2c5d88fd0b89-utilities\") pod \"certified-operators-wgwqx\" (UID: \"76ab4772-71ef-43c7-aebb-2c5d88fd0b89\") " pod="openshift-marketplace/certified-operators-wgwqx" Dec 06 09:27:27 crc kubenswrapper[4895]: I1206 09:27:27.240763 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jptcv\" (UniqueName: \"kubernetes.io/projected/76ab4772-71ef-43c7-aebb-2c5d88fd0b89-kube-api-access-jptcv\") pod \"certified-operators-wgwqx\" (UID: \"76ab4772-71ef-43c7-aebb-2c5d88fd0b89\") " pod="openshift-marketplace/certified-operators-wgwqx" Dec 06 09:27:27 crc kubenswrapper[4895]: I1206 09:27:27.241206 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/76ab4772-71ef-43c7-aebb-2c5d88fd0b89-catalog-content\") pod \"certified-operators-wgwqx\" (UID: \"76ab4772-71ef-43c7-aebb-2c5d88fd0b89\") " pod="openshift-marketplace/certified-operators-wgwqx" Dec 06 09:27:27 crc kubenswrapper[4895]: I1206 09:27:27.241248 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76ab4772-71ef-43c7-aebb-2c5d88fd0b89-utilities\") pod \"certified-operators-wgwqx\" (UID: \"76ab4772-71ef-43c7-aebb-2c5d88fd0b89\") " pod="openshift-marketplace/certified-operators-wgwqx" Dec 06 09:27:27 crc kubenswrapper[4895]: I1206 09:27:27.295723 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jptcv\" (UniqueName: \"kubernetes.io/projected/76ab4772-71ef-43c7-aebb-2c5d88fd0b89-kube-api-access-jptcv\") pod \"certified-operators-wgwqx\" (UID: \"76ab4772-71ef-43c7-aebb-2c5d88fd0b89\") " pod="openshift-marketplace/certified-operators-wgwqx" Dec 06 09:27:27 crc kubenswrapper[4895]: I1206 09:27:27.307745 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wgwqx" Dec 06 09:27:27 crc kubenswrapper[4895]: I1206 09:27:27.888232 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wgwqx"] Dec 06 09:27:27 crc kubenswrapper[4895]: W1206 09:27:27.896393 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76ab4772_71ef_43c7_aebb_2c5d88fd0b89.slice/crio-3289155267a45192a91319193d35d5973243813740b3feb86e0e72e96c5799f9 WatchSource:0}: Error finding container 3289155267a45192a91319193d35d5973243813740b3feb86e0e72e96c5799f9: Status 404 returned error can't find the container with id 3289155267a45192a91319193d35d5973243813740b3feb86e0e72e96c5799f9 Dec 06 09:27:28 crc kubenswrapper[4895]: I1206 09:27:28.057381 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:27:28 crc kubenswrapper[4895]: E1206 09:27:28.057619 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:27:28 crc kubenswrapper[4895]: I1206 09:27:28.720969 4895 generic.go:334] "Generic (PLEG): container finished" podID="76ab4772-71ef-43c7-aebb-2c5d88fd0b89" containerID="bfce39a929e074b324de6c3187ac21943ac47a6ca46a2c7450ebdbedfe34cfa1" exitCode=0 Dec 06 09:27:28 crc kubenswrapper[4895]: I1206 09:27:28.721041 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgwqx" event={"ID":"76ab4772-71ef-43c7-aebb-2c5d88fd0b89","Type":"ContainerDied","Data":"bfce39a929e074b324de6c3187ac21943ac47a6ca46a2c7450ebdbedfe34cfa1"} Dec 06 09:27:28 crc kubenswrapper[4895]: I1206 09:27:28.721308 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgwqx" event={"ID":"76ab4772-71ef-43c7-aebb-2c5d88fd0b89","Type":"ContainerStarted","Data":"3289155267a45192a91319193d35d5973243813740b3feb86e0e72e96c5799f9"} Dec 06 09:27:29 crc kubenswrapper[4895]: I1206 09:27:29.735320 4895 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgwqx" event={"ID":"76ab4772-71ef-43c7-aebb-2c5d88fd0b89","Type":"ContainerStarted","Data":"32c7f847d4c2d51c09800554ad1b53df83883fca3cefea2670209fdec74d1cec"} Dec 06 09:27:30 crc kubenswrapper[4895]: I1206 09:27:30.746381 4895 generic.go:334] "Generic (PLEG): container finished" podID="76ab4772-71ef-43c7-aebb-2c5d88fd0b89" containerID="32c7f847d4c2d51c09800554ad1b53df83883fca3cefea2670209fdec74d1cec" exitCode=0 Dec 06 09:27:30 crc kubenswrapper[4895]: I1206 09:27:30.746759 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgwqx" event={"ID":"76ab4772-71ef-43c7-aebb-2c5d88fd0b89","Type":"ContainerDied","Data":"32c7f847d4c2d51c09800554ad1b53df83883fca3cefea2670209fdec74d1cec"} Dec 06 09:27:31 crc kubenswrapper[4895]: I1206 09:27:31.766799 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgwqx" event={"ID":"76ab4772-71ef-43c7-aebb-2c5d88fd0b89","Type":"ContainerStarted","Data":"cffe298a73c2f6b5595d43019d40673b8a3412a2c3f44a2bf5767e271a542586"} Dec 06 09:27:31 crc kubenswrapper[4895]: I1206 09:27:31.788017 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wgwqx" podStartSLOduration=3.3430638520000002 podStartE2EDuration="5.787986136s" podCreationTimestamp="2025-12-06 09:27:26 +0000 UTC" firstStartedPulling="2025-12-06 09:27:28.723749162 +0000 UTC m=+9011.125138032" lastFinishedPulling="2025-12-06 09:27:31.168671446 +0000 UTC m=+9013.570060316" observedRunningTime="2025-12-06 09:27:31.78703942 +0000 UTC m=+9014.188428290" watchObservedRunningTime="2025-12-06 09:27:31.787986136 +0000 UTC m=+9014.189375006" Dec 06 09:27:37 crc kubenswrapper[4895]: I1206 09:27:37.308807 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wgwqx" Dec 06 09:27:37 crc kubenswrapper[4895]: I1206 09:27:37.309413 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wgwqx" Dec 06 09:27:37 crc kubenswrapper[4895]: I1206 09:27:37.378903 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wgwqx" Dec 06 09:27:37 crc kubenswrapper[4895]: I1206 09:27:37.874944 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wgwqx" Dec 06 09:27:37 crc kubenswrapper[4895]: I1206 09:27:37.924956 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wgwqx"] Dec 06 09:27:39 crc kubenswrapper[4895]: I1206 09:27:39.839318 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wgwqx" podUID="76ab4772-71ef-43c7-aebb-2c5d88fd0b89" containerName="registry-server" containerID="cri-o://cffe298a73c2f6b5595d43019d40673b8a3412a2c3f44a2bf5767e271a542586" gracePeriod=2 Dec 06 09:27:40 crc kubenswrapper[4895]: I1206 09:27:40.051600 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:27:40 crc kubenswrapper[4895]: I1206 09:27:40.850740 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" 
event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"ffdc2a939a93f933962849d90f631256edf91007e25ce39191e7dad4620ed7f2"} Dec 06 09:27:40 crc kubenswrapper[4895]: I1206 09:27:40.854577 4895 generic.go:334] "Generic (PLEG): container finished" podID="76ab4772-71ef-43c7-aebb-2c5d88fd0b89" containerID="cffe298a73c2f6b5595d43019d40673b8a3412a2c3f44a2bf5767e271a542586" exitCode=0 Dec 06 09:27:40 crc kubenswrapper[4895]: I1206 09:27:40.854620 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgwqx" event={"ID":"76ab4772-71ef-43c7-aebb-2c5d88fd0b89","Type":"ContainerDied","Data":"cffe298a73c2f6b5595d43019d40673b8a3412a2c3f44a2bf5767e271a542586"} Dec 06 09:27:41 crc kubenswrapper[4895]: I1206 09:27:41.549891 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wgwqx" Dec 06 09:27:41 crc kubenswrapper[4895]: I1206 09:27:41.654313 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76ab4772-71ef-43c7-aebb-2c5d88fd0b89-catalog-content\") pod \"76ab4772-71ef-43c7-aebb-2c5d88fd0b89\" (UID: \"76ab4772-71ef-43c7-aebb-2c5d88fd0b89\") " Dec 06 09:27:41 crc kubenswrapper[4895]: I1206 09:27:41.654450 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76ab4772-71ef-43c7-aebb-2c5d88fd0b89-utilities\") pod \"76ab4772-71ef-43c7-aebb-2c5d88fd0b89\" (UID: \"76ab4772-71ef-43c7-aebb-2c5d88fd0b89\") " Dec 06 09:27:41 crc kubenswrapper[4895]: I1206 09:27:41.654621 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jptcv\" (UniqueName: \"kubernetes.io/projected/76ab4772-71ef-43c7-aebb-2c5d88fd0b89-kube-api-access-jptcv\") pod \"76ab4772-71ef-43c7-aebb-2c5d88fd0b89\" (UID: \"76ab4772-71ef-43c7-aebb-2c5d88fd0b89\") " Dec 06 09:27:41 crc kubenswrapper[4895]: I1206 09:27:41.655251 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76ab4772-71ef-43c7-aebb-2c5d88fd0b89-utilities" (OuterVolumeSpecName: "utilities") pod "76ab4772-71ef-43c7-aebb-2c5d88fd0b89" (UID: "76ab4772-71ef-43c7-aebb-2c5d88fd0b89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:27:41 crc kubenswrapper[4895]: I1206 09:27:41.662018 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76ab4772-71ef-43c7-aebb-2c5d88fd0b89-kube-api-access-jptcv" (OuterVolumeSpecName: "kube-api-access-jptcv") pod "76ab4772-71ef-43c7-aebb-2c5d88fd0b89" (UID: "76ab4772-71ef-43c7-aebb-2c5d88fd0b89"). InnerVolumeSpecName "kube-api-access-jptcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:27:41 crc kubenswrapper[4895]: I1206 09:27:41.703177 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76ab4772-71ef-43c7-aebb-2c5d88fd0b89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76ab4772-71ef-43c7-aebb-2c5d88fd0b89" (UID: "76ab4772-71ef-43c7-aebb-2c5d88fd0b89"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:27:41 crc kubenswrapper[4895]: I1206 09:27:41.757145 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jptcv\" (UniqueName: \"kubernetes.io/projected/76ab4772-71ef-43c7-aebb-2c5d88fd0b89-kube-api-access-jptcv\") on node \"crc\" DevicePath \"\"" Dec 06 09:27:41 crc kubenswrapper[4895]: I1206 09:27:41.757188 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76ab4772-71ef-43c7-aebb-2c5d88fd0b89-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 09:27:41 crc kubenswrapper[4895]: I1206 09:27:41.757200 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76ab4772-71ef-43c7-aebb-2c5d88fd0b89-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 09:27:41 crc kubenswrapper[4895]: I1206 09:27:41.867268 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgwqx" event={"ID":"76ab4772-71ef-43c7-aebb-2c5d88fd0b89","Type":"ContainerDied","Data":"3289155267a45192a91319193d35d5973243813740b3feb86e0e72e96c5799f9"} Dec 06 09:27:41 crc kubenswrapper[4895]: I1206 09:27:41.867565 4895 scope.go:117] "RemoveContainer" containerID="cffe298a73c2f6b5595d43019d40673b8a3412a2c3f44a2bf5767e271a542586" Dec 06 09:27:41 crc kubenswrapper[4895]: I1206 09:27:41.867812 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wgwqx" Dec 06 09:27:41 crc kubenswrapper[4895]: I1206 09:27:41.903792 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wgwqx"] Dec 06 09:27:41 crc kubenswrapper[4895]: I1206 09:27:41.910089 4895 scope.go:117] "RemoveContainer" containerID="32c7f847d4c2d51c09800554ad1b53df83883fca3cefea2670209fdec74d1cec" Dec 06 09:27:41 crc kubenswrapper[4895]: I1206 09:27:41.914504 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wgwqx"] Dec 06 09:27:41 crc kubenswrapper[4895]: I1206 09:27:41.929796 4895 scope.go:117] "RemoveContainer" containerID="bfce39a929e074b324de6c3187ac21943ac47a6ca46a2c7450ebdbedfe34cfa1" Dec 06 09:27:42 crc kubenswrapper[4895]: I1206 09:27:42.063995 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76ab4772-71ef-43c7-aebb-2c5d88fd0b89" path="/var/lib/kubelet/pods/76ab4772-71ef-43c7-aebb-2c5d88fd0b89/volumes" Dec 06 09:28:08 crc kubenswrapper[4895]: I1206 09:28:08.120237 4895 generic.go:334] "Generic (PLEG): container finished" podID="066e35d1-3c0e-481c-aa9b-40a41fd85835" containerID="1449b3b0905d677a101214f490e0d844b703ed33026aa793a033c91db9c2b061" exitCode=0 Dec 06 09:28:08 crc kubenswrapper[4895]: I1206 09:28:08.120357 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" event={"ID":"066e35d1-3c0e-481c-aa9b-40a41fd85835","Type":"ContainerDied","Data":"1449b3b0905d677a101214f490e0d844b703ed33026aa793a033c91db9c2b061"} Dec 06 09:28:09 crc kubenswrapper[4895]: I1206 09:28:09.689690 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" Dec 06 09:28:09 crc kubenswrapper[4895]: I1206 09:28:09.748291 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-bootstrap-combined-ca-bundle\") pod \"066e35d1-3c0e-481c-aa9b-40a41fd85835\" (UID: \"066e35d1-3c0e-481c-aa9b-40a41fd85835\") " Dec 06 09:28:09 crc kubenswrapper[4895]: I1206 09:28:09.748456 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-ceph\") pod \"066e35d1-3c0e-481c-aa9b-40a41fd85835\" (UID: \"066e35d1-3c0e-481c-aa9b-40a41fd85835\") " Dec 06 09:28:09 crc kubenswrapper[4895]: I1206 09:28:09.748597 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dshf8\" (UniqueName: \"kubernetes.io/projected/066e35d1-3c0e-481c-aa9b-40a41fd85835-kube-api-access-dshf8\") pod \"066e35d1-3c0e-481c-aa9b-40a41fd85835\" (UID: \"066e35d1-3c0e-481c-aa9b-40a41fd85835\") " Dec 06 09:28:09 crc kubenswrapper[4895]: I1206 09:28:09.748677 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-inventory\") pod \"066e35d1-3c0e-481c-aa9b-40a41fd85835\" (UID: \"066e35d1-3c0e-481c-aa9b-40a41fd85835\") " Dec 06 09:28:09 crc kubenswrapper[4895]: I1206 09:28:09.748810 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-ssh-key\") pod \"066e35d1-3c0e-481c-aa9b-40a41fd85835\" (UID: \"066e35d1-3c0e-481c-aa9b-40a41fd85835\") " Dec 06 09:28:09 crc kubenswrapper[4895]: I1206 09:28:09.759766 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "066e35d1-3c0e-481c-aa9b-40a41fd85835" (UID: "066e35d1-3c0e-481c-aa9b-40a41fd85835"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:28:09 crc kubenswrapper[4895]: I1206 09:28:09.760024 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/066e35d1-3c0e-481c-aa9b-40a41fd85835-kube-api-access-dshf8" (OuterVolumeSpecName: "kube-api-access-dshf8") pod "066e35d1-3c0e-481c-aa9b-40a41fd85835" (UID: "066e35d1-3c0e-481c-aa9b-40a41fd85835"). InnerVolumeSpecName "kube-api-access-dshf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:28:09 crc kubenswrapper[4895]: I1206 09:28:09.771271 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-ceph" (OuterVolumeSpecName: "ceph") pod "066e35d1-3c0e-481c-aa9b-40a41fd85835" (UID: "066e35d1-3c0e-481c-aa9b-40a41fd85835"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:28:09 crc kubenswrapper[4895]: I1206 09:28:09.788700 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "066e35d1-3c0e-481c-aa9b-40a41fd85835" (UID: "066e35d1-3c0e-481c-aa9b-40a41fd85835"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:28:09 crc kubenswrapper[4895]: I1206 09:28:09.799386 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-inventory" (OuterVolumeSpecName: "inventory") pod "066e35d1-3c0e-481c-aa9b-40a41fd85835" (UID: "066e35d1-3c0e-481c-aa9b-40a41fd85835"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:28:09 crc kubenswrapper[4895]: I1206 09:28:09.851073 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 09:28:09 crc kubenswrapper[4895]: I1206 09:28:09.851116 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dshf8\" (UniqueName: \"kubernetes.io/projected/066e35d1-3c0e-481c-aa9b-40a41fd85835-kube-api-access-dshf8\") on node \"crc\" DevicePath \"\"" Dec 06 09:28:09 crc kubenswrapper[4895]: I1206 09:28:09.851128 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:28:09 crc kubenswrapper[4895]: I1206 09:28:09.851136 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:28:09 crc kubenswrapper[4895]: I1206 09:28:09.851145 4895 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066e35d1-3c0e-481c-aa9b-40a41fd85835-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.140518 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" event={"ID":"066e35d1-3c0e-481c-aa9b-40a41fd85835","Type":"ContainerDied","Data":"f7ad09476ef02dae93cd1cd8deb545a069965db5ac4470cb2145e9a4093de66e"} Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.140856 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7ad09476ef02dae93cd1cd8deb545a069965db5ac4470cb2145e9a4093de66e" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.140580 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-rj7g6" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.142584 4895 generic.go:334] "Generic (PLEG): container finished" podID="079ac743-75e7-470d-84b6-f5d38ee111f9" containerID="7dafb429b1a44d2b95afab2e0fb19369872974dc2ba6a8e94974d5436dc4c52f" exitCode=0 Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.142619 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-networker-m45sx" event={"ID":"079ac743-75e7-470d-84b6-f5d38ee111f9","Type":"ContainerDied","Data":"7dafb429b1a44d2b95afab2e0fb19369872974dc2ba6a8e94974d5436dc4c52f"} Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.248533 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-gwgt2"] Dec 06 09:28:10 crc kubenswrapper[4895]: E1206 09:28:10.248941 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="066e35d1-3c0e-481c-aa9b-40a41fd85835" containerName="bootstrap-openstack-openstack-cell1" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.248971 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="066e35d1-3c0e-481c-aa9b-40a41fd85835" containerName="bootstrap-openstack-openstack-cell1" Dec 06 09:28:10 crc kubenswrapper[4895]: E1206 09:28:10.248996 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76ab4772-71ef-43c7-aebb-2c5d88fd0b89" containerName="extract-content" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.249002 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="76ab4772-71ef-43c7-aebb-2c5d88fd0b89" containerName="extract-content" Dec 06 09:28:10 crc kubenswrapper[4895]: E1206 09:28:10.249021 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76ab4772-71ef-43c7-aebb-2c5d88fd0b89" containerName="extract-utilities" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.249030 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="76ab4772-71ef-43c7-aebb-2c5d88fd0b89" containerName="extract-utilities" Dec 06 09:28:10 crc kubenswrapper[4895]: E1206 09:28:10.249040 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76ab4772-71ef-43c7-aebb-2c5d88fd0b89" containerName="registry-server" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.249045 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="76ab4772-71ef-43c7-aebb-2c5d88fd0b89" containerName="registry-server" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.249237 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="066e35d1-3c0e-481c-aa9b-40a41fd85835" containerName="bootstrap-openstack-openstack-cell1" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.249258 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="76ab4772-71ef-43c7-aebb-2c5d88fd0b89" containerName="registry-server" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.249996 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.252004 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-wfk68" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.252187 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.265330 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-gwgt2"] Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.359830 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5f603c7-e31a-4a5c-b029-986666a34609-inventory\") pod \"download-cache-openstack-openstack-cell1-gwgt2\" (UID: \"d5f603c7-e31a-4a5c-b029-986666a34609\") " pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.359889 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5f603c7-e31a-4a5c-b029-986666a34609-ceph\") pod \"download-cache-openstack-openstack-cell1-gwgt2\" (UID: \"d5f603c7-e31a-4a5c-b029-986666a34609\") " pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.359926 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzmqr\" (UniqueName: \"kubernetes.io/projected/d5f603c7-e31a-4a5c-b029-986666a34609-kube-api-access-hzmqr\") pod \"download-cache-openstack-openstack-cell1-gwgt2\" (UID: \"d5f603c7-e31a-4a5c-b029-986666a34609\") " pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.360004 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5f603c7-e31a-4a5c-b029-986666a34609-ssh-key\") pod \"download-cache-openstack-openstack-cell1-gwgt2\" (UID: \"d5f603c7-e31a-4a5c-b029-986666a34609\") " pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.462570 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5f603c7-e31a-4a5c-b029-986666a34609-inventory\") pod \"download-cache-openstack-openstack-cell1-gwgt2\" (UID: \"d5f603c7-e31a-4a5c-b029-986666a34609\") " pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.462627 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5f603c7-e31a-4a5c-b029-986666a34609-ceph\") pod \"download-cache-openstack-openstack-cell1-gwgt2\" (UID: \"d5f603c7-e31a-4a5c-b029-986666a34609\") " pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.462665 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzmqr\" (UniqueName: \"kubernetes.io/projected/d5f603c7-e31a-4a5c-b029-986666a34609-kube-api-access-hzmqr\") pod \"download-cache-openstack-openstack-cell1-gwgt2\" (UID: \"d5f603c7-e31a-4a5c-b029-986666a34609\") " 
pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.462716 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5f603c7-e31a-4a5c-b029-986666a34609-ssh-key\") pod \"download-cache-openstack-openstack-cell1-gwgt2\" (UID: \"d5f603c7-e31a-4a5c-b029-986666a34609\") " pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.471242 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5f603c7-e31a-4a5c-b029-986666a34609-inventory\") pod \"download-cache-openstack-openstack-cell1-gwgt2\" (UID: \"d5f603c7-e31a-4a5c-b029-986666a34609\") " pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.471294 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5f603c7-e31a-4a5c-b029-986666a34609-ceph\") pod \"download-cache-openstack-openstack-cell1-gwgt2\" (UID: \"d5f603c7-e31a-4a5c-b029-986666a34609\") " pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.481775 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5f603c7-e31a-4a5c-b029-986666a34609-ssh-key\") pod \"download-cache-openstack-openstack-cell1-gwgt2\" (UID: \"d5f603c7-e31a-4a5c-b029-986666a34609\") " pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.483240 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzmqr\" (UniqueName: \"kubernetes.io/projected/d5f603c7-e31a-4a5c-b029-986666a34609-kube-api-access-hzmqr\") pod \"download-cache-openstack-openstack-cell1-gwgt2\" (UID: \"d5f603c7-e31a-4a5c-b029-986666a34609\") " pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" Dec 06 09:28:10 crc kubenswrapper[4895]: I1206 09:28:10.570043 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" Dec 06 09:28:11 crc kubenswrapper[4895]: I1206 09:28:11.091406 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-gwgt2"] Dec 06 09:28:11 crc kubenswrapper[4895]: I1206 09:28:11.152158 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" event={"ID":"d5f603c7-e31a-4a5c-b029-986666a34609","Type":"ContainerStarted","Data":"032372a436508e22e7f8ab37fd95b2c4f8120fb6901b676c30d73e5a485cc8c1"} Dec 06 09:28:11 crc kubenswrapper[4895]: I1206 09:28:11.565956 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-networker-m45sx" Dec 06 09:28:11 crc kubenswrapper[4895]: I1206 09:28:11.621271 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs44p\" (UniqueName: \"kubernetes.io/projected/079ac743-75e7-470d-84b6-f5d38ee111f9-kube-api-access-gs44p\") pod \"079ac743-75e7-470d-84b6-f5d38ee111f9\" (UID: \"079ac743-75e7-470d-84b6-f5d38ee111f9\") " Dec 06 09:28:11 crc kubenswrapper[4895]: I1206 09:28:11.621359 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079ac743-75e7-470d-84b6-f5d38ee111f9-bootstrap-combined-ca-bundle\") pod \"079ac743-75e7-470d-84b6-f5d38ee111f9\" (UID: \"079ac743-75e7-470d-84b6-f5d38ee111f9\") " Dec 06 09:28:11 crc kubenswrapper[4895]: I1206 09:28:11.621489 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/079ac743-75e7-470d-84b6-f5d38ee111f9-inventory\") pod \"079ac743-75e7-470d-84b6-f5d38ee111f9\" (UID: \"079ac743-75e7-470d-84b6-f5d38ee111f9\") " Dec 06 09:28:11 crc kubenswrapper[4895]: I1206 09:28:11.621560 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/079ac743-75e7-470d-84b6-f5d38ee111f9-ssh-key\") pod \"079ac743-75e7-470d-84b6-f5d38ee111f9\" (UID: \"079ac743-75e7-470d-84b6-f5d38ee111f9\") " Dec 06 09:28:11 crc kubenswrapper[4895]: I1206 09:28:11.629245 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/079ac743-75e7-470d-84b6-f5d38ee111f9-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "079ac743-75e7-470d-84b6-f5d38ee111f9" (UID: "079ac743-75e7-470d-84b6-f5d38ee111f9"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:28:11 crc kubenswrapper[4895]: I1206 09:28:11.629399 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/079ac743-75e7-470d-84b6-f5d38ee111f9-kube-api-access-gs44p" (OuterVolumeSpecName: "kube-api-access-gs44p") pod "079ac743-75e7-470d-84b6-f5d38ee111f9" (UID: "079ac743-75e7-470d-84b6-f5d38ee111f9"). InnerVolumeSpecName "kube-api-access-gs44p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:28:11 crc kubenswrapper[4895]: I1206 09:28:11.654663 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/079ac743-75e7-470d-84b6-f5d38ee111f9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "079ac743-75e7-470d-84b6-f5d38ee111f9" (UID: "079ac743-75e7-470d-84b6-f5d38ee111f9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:28:11 crc kubenswrapper[4895]: I1206 09:28:11.676881 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/079ac743-75e7-470d-84b6-f5d38ee111f9-inventory" (OuterVolumeSpecName: "inventory") pod "079ac743-75e7-470d-84b6-f5d38ee111f9" (UID: "079ac743-75e7-470d-84b6-f5d38ee111f9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:28:11 crc kubenswrapper[4895]: I1206 09:28:11.724163 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs44p\" (UniqueName: \"kubernetes.io/projected/079ac743-75e7-470d-84b6-f5d38ee111f9-kube-api-access-gs44p\") on node \"crc\" DevicePath \"\"" Dec 06 09:28:11 crc kubenswrapper[4895]: I1206 09:28:11.724205 4895 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079ac743-75e7-470d-84b6-f5d38ee111f9-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:28:11 crc kubenswrapper[4895]: I1206 09:28:11.724219 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/079ac743-75e7-470d-84b6-f5d38ee111f9-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:28:11 crc kubenswrapper[4895]: I1206 09:28:11.724233 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/079ac743-75e7-470d-84b6-f5d38ee111f9-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.176835 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" event={"ID":"d5f603c7-e31a-4a5c-b029-986666a34609","Type":"ContainerStarted","Data":"b2fdd35cc924e45eda593253cdc410ed41bec14921ee7bc01c72a54ccd68c3d6"} Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.180302 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-networker-m45sx" event={"ID":"079ac743-75e7-470d-84b6-f5d38ee111f9","Type":"ContainerDied","Data":"c1446945e8beea6c1d5dc79656c6f32948b84c66bbb741e2446cfef93dd93865"} Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.180352 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1446945e8beea6c1d5dc79656c6f32948b84c66bbb741e2446cfef93dd93865" Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.180536 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-networker-m45sx" Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.228759 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" podStartSLOduration=1.548340294 podStartE2EDuration="2.228735632s" podCreationTimestamp="2025-12-06 09:28:10 +0000 UTC" firstStartedPulling="2025-12-06 09:28:11.094042212 +0000 UTC m=+9053.495431082" lastFinishedPulling="2025-12-06 09:28:11.77443755 +0000 UTC m=+9054.175826420" observedRunningTime="2025-12-06 09:28:12.213754112 +0000 UTC m=+9054.615143022" watchObservedRunningTime="2025-12-06 09:28:12.228735632 +0000 UTC m=+9054.630124512" Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.266541 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-networker-gt54m"] Dec 06 09:28:12 crc kubenswrapper[4895]: E1206 09:28:12.267086 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="079ac743-75e7-470d-84b6-f5d38ee111f9" containerName="bootstrap-openstack-openstack-networker" Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.267111 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="079ac743-75e7-470d-84b6-f5d38ee111f9" containerName="bootstrap-openstack-openstack-networker" Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.267377 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="079ac743-75e7-470d-84b6-f5d38ee111f9" containerName="bootstrap-openstack-openstack-networker" Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.268358 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-networker-gt54m" Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.273323 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-vvkpz" Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.273719 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.290316 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-networker-gt54m"] Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.337909 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f00456a-cbda-45c8-a825-4f449b138336-ssh-key\") pod \"download-cache-openstack-openstack-networker-gt54m\" (UID: \"2f00456a-cbda-45c8-a825-4f449b138336\") " pod="openstack/download-cache-openstack-openstack-networker-gt54m" Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.337990 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f00456a-cbda-45c8-a825-4f449b138336-inventory\") pod \"download-cache-openstack-openstack-networker-gt54m\" (UID: \"2f00456a-cbda-45c8-a825-4f449b138336\") " pod="openstack/download-cache-openstack-openstack-networker-gt54m" Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.338036 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns89r\" (UniqueName: \"kubernetes.io/projected/2f00456a-cbda-45c8-a825-4f449b138336-kube-api-access-ns89r\") pod 
\"download-cache-openstack-openstack-networker-gt54m\" (UID: \"2f00456a-cbda-45c8-a825-4f449b138336\") " pod="openstack/download-cache-openstack-openstack-networker-gt54m" Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.439599 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns89r\" (UniqueName: \"kubernetes.io/projected/2f00456a-cbda-45c8-a825-4f449b138336-kube-api-access-ns89r\") pod \"download-cache-openstack-openstack-networker-gt54m\" (UID: \"2f00456a-cbda-45c8-a825-4f449b138336\") " pod="openstack/download-cache-openstack-openstack-networker-gt54m" Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.439830 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f00456a-cbda-45c8-a825-4f449b138336-ssh-key\") pod \"download-cache-openstack-openstack-networker-gt54m\" (UID: \"2f00456a-cbda-45c8-a825-4f449b138336\") " pod="openstack/download-cache-openstack-openstack-networker-gt54m" Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.439886 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f00456a-cbda-45c8-a825-4f449b138336-inventory\") pod \"download-cache-openstack-openstack-networker-gt54m\" (UID: \"2f00456a-cbda-45c8-a825-4f449b138336\") " pod="openstack/download-cache-openstack-openstack-networker-gt54m" Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.444316 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f00456a-cbda-45c8-a825-4f449b138336-ssh-key\") pod \"download-cache-openstack-openstack-networker-gt54m\" (UID: \"2f00456a-cbda-45c8-a825-4f449b138336\") " pod="openstack/download-cache-openstack-openstack-networker-gt54m" Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.445744 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f00456a-cbda-45c8-a825-4f449b138336-inventory\") pod \"download-cache-openstack-openstack-networker-gt54m\" (UID: \"2f00456a-cbda-45c8-a825-4f449b138336\") " pod="openstack/download-cache-openstack-openstack-networker-gt54m" Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.464627 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns89r\" (UniqueName: \"kubernetes.io/projected/2f00456a-cbda-45c8-a825-4f449b138336-kube-api-access-ns89r\") pod \"download-cache-openstack-openstack-networker-gt54m\" (UID: \"2f00456a-cbda-45c8-a825-4f449b138336\") " pod="openstack/download-cache-openstack-openstack-networker-gt54m" Dec 06 09:28:12 crc kubenswrapper[4895]: I1206 09:28:12.590837 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-networker-gt54m" Dec 06 09:28:13 crc kubenswrapper[4895]: I1206 09:28:13.148740 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-networker-gt54m"] Dec 06 09:28:13 crc kubenswrapper[4895]: W1206 09:28:13.156663 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f00456a_cbda_45c8_a825_4f449b138336.slice/crio-8764aa521173698d1e9cf14e1fad81d5a44a0d0fe2875ae60a94f17a2683db31 WatchSource:0}: Error finding container 8764aa521173698d1e9cf14e1fad81d5a44a0d0fe2875ae60a94f17a2683db31: Status 404 returned error can't find the container with id 8764aa521173698d1e9cf14e1fad81d5a44a0d0fe2875ae60a94f17a2683db31 Dec 06 09:28:13 crc kubenswrapper[4895]: I1206 09:28:13.191155 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-networker-gt54m" event={"ID":"2f00456a-cbda-45c8-a825-4f449b138336","Type":"ContainerStarted","Data":"8764aa521173698d1e9cf14e1fad81d5a44a0d0fe2875ae60a94f17a2683db31"} Dec 06 09:28:14 crc kubenswrapper[4895]: I1206 09:28:14.201898 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-networker-gt54m" event={"ID":"2f00456a-cbda-45c8-a825-4f449b138336","Type":"ContainerStarted","Data":"8f4a61e714f4cba5a98538cd8346e865cda11cf33d48695fdf7ca95491f3e84e"} Dec 06 09:28:14 crc kubenswrapper[4895]: I1206 09:28:14.219017 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-networker-gt54m" podStartSLOduration=1.709163266 podStartE2EDuration="2.219000375s" podCreationTimestamp="2025-12-06 09:28:12 +0000 UTC" firstStartedPulling="2025-12-06 09:28:13.158840192 +0000 UTC m=+9055.560229062" lastFinishedPulling="2025-12-06 09:28:13.668677301 +0000 UTC m=+9056.070066171" observedRunningTime="2025-12-06 09:28:14.218251455 +0000 UTC m=+9056.619640335" watchObservedRunningTime="2025-12-06 09:28:14.219000375 +0000 UTC m=+9056.620389255" Dec 06 09:29:24 crc kubenswrapper[4895]: I1206 09:29:24.926450 4895 generic.go:334] "Generic (PLEG): container finished" podID="2f00456a-cbda-45c8-a825-4f449b138336" containerID="8f4a61e714f4cba5a98538cd8346e865cda11cf33d48695fdf7ca95491f3e84e" exitCode=0 Dec 06 09:29:24 crc kubenswrapper[4895]: I1206 09:29:24.927082 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-networker-gt54m" event={"ID":"2f00456a-cbda-45c8-a825-4f449b138336","Type":"ContainerDied","Data":"8f4a61e714f4cba5a98538cd8346e865cda11cf33d48695fdf7ca95491f3e84e"} Dec 06 09:29:26 crc kubenswrapper[4895]: I1206 09:29:26.568659 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-networker-gt54m" Dec 06 09:29:26 crc kubenswrapper[4895]: I1206 09:29:26.646554 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns89r\" (UniqueName: \"kubernetes.io/projected/2f00456a-cbda-45c8-a825-4f449b138336-kube-api-access-ns89r\") pod \"2f00456a-cbda-45c8-a825-4f449b138336\" (UID: \"2f00456a-cbda-45c8-a825-4f449b138336\") " Dec 06 09:29:26 crc kubenswrapper[4895]: I1206 09:29:26.646746 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f00456a-cbda-45c8-a825-4f449b138336-inventory\") pod \"2f00456a-cbda-45c8-a825-4f449b138336\" (UID: \"2f00456a-cbda-45c8-a825-4f449b138336\") " Dec 06 09:29:26 crc kubenswrapper[4895]: I1206 09:29:26.646854 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f00456a-cbda-45c8-a825-4f449b138336-ssh-key\") pod \"2f00456a-cbda-45c8-a825-4f449b138336\" (UID: \"2f00456a-cbda-45c8-a825-4f449b138336\") " Dec 06 09:29:26 crc kubenswrapper[4895]: I1206 09:29:26.652641 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f00456a-cbda-45c8-a825-4f449b138336-kube-api-access-ns89r" (OuterVolumeSpecName: "kube-api-access-ns89r") pod "2f00456a-cbda-45c8-a825-4f449b138336" (UID: "2f00456a-cbda-45c8-a825-4f449b138336"). InnerVolumeSpecName "kube-api-access-ns89r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:29:26 crc kubenswrapper[4895]: I1206 09:29:26.677143 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f00456a-cbda-45c8-a825-4f449b138336-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2f00456a-cbda-45c8-a825-4f449b138336" (UID: "2f00456a-cbda-45c8-a825-4f449b138336"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:29:26 crc kubenswrapper[4895]: I1206 09:29:26.679732 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f00456a-cbda-45c8-a825-4f449b138336-inventory" (OuterVolumeSpecName: "inventory") pod "2f00456a-cbda-45c8-a825-4f449b138336" (UID: "2f00456a-cbda-45c8-a825-4f449b138336"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:29:26 crc kubenswrapper[4895]: I1206 09:29:26.756284 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f00456a-cbda-45c8-a825-4f449b138336-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:29:26 crc kubenswrapper[4895]: I1206 09:29:26.756324 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f00456a-cbda-45c8-a825-4f449b138336-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:29:26 crc kubenswrapper[4895]: I1206 09:29:26.756365 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns89r\" (UniqueName: \"kubernetes.io/projected/2f00456a-cbda-45c8-a825-4f449b138336-kube-api-access-ns89r\") on node \"crc\" DevicePath \"\"" Dec 06 09:29:26 crc kubenswrapper[4895]: I1206 09:29:26.950025 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-networker-gt54m" event={"ID":"2f00456a-cbda-45c8-a825-4f449b138336","Type":"ContainerDied","Data":"8764aa521173698d1e9cf14e1fad81d5a44a0d0fe2875ae60a94f17a2683db31"} Dec 06 09:29:26 crc kubenswrapper[4895]: I1206 09:29:26.950077 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8764aa521173698d1e9cf14e1fad81d5a44a0d0fe2875ae60a94f17a2683db31" Dec 06 09:29:26 crc kubenswrapper[4895]: I1206 09:29:26.950078 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-networker-gt54m" Dec 06 09:29:27 crc kubenswrapper[4895]: I1206 09:29:27.050291 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-networker-7d5jm"] Dec 06 09:29:27 crc kubenswrapper[4895]: E1206 09:29:27.052379 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f00456a-cbda-45c8-a825-4f449b138336" containerName="download-cache-openstack-openstack-networker" Dec 06 09:29:27 crc kubenswrapper[4895]: I1206 09:29:27.052409 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f00456a-cbda-45c8-a825-4f449b138336" containerName="download-cache-openstack-openstack-networker" Dec 06 09:29:27 crc kubenswrapper[4895]: I1206 09:29:27.052711 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f00456a-cbda-45c8-a825-4f449b138336" containerName="download-cache-openstack-openstack-networker" Dec 06 09:29:27 crc kubenswrapper[4895]: I1206 09:29:27.053680 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-networker-7d5jm" Dec 06 09:29:27 crc kubenswrapper[4895]: I1206 09:29:27.056407 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-vvkpz" Dec 06 09:29:27 crc kubenswrapper[4895]: I1206 09:29:27.056698 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Dec 06 09:29:27 crc kubenswrapper[4895]: I1206 09:29:27.060954 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-networker-7d5jm"] Dec 06 09:29:27 crc kubenswrapper[4895]: I1206 09:29:27.163929 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq9hc\" (UniqueName: \"kubernetes.io/projected/cfc5ef76-e849-4740-93de-d9490d688654-kube-api-access-lq9hc\") pod \"configure-network-openstack-openstack-networker-7d5jm\" (UID: \"cfc5ef76-e849-4740-93de-d9490d688654\") " pod="openstack/configure-network-openstack-openstack-networker-7d5jm" Dec 06 09:29:27 crc kubenswrapper[4895]: I1206 09:29:27.164030 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfc5ef76-e849-4740-93de-d9490d688654-ssh-key\") pod \"configure-network-openstack-openstack-networker-7d5jm\" (UID: \"cfc5ef76-e849-4740-93de-d9490d688654\") " pod="openstack/configure-network-openstack-openstack-networker-7d5jm" Dec 06 09:29:27 crc kubenswrapper[4895]: I1206 09:29:27.164535 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfc5ef76-e849-4740-93de-d9490d688654-inventory\") pod \"configure-network-openstack-openstack-networker-7d5jm\" (UID: \"cfc5ef76-e849-4740-93de-d9490d688654\") " pod="openstack/configure-network-openstack-openstack-networker-7d5jm" Dec 06 09:29:27 crc kubenswrapper[4895]: I1206 09:29:27.267017 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfc5ef76-e849-4740-93de-d9490d688654-inventory\") pod \"configure-network-openstack-openstack-networker-7d5jm\" (UID: \"cfc5ef76-e849-4740-93de-d9490d688654\") " pod="openstack/configure-network-openstack-openstack-networker-7d5jm" Dec 06 09:29:27 crc kubenswrapper[4895]: I1206 09:29:27.267323 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq9hc\" (UniqueName: \"kubernetes.io/projected/cfc5ef76-e849-4740-93de-d9490d688654-kube-api-access-lq9hc\") pod \"configure-network-openstack-openstack-networker-7d5jm\" (UID: \"cfc5ef76-e849-4740-93de-d9490d688654\") " pod="openstack/configure-network-openstack-openstack-networker-7d5jm" Dec 06 09:29:27 crc kubenswrapper[4895]: I1206 09:29:27.267445 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfc5ef76-e849-4740-93de-d9490d688654-ssh-key\") pod \"configure-network-openstack-openstack-networker-7d5jm\" (UID: \"cfc5ef76-e849-4740-93de-d9490d688654\") " pod="openstack/configure-network-openstack-openstack-networker-7d5jm" Dec 06 09:29:27 crc kubenswrapper[4895]: I1206 09:29:27.270405 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfc5ef76-e849-4740-93de-d9490d688654-ssh-key\") pod 
\"configure-network-openstack-openstack-networker-7d5jm\" (UID: \"cfc5ef76-e849-4740-93de-d9490d688654\") " pod="openstack/configure-network-openstack-openstack-networker-7d5jm" Dec 06 09:29:27 crc kubenswrapper[4895]: I1206 09:29:27.270460 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfc5ef76-e849-4740-93de-d9490d688654-inventory\") pod \"configure-network-openstack-openstack-networker-7d5jm\" (UID: \"cfc5ef76-e849-4740-93de-d9490d688654\") " pod="openstack/configure-network-openstack-openstack-networker-7d5jm" Dec 06 09:29:27 crc kubenswrapper[4895]: I1206 09:29:27.284438 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq9hc\" (UniqueName: \"kubernetes.io/projected/cfc5ef76-e849-4740-93de-d9490d688654-kube-api-access-lq9hc\") pod \"configure-network-openstack-openstack-networker-7d5jm\" (UID: \"cfc5ef76-e849-4740-93de-d9490d688654\") " pod="openstack/configure-network-openstack-openstack-networker-7d5jm" Dec 06 09:29:27 crc kubenswrapper[4895]: I1206 09:29:27.414197 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-networker-7d5jm" Dec 06 09:29:27 crc kubenswrapper[4895]: I1206 09:29:27.967364 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-networker-7d5jm"] Dec 06 09:29:28 crc kubenswrapper[4895]: I1206 09:29:28.969397 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-networker-7d5jm" event={"ID":"cfc5ef76-e849-4740-93de-d9490d688654","Type":"ContainerStarted","Data":"c84c23104657881d75a4da073e53d2bd0b5b73d86c201c60877da45eac2a248f"} Dec 06 09:29:28 crc kubenswrapper[4895]: I1206 09:29:28.969850 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-networker-7d5jm" event={"ID":"cfc5ef76-e849-4740-93de-d9490d688654","Type":"ContainerStarted","Data":"c74d2ce752afe090c4b3df75772a8967dc32c1e03de7424bfbe305707188f1c6"} Dec 06 09:29:46 crc kubenswrapper[4895]: I1206 09:29:46.166964 4895 generic.go:334] "Generic (PLEG): container finished" podID="d5f603c7-e31a-4a5c-b029-986666a34609" containerID="b2fdd35cc924e45eda593253cdc410ed41bec14921ee7bc01c72a54ccd68c3d6" exitCode=0 Dec 06 09:29:46 crc kubenswrapper[4895]: I1206 09:29:46.167065 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" event={"ID":"d5f603c7-e31a-4a5c-b029-986666a34609","Type":"ContainerDied","Data":"b2fdd35cc924e45eda593253cdc410ed41bec14921ee7bc01c72a54ccd68c3d6"} Dec 06 09:29:46 crc kubenswrapper[4895]: I1206 09:29:46.193171 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-networker-7d5jm" podStartSLOduration=18.778461636 podStartE2EDuration="19.193144865s" podCreationTimestamp="2025-12-06 09:29:27 +0000 UTC" firstStartedPulling="2025-12-06 09:29:27.978890744 +0000 UTC m=+9130.380279614" lastFinishedPulling="2025-12-06 09:29:28.393573973 +0000 UTC m=+9130.794962843" observedRunningTime="2025-12-06 09:29:28.987409818 +0000 UTC m=+9131.388798698" watchObservedRunningTime="2025-12-06 09:29:46.193144865 +0000 UTC m=+9148.594533735" Dec 06 09:29:47 crc kubenswrapper[4895]: I1206 09:29:47.653119 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" Dec 06 09:29:47 crc kubenswrapper[4895]: I1206 09:29:47.729790 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5f603c7-e31a-4a5c-b029-986666a34609-ssh-key\") pod \"d5f603c7-e31a-4a5c-b029-986666a34609\" (UID: \"d5f603c7-e31a-4a5c-b029-986666a34609\") " Dec 06 09:29:47 crc kubenswrapper[4895]: I1206 09:29:47.729855 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5f603c7-e31a-4a5c-b029-986666a34609-inventory\") pod \"d5f603c7-e31a-4a5c-b029-986666a34609\" (UID: \"d5f603c7-e31a-4a5c-b029-986666a34609\") " Dec 06 09:29:47 crc kubenswrapper[4895]: I1206 09:29:47.730060 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5f603c7-e31a-4a5c-b029-986666a34609-ceph\") pod \"d5f603c7-e31a-4a5c-b029-986666a34609\" (UID: \"d5f603c7-e31a-4a5c-b029-986666a34609\") " Dec 06 09:29:47 crc kubenswrapper[4895]: I1206 09:29:47.730111 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzmqr\" (UniqueName: \"kubernetes.io/projected/d5f603c7-e31a-4a5c-b029-986666a34609-kube-api-access-hzmqr\") pod \"d5f603c7-e31a-4a5c-b029-986666a34609\" (UID: \"d5f603c7-e31a-4a5c-b029-986666a34609\") " Dec 06 09:29:47 crc kubenswrapper[4895]: I1206 09:29:47.734730 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f603c7-e31a-4a5c-b029-986666a34609-ceph" (OuterVolumeSpecName: "ceph") pod "d5f603c7-e31a-4a5c-b029-986666a34609" (UID: "d5f603c7-e31a-4a5c-b029-986666a34609"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:29:47 crc kubenswrapper[4895]: I1206 09:29:47.735606 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5f603c7-e31a-4a5c-b029-986666a34609-kube-api-access-hzmqr" (OuterVolumeSpecName: "kube-api-access-hzmqr") pod "d5f603c7-e31a-4a5c-b029-986666a34609" (UID: "d5f603c7-e31a-4a5c-b029-986666a34609"). InnerVolumeSpecName "kube-api-access-hzmqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:29:47 crc kubenswrapper[4895]: I1206 09:29:47.757052 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f603c7-e31a-4a5c-b029-986666a34609-inventory" (OuterVolumeSpecName: "inventory") pod "d5f603c7-e31a-4a5c-b029-986666a34609" (UID: "d5f603c7-e31a-4a5c-b029-986666a34609"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:29:47 crc kubenswrapper[4895]: I1206 09:29:47.764040 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f603c7-e31a-4a5c-b029-986666a34609-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d5f603c7-e31a-4a5c-b029-986666a34609" (UID: "d5f603c7-e31a-4a5c-b029-986666a34609"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:29:47 crc kubenswrapper[4895]: I1206 09:29:47.832884 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5f603c7-e31a-4a5c-b029-986666a34609-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 09:29:47 crc kubenswrapper[4895]: I1206 09:29:47.832922 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzmqr\" (UniqueName: \"kubernetes.io/projected/d5f603c7-e31a-4a5c-b029-986666a34609-kube-api-access-hzmqr\") on node \"crc\" DevicePath \"\"" Dec 06 09:29:47 crc kubenswrapper[4895]: I1206 09:29:47.832937 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5f603c7-e31a-4a5c-b029-986666a34609-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:29:47 crc kubenswrapper[4895]: I1206 09:29:47.832952 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5f603c7-e31a-4a5c-b029-986666a34609-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.186916 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" event={"ID":"d5f603c7-e31a-4a5c-b029-986666a34609","Type":"ContainerDied","Data":"032372a436508e22e7f8ab37fd95b2c4f8120fb6901b676c30d73e5a485cc8c1"} Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.186959 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="032372a436508e22e7f8ab37fd95b2c4f8120fb6901b676c30d73e5a485cc8c1" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.186989 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-gwgt2" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.272226 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-tmht6"] Dec 06 09:29:48 crc kubenswrapper[4895]: E1206 09:29:48.274008 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f603c7-e31a-4a5c-b029-986666a34609" containerName="download-cache-openstack-openstack-cell1" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.274032 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f603c7-e31a-4a5c-b029-986666a34609" containerName="download-cache-openstack-openstack-cell1" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.275559 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f603c7-e31a-4a5c-b029-986666a34609" containerName="download-cache-openstack-openstack-cell1" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.276918 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-tmht6" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.286537 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-wfk68" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.287254 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.357015 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46f96e8a-cddb-4908-8ced-54dd4fcb7731-inventory\") pod \"configure-network-openstack-openstack-cell1-tmht6\" (UID: \"46f96e8a-cddb-4908-8ced-54dd4fcb7731\") " pod="openstack/configure-network-openstack-openstack-cell1-tmht6" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.357098 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjdn8\" (UniqueName: \"kubernetes.io/projected/46f96e8a-cddb-4908-8ced-54dd4fcb7731-kube-api-access-tjdn8\") pod \"configure-network-openstack-openstack-cell1-tmht6\" (UID: \"46f96e8a-cddb-4908-8ced-54dd4fcb7731\") " pod="openstack/configure-network-openstack-openstack-cell1-tmht6" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.357201 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46f96e8a-cddb-4908-8ced-54dd4fcb7731-ceph\") pod \"configure-network-openstack-openstack-cell1-tmht6\" (UID: \"46f96e8a-cddb-4908-8ced-54dd4fcb7731\") " pod="openstack/configure-network-openstack-openstack-cell1-tmht6" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.357223 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/46f96e8a-cddb-4908-8ced-54dd4fcb7731-ssh-key\") pod \"configure-network-openstack-openstack-cell1-tmht6\" (UID: \"46f96e8a-cddb-4908-8ced-54dd4fcb7731\") " pod="openstack/configure-network-openstack-openstack-cell1-tmht6" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.363625 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-tmht6"] Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.460663 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46f96e8a-cddb-4908-8ced-54dd4fcb7731-inventory\") pod \"configure-network-openstack-openstack-cell1-tmht6\" (UID: \"46f96e8a-cddb-4908-8ced-54dd4fcb7731\") " pod="openstack/configure-network-openstack-openstack-cell1-tmht6" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.460738 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjdn8\" (UniqueName: \"kubernetes.io/projected/46f96e8a-cddb-4908-8ced-54dd4fcb7731-kube-api-access-tjdn8\") pod \"configure-network-openstack-openstack-cell1-tmht6\" (UID: \"46f96e8a-cddb-4908-8ced-54dd4fcb7731\") " pod="openstack/configure-network-openstack-openstack-cell1-tmht6" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.460826 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46f96e8a-cddb-4908-8ced-54dd4fcb7731-ceph\") pod \"configure-network-openstack-openstack-cell1-tmht6\" (UID: 
\"46f96e8a-cddb-4908-8ced-54dd4fcb7731\") " pod="openstack/configure-network-openstack-openstack-cell1-tmht6" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.460852 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/46f96e8a-cddb-4908-8ced-54dd4fcb7731-ssh-key\") pod \"configure-network-openstack-openstack-cell1-tmht6\" (UID: \"46f96e8a-cddb-4908-8ced-54dd4fcb7731\") " pod="openstack/configure-network-openstack-openstack-cell1-tmht6" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.476242 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46f96e8a-cddb-4908-8ced-54dd4fcb7731-ceph\") pod \"configure-network-openstack-openstack-cell1-tmht6\" (UID: \"46f96e8a-cddb-4908-8ced-54dd4fcb7731\") " pod="openstack/configure-network-openstack-openstack-cell1-tmht6" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.494245 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46f96e8a-cddb-4908-8ced-54dd4fcb7731-inventory\") pod \"configure-network-openstack-openstack-cell1-tmht6\" (UID: \"46f96e8a-cddb-4908-8ced-54dd4fcb7731\") " pod="openstack/configure-network-openstack-openstack-cell1-tmht6" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.498083 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/46f96e8a-cddb-4908-8ced-54dd4fcb7731-ssh-key\") pod \"configure-network-openstack-openstack-cell1-tmht6\" (UID: \"46f96e8a-cddb-4908-8ced-54dd4fcb7731\") " pod="openstack/configure-network-openstack-openstack-cell1-tmht6" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.502285 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjdn8\" (UniqueName: \"kubernetes.io/projected/46f96e8a-cddb-4908-8ced-54dd4fcb7731-kube-api-access-tjdn8\") pod \"configure-network-openstack-openstack-cell1-tmht6\" (UID: \"46f96e8a-cddb-4908-8ced-54dd4fcb7731\") " pod="openstack/configure-network-openstack-openstack-cell1-tmht6" Dec 06 09:29:48 crc kubenswrapper[4895]: I1206 09:29:48.595589 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-tmht6" Dec 06 09:29:49 crc kubenswrapper[4895]: I1206 09:29:49.147608 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-tmht6"] Dec 06 09:29:49 crc kubenswrapper[4895]: I1206 09:29:49.197202 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-tmht6" event={"ID":"46f96e8a-cddb-4908-8ced-54dd4fcb7731","Type":"ContainerStarted","Data":"b454878299ffcc9ebdbbd883ad6586d1d70f13ab6342a98bfec16bea7a390765"} Dec 06 09:29:50 crc kubenswrapper[4895]: I1206 09:29:50.208284 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-tmht6" event={"ID":"46f96e8a-cddb-4908-8ced-54dd4fcb7731","Type":"ContainerStarted","Data":"a907e3d1cad8e8da957fb98efd596bf16685f83d33dcb42979ceb526f1d45d97"} Dec 06 09:29:50 crc kubenswrapper[4895]: I1206 09:29:50.234750 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-tmht6" podStartSLOduration=1.758819699 podStartE2EDuration="2.234731281s" podCreationTimestamp="2025-12-06 09:29:48 +0000 UTC" firstStartedPulling="2025-12-06 09:29:49.150986633 +0000 UTC m=+9151.552375503" lastFinishedPulling="2025-12-06 09:29:49.626898175 +0000 UTC m=+9152.028287085" observedRunningTime="2025-12-06 09:29:50.225819967 +0000 UTC m=+9152.627208837" watchObservedRunningTime="2025-12-06 09:29:50.234731281 +0000 UTC m=+9152.636120151" Dec 06 09:29:54 crc kubenswrapper[4895]: I1206 09:29:54.959368 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k884m"] Dec 06 09:29:54 crc kubenswrapper[4895]: I1206 09:29:54.962434 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k884m" Dec 06 09:29:54 crc kubenswrapper[4895]: I1206 09:29:54.977059 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k884m"] Dec 06 09:29:55 crc kubenswrapper[4895]: I1206 09:29:55.094704 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvppw\" (UniqueName: \"kubernetes.io/projected/059f2d6a-1e59-40b4-9656-3ec7ed5e11c9-kube-api-access-bvppw\") pod \"redhat-operators-k884m\" (UID: \"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9\") " pod="openshift-marketplace/redhat-operators-k884m" Dec 06 09:29:55 crc kubenswrapper[4895]: I1206 09:29:55.094768 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059f2d6a-1e59-40b4-9656-3ec7ed5e11c9-catalog-content\") pod \"redhat-operators-k884m\" (UID: \"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9\") " pod="openshift-marketplace/redhat-operators-k884m" Dec 06 09:29:55 crc kubenswrapper[4895]: I1206 09:29:55.094815 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059f2d6a-1e59-40b4-9656-3ec7ed5e11c9-utilities\") pod \"redhat-operators-k884m\" (UID: \"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9\") " pod="openshift-marketplace/redhat-operators-k884m" Dec 06 09:29:55 crc kubenswrapper[4895]: I1206 09:29:55.196843 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059f2d6a-1e59-40b4-9656-3ec7ed5e11c9-utilities\") pod \"redhat-operators-k884m\" (UID: \"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9\") " pod="openshift-marketplace/redhat-operators-k884m" Dec 06 09:29:55 crc kubenswrapper[4895]: I1206 09:29:55.197044 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvppw\" (UniqueName: \"kubernetes.io/projected/059f2d6a-1e59-40b4-9656-3ec7ed5e11c9-kube-api-access-bvppw\") pod \"redhat-operators-k884m\" (UID: \"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9\") " pod="openshift-marketplace/redhat-operators-k884m" Dec 06 09:29:55 crc kubenswrapper[4895]: I1206 09:29:55.197071 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059f2d6a-1e59-40b4-9656-3ec7ed5e11c9-catalog-content\") pod \"redhat-operators-k884m\" (UID: \"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9\") " pod="openshift-marketplace/redhat-operators-k884m" Dec 06 09:29:55 crc kubenswrapper[4895]: I1206 09:29:55.197588 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059f2d6a-1e59-40b4-9656-3ec7ed5e11c9-utilities\") pod \"redhat-operators-k884m\" (UID: \"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9\") " pod="openshift-marketplace/redhat-operators-k884m" Dec 06 09:29:55 crc kubenswrapper[4895]: I1206 09:29:55.197687 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059f2d6a-1e59-40b4-9656-3ec7ed5e11c9-catalog-content\") pod \"redhat-operators-k884m\" (UID: \"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9\") " pod="openshift-marketplace/redhat-operators-k884m" Dec 06 09:29:55 crc kubenswrapper[4895]: I1206 09:29:55.220719 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bvppw\" (UniqueName: \"kubernetes.io/projected/059f2d6a-1e59-40b4-9656-3ec7ed5e11c9-kube-api-access-bvppw\") pod \"redhat-operators-k884m\" (UID: \"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9\") " pod="openshift-marketplace/redhat-operators-k884m" Dec 06 09:29:55 crc kubenswrapper[4895]: I1206 09:29:55.289704 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k884m" Dec 06 09:29:56 crc kubenswrapper[4895]: I1206 09:29:56.273288 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k884m"] Dec 06 09:29:57 crc kubenswrapper[4895]: I1206 09:29:57.280761 4895 generic.go:334] "Generic (PLEG): container finished" podID="059f2d6a-1e59-40b4-9656-3ec7ed5e11c9" containerID="52a54edc5b4e118f97177e818cdb8c8619b2fa725f8a0898145e04fceb5564a6" exitCode=0 Dec 06 09:29:57 crc kubenswrapper[4895]: I1206 09:29:57.280854 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k884m" event={"ID":"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9","Type":"ContainerDied","Data":"52a54edc5b4e118f97177e818cdb8c8619b2fa725f8a0898145e04fceb5564a6"} Dec 06 09:29:57 crc kubenswrapper[4895]: I1206 09:29:57.281110 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k884m" event={"ID":"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9","Type":"ContainerStarted","Data":"b7405d67e55789a5162234204909963a3b59da3a41447be8f8b4236f6cd47778"} Dec 06 09:29:58 crc kubenswrapper[4895]: I1206 09:29:58.296520 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k884m" event={"ID":"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9","Type":"ContainerStarted","Data":"b0bae86af9cabf7e0418975ca767167d77567882369c855ab96cbc39961d3639"} Dec 06 09:29:59 crc kubenswrapper[4895]: I1206 09:29:59.695747 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:29:59 crc kubenswrapper[4895]: I1206 09:29:59.696053 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:30:00 crc kubenswrapper[4895]: I1206 09:30:00.155828 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w"] Dec 06 09:30:00 crc kubenswrapper[4895]: I1206 09:30:00.157846 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w" Dec 06 09:30:00 crc kubenswrapper[4895]: I1206 09:30:00.160905 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 09:30:00 crc kubenswrapper[4895]: I1206 09:30:00.161285 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 09:30:00 crc kubenswrapper[4895]: I1206 09:30:00.166344 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w"] Dec 06 09:30:00 crc kubenswrapper[4895]: I1206 09:30:00.204402 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlj5s\" (UniqueName: \"kubernetes.io/projected/3de51898-9fcb-4640-8e5b-710a2d1588e5-kube-api-access-tlj5s\") pod \"collect-profiles-29416890-qfr9w\" (UID: \"3de51898-9fcb-4640-8e5b-710a2d1588e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w" Dec 06 09:30:00 crc kubenswrapper[4895]: I1206 09:30:00.205452 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3de51898-9fcb-4640-8e5b-710a2d1588e5-secret-volume\") pod \"collect-profiles-29416890-qfr9w\" (UID: \"3de51898-9fcb-4640-8e5b-710a2d1588e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w" Dec 06 09:30:00 crc kubenswrapper[4895]: I1206 09:30:00.205577 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3de51898-9fcb-4640-8e5b-710a2d1588e5-config-volume\") pod \"collect-profiles-29416890-qfr9w\" (UID: \"3de51898-9fcb-4640-8e5b-710a2d1588e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w" Dec 06 09:30:00 crc kubenswrapper[4895]: I1206 09:30:00.307203 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3de51898-9fcb-4640-8e5b-710a2d1588e5-secret-volume\") pod \"collect-profiles-29416890-qfr9w\" (UID: \"3de51898-9fcb-4640-8e5b-710a2d1588e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w" Dec 06 09:30:00 crc kubenswrapper[4895]: I1206 09:30:00.307258 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3de51898-9fcb-4640-8e5b-710a2d1588e5-config-volume\") pod \"collect-profiles-29416890-qfr9w\" (UID: \"3de51898-9fcb-4640-8e5b-710a2d1588e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w" Dec 06 09:30:00 crc kubenswrapper[4895]: I1206 09:30:00.307318 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlj5s\" (UniqueName: \"kubernetes.io/projected/3de51898-9fcb-4640-8e5b-710a2d1588e5-kube-api-access-tlj5s\") pod \"collect-profiles-29416890-qfr9w\" (UID: \"3de51898-9fcb-4640-8e5b-710a2d1588e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w" Dec 06 09:30:00 crc kubenswrapper[4895]: I1206 09:30:00.308968 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3de51898-9fcb-4640-8e5b-710a2d1588e5-config-volume\") pod 
\"collect-profiles-29416890-qfr9w\" (UID: \"3de51898-9fcb-4640-8e5b-710a2d1588e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w" Dec 06 09:30:00 crc kubenswrapper[4895]: I1206 09:30:00.734860 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3de51898-9fcb-4640-8e5b-710a2d1588e5-secret-volume\") pod \"collect-profiles-29416890-qfr9w\" (UID: \"3de51898-9fcb-4640-8e5b-710a2d1588e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w" Dec 06 09:30:00 crc kubenswrapper[4895]: I1206 09:30:00.734933 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlj5s\" (UniqueName: \"kubernetes.io/projected/3de51898-9fcb-4640-8e5b-710a2d1588e5-kube-api-access-tlj5s\") pod \"collect-profiles-29416890-qfr9w\" (UID: \"3de51898-9fcb-4640-8e5b-710a2d1588e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w" Dec 06 09:30:00 crc kubenswrapper[4895]: I1206 09:30:00.782573 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w" Dec 06 09:30:01 crc kubenswrapper[4895]: W1206 09:30:01.630677 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3de51898_9fcb_4640_8e5b_710a2d1588e5.slice/crio-3fdaf5035e4d992e86d127ee8a8c1ef7008653ad8597fa05849a7ff904ec7de6 WatchSource:0}: Error finding container 3fdaf5035e4d992e86d127ee8a8c1ef7008653ad8597fa05849a7ff904ec7de6: Status 404 returned error can't find the container with id 3fdaf5035e4d992e86d127ee8a8c1ef7008653ad8597fa05849a7ff904ec7de6 Dec 06 09:30:01 crc kubenswrapper[4895]: I1206 09:30:01.640918 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w"] Dec 06 09:30:02 crc kubenswrapper[4895]: I1206 09:30:02.339561 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w" event={"ID":"3de51898-9fcb-4640-8e5b-710a2d1588e5","Type":"ContainerStarted","Data":"734d6663f4ecbba1eb1830cbd25747bad816f76a30561f763406f500d68165f9"} Dec 06 09:30:02 crc kubenswrapper[4895]: I1206 09:30:02.340435 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w" event={"ID":"3de51898-9fcb-4640-8e5b-710a2d1588e5","Type":"ContainerStarted","Data":"3fdaf5035e4d992e86d127ee8a8c1ef7008653ad8597fa05849a7ff904ec7de6"} Dec 06 09:30:02 crc kubenswrapper[4895]: I1206 09:30:02.363410 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w" podStartSLOduration=2.363388384 podStartE2EDuration="2.363388384s" podCreationTimestamp="2025-12-06 09:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:30:02.352737384 +0000 UTC m=+9164.754126254" watchObservedRunningTime="2025-12-06 09:30:02.363388384 +0000 UTC m=+9164.764777254" Dec 06 09:30:03 crc kubenswrapper[4895]: I1206 09:30:03.350466 4895 generic.go:334] "Generic (PLEG): container finished" podID="3de51898-9fcb-4640-8e5b-710a2d1588e5" containerID="734d6663f4ecbba1eb1830cbd25747bad816f76a30561f763406f500d68165f9" exitCode=0 Dec 06 09:30:03 crc kubenswrapper[4895]: I1206 09:30:03.350545 
4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w" event={"ID":"3de51898-9fcb-4640-8e5b-710a2d1588e5","Type":"ContainerDied","Data":"734d6663f4ecbba1eb1830cbd25747bad816f76a30561f763406f500d68165f9"} Dec 06 09:30:03 crc kubenswrapper[4895]: I1206 09:30:03.352945 4895 generic.go:334] "Generic (PLEG): container finished" podID="059f2d6a-1e59-40b4-9656-3ec7ed5e11c9" containerID="b0bae86af9cabf7e0418975ca767167d77567882369c855ab96cbc39961d3639" exitCode=0 Dec 06 09:30:03 crc kubenswrapper[4895]: I1206 09:30:03.352988 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k884m" event={"ID":"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9","Type":"ContainerDied","Data":"b0bae86af9cabf7e0418975ca767167d77567882369c855ab96cbc39961d3639"} Dec 06 09:30:03 crc kubenswrapper[4895]: I1206 09:30:03.356029 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 09:30:04 crc kubenswrapper[4895]: I1206 09:30:04.367679 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k884m" event={"ID":"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9","Type":"ContainerStarted","Data":"94823d87a8b95a7f67e5c201c83c44426f68ba15d13312075b1e15d16e956248"} Dec 06 09:30:04 crc kubenswrapper[4895]: I1206 09:30:04.400359 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k884m" podStartSLOduration=3.916500556 podStartE2EDuration="10.400341784s" podCreationTimestamp="2025-12-06 09:29:54 +0000 UTC" firstStartedPulling="2025-12-06 09:29:57.282878615 +0000 UTC m=+9159.684267485" lastFinishedPulling="2025-12-06 09:30:03.766719843 +0000 UTC m=+9166.168108713" observedRunningTime="2025-12-06 09:30:04.38337651 +0000 UTC m=+9166.784765380" watchObservedRunningTime="2025-12-06 09:30:04.400341784 +0000 UTC m=+9166.801730654" Dec 06 09:30:05 crc kubenswrapper[4895]: I1206 09:30:05.289894 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k884m" Dec 06 09:30:05 crc kubenswrapper[4895]: I1206 09:30:05.290453 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k884m" Dec 06 09:30:05 crc kubenswrapper[4895]: I1206 09:30:05.369865 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w" Dec 06 09:30:05 crc kubenswrapper[4895]: I1206 09:30:05.379426 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w" event={"ID":"3de51898-9fcb-4640-8e5b-710a2d1588e5","Type":"ContainerDied","Data":"3fdaf5035e4d992e86d127ee8a8c1ef7008653ad8597fa05849a7ff904ec7de6"} Dec 06 09:30:05 crc kubenswrapper[4895]: I1206 09:30:05.379463 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w" Dec 06 09:30:05 crc kubenswrapper[4895]: I1206 09:30:05.379469 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fdaf5035e4d992e86d127ee8a8c1ef7008653ad8597fa05849a7ff904ec7de6" Dec 06 09:30:05 crc kubenswrapper[4895]: I1206 09:30:05.531383 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlj5s\" (UniqueName: \"kubernetes.io/projected/3de51898-9fcb-4640-8e5b-710a2d1588e5-kube-api-access-tlj5s\") pod \"3de51898-9fcb-4640-8e5b-710a2d1588e5\" (UID: \"3de51898-9fcb-4640-8e5b-710a2d1588e5\") " Dec 06 09:30:05 crc kubenswrapper[4895]: I1206 09:30:05.531836 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3de51898-9fcb-4640-8e5b-710a2d1588e5-config-volume\") pod \"3de51898-9fcb-4640-8e5b-710a2d1588e5\" (UID: \"3de51898-9fcb-4640-8e5b-710a2d1588e5\") " Dec 06 09:30:05 crc kubenswrapper[4895]: I1206 09:30:05.532020 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3de51898-9fcb-4640-8e5b-710a2d1588e5-secret-volume\") pod \"3de51898-9fcb-4640-8e5b-710a2d1588e5\" (UID: \"3de51898-9fcb-4640-8e5b-710a2d1588e5\") " Dec 06 09:30:05 crc kubenswrapper[4895]: I1206 09:30:05.532639 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3de51898-9fcb-4640-8e5b-710a2d1588e5-config-volume" (OuterVolumeSpecName: "config-volume") pod "3de51898-9fcb-4640-8e5b-710a2d1588e5" (UID: "3de51898-9fcb-4640-8e5b-710a2d1588e5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:30:05 crc kubenswrapper[4895]: I1206 09:30:05.536963 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de51898-9fcb-4640-8e5b-710a2d1588e5-kube-api-access-tlj5s" (OuterVolumeSpecName: "kube-api-access-tlj5s") pod "3de51898-9fcb-4640-8e5b-710a2d1588e5" (UID: "3de51898-9fcb-4640-8e5b-710a2d1588e5"). InnerVolumeSpecName "kube-api-access-tlj5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:30:05 crc kubenswrapper[4895]: I1206 09:30:05.542787 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de51898-9fcb-4640-8e5b-710a2d1588e5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3de51898-9fcb-4640-8e5b-710a2d1588e5" (UID: "3de51898-9fcb-4640-8e5b-710a2d1588e5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:30:05 crc kubenswrapper[4895]: I1206 09:30:05.634686 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3de51898-9fcb-4640-8e5b-710a2d1588e5-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 09:30:05 crc kubenswrapper[4895]: I1206 09:30:05.634726 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlj5s\" (UniqueName: \"kubernetes.io/projected/3de51898-9fcb-4640-8e5b-710a2d1588e5-kube-api-access-tlj5s\") on node \"crc\" DevicePath \"\"" Dec 06 09:30:05 crc kubenswrapper[4895]: I1206 09:30:05.634737 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3de51898-9fcb-4640-8e5b-710a2d1588e5-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 09:30:06 crc kubenswrapper[4895]: I1206 09:30:06.365274 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k884m" podUID="059f2d6a-1e59-40b4-9656-3ec7ed5e11c9" containerName="registry-server" probeResult="failure" output=< Dec 06 09:30:06 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Dec 06 09:30:06 crc kubenswrapper[4895]: > Dec 06 09:30:06 crc kubenswrapper[4895]: I1206 09:30:06.904842 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416845-kcsp6"] Dec 06 09:30:06 crc kubenswrapper[4895]: I1206 09:30:06.916338 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416845-kcsp6"] Dec 06 09:30:08 crc kubenswrapper[4895]: I1206 09:30:08.068311 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82be63b9-7efd-46d2-88c6-e1fd8f2b58f7" path="/var/lib/kubelet/pods/82be63b9-7efd-46d2-88c6-e1fd8f2b58f7/volumes" Dec 06 09:30:15 crc kubenswrapper[4895]: I1206 09:30:15.356239 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k884m" Dec 06 09:30:15 crc kubenswrapper[4895]: I1206 09:30:15.430618 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k884m" Dec 06 09:30:15 crc kubenswrapper[4895]: I1206 09:30:15.605907 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k884m"] Dec 06 09:30:16 crc kubenswrapper[4895]: I1206 09:30:16.495739 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k884m" podUID="059f2d6a-1e59-40b4-9656-3ec7ed5e11c9" containerName="registry-server" containerID="cri-o://94823d87a8b95a7f67e5c201c83c44426f68ba15d13312075b1e15d16e956248" gracePeriod=2 Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.323132 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k884m" Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.471706 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059f2d6a-1e59-40b4-9656-3ec7ed5e11c9-utilities\") pod \"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9\" (UID: \"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9\") " Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.471790 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059f2d6a-1e59-40b4-9656-3ec7ed5e11c9-catalog-content\") pod \"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9\" (UID: \"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9\") " Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.471861 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvppw\" (UniqueName: \"kubernetes.io/projected/059f2d6a-1e59-40b4-9656-3ec7ed5e11c9-kube-api-access-bvppw\") pod \"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9\" (UID: \"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9\") " Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.472732 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/059f2d6a-1e59-40b4-9656-3ec7ed5e11c9-utilities" (OuterVolumeSpecName: "utilities") pod "059f2d6a-1e59-40b4-9656-3ec7ed5e11c9" (UID: "059f2d6a-1e59-40b4-9656-3ec7ed5e11c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.493403 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/059f2d6a-1e59-40b4-9656-3ec7ed5e11c9-kube-api-access-bvppw" (OuterVolumeSpecName: "kube-api-access-bvppw") pod "059f2d6a-1e59-40b4-9656-3ec7ed5e11c9" (UID: "059f2d6a-1e59-40b4-9656-3ec7ed5e11c9"). InnerVolumeSpecName "kube-api-access-bvppw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.509003 4895 generic.go:334] "Generic (PLEG): container finished" podID="059f2d6a-1e59-40b4-9656-3ec7ed5e11c9" containerID="94823d87a8b95a7f67e5c201c83c44426f68ba15d13312075b1e15d16e956248" exitCode=0 Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.509129 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k884m" Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.509130 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k884m" event={"ID":"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9","Type":"ContainerDied","Data":"94823d87a8b95a7f67e5c201c83c44426f68ba15d13312075b1e15d16e956248"} Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.509961 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k884m" event={"ID":"059f2d6a-1e59-40b4-9656-3ec7ed5e11c9","Type":"ContainerDied","Data":"b7405d67e55789a5162234204909963a3b59da3a41447be8f8b4236f6cd47778"} Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.509999 4895 scope.go:117] "RemoveContainer" containerID="94823d87a8b95a7f67e5c201c83c44426f68ba15d13312075b1e15d16e956248" Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.543423 4895 scope.go:117] "RemoveContainer" containerID="b0bae86af9cabf7e0418975ca767167d77567882369c855ab96cbc39961d3639" Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.564593 4895 scope.go:117] "RemoveContainer" containerID="52a54edc5b4e118f97177e818cdb8c8619b2fa725f8a0898145e04fceb5564a6" Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.574700 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvppw\" (UniqueName: \"kubernetes.io/projected/059f2d6a-1e59-40b4-9656-3ec7ed5e11c9-kube-api-access-bvppw\") on node \"crc\" DevicePath \"\"" Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.574741 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059f2d6a-1e59-40b4-9656-3ec7ed5e11c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.592163 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/059f2d6a-1e59-40b4-9656-3ec7ed5e11c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "059f2d6a-1e59-40b4-9656-3ec7ed5e11c9" (UID: "059f2d6a-1e59-40b4-9656-3ec7ed5e11c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.617431 4895 scope.go:117] "RemoveContainer" containerID="94823d87a8b95a7f67e5c201c83c44426f68ba15d13312075b1e15d16e956248" Dec 06 09:30:17 crc kubenswrapper[4895]: E1206 09:30:17.617981 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94823d87a8b95a7f67e5c201c83c44426f68ba15d13312075b1e15d16e956248\": container with ID starting with 94823d87a8b95a7f67e5c201c83c44426f68ba15d13312075b1e15d16e956248 not found: ID does not exist" containerID="94823d87a8b95a7f67e5c201c83c44426f68ba15d13312075b1e15d16e956248" Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.618015 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94823d87a8b95a7f67e5c201c83c44426f68ba15d13312075b1e15d16e956248"} err="failed to get container status \"94823d87a8b95a7f67e5c201c83c44426f68ba15d13312075b1e15d16e956248\": rpc error: code = NotFound desc = could not find container \"94823d87a8b95a7f67e5c201c83c44426f68ba15d13312075b1e15d16e956248\": container with ID starting with 94823d87a8b95a7f67e5c201c83c44426f68ba15d13312075b1e15d16e956248 not found: ID does not exist" Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.618041 4895 scope.go:117] "RemoveContainer" containerID="b0bae86af9cabf7e0418975ca767167d77567882369c855ab96cbc39961d3639" Dec 06 09:30:17 crc kubenswrapper[4895]: E1206 09:30:17.618416 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0bae86af9cabf7e0418975ca767167d77567882369c855ab96cbc39961d3639\": container with ID starting with b0bae86af9cabf7e0418975ca767167d77567882369c855ab96cbc39961d3639 not found: ID does not exist" containerID="b0bae86af9cabf7e0418975ca767167d77567882369c855ab96cbc39961d3639" Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.618459 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0bae86af9cabf7e0418975ca767167d77567882369c855ab96cbc39961d3639"} err="failed to get container status \"b0bae86af9cabf7e0418975ca767167d77567882369c855ab96cbc39961d3639\": rpc error: code = NotFound desc = could not find container \"b0bae86af9cabf7e0418975ca767167d77567882369c855ab96cbc39961d3639\": container with ID starting with b0bae86af9cabf7e0418975ca767167d77567882369c855ab96cbc39961d3639 not found: ID does not exist" Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.618501 4895 scope.go:117] "RemoveContainer" containerID="52a54edc5b4e118f97177e818cdb8c8619b2fa725f8a0898145e04fceb5564a6" Dec 06 09:30:17 crc kubenswrapper[4895]: E1206 09:30:17.618846 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52a54edc5b4e118f97177e818cdb8c8619b2fa725f8a0898145e04fceb5564a6\": container with ID starting with 52a54edc5b4e118f97177e818cdb8c8619b2fa725f8a0898145e04fceb5564a6 not found: ID does not exist" containerID="52a54edc5b4e118f97177e818cdb8c8619b2fa725f8a0898145e04fceb5564a6" Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.618873 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52a54edc5b4e118f97177e818cdb8c8619b2fa725f8a0898145e04fceb5564a6"} err="failed to get container status \"52a54edc5b4e118f97177e818cdb8c8619b2fa725f8a0898145e04fceb5564a6\": rpc error: code = NotFound desc = could not 
find container \"52a54edc5b4e118f97177e818cdb8c8619b2fa725f8a0898145e04fceb5564a6\": container with ID starting with 52a54edc5b4e118f97177e818cdb8c8619b2fa725f8a0898145e04fceb5564a6 not found: ID does not exist" Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.677320 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059f2d6a-1e59-40b4-9656-3ec7ed5e11c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.895064 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k884m"] Dec 06 09:30:17 crc kubenswrapper[4895]: I1206 09:30:17.911731 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k884m"] Dec 06 09:30:18 crc kubenswrapper[4895]: I1206 09:30:18.067505 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="059f2d6a-1e59-40b4-9656-3ec7ed5e11c9" path="/var/lib/kubelet/pods/059f2d6a-1e59-40b4-9656-3ec7ed5e11c9/volumes" Dec 06 09:30:18 crc kubenswrapper[4895]: E1206 09:30:18.125749 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod059f2d6a_1e59_40b4_9656_3ec7ed5e11c9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod059f2d6a_1e59_40b4_9656_3ec7ed5e11c9.slice/crio-b7405d67e55789a5162234204909963a3b59da3a41447be8f8b4236f6cd47778\": RecentStats: unable to find data in memory cache]" Dec 06 09:30:20 crc kubenswrapper[4895]: I1206 09:30:20.673877 4895 scope.go:117] "RemoveContainer" containerID="eaea10e0575ae5c82698586f31e98b77e0886d804fccdcc61a7d323f5e8d8191" Dec 06 09:30:29 crc kubenswrapper[4895]: I1206 09:30:29.644639 4895 generic.go:334] "Generic (PLEG): container finished" podID="cfc5ef76-e849-4740-93de-d9490d688654" containerID="c84c23104657881d75a4da073e53d2bd0b5b73d86c201c60877da45eac2a248f" exitCode=0 Dec 06 09:30:29 crc kubenswrapper[4895]: I1206 09:30:29.644693 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-networker-7d5jm" event={"ID":"cfc5ef76-e849-4740-93de-d9490d688654","Type":"ContainerDied","Data":"c84c23104657881d75a4da073e53d2bd0b5b73d86c201c60877da45eac2a248f"} Dec 06 09:30:29 crc kubenswrapper[4895]: I1206 09:30:29.695409 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:30:29 crc kubenswrapper[4895]: I1206 09:30:29.695484 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.397236 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-networker-7d5jm" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.586523 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq9hc\" (UniqueName: \"kubernetes.io/projected/cfc5ef76-e849-4740-93de-d9490d688654-kube-api-access-lq9hc\") pod \"cfc5ef76-e849-4740-93de-d9490d688654\" (UID: \"cfc5ef76-e849-4740-93de-d9490d688654\") " Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.586598 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfc5ef76-e849-4740-93de-d9490d688654-ssh-key\") pod \"cfc5ef76-e849-4740-93de-d9490d688654\" (UID: \"cfc5ef76-e849-4740-93de-d9490d688654\") " Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.586735 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfc5ef76-e849-4740-93de-d9490d688654-inventory\") pod \"cfc5ef76-e849-4740-93de-d9490d688654\" (UID: \"cfc5ef76-e849-4740-93de-d9490d688654\") " Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.592852 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfc5ef76-e849-4740-93de-d9490d688654-kube-api-access-lq9hc" (OuterVolumeSpecName: "kube-api-access-lq9hc") pod "cfc5ef76-e849-4740-93de-d9490d688654" (UID: "cfc5ef76-e849-4740-93de-d9490d688654"). InnerVolumeSpecName "kube-api-access-lq9hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.619287 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc5ef76-e849-4740-93de-d9490d688654-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cfc5ef76-e849-4740-93de-d9490d688654" (UID: "cfc5ef76-e849-4740-93de-d9490d688654"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.623668 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc5ef76-e849-4740-93de-d9490d688654-inventory" (OuterVolumeSpecName: "inventory") pod "cfc5ef76-e849-4740-93de-d9490d688654" (UID: "cfc5ef76-e849-4740-93de-d9490d688654"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.668429 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-networker-7d5jm" event={"ID":"cfc5ef76-e849-4740-93de-d9490d688654","Type":"ContainerDied","Data":"c74d2ce752afe090c4b3df75772a8967dc32c1e03de7424bfbe305707188f1c6"} Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.668505 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c74d2ce752afe090c4b3df75772a8967dc32c1e03de7424bfbe305707188f1c6" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.668513 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-networker-7d5jm" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.689567 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq9hc\" (UniqueName: \"kubernetes.io/projected/cfc5ef76-e849-4740-93de-d9490d688654-kube-api-access-lq9hc\") on node \"crc\" DevicePath \"\"" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.689602 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfc5ef76-e849-4740-93de-d9490d688654-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.689612 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfc5ef76-e849-4740-93de-d9490d688654-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.765443 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-networker-mxpq9"] Dec 06 09:30:31 crc kubenswrapper[4895]: E1206 09:30:31.765969 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059f2d6a-1e59-40b4-9656-3ec7ed5e11c9" containerName="extract-utilities" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.765992 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="059f2d6a-1e59-40b4-9656-3ec7ed5e11c9" containerName="extract-utilities" Dec 06 09:30:31 crc kubenswrapper[4895]: E1206 09:30:31.766023 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc5ef76-e849-4740-93de-d9490d688654" containerName="configure-network-openstack-openstack-networker" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.766030 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc5ef76-e849-4740-93de-d9490d688654" containerName="configure-network-openstack-openstack-networker" Dec 06 09:30:31 crc kubenswrapper[4895]: E1206 09:30:31.766046 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059f2d6a-1e59-40b4-9656-3ec7ed5e11c9" containerName="extract-content" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.766053 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="059f2d6a-1e59-40b4-9656-3ec7ed5e11c9" containerName="extract-content" Dec 06 09:30:31 crc kubenswrapper[4895]: E1206 09:30:31.766071 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059f2d6a-1e59-40b4-9656-3ec7ed5e11c9" containerName="registry-server" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.766077 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="059f2d6a-1e59-40b4-9656-3ec7ed5e11c9" containerName="registry-server" Dec 06 09:30:31 crc kubenswrapper[4895]: E1206 09:30:31.766088 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de51898-9fcb-4640-8e5b-710a2d1588e5" containerName="collect-profiles" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.766094 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de51898-9fcb-4640-8e5b-710a2d1588e5" containerName="collect-profiles" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.766286 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc5ef76-e849-4740-93de-d9490d688654" containerName="configure-network-openstack-openstack-networker" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.766303 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de51898-9fcb-4640-8e5b-710a2d1588e5" 
containerName="collect-profiles" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.766316 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="059f2d6a-1e59-40b4-9656-3ec7ed5e11c9" containerName="registry-server" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.767145 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-networker-mxpq9" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.769862 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.770818 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-vvkpz" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.781497 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-networker-mxpq9"] Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.846226 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwfnz\" (UniqueName: \"kubernetes.io/projected/d70176ac-e248-435a-a834-1558c9f382d2-kube-api-access-lwfnz\") pod \"validate-network-openstack-openstack-networker-mxpq9\" (UID: \"d70176ac-e248-435a-a834-1558c9f382d2\") " pod="openstack/validate-network-openstack-openstack-networker-mxpq9" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.846375 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d70176ac-e248-435a-a834-1558c9f382d2-ssh-key\") pod \"validate-network-openstack-openstack-networker-mxpq9\" (UID: \"d70176ac-e248-435a-a834-1558c9f382d2\") " pod="openstack/validate-network-openstack-openstack-networker-mxpq9" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.846413 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d70176ac-e248-435a-a834-1558c9f382d2-inventory\") pod \"validate-network-openstack-openstack-networker-mxpq9\" (UID: \"d70176ac-e248-435a-a834-1558c9f382d2\") " pod="openstack/validate-network-openstack-openstack-networker-mxpq9" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.948755 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwfnz\" (UniqueName: \"kubernetes.io/projected/d70176ac-e248-435a-a834-1558c9f382d2-kube-api-access-lwfnz\") pod \"validate-network-openstack-openstack-networker-mxpq9\" (UID: \"d70176ac-e248-435a-a834-1558c9f382d2\") " pod="openstack/validate-network-openstack-openstack-networker-mxpq9" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.949224 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d70176ac-e248-435a-a834-1558c9f382d2-ssh-key\") pod \"validate-network-openstack-openstack-networker-mxpq9\" (UID: \"d70176ac-e248-435a-a834-1558c9f382d2\") " pod="openstack/validate-network-openstack-openstack-networker-mxpq9" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.949940 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d70176ac-e248-435a-a834-1558c9f382d2-inventory\") pod \"validate-network-openstack-openstack-networker-mxpq9\" (UID: 
\"d70176ac-e248-435a-a834-1558c9f382d2\") " pod="openstack/validate-network-openstack-openstack-networker-mxpq9" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.958438 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d70176ac-e248-435a-a834-1558c9f382d2-ssh-key\") pod \"validate-network-openstack-openstack-networker-mxpq9\" (UID: \"d70176ac-e248-435a-a834-1558c9f382d2\") " pod="openstack/validate-network-openstack-openstack-networker-mxpq9" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.958940 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d70176ac-e248-435a-a834-1558c9f382d2-inventory\") pod \"validate-network-openstack-openstack-networker-mxpq9\" (UID: \"d70176ac-e248-435a-a834-1558c9f382d2\") " pod="openstack/validate-network-openstack-openstack-networker-mxpq9" Dec 06 09:30:31 crc kubenswrapper[4895]: I1206 09:30:31.970851 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwfnz\" (UniqueName: \"kubernetes.io/projected/d70176ac-e248-435a-a834-1558c9f382d2-kube-api-access-lwfnz\") pod \"validate-network-openstack-openstack-networker-mxpq9\" (UID: \"d70176ac-e248-435a-a834-1558c9f382d2\") " pod="openstack/validate-network-openstack-openstack-networker-mxpq9" Dec 06 09:30:32 crc kubenswrapper[4895]: I1206 09:30:32.087663 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-networker-mxpq9" Dec 06 09:30:32 crc kubenswrapper[4895]: I1206 09:30:32.617259 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-networker-mxpq9"] Dec 06 09:30:32 crc kubenswrapper[4895]: I1206 09:30:32.678805 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-networker-mxpq9" event={"ID":"d70176ac-e248-435a-a834-1558c9f382d2","Type":"ContainerStarted","Data":"1dc79c9cde36c0393aacb2c907faf9a5fd0ad0a3ad989035cac3ac4f0de9ffb4"} Dec 06 09:30:33 crc kubenswrapper[4895]: I1206 09:30:33.691655 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-networker-mxpq9" event={"ID":"d70176ac-e248-435a-a834-1558c9f382d2","Type":"ContainerStarted","Data":"91f421c2015a2803fad790e03a6f25830c9a1e0c9470b45d61caabfa930416e5"} Dec 06 09:30:33 crc kubenswrapper[4895]: I1206 09:30:33.716593 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-networker-mxpq9" podStartSLOduration=2.231605288 podStartE2EDuration="2.716569987s" podCreationTimestamp="2025-12-06 09:30:31 +0000 UTC" firstStartedPulling="2025-12-06 09:30:32.623046422 +0000 UTC m=+9195.024435302" lastFinishedPulling="2025-12-06 09:30:33.108011131 +0000 UTC m=+9195.509400001" observedRunningTime="2025-12-06 09:30:33.70823275 +0000 UTC m=+9196.109621630" watchObservedRunningTime="2025-12-06 09:30:33.716569987 +0000 UTC m=+9196.117958857" Dec 06 09:30:38 crc kubenswrapper[4895]: I1206 09:30:38.746147 4895 generic.go:334] "Generic (PLEG): container finished" podID="d70176ac-e248-435a-a834-1558c9f382d2" containerID="91f421c2015a2803fad790e03a6f25830c9a1e0c9470b45d61caabfa930416e5" exitCode=0 Dec 06 09:30:38 crc kubenswrapper[4895]: I1206 09:30:38.746260 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-networker-mxpq9" 
event={"ID":"d70176ac-e248-435a-a834-1558c9f382d2","Type":"ContainerDied","Data":"91f421c2015a2803fad790e03a6f25830c9a1e0c9470b45d61caabfa930416e5"} Dec 06 09:30:40 crc kubenswrapper[4895]: I1206 09:30:40.910287 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-networker-mxpq9" Dec 06 09:30:41 crc kubenswrapper[4895]: I1206 09:30:41.055154 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d70176ac-e248-435a-a834-1558c9f382d2-ssh-key\") pod \"d70176ac-e248-435a-a834-1558c9f382d2\" (UID: \"d70176ac-e248-435a-a834-1558c9f382d2\") " Dec 06 09:30:41 crc kubenswrapper[4895]: I1206 09:30:41.055262 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d70176ac-e248-435a-a834-1558c9f382d2-inventory\") pod \"d70176ac-e248-435a-a834-1558c9f382d2\" (UID: \"d70176ac-e248-435a-a834-1558c9f382d2\") " Dec 06 09:30:41 crc kubenswrapper[4895]: I1206 09:30:41.055317 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwfnz\" (UniqueName: \"kubernetes.io/projected/d70176ac-e248-435a-a834-1558c9f382d2-kube-api-access-lwfnz\") pod \"d70176ac-e248-435a-a834-1558c9f382d2\" (UID: \"d70176ac-e248-435a-a834-1558c9f382d2\") " Dec 06 09:30:41 crc kubenswrapper[4895]: I1206 09:30:41.061028 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d70176ac-e248-435a-a834-1558c9f382d2-kube-api-access-lwfnz" (OuterVolumeSpecName: "kube-api-access-lwfnz") pod "d70176ac-e248-435a-a834-1558c9f382d2" (UID: "d70176ac-e248-435a-a834-1558c9f382d2"). InnerVolumeSpecName "kube-api-access-lwfnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:30:41 crc kubenswrapper[4895]: I1206 09:30:41.086644 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d70176ac-e248-435a-a834-1558c9f382d2-inventory" (OuterVolumeSpecName: "inventory") pod "d70176ac-e248-435a-a834-1558c9f382d2" (UID: "d70176ac-e248-435a-a834-1558c9f382d2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:30:41 crc kubenswrapper[4895]: I1206 09:30:41.089577 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d70176ac-e248-435a-a834-1558c9f382d2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d70176ac-e248-435a-a834-1558c9f382d2" (UID: "d70176ac-e248-435a-a834-1558c9f382d2"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:30:41 crc kubenswrapper[4895]: I1206 09:30:41.160693 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d70176ac-e248-435a-a834-1558c9f382d2-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:30:41 crc kubenswrapper[4895]: I1206 09:30:41.161086 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d70176ac-e248-435a-a834-1558c9f382d2-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:30:41 crc kubenswrapper[4895]: I1206 09:30:41.161116 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwfnz\" (UniqueName: \"kubernetes.io/projected/d70176ac-e248-435a-a834-1558c9f382d2-kube-api-access-lwfnz\") on node \"crc\" DevicePath \"\"" Dec 06 09:30:41 crc kubenswrapper[4895]: I1206 09:30:41.779576 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-networker-mxpq9" event={"ID":"d70176ac-e248-435a-a834-1558c9f382d2","Type":"ContainerDied","Data":"1dc79c9cde36c0393aacb2c907faf9a5fd0ad0a3ad989035cac3ac4f0de9ffb4"} Dec 06 09:30:41 crc kubenswrapper[4895]: I1206 09:30:41.779632 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dc79c9cde36c0393aacb2c907faf9a5fd0ad0a3ad989035cac3ac4f0de9ffb4" Dec 06 09:30:41 crc kubenswrapper[4895]: I1206 09:30:41.779637 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-networker-mxpq9" Dec 06 09:30:41 crc kubenswrapper[4895]: I1206 09:30:41.983013 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-networker-db6d6"] Dec 06 09:30:41 crc kubenswrapper[4895]: E1206 09:30:41.983399 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70176ac-e248-435a-a834-1558c9f382d2" containerName="validate-network-openstack-openstack-networker" Dec 06 09:30:41 crc kubenswrapper[4895]: I1206 09:30:41.983413 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70176ac-e248-435a-a834-1558c9f382d2" containerName="validate-network-openstack-openstack-networker" Dec 06 09:30:41 crc kubenswrapper[4895]: I1206 09:30:41.983648 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d70176ac-e248-435a-a834-1558c9f382d2" containerName="validate-network-openstack-openstack-networker" Dec 06 09:30:41 crc kubenswrapper[4895]: I1206 09:30:41.984613 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-networker-db6d6" Dec 06 09:30:42 crc kubenswrapper[4895]: I1206 09:30:42.000228 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Dec 06 09:30:42 crc kubenswrapper[4895]: I1206 09:30:42.000382 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-vvkpz" Dec 06 09:30:42 crc kubenswrapper[4895]: I1206 09:30:42.063401 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-networker-db6d6"] Dec 06 09:30:42 crc kubenswrapper[4895]: I1206 09:30:42.189256 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ba7963e-83f5-4e85-befa-41d58e25787d-ssh-key\") pod \"install-os-openstack-openstack-networker-db6d6\" (UID: \"6ba7963e-83f5-4e85-befa-41d58e25787d\") " pod="openstack/install-os-openstack-openstack-networker-db6d6" Dec 06 09:30:42 crc kubenswrapper[4895]: I1206 09:30:42.191114 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsq96\" (UniqueName: \"kubernetes.io/projected/6ba7963e-83f5-4e85-befa-41d58e25787d-kube-api-access-jsq96\") pod \"install-os-openstack-openstack-networker-db6d6\" (UID: \"6ba7963e-83f5-4e85-befa-41d58e25787d\") " pod="openstack/install-os-openstack-openstack-networker-db6d6" Dec 06 09:30:42 crc kubenswrapper[4895]: I1206 09:30:42.192277 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ba7963e-83f5-4e85-befa-41d58e25787d-inventory\") pod \"install-os-openstack-openstack-networker-db6d6\" (UID: \"6ba7963e-83f5-4e85-befa-41d58e25787d\") " pod="openstack/install-os-openstack-openstack-networker-db6d6" Dec 06 09:30:42 crc kubenswrapper[4895]: I1206 09:30:42.293742 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ba7963e-83f5-4e85-befa-41d58e25787d-ssh-key\") pod \"install-os-openstack-openstack-networker-db6d6\" (UID: \"6ba7963e-83f5-4e85-befa-41d58e25787d\") " pod="openstack/install-os-openstack-openstack-networker-db6d6" Dec 06 09:30:42 crc kubenswrapper[4895]: I1206 09:30:42.293854 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsq96\" (UniqueName: \"kubernetes.io/projected/6ba7963e-83f5-4e85-befa-41d58e25787d-kube-api-access-jsq96\") pod \"install-os-openstack-openstack-networker-db6d6\" (UID: \"6ba7963e-83f5-4e85-befa-41d58e25787d\") " pod="openstack/install-os-openstack-openstack-networker-db6d6" Dec 06 09:30:42 crc kubenswrapper[4895]: I1206 09:30:42.293977 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ba7963e-83f5-4e85-befa-41d58e25787d-inventory\") pod \"install-os-openstack-openstack-networker-db6d6\" (UID: \"6ba7963e-83f5-4e85-befa-41d58e25787d\") " pod="openstack/install-os-openstack-openstack-networker-db6d6" Dec 06 09:30:42 crc kubenswrapper[4895]: I1206 09:30:42.300372 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ba7963e-83f5-4e85-befa-41d58e25787d-inventory\") pod \"install-os-openstack-openstack-networker-db6d6\" (UID: \"6ba7963e-83f5-4e85-befa-41d58e25787d\") " 
pod="openstack/install-os-openstack-openstack-networker-db6d6" Dec 06 09:30:42 crc kubenswrapper[4895]: I1206 09:30:42.307158 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ba7963e-83f5-4e85-befa-41d58e25787d-ssh-key\") pod \"install-os-openstack-openstack-networker-db6d6\" (UID: \"6ba7963e-83f5-4e85-befa-41d58e25787d\") " pod="openstack/install-os-openstack-openstack-networker-db6d6" Dec 06 09:30:42 crc kubenswrapper[4895]: I1206 09:30:42.320201 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsq96\" (UniqueName: \"kubernetes.io/projected/6ba7963e-83f5-4e85-befa-41d58e25787d-kube-api-access-jsq96\") pod \"install-os-openstack-openstack-networker-db6d6\" (UID: \"6ba7963e-83f5-4e85-befa-41d58e25787d\") " pod="openstack/install-os-openstack-openstack-networker-db6d6" Dec 06 09:30:42 crc kubenswrapper[4895]: I1206 09:30:42.320978 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-networker-db6d6" Dec 06 09:30:42 crc kubenswrapper[4895]: I1206 09:30:42.878041 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-networker-db6d6"] Dec 06 09:30:43 crc kubenswrapper[4895]: I1206 09:30:43.828363 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-networker-db6d6" event={"ID":"6ba7963e-83f5-4e85-befa-41d58e25787d","Type":"ContainerStarted","Data":"7a52c1bb8026af71e3de330d8f87389ab096ce046505a248c0ebdc9ddff310a6"} Dec 06 09:30:44 crc kubenswrapper[4895]: I1206 09:30:44.844725 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-networker-db6d6" event={"ID":"6ba7963e-83f5-4e85-befa-41d58e25787d","Type":"ContainerStarted","Data":"de61a0bcfad7143c0b3064b9954763caa9752b5928bec6d1a948c0b0286a275d"} Dec 06 09:30:44 crc kubenswrapper[4895]: I1206 09:30:44.865376 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-networker-db6d6" podStartSLOduration=3.426649066 podStartE2EDuration="3.865354772s" podCreationTimestamp="2025-12-06 09:30:41 +0000 UTC" firstStartedPulling="2025-12-06 09:30:43.204727863 +0000 UTC m=+9205.606116783" lastFinishedPulling="2025-12-06 09:30:43.643433619 +0000 UTC m=+9206.044822489" observedRunningTime="2025-12-06 09:30:44.863874251 +0000 UTC m=+9207.265263121" watchObservedRunningTime="2025-12-06 09:30:44.865354772 +0000 UTC m=+9207.266743642" Dec 06 09:30:50 crc kubenswrapper[4895]: I1206 09:30:50.909897 4895 generic.go:334] "Generic (PLEG): container finished" podID="46f96e8a-cddb-4908-8ced-54dd4fcb7731" containerID="a907e3d1cad8e8da957fb98efd596bf16685f83d33dcb42979ceb526f1d45d97" exitCode=0 Dec 06 09:30:50 crc kubenswrapper[4895]: I1206 09:30:50.910075 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-tmht6" event={"ID":"46f96e8a-cddb-4908-8ced-54dd4fcb7731","Type":"ContainerDied","Data":"a907e3d1cad8e8da957fb98efd596bf16685f83d33dcb42979ceb526f1d45d97"} Dec 06 09:30:52 crc kubenswrapper[4895]: I1206 09:30:52.412775 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-tmht6" Dec 06 09:30:52 crc kubenswrapper[4895]: I1206 09:30:52.524333 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/46f96e8a-cddb-4908-8ced-54dd4fcb7731-ssh-key\") pod \"46f96e8a-cddb-4908-8ced-54dd4fcb7731\" (UID: \"46f96e8a-cddb-4908-8ced-54dd4fcb7731\") " Dec 06 09:30:52 crc kubenswrapper[4895]: I1206 09:30:52.524532 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjdn8\" (UniqueName: \"kubernetes.io/projected/46f96e8a-cddb-4908-8ced-54dd4fcb7731-kube-api-access-tjdn8\") pod \"46f96e8a-cddb-4908-8ced-54dd4fcb7731\" (UID: \"46f96e8a-cddb-4908-8ced-54dd4fcb7731\") " Dec 06 09:30:52 crc kubenswrapper[4895]: I1206 09:30:52.524872 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46f96e8a-cddb-4908-8ced-54dd4fcb7731-inventory\") pod \"46f96e8a-cddb-4908-8ced-54dd4fcb7731\" (UID: \"46f96e8a-cddb-4908-8ced-54dd4fcb7731\") " Dec 06 09:30:52 crc kubenswrapper[4895]: I1206 09:30:52.525103 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46f96e8a-cddb-4908-8ced-54dd4fcb7731-ceph\") pod \"46f96e8a-cddb-4908-8ced-54dd4fcb7731\" (UID: \"46f96e8a-cddb-4908-8ced-54dd4fcb7731\") " Dec 06 09:30:52 crc kubenswrapper[4895]: I1206 09:30:52.529367 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f96e8a-cddb-4908-8ced-54dd4fcb7731-ceph" (OuterVolumeSpecName: "ceph") pod "46f96e8a-cddb-4908-8ced-54dd4fcb7731" (UID: "46f96e8a-cddb-4908-8ced-54dd4fcb7731"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:30:52 crc kubenswrapper[4895]: I1206 09:30:52.529438 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46f96e8a-cddb-4908-8ced-54dd4fcb7731-kube-api-access-tjdn8" (OuterVolumeSpecName: "kube-api-access-tjdn8") pod "46f96e8a-cddb-4908-8ced-54dd4fcb7731" (UID: "46f96e8a-cddb-4908-8ced-54dd4fcb7731"). InnerVolumeSpecName "kube-api-access-tjdn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:30:52 crc kubenswrapper[4895]: I1206 09:30:52.553892 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f96e8a-cddb-4908-8ced-54dd4fcb7731-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "46f96e8a-cddb-4908-8ced-54dd4fcb7731" (UID: "46f96e8a-cddb-4908-8ced-54dd4fcb7731"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:30:52 crc kubenswrapper[4895]: I1206 09:30:52.571147 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f96e8a-cddb-4908-8ced-54dd4fcb7731-inventory" (OuterVolumeSpecName: "inventory") pod "46f96e8a-cddb-4908-8ced-54dd4fcb7731" (UID: "46f96e8a-cddb-4908-8ced-54dd4fcb7731"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:30:52 crc kubenswrapper[4895]: I1206 09:30:52.627977 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46f96e8a-cddb-4908-8ced-54dd4fcb7731-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:30:52 crc kubenswrapper[4895]: I1206 09:30:52.628023 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46f96e8a-cddb-4908-8ced-54dd4fcb7731-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 09:30:52 crc kubenswrapper[4895]: I1206 09:30:52.628033 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/46f96e8a-cddb-4908-8ced-54dd4fcb7731-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:30:52 crc kubenswrapper[4895]: I1206 09:30:52.628046 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjdn8\" (UniqueName: \"kubernetes.io/projected/46f96e8a-cddb-4908-8ced-54dd4fcb7731-kube-api-access-tjdn8\") on node \"crc\" DevicePath \"\"" Dec 06 09:30:52 crc kubenswrapper[4895]: I1206 09:30:52.930533 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-tmht6" Dec 06 09:30:52 crc kubenswrapper[4895]: I1206 09:30:52.930463 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-tmht6" event={"ID":"46f96e8a-cddb-4908-8ced-54dd4fcb7731","Type":"ContainerDied","Data":"b454878299ffcc9ebdbbd883ad6586d1d70f13ab6342a98bfec16bea7a390765"} Dec 06 09:30:52 crc kubenswrapper[4895]: I1206 09:30:52.930684 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b454878299ffcc9ebdbbd883ad6586d1d70f13ab6342a98bfec16bea7a390765" Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.027717 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-bbj9x"] Dec 06 09:30:53 crc kubenswrapper[4895]: E1206 09:30:53.028251 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f96e8a-cddb-4908-8ced-54dd4fcb7731" containerName="configure-network-openstack-openstack-cell1" Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.028670 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f96e8a-cddb-4908-8ced-54dd4fcb7731" containerName="configure-network-openstack-openstack-cell1" Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.028966 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f96e8a-cddb-4908-8ced-54dd4fcb7731" containerName="configure-network-openstack-openstack-cell1" Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.030176 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-bbj9x" Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.033164 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-wfk68" Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.034365 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.042930 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-bbj9x"] Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.140572 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdr62\" (UniqueName: \"kubernetes.io/projected/f4f06381-fe11-443f-a2cd-5f4dd0b39394-kube-api-access-tdr62\") pod \"validate-network-openstack-openstack-cell1-bbj9x\" (UID: \"f4f06381-fe11-443f-a2cd-5f4dd0b39394\") " pod="openstack/validate-network-openstack-openstack-cell1-bbj9x" Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.140666 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4f06381-fe11-443f-a2cd-5f4dd0b39394-inventory\") pod \"validate-network-openstack-openstack-cell1-bbj9x\" (UID: \"f4f06381-fe11-443f-a2cd-5f4dd0b39394\") " pod="openstack/validate-network-openstack-openstack-cell1-bbj9x" Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.141222 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f4f06381-fe11-443f-a2cd-5f4dd0b39394-ceph\") pod \"validate-network-openstack-openstack-cell1-bbj9x\" (UID: \"f4f06381-fe11-443f-a2cd-5f4dd0b39394\") " pod="openstack/validate-network-openstack-openstack-cell1-bbj9x" Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.141860 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4f06381-fe11-443f-a2cd-5f4dd0b39394-ssh-key\") pod \"validate-network-openstack-openstack-cell1-bbj9x\" (UID: \"f4f06381-fe11-443f-a2cd-5f4dd0b39394\") " pod="openstack/validate-network-openstack-openstack-cell1-bbj9x" Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.243625 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4f06381-fe11-443f-a2cd-5f4dd0b39394-ssh-key\") pod \"validate-network-openstack-openstack-cell1-bbj9x\" (UID: \"f4f06381-fe11-443f-a2cd-5f4dd0b39394\") " pod="openstack/validate-network-openstack-openstack-cell1-bbj9x" Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.243746 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdr62\" (UniqueName: \"kubernetes.io/projected/f4f06381-fe11-443f-a2cd-5f4dd0b39394-kube-api-access-tdr62\") pod \"validate-network-openstack-openstack-cell1-bbj9x\" (UID: \"f4f06381-fe11-443f-a2cd-5f4dd0b39394\") " pod="openstack/validate-network-openstack-openstack-cell1-bbj9x" Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.243780 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4f06381-fe11-443f-a2cd-5f4dd0b39394-inventory\") pod \"validate-network-openstack-openstack-cell1-bbj9x\" (UID: 
\"f4f06381-fe11-443f-a2cd-5f4dd0b39394\") " pod="openstack/validate-network-openstack-openstack-cell1-bbj9x" Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.243900 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f4f06381-fe11-443f-a2cd-5f4dd0b39394-ceph\") pod \"validate-network-openstack-openstack-cell1-bbj9x\" (UID: \"f4f06381-fe11-443f-a2cd-5f4dd0b39394\") " pod="openstack/validate-network-openstack-openstack-cell1-bbj9x" Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.252386 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f4f06381-fe11-443f-a2cd-5f4dd0b39394-ceph\") pod \"validate-network-openstack-openstack-cell1-bbj9x\" (UID: \"f4f06381-fe11-443f-a2cd-5f4dd0b39394\") " pod="openstack/validate-network-openstack-openstack-cell1-bbj9x" Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.254922 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4f06381-fe11-443f-a2cd-5f4dd0b39394-ssh-key\") pod \"validate-network-openstack-openstack-cell1-bbj9x\" (UID: \"f4f06381-fe11-443f-a2cd-5f4dd0b39394\") " pod="openstack/validate-network-openstack-openstack-cell1-bbj9x" Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.255907 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4f06381-fe11-443f-a2cd-5f4dd0b39394-inventory\") pod \"validate-network-openstack-openstack-cell1-bbj9x\" (UID: \"f4f06381-fe11-443f-a2cd-5f4dd0b39394\") " pod="openstack/validate-network-openstack-openstack-cell1-bbj9x" Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.261751 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdr62\" (UniqueName: \"kubernetes.io/projected/f4f06381-fe11-443f-a2cd-5f4dd0b39394-kube-api-access-tdr62\") pod \"validate-network-openstack-openstack-cell1-bbj9x\" (UID: \"f4f06381-fe11-443f-a2cd-5f4dd0b39394\") " pod="openstack/validate-network-openstack-openstack-cell1-bbj9x" Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.352573 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-bbj9x" Dec 06 09:30:53 crc kubenswrapper[4895]: I1206 09:30:53.952500 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-bbj9x"] Dec 06 09:30:54 crc kubenswrapper[4895]: W1206 09:30:54.101716 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4f06381_fe11_443f_a2cd_5f4dd0b39394.slice/crio-7d35eda8519d1d41538945386165ec4c6c6a928a79b84d79cbec5319fa0fdbf0 WatchSource:0}: Error finding container 7d35eda8519d1d41538945386165ec4c6c6a928a79b84d79cbec5319fa0fdbf0: Status 404 returned error can't find the container with id 7d35eda8519d1d41538945386165ec4c6c6a928a79b84d79cbec5319fa0fdbf0 Dec 06 09:30:54 crc kubenswrapper[4895]: I1206 09:30:54.952056 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-bbj9x" event={"ID":"f4f06381-fe11-443f-a2cd-5f4dd0b39394","Type":"ContainerStarted","Data":"e48811528aa8d2defabcad07bb039c4d42cf9a92e54bb89bdf78b38093fdaf87"} Dec 06 09:30:54 crc kubenswrapper[4895]: I1206 09:30:54.952414 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-bbj9x" event={"ID":"f4f06381-fe11-443f-a2cd-5f4dd0b39394","Type":"ContainerStarted","Data":"7d35eda8519d1d41538945386165ec4c6c6a928a79b84d79cbec5319fa0fdbf0"} Dec 06 09:30:54 crc kubenswrapper[4895]: I1206 09:30:54.980051 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-bbj9x" podStartSLOduration=1.468290353 podStartE2EDuration="1.980031083s" podCreationTimestamp="2025-12-06 09:30:53 +0000 UTC" firstStartedPulling="2025-12-06 09:30:54.108100262 +0000 UTC m=+9216.509489132" lastFinishedPulling="2025-12-06 09:30:54.619840982 +0000 UTC m=+9217.021229862" observedRunningTime="2025-12-06 09:30:54.969183257 +0000 UTC m=+9217.370572127" watchObservedRunningTime="2025-12-06 09:30:54.980031083 +0000 UTC m=+9217.381419943" Dec 06 09:30:59 crc kubenswrapper[4895]: I1206 09:30:59.695649 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:30:59 crc kubenswrapper[4895]: I1206 09:30:59.696005 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:30:59 crc kubenswrapper[4895]: I1206 09:30:59.696047 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 09:30:59 crc kubenswrapper[4895]: I1206 09:30:59.696820 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ffdc2a939a93f933962849d90f631256edf91007e25ce39191e7dad4620ed7f2"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 09:30:59 crc 
kubenswrapper[4895]: I1206 09:30:59.696869 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://ffdc2a939a93f933962849d90f631256edf91007e25ce39191e7dad4620ed7f2" gracePeriod=600 Dec 06 09:31:00 crc kubenswrapper[4895]: I1206 09:31:00.007254 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="ffdc2a939a93f933962849d90f631256edf91007e25ce39191e7dad4620ed7f2" exitCode=0 Dec 06 09:31:00 crc kubenswrapper[4895]: I1206 09:31:00.007319 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"ffdc2a939a93f933962849d90f631256edf91007e25ce39191e7dad4620ed7f2"} Dec 06 09:31:00 crc kubenswrapper[4895]: I1206 09:31:00.007844 4895 scope.go:117] "RemoveContainer" containerID="e37824d642e0bd32587013dfd7eaa6d0dd136fd774560fc70b36b6cdf2ba553f" Dec 06 09:31:01 crc kubenswrapper[4895]: I1206 09:31:01.019109 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992"} Dec 06 09:31:02 crc kubenswrapper[4895]: I1206 09:31:02.030797 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4f06381-fe11-443f-a2cd-5f4dd0b39394" containerID="e48811528aa8d2defabcad07bb039c4d42cf9a92e54bb89bdf78b38093fdaf87" exitCode=0 Dec 06 09:31:02 crc kubenswrapper[4895]: I1206 09:31:02.030914 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-bbj9x" event={"ID":"f4f06381-fe11-443f-a2cd-5f4dd0b39394","Type":"ContainerDied","Data":"e48811528aa8d2defabcad07bb039c4d42cf9a92e54bb89bdf78b38093fdaf87"} Dec 06 09:31:03 crc kubenswrapper[4895]: I1206 09:31:03.517947 4895 util.go:48] "No ready sandbox for pod can be found. 
Dec 06 09:31:03 crc kubenswrapper[4895]: I1206 09:31:03.517947 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-bbj9x"
Dec 06 09:31:03 crc kubenswrapper[4895]: I1206 09:31:03.695621 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4f06381-fe11-443f-a2cd-5f4dd0b39394-inventory\") pod \"f4f06381-fe11-443f-a2cd-5f4dd0b39394\" (UID: \"f4f06381-fe11-443f-a2cd-5f4dd0b39394\") "
Dec 06 09:31:03 crc kubenswrapper[4895]: I1206 09:31:03.695796 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f4f06381-fe11-443f-a2cd-5f4dd0b39394-ceph\") pod \"f4f06381-fe11-443f-a2cd-5f4dd0b39394\" (UID: \"f4f06381-fe11-443f-a2cd-5f4dd0b39394\") "
Dec 06 09:31:03 crc kubenswrapper[4895]: I1206 09:31:03.695839 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdr62\" (UniqueName: \"kubernetes.io/projected/f4f06381-fe11-443f-a2cd-5f4dd0b39394-kube-api-access-tdr62\") pod \"f4f06381-fe11-443f-a2cd-5f4dd0b39394\" (UID: \"f4f06381-fe11-443f-a2cd-5f4dd0b39394\") "
Dec 06 09:31:03 crc kubenswrapper[4895]: I1206 09:31:03.696015 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4f06381-fe11-443f-a2cd-5f4dd0b39394-ssh-key\") pod \"f4f06381-fe11-443f-a2cd-5f4dd0b39394\" (UID: \"f4f06381-fe11-443f-a2cd-5f4dd0b39394\") "
Dec 06 09:31:03 crc kubenswrapper[4895]: I1206 09:31:03.703112 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4f06381-fe11-443f-a2cd-5f4dd0b39394-kube-api-access-tdr62" (OuterVolumeSpecName: "kube-api-access-tdr62") pod "f4f06381-fe11-443f-a2cd-5f4dd0b39394" (UID: "f4f06381-fe11-443f-a2cd-5f4dd0b39394"). InnerVolumeSpecName "kube-api-access-tdr62". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:31:03 crc kubenswrapper[4895]: I1206 09:31:03.703605 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4f06381-fe11-443f-a2cd-5f4dd0b39394-ceph" (OuterVolumeSpecName: "ceph") pod "f4f06381-fe11-443f-a2cd-5f4dd0b39394" (UID: "f4f06381-fe11-443f-a2cd-5f4dd0b39394"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:31:03 crc kubenswrapper[4895]: I1206 09:31:03.727677 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4f06381-fe11-443f-a2cd-5f4dd0b39394-inventory" (OuterVolumeSpecName: "inventory") pod "f4f06381-fe11-443f-a2cd-5f4dd0b39394" (UID: "f4f06381-fe11-443f-a2cd-5f4dd0b39394"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:31:03 crc kubenswrapper[4895]: I1206 09:31:03.728571 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4f06381-fe11-443f-a2cd-5f4dd0b39394-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f4f06381-fe11-443f-a2cd-5f4dd0b39394" (UID: "f4f06381-fe11-443f-a2cd-5f4dd0b39394"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:31:03 crc kubenswrapper[4895]: I1206 09:31:03.798843 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f4f06381-fe11-443f-a2cd-5f4dd0b39394-ceph\") on node \"crc\" DevicePath \"\""
Dec 06 09:31:03 crc kubenswrapper[4895]: I1206 09:31:03.798880 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdr62\" (UniqueName: \"kubernetes.io/projected/f4f06381-fe11-443f-a2cd-5f4dd0b39394-kube-api-access-tdr62\") on node \"crc\" DevicePath \"\""
Dec 06 09:31:03 crc kubenswrapper[4895]: I1206 09:31:03.798892 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4f06381-fe11-443f-a2cd-5f4dd0b39394-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 06 09:31:03 crc kubenswrapper[4895]: I1206 09:31:03.798901 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4f06381-fe11-443f-a2cd-5f4dd0b39394-inventory\") on node \"crc\" DevicePath \"\""
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.055906 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-bbj9x"
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.065656 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-bbj9x" event={"ID":"f4f06381-fe11-443f-a2cd-5f4dd0b39394","Type":"ContainerDied","Data":"7d35eda8519d1d41538945386165ec4c6c6a928a79b84d79cbec5319fa0fdbf0"}
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.066003 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d35eda8519d1d41538945386165ec4c6c6a928a79b84d79cbec5319fa0fdbf0"
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.128671 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-tp2cg"]
Dec 06 09:31:04 crc kubenswrapper[4895]: E1206 09:31:04.129162 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f06381-fe11-443f-a2cd-5f4dd0b39394" containerName="validate-network-openstack-openstack-cell1"
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.129182 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f06381-fe11-443f-a2cd-5f4dd0b39394" containerName="validate-network-openstack-openstack-cell1"
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.129377 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4f06381-fe11-443f-a2cd-5f4dd0b39394" containerName="validate-network-openstack-openstack-cell1"
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.130154 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-tp2cg"
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.132576 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.134636 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-wfk68"
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.157039 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-tp2cg"]
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.308689 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/519f3967-5e5a-4330-91f0-95a92fd3de83-inventory\") pod \"install-os-openstack-openstack-cell1-tp2cg\" (UID: \"519f3967-5e5a-4330-91f0-95a92fd3de83\") " pod="openstack/install-os-openstack-openstack-cell1-tp2cg"
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.308767 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w2wp\" (UniqueName: \"kubernetes.io/projected/519f3967-5e5a-4330-91f0-95a92fd3de83-kube-api-access-2w2wp\") pod \"install-os-openstack-openstack-cell1-tp2cg\" (UID: \"519f3967-5e5a-4330-91f0-95a92fd3de83\") " pod="openstack/install-os-openstack-openstack-cell1-tp2cg"
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.308814 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/519f3967-5e5a-4330-91f0-95a92fd3de83-ssh-key\") pod \"install-os-openstack-openstack-cell1-tp2cg\" (UID: \"519f3967-5e5a-4330-91f0-95a92fd3de83\") " pod="openstack/install-os-openstack-openstack-cell1-tp2cg"
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.308903 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/519f3967-5e5a-4330-91f0-95a92fd3de83-ceph\") pod \"install-os-openstack-openstack-cell1-tp2cg\" (UID: \"519f3967-5e5a-4330-91f0-95a92fd3de83\") " pod="openstack/install-os-openstack-openstack-cell1-tp2cg"
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.410812 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/519f3967-5e5a-4330-91f0-95a92fd3de83-inventory\") pod \"install-os-openstack-openstack-cell1-tp2cg\" (UID: \"519f3967-5e5a-4330-91f0-95a92fd3de83\") " pod="openstack/install-os-openstack-openstack-cell1-tp2cg"
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.410951 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w2wp\" (UniqueName: \"kubernetes.io/projected/519f3967-5e5a-4330-91f0-95a92fd3de83-kube-api-access-2w2wp\") pod \"install-os-openstack-openstack-cell1-tp2cg\" (UID: \"519f3967-5e5a-4330-91f0-95a92fd3de83\") " pod="openstack/install-os-openstack-openstack-cell1-tp2cg"
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.411022 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/519f3967-5e5a-4330-91f0-95a92fd3de83-ssh-key\") pod \"install-os-openstack-openstack-cell1-tp2cg\" (UID: \"519f3967-5e5a-4330-91f0-95a92fd3de83\") " pod="openstack/install-os-openstack-openstack-cell1-tp2cg"
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.411166 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/519f3967-5e5a-4330-91f0-95a92fd3de83-ceph\") pod \"install-os-openstack-openstack-cell1-tp2cg\" (UID: \"519f3967-5e5a-4330-91f0-95a92fd3de83\") " pod="openstack/install-os-openstack-openstack-cell1-tp2cg"
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.418146 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/519f3967-5e5a-4330-91f0-95a92fd3de83-ceph\") pod \"install-os-openstack-openstack-cell1-tp2cg\" (UID: \"519f3967-5e5a-4330-91f0-95a92fd3de83\") " pod="openstack/install-os-openstack-openstack-cell1-tp2cg"
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.421540 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/519f3967-5e5a-4330-91f0-95a92fd3de83-inventory\") pod \"install-os-openstack-openstack-cell1-tp2cg\" (UID: \"519f3967-5e5a-4330-91f0-95a92fd3de83\") " pod="openstack/install-os-openstack-openstack-cell1-tp2cg"
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.422174 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/519f3967-5e5a-4330-91f0-95a92fd3de83-ssh-key\") pod \"install-os-openstack-openstack-cell1-tp2cg\" (UID: \"519f3967-5e5a-4330-91f0-95a92fd3de83\") " pod="openstack/install-os-openstack-openstack-cell1-tp2cg"
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.444887 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w2wp\" (UniqueName: \"kubernetes.io/projected/519f3967-5e5a-4330-91f0-95a92fd3de83-kube-api-access-2w2wp\") pod \"install-os-openstack-openstack-cell1-tp2cg\" (UID: \"519f3967-5e5a-4330-91f0-95a92fd3de83\") " pod="openstack/install-os-openstack-openstack-cell1-tp2cg"
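[Editor's illustration] The block above is the kubelet's volume reconciler at work: when a pod is added, each volume in the spec is first verified as attached, then mounted (VerifyControllerAttachedVolume, then MountVolume, then SetUp), and when the pod finishes the same loop unmounts and detaches everything, as in the teardown entries earlier. A toy sketch of that desired-state versus actual-state reconciliation follows; all names are illustrative and only the overall flow mirrors what reconciler_common.go and operation_generator.go log here.

    // reconcile.go -- toy desired-state vs. actual-state volume reconciler.
    package main

    import "fmt"

    func reconcile(desired, mounted map[string]bool) {
    	// Mount anything desired but not yet mounted (cf. MountVolume.SetUp).
    	for vol := range desired {
    		if !mounted[vol] {
    			fmt.Printf("MountVolume started for volume %q\n", vol)
    			mounted[vol] = true
    		}
    	}
    	// Unmount anything mounted but no longer desired (cf. UnmountVolume.TearDown).
    	for vol := range mounted {
    		if !desired[vol] {
    			fmt.Printf("UnmountVolume started for volume %q\n", vol)
    			delete(mounted, vol)
    		}
    	}
    }

    func main() {
    	mounted := map[string]bool{}
    	// Pod added: the secret and projected token volumes named in the log.
    	desired := map[string]bool{
    		"inventory": true, "ssh-key": true, "ceph": true,
    		"kube-api-access-2w2wp": true,
    	}
    	reconcile(desired, mounted)
    	// Pod finished: desired state becomes empty, so everything is torn down.
    	reconcile(map[string]bool{}, mounted)
    }

Running the loop twice, once with the pod's volumes desired and once with an empty desired state, reproduces the mount-then-unmount pattern that repeats for every dataplane job pod in this log.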
Dec 06 09:31:04 crc kubenswrapper[4895]: I1206 09:31:04.458047 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-tp2cg"
Dec 06 09:31:05 crc kubenswrapper[4895]: I1206 09:31:05.029758 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-tp2cg"]
Dec 06 09:31:05 crc kubenswrapper[4895]: I1206 09:31:05.068076 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-tp2cg" event={"ID":"519f3967-5e5a-4330-91f0-95a92fd3de83","Type":"ContainerStarted","Data":"9b1ff4eb9aaebd48ccb5581e9870179bdacffae53757f7cfa3550e87e1b5884f"}
Dec 06 09:31:06 crc kubenswrapper[4895]: I1206 09:31:06.087221 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-tp2cg" event={"ID":"519f3967-5e5a-4330-91f0-95a92fd3de83","Type":"ContainerStarted","Data":"1d3d2eabb66036517dcecb8f685d96e8da822b9e6984e21e2af8389d3a50538f"}
Dec 06 09:31:06 crc kubenswrapper[4895]: I1206 09:31:06.117866 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-tp2cg" podStartSLOduration=1.714996889 podStartE2EDuration="2.117828875s" podCreationTimestamp="2025-12-06 09:31:04 +0000 UTC" firstStartedPulling="2025-12-06 09:31:05.030772127 +0000 UTC m=+9227.432161007" lastFinishedPulling="2025-12-06 09:31:05.433604123 +0000 UTC m=+9227.834992993" observedRunningTime="2025-12-06 09:31:06.111996056 +0000 UTC m=+9228.513384956" watchObservedRunningTime="2025-12-06 09:31:06.117828875 +0000 UTC m=+9228.519217745"
Dec 06 09:31:34 crc kubenswrapper[4895]: I1206 09:31:34.418940 4895 generic.go:334] "Generic (PLEG): container finished" podID="6ba7963e-83f5-4e85-befa-41d58e25787d" containerID="de61a0bcfad7143c0b3064b9954763caa9752b5928bec6d1a948c0b0286a275d" exitCode=0
Dec 06 09:31:34 crc kubenswrapper[4895]: I1206 09:31:34.419013 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-networker-db6d6" event={"ID":"6ba7963e-83f5-4e85-befa-41d58e25787d","Type":"ContainerDied","Data":"de61a0bcfad7143c0b3064b9954763caa9752b5928bec6d1a948c0b0286a275d"}
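[Editor's illustration] The podStartSLOduration fields in the entry above are internally consistent: the SLO duration is the end-to-end startup duration minus the time spent pulling images, with the pull interval taken from the kubelet's monotonic clock (the m=+... offsets). For the install-os pod: pull time = 9227.834992993 − 9227.432161007 = 0.402831986 s, and 2.117828875 − 0.402831986 = 1.714996889 s, exactly the podStartSLOduration reported. A minimal check in Go, using only numbers copied from the log entry:

    // slocheck.go -- verify podStartSLOduration = podStartE2EDuration minus
    // image-pull time, using the monotonic m=+... offsets from the log above.
    package main

    import "fmt"

    func main() {
    	const (
    		firstStartedPulling = 9227.432161007 // m=+ offset, seconds
    		lastFinishedPulling = 9227.834992993
    		e2e                 = 2.117828875 // podStartE2EDuration, seconds
    	)
    	pull := lastFinishedPulling - firstStartedPulling
    	fmt.Printf("pull=%.9f slo=%.9f\n", pull, e2e-pull) // slo=1.714996889, matching the log
    }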
Dec 06 09:31:35 crc kubenswrapper[4895]: I1206 09:31:35.857193 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-networker-db6d6"
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.043054 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ba7963e-83f5-4e85-befa-41d58e25787d-ssh-key\") pod \"6ba7963e-83f5-4e85-befa-41d58e25787d\" (UID: \"6ba7963e-83f5-4e85-befa-41d58e25787d\") "
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.043422 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ba7963e-83f5-4e85-befa-41d58e25787d-inventory\") pod \"6ba7963e-83f5-4e85-befa-41d58e25787d\" (UID: \"6ba7963e-83f5-4e85-befa-41d58e25787d\") "
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.043494 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsq96\" (UniqueName: \"kubernetes.io/projected/6ba7963e-83f5-4e85-befa-41d58e25787d-kube-api-access-jsq96\") pod \"6ba7963e-83f5-4e85-befa-41d58e25787d\" (UID: \"6ba7963e-83f5-4e85-befa-41d58e25787d\") "
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.049642 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ba7963e-83f5-4e85-befa-41d58e25787d-kube-api-access-jsq96" (OuterVolumeSpecName: "kube-api-access-jsq96") pod "6ba7963e-83f5-4e85-befa-41d58e25787d" (UID: "6ba7963e-83f5-4e85-befa-41d58e25787d"). InnerVolumeSpecName "kube-api-access-jsq96". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.069170 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba7963e-83f5-4e85-befa-41d58e25787d-inventory" (OuterVolumeSpecName: "inventory") pod "6ba7963e-83f5-4e85-befa-41d58e25787d" (UID: "6ba7963e-83f5-4e85-befa-41d58e25787d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.070930 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba7963e-83f5-4e85-befa-41d58e25787d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6ba7963e-83f5-4e85-befa-41d58e25787d" (UID: "6ba7963e-83f5-4e85-befa-41d58e25787d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.146827 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ba7963e-83f5-4e85-befa-41d58e25787d-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.146864 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ba7963e-83f5-4e85-befa-41d58e25787d-inventory\") on node \"crc\" DevicePath \"\""
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.146876 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsq96\" (UniqueName: \"kubernetes.io/projected/6ba7963e-83f5-4e85-befa-41d58e25787d-kube-api-access-jsq96\") on node \"crc\" DevicePath \"\""
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.443299 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-networker-db6d6" event={"ID":"6ba7963e-83f5-4e85-befa-41d58e25787d","Type":"ContainerDied","Data":"7a52c1bb8026af71e3de330d8f87389ab096ce046505a248c0ebdc9ddff310a6"}
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.443728 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a52c1bb8026af71e3de330d8f87389ab096ce046505a248c0ebdc9ddff310a6"
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.443362 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-networker-db6d6"
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.544222 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-networker-l8jm8"]
Dec 06 09:31:36 crc kubenswrapper[4895]: E1206 09:31:36.546893 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba7963e-83f5-4e85-befa-41d58e25787d" containerName="install-os-openstack-openstack-networker"
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.546916 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba7963e-83f5-4e85-befa-41d58e25787d" containerName="install-os-openstack-openstack-networker"
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.547123 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ba7963e-83f5-4e85-befa-41d58e25787d" containerName="install-os-openstack-openstack-networker"
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.547876 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-networker-l8jm8"
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.551991 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker"
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.552626 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-vvkpz"
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.556177 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98hjd\" (UniqueName: \"kubernetes.io/projected/9107701d-0557-4444-8997-8c70c8879415-kube-api-access-98hjd\") pod \"configure-os-openstack-openstack-networker-l8jm8\" (UID: \"9107701d-0557-4444-8997-8c70c8879415\") " pod="openstack/configure-os-openstack-openstack-networker-l8jm8"
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.556301 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9107701d-0557-4444-8997-8c70c8879415-ssh-key\") pod \"configure-os-openstack-openstack-networker-l8jm8\" (UID: \"9107701d-0557-4444-8997-8c70c8879415\") " pod="openstack/configure-os-openstack-openstack-networker-l8jm8"
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.556366 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9107701d-0557-4444-8997-8c70c8879415-inventory\") pod \"configure-os-openstack-openstack-networker-l8jm8\" (UID: \"9107701d-0557-4444-8997-8c70c8879415\") " pod="openstack/configure-os-openstack-openstack-networker-l8jm8"
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.563409 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-networker-l8jm8"]
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.659672 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9107701d-0557-4444-8997-8c70c8879415-ssh-key\") pod \"configure-os-openstack-openstack-networker-l8jm8\" (UID: \"9107701d-0557-4444-8997-8c70c8879415\") " pod="openstack/configure-os-openstack-openstack-networker-l8jm8"
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.659753 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9107701d-0557-4444-8997-8c70c8879415-inventory\") pod \"configure-os-openstack-openstack-networker-l8jm8\" (UID: \"9107701d-0557-4444-8997-8c70c8879415\") " pod="openstack/configure-os-openstack-openstack-networker-l8jm8"
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.659891 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98hjd\" (UniqueName: \"kubernetes.io/projected/9107701d-0557-4444-8997-8c70c8879415-kube-api-access-98hjd\") pod \"configure-os-openstack-openstack-networker-l8jm8\" (UID: \"9107701d-0557-4444-8997-8c70c8879415\") " pod="openstack/configure-os-openstack-openstack-networker-l8jm8"
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.672667 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9107701d-0557-4444-8997-8c70c8879415-inventory\") pod \"configure-os-openstack-openstack-networker-l8jm8\" (UID: \"9107701d-0557-4444-8997-8c70c8879415\") " pod="openstack/configure-os-openstack-openstack-networker-l8jm8"
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.679920 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9107701d-0557-4444-8997-8c70c8879415-ssh-key\") pod \"configure-os-openstack-openstack-networker-l8jm8\" (UID: \"9107701d-0557-4444-8997-8c70c8879415\") " pod="openstack/configure-os-openstack-openstack-networker-l8jm8"
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.681378 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98hjd\" (UniqueName: \"kubernetes.io/projected/9107701d-0557-4444-8997-8c70c8879415-kube-api-access-98hjd\") pod \"configure-os-openstack-openstack-networker-l8jm8\" (UID: \"9107701d-0557-4444-8997-8c70c8879415\") " pod="openstack/configure-os-openstack-openstack-networker-l8jm8"
Dec 06 09:31:36 crc kubenswrapper[4895]: I1206 09:31:36.882624 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-networker-l8jm8"
Dec 06 09:31:37 crc kubenswrapper[4895]: I1206 09:31:37.463671 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-networker-l8jm8"]
Dec 06 09:31:38 crc kubenswrapper[4895]: I1206 09:31:38.464519 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-networker-l8jm8" event={"ID":"9107701d-0557-4444-8997-8c70c8879415","Type":"ContainerStarted","Data":"8a6658bd22f30374041cb8dd2953b3c4966faad7f8a00efe009a44932e7afed4"}
Dec 06 09:31:38 crc kubenswrapper[4895]: I1206 09:31:38.465101 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-networker-l8jm8" event={"ID":"9107701d-0557-4444-8997-8c70c8879415","Type":"ContainerStarted","Data":"6dddd4e17f3fa4c051a1f2dcb51f3a4de73c915c985ba494356be94980fe157a"}
Dec 06 09:31:38 crc kubenswrapper[4895]: I1206 09:31:38.491207 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-networker-l8jm8" podStartSLOduration=2.09077429 podStartE2EDuration="2.491177559s" podCreationTimestamp="2025-12-06 09:31:36 +0000 UTC" firstStartedPulling="2025-12-06 09:31:37.471530792 +0000 UTC m=+9259.872919682" lastFinishedPulling="2025-12-06 09:31:37.871934081 +0000 UTC m=+9260.273322951" observedRunningTime="2025-12-06 09:31:38.481894786 +0000 UTC m=+9260.883283676" watchObservedRunningTime="2025-12-06 09:31:38.491177559 +0000 UTC m=+9260.892566449"
Dec 06 09:31:56 crc kubenswrapper[4895]: I1206 09:31:56.676416 4895 generic.go:334] "Generic (PLEG): container finished" podID="519f3967-5e5a-4330-91f0-95a92fd3de83" containerID="1d3d2eabb66036517dcecb8f685d96e8da822b9e6984e21e2af8389d3a50538f" exitCode=0
Dec 06 09:31:56 crc kubenswrapper[4895]: I1206 09:31:56.676518 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-tp2cg" event={"ID":"519f3967-5e5a-4330-91f0-95a92fd3de83","Type":"ContainerDied","Data":"1d3d2eabb66036517dcecb8f685d96e8da822b9e6984e21e2af8389d3a50538f"}
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.149040 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-tp2cg"
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.208015 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/519f3967-5e5a-4330-91f0-95a92fd3de83-ssh-key\") pod \"519f3967-5e5a-4330-91f0-95a92fd3de83\" (UID: \"519f3967-5e5a-4330-91f0-95a92fd3de83\") "
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.208097 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w2wp\" (UniqueName: \"kubernetes.io/projected/519f3967-5e5a-4330-91f0-95a92fd3de83-kube-api-access-2w2wp\") pod \"519f3967-5e5a-4330-91f0-95a92fd3de83\" (UID: \"519f3967-5e5a-4330-91f0-95a92fd3de83\") "
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.208228 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/519f3967-5e5a-4330-91f0-95a92fd3de83-ceph\") pod \"519f3967-5e5a-4330-91f0-95a92fd3de83\" (UID: \"519f3967-5e5a-4330-91f0-95a92fd3de83\") "
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.208308 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/519f3967-5e5a-4330-91f0-95a92fd3de83-inventory\") pod \"519f3967-5e5a-4330-91f0-95a92fd3de83\" (UID: \"519f3967-5e5a-4330-91f0-95a92fd3de83\") "
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.214101 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/519f3967-5e5a-4330-91f0-95a92fd3de83-kube-api-access-2w2wp" (OuterVolumeSpecName: "kube-api-access-2w2wp") pod "519f3967-5e5a-4330-91f0-95a92fd3de83" (UID: "519f3967-5e5a-4330-91f0-95a92fd3de83"). InnerVolumeSpecName "kube-api-access-2w2wp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.216194 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/519f3967-5e5a-4330-91f0-95a92fd3de83-ceph" (OuterVolumeSpecName: "ceph") pod "519f3967-5e5a-4330-91f0-95a92fd3de83" (UID: "519f3967-5e5a-4330-91f0-95a92fd3de83"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.239966 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/519f3967-5e5a-4330-91f0-95a92fd3de83-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "519f3967-5e5a-4330-91f0-95a92fd3de83" (UID: "519f3967-5e5a-4330-91f0-95a92fd3de83"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.246344 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/519f3967-5e5a-4330-91f0-95a92fd3de83-inventory" (OuterVolumeSpecName: "inventory") pod "519f3967-5e5a-4330-91f0-95a92fd3de83" (UID: "519f3967-5e5a-4330-91f0-95a92fd3de83"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.311539 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/519f3967-5e5a-4330-91f0-95a92fd3de83-ceph\") on node \"crc\" DevicePath \"\""
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.311588 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/519f3967-5e5a-4330-91f0-95a92fd3de83-inventory\") on node \"crc\" DevicePath \"\""
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.311606 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/519f3967-5e5a-4330-91f0-95a92fd3de83-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.311625 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w2wp\" (UniqueName: \"kubernetes.io/projected/519f3967-5e5a-4330-91f0-95a92fd3de83-kube-api-access-2w2wp\") on node \"crc\" DevicePath \"\""
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.713174 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-tp2cg" event={"ID":"519f3967-5e5a-4330-91f0-95a92fd3de83","Type":"ContainerDied","Data":"9b1ff4eb9aaebd48ccb5581e9870179bdacffae53757f7cfa3550e87e1b5884f"}
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.713240 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b1ff4eb9aaebd48ccb5581e9870179bdacffae53757f7cfa3550e87e1b5884f"
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.713239 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-tp2cg"
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.792097 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-v76sx"]
Dec 06 09:31:58 crc kubenswrapper[4895]: E1206 09:31:58.792966 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="519f3967-5e5a-4330-91f0-95a92fd3de83" containerName="install-os-openstack-openstack-cell1"
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.792993 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="519f3967-5e5a-4330-91f0-95a92fd3de83" containerName="install-os-openstack-openstack-cell1"
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.793253 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="519f3967-5e5a-4330-91f0-95a92fd3de83" containerName="install-os-openstack-openstack-cell1"
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.794217 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-v76sx"
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.796442 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-wfk68"
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.796923 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.815506 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-v76sx"]
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.924197 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-inventory\") pod \"configure-os-openstack-openstack-cell1-v76sx\" (UID: \"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf\") " pod="openstack/configure-os-openstack-openstack-cell1-v76sx"
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.924460 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fjc7\" (UniqueName: \"kubernetes.io/projected/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-kube-api-access-8fjc7\") pod \"configure-os-openstack-openstack-cell1-v76sx\" (UID: \"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf\") " pod="openstack/configure-os-openstack-openstack-cell1-v76sx"
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.924598 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-ceph\") pod \"configure-os-openstack-openstack-cell1-v76sx\" (UID: \"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf\") " pod="openstack/configure-os-openstack-openstack-cell1-v76sx"
Dec 06 09:31:58 crc kubenswrapper[4895]: I1206 09:31:58.924680 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-ssh-key\") pod \"configure-os-openstack-openstack-cell1-v76sx\" (UID: \"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf\") " pod="openstack/configure-os-openstack-openstack-cell1-v76sx"
Dec 06 09:31:59 crc kubenswrapper[4895]: I1206 09:31:59.026813 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fjc7\" (UniqueName: \"kubernetes.io/projected/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-kube-api-access-8fjc7\") pod \"configure-os-openstack-openstack-cell1-v76sx\" (UID: \"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf\") " pod="openstack/configure-os-openstack-openstack-cell1-v76sx"
Dec 06 09:31:59 crc kubenswrapper[4895]: I1206 09:31:59.027060 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-ceph\") pod \"configure-os-openstack-openstack-cell1-v76sx\" (UID: \"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf\") " pod="openstack/configure-os-openstack-openstack-cell1-v76sx"
Dec 06 09:31:59 crc kubenswrapper[4895]: I1206 09:31:59.027094 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-ssh-key\") pod \"configure-os-openstack-openstack-cell1-v76sx\" (UID: \"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf\") " pod="openstack/configure-os-openstack-openstack-cell1-v76sx"
Dec 06 09:31:59 crc kubenswrapper[4895]: I1206 09:31:59.027127 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-inventory\") pod \"configure-os-openstack-openstack-cell1-v76sx\" (UID: \"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf\") " pod="openstack/configure-os-openstack-openstack-cell1-v76sx"
Dec 06 09:31:59 crc kubenswrapper[4895]: I1206 09:31:59.030657 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-inventory\") pod \"configure-os-openstack-openstack-cell1-v76sx\" (UID: \"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf\") " pod="openstack/configure-os-openstack-openstack-cell1-v76sx"
Dec 06 09:31:59 crc kubenswrapper[4895]: I1206 09:31:59.030986 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-ceph\") pod \"configure-os-openstack-openstack-cell1-v76sx\" (UID: \"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf\") " pod="openstack/configure-os-openstack-openstack-cell1-v76sx"
Dec 06 09:31:59 crc kubenswrapper[4895]: I1206 09:31:59.038914 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-ssh-key\") pod \"configure-os-openstack-openstack-cell1-v76sx\" (UID: \"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf\") " pod="openstack/configure-os-openstack-openstack-cell1-v76sx"
Dec 06 09:31:59 crc kubenswrapper[4895]: I1206 09:31:59.043588 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fjc7\" (UniqueName: \"kubernetes.io/projected/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-kube-api-access-8fjc7\") pod \"configure-os-openstack-openstack-cell1-v76sx\" (UID: \"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf\") " pod="openstack/configure-os-openstack-openstack-cell1-v76sx"
Dec 06 09:31:59 crc kubenswrapper[4895]: I1206 09:31:59.121199 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-v76sx"
Dec 06 09:31:59 crc kubenswrapper[4895]: I1206 09:31:59.654101 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-v76sx"]
Dec 06 09:31:59 crc kubenswrapper[4895]: W1206 09:31:59.657668 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b7e3a78_9b6a_4b72_a42d_7226a4590eaf.slice/crio-ee5e0e8137ba621865c3eec818a5b1ea7bd38bd097e0adbe2de4c1c9394657f9 WatchSource:0}: Error finding container ee5e0e8137ba621865c3eec818a5b1ea7bd38bd097e0adbe2de4c1c9394657f9: Status 404 returned error can't find the container with id ee5e0e8137ba621865c3eec818a5b1ea7bd38bd097e0adbe2de4c1c9394657f9
Dec 06 09:31:59 crc kubenswrapper[4895]: I1206 09:31:59.723732 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-v76sx" event={"ID":"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf","Type":"ContainerStarted","Data":"ee5e0e8137ba621865c3eec818a5b1ea7bd38bd097e0adbe2de4c1c9394657f9"}
Dec 06 09:32:01 crc kubenswrapper[4895]: I1206 09:32:01.746560 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-v76sx" event={"ID":"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf","Type":"ContainerStarted","Data":"803b60ad756c9cd877c6c17ef309294f08726b4f571c63c6d1b73098feefdc86"}
Dec 06 09:32:01 crc kubenswrapper[4895]: I1206 09:32:01.771052 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-v76sx" podStartSLOduration=3.29770386 podStartE2EDuration="3.771035151s" podCreationTimestamp="2025-12-06 09:31:58 +0000 UTC" firstStartedPulling="2025-12-06 09:31:59.659890066 +0000 UTC m=+9282.061278936" lastFinishedPulling="2025-12-06 09:32:00.133221367 +0000 UTC m=+9282.534610227" observedRunningTime="2025-12-06 09:32:01.767366242 +0000 UTC m=+9284.168755112" watchObservedRunningTime="2025-12-06 09:32:01.771035151 +0000 UTC m=+9284.172424021"
Dec 06 09:32:31 crc kubenswrapper[4895]: I1206 09:32:31.218177 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-spzl9"]
Dec 06 09:32:31 crc kubenswrapper[4895]: I1206 09:32:31.222120 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spzl9"
Dec 06 09:32:31 crc kubenswrapper[4895]: I1206 09:32:31.247683 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-spzl9"]
Dec 06 09:32:31 crc kubenswrapper[4895]: I1206 09:32:31.282169 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n8f6\" (UniqueName: \"kubernetes.io/projected/231d1dd1-a185-4ecf-85e7-33db1dc5d2d3-kube-api-access-7n8f6\") pod \"redhat-marketplace-spzl9\" (UID: \"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3\") " pod="openshift-marketplace/redhat-marketplace-spzl9"
Dec 06 09:32:31 crc kubenswrapper[4895]: I1206 09:32:31.282301 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/231d1dd1-a185-4ecf-85e7-33db1dc5d2d3-catalog-content\") pod \"redhat-marketplace-spzl9\" (UID: \"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3\") " pod="openshift-marketplace/redhat-marketplace-spzl9"
Dec 06 09:32:31 crc kubenswrapper[4895]: I1206 09:32:31.282405 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/231d1dd1-a185-4ecf-85e7-33db1dc5d2d3-utilities\") pod \"redhat-marketplace-spzl9\" (UID: \"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3\") " pod="openshift-marketplace/redhat-marketplace-spzl9"
Dec 06 09:32:31 crc kubenswrapper[4895]: I1206 09:32:31.384512 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/231d1dd1-a185-4ecf-85e7-33db1dc5d2d3-utilities\") pod \"redhat-marketplace-spzl9\" (UID: \"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3\") " pod="openshift-marketplace/redhat-marketplace-spzl9"
Dec 06 09:32:31 crc kubenswrapper[4895]: I1206 09:32:31.384810 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n8f6\" (UniqueName: \"kubernetes.io/projected/231d1dd1-a185-4ecf-85e7-33db1dc5d2d3-kube-api-access-7n8f6\") pod \"redhat-marketplace-spzl9\" (UID: \"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3\") " pod="openshift-marketplace/redhat-marketplace-spzl9"
Dec 06 09:32:31 crc kubenswrapper[4895]: I1206 09:32:31.384941 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/231d1dd1-a185-4ecf-85e7-33db1dc5d2d3-catalog-content\") pod \"redhat-marketplace-spzl9\" (UID: \"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3\") " pod="openshift-marketplace/redhat-marketplace-spzl9"
Dec 06 09:32:31 crc kubenswrapper[4895]: I1206 09:32:31.385157 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/231d1dd1-a185-4ecf-85e7-33db1dc5d2d3-utilities\") pod \"redhat-marketplace-spzl9\" (UID: \"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3\") " pod="openshift-marketplace/redhat-marketplace-spzl9"
Dec 06 09:32:31 crc kubenswrapper[4895]: I1206 09:32:31.385659 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/231d1dd1-a185-4ecf-85e7-33db1dc5d2d3-catalog-content\") pod \"redhat-marketplace-spzl9\" (UID: \"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3\") " pod="openshift-marketplace/redhat-marketplace-spzl9"
Dec 06 09:32:31 crc kubenswrapper[4895]: I1206 09:32:31.405072 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n8f6\" (UniqueName: \"kubernetes.io/projected/231d1dd1-a185-4ecf-85e7-33db1dc5d2d3-kube-api-access-7n8f6\") pod \"redhat-marketplace-spzl9\" (UID: \"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3\") " pod="openshift-marketplace/redhat-marketplace-spzl9"
Dec 06 09:32:31 crc kubenswrapper[4895]: I1206 09:32:31.551488 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spzl9"
Dec 06 09:32:32 crc kubenswrapper[4895]: I1206 09:32:32.127947 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-spzl9"]
Dec 06 09:32:33 crc kubenswrapper[4895]: I1206 09:32:33.097944 4895 generic.go:334] "Generic (PLEG): container finished" podID="231d1dd1-a185-4ecf-85e7-33db1dc5d2d3" containerID="ec187226b9114b0bf4da4ef4c7238dedf624485255dae71f371c400f3d57b34c" exitCode=0
Dec 06 09:32:33 crc kubenswrapper[4895]: I1206 09:32:33.098044 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spzl9" event={"ID":"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3","Type":"ContainerDied","Data":"ec187226b9114b0bf4da4ef4c7238dedf624485255dae71f371c400f3d57b34c"}
Dec 06 09:32:33 crc kubenswrapper[4895]: I1206 09:32:33.098670 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spzl9" event={"ID":"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3","Type":"ContainerStarted","Data":"86c3d0d0272e8f09305b96102a2106adff45424a0b2d40c586154474debe7dc5"}
Dec 06 09:32:33 crc kubenswrapper[4895]: I1206 09:32:33.101131 4895 generic.go:334] "Generic (PLEG): container finished" podID="9107701d-0557-4444-8997-8c70c8879415" containerID="8a6658bd22f30374041cb8dd2953b3c4966faad7f8a00efe009a44932e7afed4" exitCode=0
Dec 06 09:32:33 crc kubenswrapper[4895]: I1206 09:32:33.101184 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-networker-l8jm8" event={"ID":"9107701d-0557-4444-8997-8c70c8879415","Type":"ContainerDied","Data":"8a6658bd22f30374041cb8dd2953b3c4966faad7f8a00efe009a44932e7afed4"}
Dec 06 09:32:34 crc kubenswrapper[4895]: I1206 09:32:34.624561 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-networker-l8jm8"
Dec 06 09:32:34 crc kubenswrapper[4895]: I1206 09:32:34.757441 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9107701d-0557-4444-8997-8c70c8879415-inventory\") pod \"9107701d-0557-4444-8997-8c70c8879415\" (UID: \"9107701d-0557-4444-8997-8c70c8879415\") "
Dec 06 09:32:34 crc kubenswrapper[4895]: I1206 09:32:34.757616 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9107701d-0557-4444-8997-8c70c8879415-ssh-key\") pod \"9107701d-0557-4444-8997-8c70c8879415\" (UID: \"9107701d-0557-4444-8997-8c70c8879415\") "
Dec 06 09:32:34 crc kubenswrapper[4895]: I1206 09:32:34.757755 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98hjd\" (UniqueName: \"kubernetes.io/projected/9107701d-0557-4444-8997-8c70c8879415-kube-api-access-98hjd\") pod \"9107701d-0557-4444-8997-8c70c8879415\" (UID: \"9107701d-0557-4444-8997-8c70c8879415\") "
Dec 06 09:32:34 crc kubenswrapper[4895]: I1206 09:32:34.763215 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9107701d-0557-4444-8997-8c70c8879415-kube-api-access-98hjd" (OuterVolumeSpecName: "kube-api-access-98hjd") pod "9107701d-0557-4444-8997-8c70c8879415" (UID: "9107701d-0557-4444-8997-8c70c8879415"). InnerVolumeSpecName "kube-api-access-98hjd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:32:34 crc kubenswrapper[4895]: I1206 09:32:34.787047 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9107701d-0557-4444-8997-8c70c8879415-inventory" (OuterVolumeSpecName: "inventory") pod "9107701d-0557-4444-8997-8c70c8879415" (UID: "9107701d-0557-4444-8997-8c70c8879415"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:32:34 crc kubenswrapper[4895]: I1206 09:32:34.799579 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9107701d-0557-4444-8997-8c70c8879415-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9107701d-0557-4444-8997-8c70c8879415" (UID: "9107701d-0557-4444-8997-8c70c8879415"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:32:34 crc kubenswrapper[4895]: I1206 09:32:34.860195 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9107701d-0557-4444-8997-8c70c8879415-inventory\") on node \"crc\" DevicePath \"\""
Dec 06 09:32:34 crc kubenswrapper[4895]: I1206 09:32:34.860228 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9107701d-0557-4444-8997-8c70c8879415-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 06 09:32:34 crc kubenswrapper[4895]: I1206 09:32:34.860240 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98hjd\" (UniqueName: \"kubernetes.io/projected/9107701d-0557-4444-8997-8c70c8879415-kube-api-access-98hjd\") on node \"crc\" DevicePath \"\""
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.121190 4895 generic.go:334] "Generic (PLEG): container finished" podID="231d1dd1-a185-4ecf-85e7-33db1dc5d2d3" containerID="4ac76dab0483a2635355e7086560d5e09ab48f22f6431d11846b79fe60d7b87e" exitCode=0
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.121262 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spzl9" event={"ID":"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3","Type":"ContainerDied","Data":"4ac76dab0483a2635355e7086560d5e09ab48f22f6431d11846b79fe60d7b87e"}
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.136577 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-networker-l8jm8" event={"ID":"9107701d-0557-4444-8997-8c70c8879415","Type":"ContainerDied","Data":"6dddd4e17f3fa4c051a1f2dcb51f3a4de73c915c985ba494356be94980fe157a"}
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.136618 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dddd4e17f3fa4c051a1f2dcb51f3a4de73c915c985ba494356be94980fe157a"
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.136674 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-networker-l8jm8"
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.234417 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-networker-5sd8x"]
Dec 06 09:32:35 crc kubenswrapper[4895]: E1206 09:32:35.234989 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9107701d-0557-4444-8997-8c70c8879415" containerName="configure-os-openstack-openstack-networker"
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.235014 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9107701d-0557-4444-8997-8c70c8879415" containerName="configure-os-openstack-openstack-networker"
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.235255 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="9107701d-0557-4444-8997-8c70c8879415" containerName="configure-os-openstack-openstack-networker"
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.236208 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-networker-5sd8x"
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.238923 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-vvkpz"
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.240909 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker"
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.247576 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-networker-5sd8x"]
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.268953 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/caf73cd5-5852-49e3-b5b0-4330d89bb2e3-ssh-key\") pod \"run-os-openstack-openstack-networker-5sd8x\" (UID: \"caf73cd5-5852-49e3-b5b0-4330d89bb2e3\") " pod="openstack/run-os-openstack-openstack-networker-5sd8x"
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.269204 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndltn\" (UniqueName: \"kubernetes.io/projected/caf73cd5-5852-49e3-b5b0-4330d89bb2e3-kube-api-access-ndltn\") pod \"run-os-openstack-openstack-networker-5sd8x\" (UID: \"caf73cd5-5852-49e3-b5b0-4330d89bb2e3\") " pod="openstack/run-os-openstack-openstack-networker-5sd8x"
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.269362 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/caf73cd5-5852-49e3-b5b0-4330d89bb2e3-inventory\") pod \"run-os-openstack-openstack-networker-5sd8x\" (UID: \"caf73cd5-5852-49e3-b5b0-4330d89bb2e3\") " pod="openstack/run-os-openstack-openstack-networker-5sd8x"
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.371705 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/caf73cd5-5852-49e3-b5b0-4330d89bb2e3-ssh-key\") pod \"run-os-openstack-openstack-networker-5sd8x\" (UID: \"caf73cd5-5852-49e3-b5b0-4330d89bb2e3\") " pod="openstack/run-os-openstack-openstack-networker-5sd8x"
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.371774 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndltn\" (UniqueName: \"kubernetes.io/projected/caf73cd5-5852-49e3-b5b0-4330d89bb2e3-kube-api-access-ndltn\") pod \"run-os-openstack-openstack-networker-5sd8x\" (UID: \"caf73cd5-5852-49e3-b5b0-4330d89bb2e3\") " pod="openstack/run-os-openstack-openstack-networker-5sd8x"
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.371882 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/caf73cd5-5852-49e3-b5b0-4330d89bb2e3-inventory\") pod \"run-os-openstack-openstack-networker-5sd8x\" (UID: \"caf73cd5-5852-49e3-b5b0-4330d89bb2e3\") " pod="openstack/run-os-openstack-openstack-networker-5sd8x"
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.377268 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/caf73cd5-5852-49e3-b5b0-4330d89bb2e3-inventory\") pod \"run-os-openstack-openstack-networker-5sd8x\" (UID: \"caf73cd5-5852-49e3-b5b0-4330d89bb2e3\") " pod="openstack/run-os-openstack-openstack-networker-5sd8x"
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.377278 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/caf73cd5-5852-49e3-b5b0-4330d89bb2e3-ssh-key\") pod \"run-os-openstack-openstack-networker-5sd8x\" (UID: \"caf73cd5-5852-49e3-b5b0-4330d89bb2e3\") " pod="openstack/run-os-openstack-openstack-networker-5sd8x"
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.403688 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndltn\" (UniqueName: \"kubernetes.io/projected/caf73cd5-5852-49e3-b5b0-4330d89bb2e3-kube-api-access-ndltn\") pod \"run-os-openstack-openstack-networker-5sd8x\" (UID: \"caf73cd5-5852-49e3-b5b0-4330d89bb2e3\") " pod="openstack/run-os-openstack-openstack-networker-5sd8x"
Dec 06 09:32:35 crc kubenswrapper[4895]: I1206 09:32:35.565685 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-networker-5sd8x"
Dec 06 09:32:36 crc kubenswrapper[4895]: I1206 09:32:36.346296 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-networker-5sd8x"]
Dec 06 09:32:37 crc kubenswrapper[4895]: I1206 09:32:37.157002 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-networker-5sd8x" event={"ID":"caf73cd5-5852-49e3-b5b0-4330d89bb2e3","Type":"ContainerStarted","Data":"4d6591d6d25f532d1adee8d7cd6c22917f6c6dcd1ffbb10879e440ad451bcb0b"}
Dec 06 09:32:37 crc kubenswrapper[4895]: I1206 09:32:37.161368 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spzl9" event={"ID":"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3","Type":"ContainerStarted","Data":"7ea0a5fd73b6e5f03b823233b3adf26b096639a47671108b6d5bd0ba236af814"}
Dec 06 09:32:37 crc kubenswrapper[4895]: I1206 09:32:37.187693 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-spzl9" podStartSLOduration=3.73016141 podStartE2EDuration="6.187669548s" podCreationTimestamp="2025-12-06 09:32:31 +0000 UTC" firstStartedPulling="2025-12-06 09:32:33.099937972 +0000 UTC m=+9315.501326852" lastFinishedPulling="2025-12-06 09:32:35.55744611 +0000 UTC m=+9317.958834990" observedRunningTime="2025-12-06 09:32:37.183173435 +0000 UTC m=+9319.584562305" watchObservedRunningTime="2025-12-06 09:32:37.187669548 +0000 UTC m=+9319.589058438"
Dec 06 09:32:38 crc kubenswrapper[4895]: I1206 09:32:38.175997 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-networker-5sd8x" event={"ID":"caf73cd5-5852-49e3-b5b0-4330d89bb2e3","Type":"ContainerStarted","Data":"9465be4b2fe48d13c48073d507ff887ddc48f8699f664d6b4ed5f202171b9095"}
Dec 06 09:32:41 crc kubenswrapper[4895]: I1206 09:32:41.552123 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-spzl9"
Dec 06 09:32:41 crc kubenswrapper[4895]: I1206 09:32:41.552749 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-spzl9"
Dec 06 09:32:41 crc kubenswrapper[4895]: I1206 09:32:41.620762 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-spzl9"
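[Editor's illustration] The three probe entries above show startup-probe gating: while the startup probe is still failing, the readiness probe reports the empty status="" because it is not yet being evaluated; only after the startup probe flips to "started" does readiness get checked (and it reports "ready" shortly after, below). A toy state machine for that ordering, with hypothetical names and no claim to mirror the kubelet's internals:

    // probegate.go -- illustrative: readiness is not evaluated until the
    // startup probe has succeeded, matching the status="" entry in the log.
    package main

    import "fmt"

    type container struct{ started bool }

    func evalProbes(c *container, startupOK, readinessOK bool) string {
    	if !c.started {
    		if !startupOK {
    			return "startup unhealthy" // readiness stays unevaluated: status=""
    		}
    		c.started = true
    		return "startup started"
    	}
    	if readinessOK {
    		return "readiness ready"
    	}
    	return "readiness not ready"
    }

    func main() {
    	c := &container{}
    	fmt.Println(evalProbes(c, false, false)) // startup unhealthy
    	fmt.Println(evalProbes(c, true, false))  // startup started
    	fmt.Println(evalProbes(c, true, true))   // readiness ready
    }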
pod="openstack/run-os-openstack-openstack-networker-5sd8x" podStartSLOduration=6.09734765 podStartE2EDuration="6.657457952s" podCreationTimestamp="2025-12-06 09:32:35 +0000 UTC" firstStartedPulling="2025-12-06 09:32:36.352186353 +0000 UTC m=+9318.753575233" lastFinishedPulling="2025-12-06 09:32:36.912296665 +0000 UTC m=+9319.313685535" observedRunningTime="2025-12-06 09:32:38.203665545 +0000 UTC m=+9320.605054415" watchObservedRunningTime="2025-12-06 09:32:41.657457952 +0000 UTC m=+9324.058846822" Dec 06 09:32:42 crc kubenswrapper[4895]: I1206 09:32:42.262140 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-spzl9" Dec 06 09:32:42 crc kubenswrapper[4895]: I1206 09:32:42.315927 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-spzl9"] Dec 06 09:32:44 crc kubenswrapper[4895]: I1206 09:32:44.246242 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-spzl9" podUID="231d1dd1-a185-4ecf-85e7-33db1dc5d2d3" containerName="registry-server" containerID="cri-o://7ea0a5fd73b6e5f03b823233b3adf26b096639a47671108b6d5bd0ba236af814" gracePeriod=2 Dec 06 09:32:45 crc kubenswrapper[4895]: I1206 09:32:45.264208 4895 generic.go:334] "Generic (PLEG): container finished" podID="231d1dd1-a185-4ecf-85e7-33db1dc5d2d3" containerID="7ea0a5fd73b6e5f03b823233b3adf26b096639a47671108b6d5bd0ba236af814" exitCode=0 Dec 06 09:32:45 crc kubenswrapper[4895]: I1206 09:32:45.264724 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spzl9" event={"ID":"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3","Type":"ContainerDied","Data":"7ea0a5fd73b6e5f03b823233b3adf26b096639a47671108b6d5bd0ba236af814"} Dec 06 09:32:45 crc kubenswrapper[4895]: I1206 09:32:45.508816 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spzl9" Dec 06 09:32:45 crc kubenswrapper[4895]: I1206 09:32:45.605691 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/231d1dd1-a185-4ecf-85e7-33db1dc5d2d3-catalog-content\") pod \"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3\" (UID: \"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3\") " Dec 06 09:32:45 crc kubenswrapper[4895]: I1206 09:32:45.605765 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n8f6\" (UniqueName: \"kubernetes.io/projected/231d1dd1-a185-4ecf-85e7-33db1dc5d2d3-kube-api-access-7n8f6\") pod \"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3\" (UID: \"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3\") " Dec 06 09:32:45 crc kubenswrapper[4895]: I1206 09:32:45.605892 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/231d1dd1-a185-4ecf-85e7-33db1dc5d2d3-utilities\") pod \"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3\" (UID: \"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3\") " Dec 06 09:32:45 crc kubenswrapper[4895]: I1206 09:32:45.606807 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/231d1dd1-a185-4ecf-85e7-33db1dc5d2d3-utilities" (OuterVolumeSpecName: "utilities") pod "231d1dd1-a185-4ecf-85e7-33db1dc5d2d3" (UID: "231d1dd1-a185-4ecf-85e7-33db1dc5d2d3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:32:45 crc kubenswrapper[4895]: I1206 09:32:45.612497 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/231d1dd1-a185-4ecf-85e7-33db1dc5d2d3-kube-api-access-7n8f6" (OuterVolumeSpecName: "kube-api-access-7n8f6") pod "231d1dd1-a185-4ecf-85e7-33db1dc5d2d3" (UID: "231d1dd1-a185-4ecf-85e7-33db1dc5d2d3"). InnerVolumeSpecName "kube-api-access-7n8f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:32:45 crc kubenswrapper[4895]: I1206 09:32:45.626071 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/231d1dd1-a185-4ecf-85e7-33db1dc5d2d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "231d1dd1-a185-4ecf-85e7-33db1dc5d2d3" (UID: "231d1dd1-a185-4ecf-85e7-33db1dc5d2d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:32:45 crc kubenswrapper[4895]: I1206 09:32:45.709274 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/231d1dd1-a185-4ecf-85e7-33db1dc5d2d3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 09:32:45 crc kubenswrapper[4895]: I1206 09:32:45.709313 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n8f6\" (UniqueName: \"kubernetes.io/projected/231d1dd1-a185-4ecf-85e7-33db1dc5d2d3-kube-api-access-7n8f6\") on node \"crc\" DevicePath \"\"" Dec 06 09:32:45 crc kubenswrapper[4895]: I1206 09:32:45.709329 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/231d1dd1-a185-4ecf-85e7-33db1dc5d2d3-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 09:32:46 crc kubenswrapper[4895]: I1206 09:32:46.277515 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spzl9" event={"ID":"231d1dd1-a185-4ecf-85e7-33db1dc5d2d3","Type":"ContainerDied","Data":"86c3d0d0272e8f09305b96102a2106adff45424a0b2d40c586154474debe7dc5"} Dec 06 09:32:46 crc kubenswrapper[4895]: I1206 09:32:46.278075 4895 scope.go:117] "RemoveContainer" containerID="7ea0a5fd73b6e5f03b823233b3adf26b096639a47671108b6d5bd0ba236af814" Dec 06 09:32:46 crc kubenswrapper[4895]: I1206 09:32:46.277581 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spzl9" Dec 06 09:32:46 crc kubenswrapper[4895]: I1206 09:32:46.308163 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-spzl9"] Dec 06 09:32:46 crc kubenswrapper[4895]: I1206 09:32:46.308576 4895 scope.go:117] "RemoveContainer" containerID="4ac76dab0483a2635355e7086560d5e09ab48f22f6431d11846b79fe60d7b87e" Dec 06 09:32:46 crc kubenswrapper[4895]: I1206 09:32:46.331110 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-spzl9"] Dec 06 09:32:46 crc kubenswrapper[4895]: I1206 09:32:46.612442 4895 scope.go:117] "RemoveContainer" containerID="ec187226b9114b0bf4da4ef4c7238dedf624485255dae71f371c400f3d57b34c" Dec 06 09:32:48 crc kubenswrapper[4895]: I1206 09:32:48.064770 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="231d1dd1-a185-4ecf-85e7-33db1dc5d2d3" path="/var/lib/kubelet/pods/231d1dd1-a185-4ecf-85e7-33db1dc5d2d3/volumes" Dec 06 09:32:48 crc kubenswrapper[4895]: I1206 09:32:48.300450 4895 generic.go:334] "Generic (PLEG): container finished" podID="caf73cd5-5852-49e3-b5b0-4330d89bb2e3" containerID="9465be4b2fe48d13c48073d507ff887ddc48f8699f664d6b4ed5f202171b9095" exitCode=0 Dec 06 09:32:48 crc kubenswrapper[4895]: I1206 09:32:48.300511 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-networker-5sd8x" event={"ID":"caf73cd5-5852-49e3-b5b0-4330d89bb2e3","Type":"ContainerDied","Data":"9465be4b2fe48d13c48073d507ff887ddc48f8699f664d6b4ed5f202171b9095"} Dec 06 09:32:49 crc kubenswrapper[4895]: I1206 09:32:49.760075 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-networker-5sd8x" Dec 06 09:32:49 crc kubenswrapper[4895]: I1206 09:32:49.795252 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/caf73cd5-5852-49e3-b5b0-4330d89bb2e3-inventory\") pod \"caf73cd5-5852-49e3-b5b0-4330d89bb2e3\" (UID: \"caf73cd5-5852-49e3-b5b0-4330d89bb2e3\") " Dec 06 09:32:49 crc kubenswrapper[4895]: I1206 09:32:49.795405 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndltn\" (UniqueName: \"kubernetes.io/projected/caf73cd5-5852-49e3-b5b0-4330d89bb2e3-kube-api-access-ndltn\") pod \"caf73cd5-5852-49e3-b5b0-4330d89bb2e3\" (UID: \"caf73cd5-5852-49e3-b5b0-4330d89bb2e3\") " Dec 06 09:32:49 crc kubenswrapper[4895]: I1206 09:32:49.795562 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/caf73cd5-5852-49e3-b5b0-4330d89bb2e3-ssh-key\") pod \"caf73cd5-5852-49e3-b5b0-4330d89bb2e3\" (UID: \"caf73cd5-5852-49e3-b5b0-4330d89bb2e3\") " Dec 06 09:32:49 crc kubenswrapper[4895]: I1206 09:32:49.803724 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caf73cd5-5852-49e3-b5b0-4330d89bb2e3-kube-api-access-ndltn" (OuterVolumeSpecName: "kube-api-access-ndltn") pod "caf73cd5-5852-49e3-b5b0-4330d89bb2e3" (UID: "caf73cd5-5852-49e3-b5b0-4330d89bb2e3"). InnerVolumeSpecName "kube-api-access-ndltn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:32:49 crc kubenswrapper[4895]: I1206 09:32:49.825389 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf73cd5-5852-49e3-b5b0-4330d89bb2e3-inventory" (OuterVolumeSpecName: "inventory") pod "caf73cd5-5852-49e3-b5b0-4330d89bb2e3" (UID: "caf73cd5-5852-49e3-b5b0-4330d89bb2e3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:32:49 crc kubenswrapper[4895]: I1206 09:32:49.827125 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf73cd5-5852-49e3-b5b0-4330d89bb2e3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "caf73cd5-5852-49e3-b5b0-4330d89bb2e3" (UID: "caf73cd5-5852-49e3-b5b0-4330d89bb2e3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:32:49 crc kubenswrapper[4895]: I1206 09:32:49.898162 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/caf73cd5-5852-49e3-b5b0-4330d89bb2e3-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:32:49 crc kubenswrapper[4895]: I1206 09:32:49.898202 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndltn\" (UniqueName: \"kubernetes.io/projected/caf73cd5-5852-49e3-b5b0-4330d89bb2e3-kube-api-access-ndltn\") on node \"crc\" DevicePath \"\"" Dec 06 09:32:49 crc kubenswrapper[4895]: I1206 09:32:49.898213 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/caf73cd5-5852-49e3-b5b0-4330d89bb2e3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.321949 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-networker-5sd8x" event={"ID":"caf73cd5-5852-49e3-b5b0-4330d89bb2e3","Type":"ContainerDied","Data":"4d6591d6d25f532d1adee8d7cd6c22917f6c6dcd1ffbb10879e440ad451bcb0b"} Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.321998 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-networker-5sd8x" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.322032 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d6591d6d25f532d1adee8d7cd6c22917f6c6dcd1ffbb10879e440ad451bcb0b" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.412422 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-networker-qwwl7"] Dec 06 09:32:50 crc kubenswrapper[4895]: E1206 09:32:50.412964 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231d1dd1-a185-4ecf-85e7-33db1dc5d2d3" containerName="registry-server" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.412983 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="231d1dd1-a185-4ecf-85e7-33db1dc5d2d3" containerName="registry-server" Dec 06 09:32:50 crc kubenswrapper[4895]: E1206 09:32:50.413006 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231d1dd1-a185-4ecf-85e7-33db1dc5d2d3" containerName="extract-utilities" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.413014 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="231d1dd1-a185-4ecf-85e7-33db1dc5d2d3" containerName="extract-utilities" Dec 06 09:32:50 crc kubenswrapper[4895]: E1206 09:32:50.413062 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caf73cd5-5852-49e3-b5b0-4330d89bb2e3" containerName="run-os-openstack-openstack-networker" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.413071 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="caf73cd5-5852-49e3-b5b0-4330d89bb2e3" containerName="run-os-openstack-openstack-networker" Dec 06 09:32:50 crc kubenswrapper[4895]: E1206 09:32:50.413091 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231d1dd1-a185-4ecf-85e7-33db1dc5d2d3" containerName="extract-content" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.413100 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="231d1dd1-a185-4ecf-85e7-33db1dc5d2d3" containerName="extract-content" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.413296 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="caf73cd5-5852-49e3-b5b0-4330d89bb2e3" containerName="run-os-openstack-openstack-networker" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.413320 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="231d1dd1-a185-4ecf-85e7-33db1dc5d2d3" containerName="registry-server" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.414143 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-networker-qwwl7" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.416745 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.422118 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-vvkpz" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.424805 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-networker-qwwl7"] Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.531641 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqt47\" (UniqueName: \"kubernetes.io/projected/0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5-kube-api-access-jqt47\") pod \"reboot-os-openstack-openstack-networker-qwwl7\" (UID: \"0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5\") " pod="openstack/reboot-os-openstack-openstack-networker-qwwl7" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.531702 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5-inventory\") pod \"reboot-os-openstack-openstack-networker-qwwl7\" (UID: \"0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5\") " pod="openstack/reboot-os-openstack-openstack-networker-qwwl7" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.532262 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5-ssh-key\") pod \"reboot-os-openstack-openstack-networker-qwwl7\" (UID: \"0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5\") " pod="openstack/reboot-os-openstack-openstack-networker-qwwl7" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.634080 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5-ssh-key\") pod \"reboot-os-openstack-openstack-networker-qwwl7\" (UID: \"0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5\") " pod="openstack/reboot-os-openstack-openstack-networker-qwwl7" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.634177 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqt47\" (UniqueName: \"kubernetes.io/projected/0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5-kube-api-access-jqt47\") pod \"reboot-os-openstack-openstack-networker-qwwl7\" (UID: \"0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5\") " pod="openstack/reboot-os-openstack-openstack-networker-qwwl7" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.634201 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5-inventory\") pod \"reboot-os-openstack-openstack-networker-qwwl7\" (UID: \"0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5\") " pod="openstack/reboot-os-openstack-openstack-networker-qwwl7" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.640101 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5-inventory\") pod \"reboot-os-openstack-openstack-networker-qwwl7\" (UID: \"0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5\") " 
pod="openstack/reboot-os-openstack-openstack-networker-qwwl7" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.640539 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5-ssh-key\") pod \"reboot-os-openstack-openstack-networker-qwwl7\" (UID: \"0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5\") " pod="openstack/reboot-os-openstack-openstack-networker-qwwl7" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.653033 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqt47\" (UniqueName: \"kubernetes.io/projected/0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5-kube-api-access-jqt47\") pod \"reboot-os-openstack-openstack-networker-qwwl7\" (UID: \"0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5\") " pod="openstack/reboot-os-openstack-openstack-networker-qwwl7" Dec 06 09:32:50 crc kubenswrapper[4895]: I1206 09:32:50.732605 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-networker-qwwl7" Dec 06 09:32:51 crc kubenswrapper[4895]: I1206 09:32:51.954851 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-networker-qwwl7"] Dec 06 09:32:52 crc kubenswrapper[4895]: E1206 09:32:52.276804 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b7e3a78_9b6a_4b72_a42d_7226a4590eaf.slice/crio-conmon-803b60ad756c9cd877c6c17ef309294f08726b4f571c63c6d1b73098feefdc86.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b7e3a78_9b6a_4b72_a42d_7226a4590eaf.slice/crio-803b60ad756c9cd877c6c17ef309294f08726b4f571c63c6d1b73098feefdc86.scope\": RecentStats: unable to find data in memory cache]" Dec 06 09:32:52 crc kubenswrapper[4895]: I1206 09:32:52.339316 4895 generic.go:334] "Generic (PLEG): container finished" podID="5b7e3a78-9b6a-4b72-a42d-7226a4590eaf" containerID="803b60ad756c9cd877c6c17ef309294f08726b4f571c63c6d1b73098feefdc86" exitCode=0 Dec 06 09:32:52 crc kubenswrapper[4895]: I1206 09:32:52.339405 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-v76sx" event={"ID":"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf","Type":"ContainerDied","Data":"803b60ad756c9cd877c6c17ef309294f08726b4f571c63c6d1b73098feefdc86"} Dec 06 09:32:52 crc kubenswrapper[4895]: I1206 09:32:52.341344 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-networker-qwwl7" event={"ID":"0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5","Type":"ContainerStarted","Data":"54705623ae65f953bcc06532b230c6eaf17b7ae93cda49e29ca9d657954e4df4"} Dec 06 09:32:53 crc kubenswrapper[4895]: I1206 09:32:53.350787 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-networker-qwwl7" event={"ID":"0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5","Type":"ContainerStarted","Data":"d065560c7d379ae167fc7f76294518ad6476ef9553dd340d3022cc17689466c1"} Dec 06 09:32:53 crc kubenswrapper[4895]: I1206 09:32:53.387089 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-networker-qwwl7" podStartSLOduration=2.711066554 podStartE2EDuration="3.387065702s" podCreationTimestamp="2025-12-06 09:32:50 +0000 UTC" firstStartedPulling="2025-12-06 09:32:51.972212149 
+0000 UTC m=+9334.373601039" lastFinishedPulling="2025-12-06 09:32:52.648211317 +0000 UTC m=+9335.049600187" observedRunningTime="2025-12-06 09:32:53.364103155 +0000 UTC m=+9335.765492025" watchObservedRunningTime="2025-12-06 09:32:53.387065702 +0000 UTC m=+9335.788454582" Dec 06 09:32:53 crc kubenswrapper[4895]: I1206 09:32:53.820920 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-v76sx" Dec 06 09:32:53 crc kubenswrapper[4895]: I1206 09:32:53.899850 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-inventory\") pod \"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf\" (UID: \"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf\") " Dec 06 09:32:53 crc kubenswrapper[4895]: I1206 09:32:53.899928 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-ceph\") pod \"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf\" (UID: \"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf\") " Dec 06 09:32:53 crc kubenswrapper[4895]: I1206 09:32:53.899985 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fjc7\" (UniqueName: \"kubernetes.io/projected/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-kube-api-access-8fjc7\") pod \"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf\" (UID: \"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf\") " Dec 06 09:32:53 crc kubenswrapper[4895]: I1206 09:32:53.900099 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-ssh-key\") pod \"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf\" (UID: \"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf\") " Dec 06 09:32:53 crc kubenswrapper[4895]: I1206 09:32:53.905284 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-ceph" (OuterVolumeSpecName: "ceph") pod "5b7e3a78-9b6a-4b72-a42d-7226a4590eaf" (UID: "5b7e3a78-9b6a-4b72-a42d-7226a4590eaf"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:32:53 crc kubenswrapper[4895]: I1206 09:32:53.911708 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-kube-api-access-8fjc7" (OuterVolumeSpecName: "kube-api-access-8fjc7") pod "5b7e3a78-9b6a-4b72-a42d-7226a4590eaf" (UID: "5b7e3a78-9b6a-4b72-a42d-7226a4590eaf"). InnerVolumeSpecName "kube-api-access-8fjc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:32:53 crc kubenswrapper[4895]: I1206 09:32:53.938392 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-inventory" (OuterVolumeSpecName: "inventory") pod "5b7e3a78-9b6a-4b72-a42d-7226a4590eaf" (UID: "5b7e3a78-9b6a-4b72-a42d-7226a4590eaf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:32:53 crc kubenswrapper[4895]: I1206 09:32:53.943689 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5b7e3a78-9b6a-4b72-a42d-7226a4590eaf" (UID: "5b7e3a78-9b6a-4b72-a42d-7226a4590eaf"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.002897 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.003217 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.003227 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fjc7\" (UniqueName: \"kubernetes.io/projected/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-kube-api-access-8fjc7\") on node \"crc\" DevicePath \"\"" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.003237 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b7e3a78-9b6a-4b72-a42d-7226a4590eaf-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.365066 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-v76sx" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.365869 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-v76sx" event={"ID":"5b7e3a78-9b6a-4b72-a42d-7226a4590eaf","Type":"ContainerDied","Data":"ee5e0e8137ba621865c3eec818a5b1ea7bd38bd097e0adbe2de4c1c9394657f9"} Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.365895 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee5e0e8137ba621865c3eec818a5b1ea7bd38bd097e0adbe2de4c1c9394657f9" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.435409 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-dqx6d"] Dec 06 09:32:54 crc kubenswrapper[4895]: E1206 09:32:54.435843 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b7e3a78-9b6a-4b72-a42d-7226a4590eaf" containerName="configure-os-openstack-openstack-cell1" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.435861 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b7e3a78-9b6a-4b72-a42d-7226a4590eaf" containerName="configure-os-openstack-openstack-cell1" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.436073 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b7e3a78-9b6a-4b72-a42d-7226a4590eaf" containerName="configure-os-openstack-openstack-cell1" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.436848 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.441741 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.442504 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-wfk68" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.451218 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-dqx6d"] Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.511274 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-1\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-inventory-1\") pod \"ssh-known-hosts-openstack-dqx6d\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.511408 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-ceph\") pod \"ssh-known-hosts-openstack-dqx6d\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.511526 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-networker\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-ssh-key-openstack-networker\") pod \"ssh-known-hosts-openstack-dqx6d\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.511571 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-dqx6d\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.511612 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-inventory-0\") pod \"ssh-known-hosts-openstack-dqx6d\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.511646 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kqb7\" (UniqueName: \"kubernetes.io/projected/6bbe868c-9365-4fea-bb01-264ae7f6e04a-kube-api-access-9kqb7\") pod \"ssh-known-hosts-openstack-dqx6d\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.612581 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-1\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-inventory-1\") pod \"ssh-known-hosts-openstack-dqx6d\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.612724 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-ceph\") pod \"ssh-known-hosts-openstack-dqx6d\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.612811 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-networker\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-ssh-key-openstack-networker\") pod \"ssh-known-hosts-openstack-dqx6d\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.612849 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-dqx6d\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.612879 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-inventory-0\") pod \"ssh-known-hosts-openstack-dqx6d\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.612910 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kqb7\" (UniqueName: \"kubernetes.io/projected/6bbe868c-9365-4fea-bb01-264ae7f6e04a-kube-api-access-9kqb7\") pod \"ssh-known-hosts-openstack-dqx6d\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.993769 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-networker\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-ssh-key-openstack-networker\") pod \"ssh-known-hosts-openstack-dqx6d\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.993806 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-dqx6d\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.993815 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-1\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-inventory-1\") pod \"ssh-known-hosts-openstack-dqx6d\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.993890 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-ceph\") pod \"ssh-known-hosts-openstack-dqx6d\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.995038 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kqb7\" (UniqueName: \"kubernetes.io/projected/6bbe868c-9365-4fea-bb01-264ae7f6e04a-kube-api-access-9kqb7\") pod \"ssh-known-hosts-openstack-dqx6d\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:32:54 crc kubenswrapper[4895]: I1206 09:32:54.995428 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-inventory-0\") pod \"ssh-known-hosts-openstack-dqx6d\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:32:55 crc kubenswrapper[4895]: I1206 09:32:55.056700 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:32:55 crc kubenswrapper[4895]: W1206 09:32:55.681170 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bbe868c_9365_4fea_bb01_264ae7f6e04a.slice/crio-beeb11728f5ea2e3a86ca83b8d3eb1902ce959d3fac0d647311ab577134f6e98 WatchSource:0}: Error finding container beeb11728f5ea2e3a86ca83b8d3eb1902ce959d3fac0d647311ab577134f6e98: Status 404 returned error can't find the container with id beeb11728f5ea2e3a86ca83b8d3eb1902ce959d3fac0d647311ab577134f6e98 Dec 06 09:32:55 crc kubenswrapper[4895]: I1206 09:32:55.684147 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-dqx6d"] Dec 06 09:32:56 crc kubenswrapper[4895]: I1206 09:32:56.382930 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-dqx6d" event={"ID":"6bbe868c-9365-4fea-bb01-264ae7f6e04a","Type":"ContainerStarted","Data":"beeb11728f5ea2e3a86ca83b8d3eb1902ce959d3fac0d647311ab577134f6e98"} Dec 06 09:32:57 crc kubenswrapper[4895]: I1206 09:32:57.393514 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-dqx6d" event={"ID":"6bbe868c-9365-4fea-bb01-264ae7f6e04a","Type":"ContainerStarted","Data":"3537aefde7f88ada45ca1e96af68f1cec4b356fa11062d7fdab80e9fe9706bf9"} Dec 06 09:32:57 crc kubenswrapper[4895]: I1206 09:32:57.416425 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-dqx6d" podStartSLOduration=3.000269495 podStartE2EDuration="3.416403953s" podCreationTimestamp="2025-12-06 09:32:54 +0000 UTC" firstStartedPulling="2025-12-06 09:32:55.684160579 +0000 UTC m=+9338.085549449" lastFinishedPulling="2025-12-06 09:32:56.100295037 +0000 UTC m=+9338.501683907" observedRunningTime="2025-12-06 09:32:57.409606737 +0000 UTC m=+9339.810995627" watchObservedRunningTime="2025-12-06 09:32:57.416403953 +0000 UTC m=+9339.817792823" Dec 06 09:33:10 crc kubenswrapper[4895]: I1206 09:33:10.529567 4895 generic.go:334] "Generic (PLEG): container finished" podID="0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5" containerID="d065560c7d379ae167fc7f76294518ad6476ef9553dd340d3022cc17689466c1" exitCode=0 Dec 06 09:33:10 crc kubenswrapper[4895]: I1206 09:33:10.529646 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-networker-qwwl7" event={"ID":"0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5","Type":"ContainerDied","Data":"d065560c7d379ae167fc7f76294518ad6476ef9553dd340d3022cc17689466c1"} Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.009752 4895 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-networker-qwwl7" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.204016 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5-inventory\") pod \"0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5\" (UID: \"0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5\") " Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.204400 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqt47\" (UniqueName: \"kubernetes.io/projected/0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5-kube-api-access-jqt47\") pod \"0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5\" (UID: \"0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5\") " Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.204513 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5-ssh-key\") pod \"0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5\" (UID: \"0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5\") " Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.210856 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5-kube-api-access-jqt47" (OuterVolumeSpecName: "kube-api-access-jqt47") pod "0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5" (UID: "0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5"). InnerVolumeSpecName "kube-api-access-jqt47". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.233638 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5" (UID: "0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.245365 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5-inventory" (OuterVolumeSpecName: "inventory") pod "0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5" (UID: "0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.307125 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.307164 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqt47\" (UniqueName: \"kubernetes.io/projected/0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5-kube-api-access-jqt47\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.307175 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.548146 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-networker-qwwl7" event={"ID":"0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5","Type":"ContainerDied","Data":"54705623ae65f953bcc06532b230c6eaf17b7ae93cda49e29ca9d657954e4df4"} Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.548448 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54705623ae65f953bcc06532b230c6eaf17b7ae93cda49e29ca9d657954e4df4" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.548197 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-networker-qwwl7" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.625974 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-networker-64l8l"] Dec 06 09:33:12 crc kubenswrapper[4895]: E1206 09:33:12.626728 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5" containerName="reboot-os-openstack-openstack-networker" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.626759 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5" containerName="reboot-os-openstack-openstack-networker" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.627264 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5" containerName="reboot-os-openstack-openstack-networker" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.628266 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.630955 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-vvkpz" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.669511 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-networker-64l8l"] Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.814997 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-64l8l\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.815153 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-ssh-key\") pod \"install-certs-openstack-openstack-networker-64l8l\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.815195 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dvbs\" (UniqueName: \"kubernetes.io/projected/67c7ea3d-195b-4c75-9f33-ff6bc417147f-kube-api-access-5dvbs\") pod \"install-certs-openstack-openstack-networker-64l8l\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.815275 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-inventory\") pod \"install-certs-openstack-openstack-networker-64l8l\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.815316 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-64l8l\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.815382 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-64l8l\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.916910 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-inventory\") pod \"install-certs-openstack-openstack-networker-64l8l\" (UID: 
\"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.916964 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-64l8l\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.917028 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-64l8l\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.917117 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-64l8l\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.917217 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-ssh-key\") pod \"install-certs-openstack-openstack-networker-64l8l\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.917244 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dvbs\" (UniqueName: \"kubernetes.io/projected/67c7ea3d-195b-4c75-9f33-ff6bc417147f-kube-api-access-5dvbs\") pod \"install-certs-openstack-openstack-networker-64l8l\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.922757 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-64l8l\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.922905 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-64l8l\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.923950 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-ssh-key\") pod \"install-certs-openstack-openstack-networker-64l8l\" (UID: 
\"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.930357 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-64l8l\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.931371 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-inventory\") pod \"install-certs-openstack-openstack-networker-64l8l\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.934367 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dvbs\" (UniqueName: \"kubernetes.io/projected/67c7ea3d-195b-4c75-9f33-ff6bc417147f-kube-api-access-5dvbs\") pod \"install-certs-openstack-openstack-networker-64l8l\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:12 crc kubenswrapper[4895]: I1206 09:33:12.948090 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:13 crc kubenswrapper[4895]: I1206 09:33:13.477269 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-networker-64l8l"] Dec 06 09:33:13 crc kubenswrapper[4895]: I1206 09:33:13.558390 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-networker-64l8l" event={"ID":"67c7ea3d-195b-4c75-9f33-ff6bc417147f","Type":"ContainerStarted","Data":"ab7ccd954a5184787e48159b0aa6b65ea543482e400d4525fa59962dfc81a766"} Dec 06 09:33:14 crc kubenswrapper[4895]: I1206 09:33:14.593636 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-networker-64l8l" event={"ID":"67c7ea3d-195b-4c75-9f33-ff6bc417147f","Type":"ContainerStarted","Data":"7fba56b235fd5e3806d9112551c93e5eb5a2edfed58c6bc87e642f073191e825"} Dec 06 09:33:14 crc kubenswrapper[4895]: I1206 09:33:14.599054 4895 generic.go:334] "Generic (PLEG): container finished" podID="6bbe868c-9365-4fea-bb01-264ae7f6e04a" containerID="3537aefde7f88ada45ca1e96af68f1cec4b356fa11062d7fdab80e9fe9706bf9" exitCode=0 Dec 06 09:33:14 crc kubenswrapper[4895]: I1206 09:33:14.599102 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-dqx6d" event={"ID":"6bbe868c-9365-4fea-bb01-264ae7f6e04a","Type":"ContainerDied","Data":"3537aefde7f88ada45ca1e96af68f1cec4b356fa11062d7fdab80e9fe9706bf9"} Dec 06 09:33:14 crc kubenswrapper[4895]: I1206 09:33:14.618449 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-networker-64l8l" podStartSLOduration=2.075043766 podStartE2EDuration="2.618428001s" podCreationTimestamp="2025-12-06 09:33:12 +0000 UTC" firstStartedPulling="2025-12-06 09:33:13.480116262 +0000 UTC m=+9355.881505132" lastFinishedPulling="2025-12-06 09:33:14.023500497 +0000 UTC m=+9356.424889367" observedRunningTime="2025-12-06 
09:33:14.618056021 +0000 UTC m=+9357.019444901" watchObservedRunningTime="2025-12-06 09:33:14.618428001 +0000 UTC m=+9357.019816891" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.202966 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.302795 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-ceph\") pod \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.302918 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-ssh-key-openstack-cell1\") pod \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.302983 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-1\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-inventory-1\") pod \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.303033 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-inventory-0\") pod \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.303065 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-networker\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-ssh-key-openstack-networker\") pod \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.303138 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kqb7\" (UniqueName: \"kubernetes.io/projected/6bbe868c-9365-4fea-bb01-264ae7f6e04a-kube-api-access-9kqb7\") pod \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\" (UID: \"6bbe868c-9365-4fea-bb01-264ae7f6e04a\") " Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.308295 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-ceph" (OuterVolumeSpecName: "ceph") pod "6bbe868c-9365-4fea-bb01-264ae7f6e04a" (UID: "6bbe868c-9365-4fea-bb01-264ae7f6e04a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.308663 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bbe868c-9365-4fea-bb01-264ae7f6e04a-kube-api-access-9kqb7" (OuterVolumeSpecName: "kube-api-access-9kqb7") pod "6bbe868c-9365-4fea-bb01-264ae7f6e04a" (UID: "6bbe868c-9365-4fea-bb01-264ae7f6e04a"). InnerVolumeSpecName "kube-api-access-9kqb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.330576 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "6bbe868c-9365-4fea-bb01-264ae7f6e04a" (UID: "6bbe868c-9365-4fea-bb01-264ae7f6e04a"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.336304 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-inventory-1" (OuterVolumeSpecName: "inventory-1") pod "6bbe868c-9365-4fea-bb01-264ae7f6e04a" (UID: "6bbe868c-9365-4fea-bb01-264ae7f6e04a"). InnerVolumeSpecName "inventory-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.344786 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-ssh-key-openstack-networker" (OuterVolumeSpecName: "ssh-key-openstack-networker") pod "6bbe868c-9365-4fea-bb01-264ae7f6e04a" (UID: "6bbe868c-9365-4fea-bb01-264ae7f6e04a"). InnerVolumeSpecName "ssh-key-openstack-networker". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.354449 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "6bbe868c-9365-4fea-bb01-264ae7f6e04a" (UID: "6bbe868c-9365-4fea-bb01-264ae7f6e04a"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.405320 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.405355 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.405366 4895 reconciler_common.go:293] "Volume detached for volume \"inventory-1\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-inventory-1\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.405378 4895 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.405387 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-networker\" (UniqueName: \"kubernetes.io/secret/6bbe868c-9365-4fea-bb01-264ae7f6e04a-ssh-key-openstack-networker\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.405395 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kqb7\" (UniqueName: \"kubernetes.io/projected/6bbe868c-9365-4fea-bb01-264ae7f6e04a-kube-api-access-9kqb7\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.618055 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-dqx6d" event={"ID":"6bbe868c-9365-4fea-bb01-264ae7f6e04a","Type":"ContainerDied","Data":"beeb11728f5ea2e3a86ca83b8d3eb1902ce959d3fac0d647311ab577134f6e98"} Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.618138 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beeb11728f5ea2e3a86ca83b8d3eb1902ce959d3fac0d647311ab577134f6e98" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.618259 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-dqx6d" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.739509 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-5mxxx"] Dec 06 09:33:16 crc kubenswrapper[4895]: E1206 09:33:16.739922 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bbe868c-9365-4fea-bb01-264ae7f6e04a" containerName="ssh-known-hosts-openstack" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.739937 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bbe868c-9365-4fea-bb01-264ae7f6e04a" containerName="ssh-known-hosts-openstack" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.740145 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bbe868c-9365-4fea-bb01-264ae7f6e04a" containerName="ssh-known-hosts-openstack" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.740838 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-5mxxx" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.749083 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-5mxxx"] Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.750254 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-wfk68" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.750305 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.915069 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7k96\" (UniqueName: \"kubernetes.io/projected/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-kube-api-access-r7k96\") pod \"run-os-openstack-openstack-cell1-5mxxx\" (UID: \"7c2ff2d2-786f-4187-b35f-ba09d517cbf8\") " pod="openstack/run-os-openstack-openstack-cell1-5mxxx" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.915123 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-ssh-key\") pod \"run-os-openstack-openstack-cell1-5mxxx\" (UID: \"7c2ff2d2-786f-4187-b35f-ba09d517cbf8\") " pod="openstack/run-os-openstack-openstack-cell1-5mxxx" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.915288 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-ceph\") pod \"run-os-openstack-openstack-cell1-5mxxx\" (UID: \"7c2ff2d2-786f-4187-b35f-ba09d517cbf8\") " pod="openstack/run-os-openstack-openstack-cell1-5mxxx" Dec 06 09:33:16 crc kubenswrapper[4895]: I1206 09:33:16.915354 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-inventory\") pod \"run-os-openstack-openstack-cell1-5mxxx\" (UID: \"7c2ff2d2-786f-4187-b35f-ba09d517cbf8\") " pod="openstack/run-os-openstack-openstack-cell1-5mxxx" Dec 06 09:33:17 crc kubenswrapper[4895]: I1206 09:33:17.017580 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-inventory\") pod \"run-os-openstack-openstack-cell1-5mxxx\" (UID: \"7c2ff2d2-786f-4187-b35f-ba09d517cbf8\") " pod="openstack/run-os-openstack-openstack-cell1-5mxxx" Dec 06 09:33:17 crc kubenswrapper[4895]: I1206 09:33:17.017720 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7k96\" (UniqueName: \"kubernetes.io/projected/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-kube-api-access-r7k96\") pod \"run-os-openstack-openstack-cell1-5mxxx\" (UID: \"7c2ff2d2-786f-4187-b35f-ba09d517cbf8\") " pod="openstack/run-os-openstack-openstack-cell1-5mxxx" Dec 06 09:33:17 crc kubenswrapper[4895]: I1206 09:33:17.017744 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-ssh-key\") pod \"run-os-openstack-openstack-cell1-5mxxx\" (UID: \"7c2ff2d2-786f-4187-b35f-ba09d517cbf8\") " pod="openstack/run-os-openstack-openstack-cell1-5mxxx" Dec 06 09:33:17 crc kubenswrapper[4895]: I1206 09:33:17.018162 
4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-ceph\") pod \"run-os-openstack-openstack-cell1-5mxxx\" (UID: \"7c2ff2d2-786f-4187-b35f-ba09d517cbf8\") " pod="openstack/run-os-openstack-openstack-cell1-5mxxx" Dec 06 09:33:17 crc kubenswrapper[4895]: I1206 09:33:17.021089 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-ceph\") pod \"run-os-openstack-openstack-cell1-5mxxx\" (UID: \"7c2ff2d2-786f-4187-b35f-ba09d517cbf8\") " pod="openstack/run-os-openstack-openstack-cell1-5mxxx" Dec 06 09:33:17 crc kubenswrapper[4895]: I1206 09:33:17.021089 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-inventory\") pod \"run-os-openstack-openstack-cell1-5mxxx\" (UID: \"7c2ff2d2-786f-4187-b35f-ba09d517cbf8\") " pod="openstack/run-os-openstack-openstack-cell1-5mxxx" Dec 06 09:33:17 crc kubenswrapper[4895]: I1206 09:33:17.021405 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-ssh-key\") pod \"run-os-openstack-openstack-cell1-5mxxx\" (UID: \"7c2ff2d2-786f-4187-b35f-ba09d517cbf8\") " pod="openstack/run-os-openstack-openstack-cell1-5mxxx" Dec 06 09:33:17 crc kubenswrapper[4895]: I1206 09:33:17.041404 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7k96\" (UniqueName: \"kubernetes.io/projected/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-kube-api-access-r7k96\") pod \"run-os-openstack-openstack-cell1-5mxxx\" (UID: \"7c2ff2d2-786f-4187-b35f-ba09d517cbf8\") " pod="openstack/run-os-openstack-openstack-cell1-5mxxx" Dec 06 09:33:17 crc kubenswrapper[4895]: I1206 09:33:17.067571 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-5mxxx" Dec 06 09:33:17 crc kubenswrapper[4895]: I1206 09:33:17.606068 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-5mxxx"] Dec 06 09:33:18 crc kubenswrapper[4895]: I1206 09:33:18.641249 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-5mxxx" event={"ID":"7c2ff2d2-786f-4187-b35f-ba09d517cbf8","Type":"ContainerStarted","Data":"67ecf7b97fa686cbd694c2b45be585bc37088957305ca94a44214e24cddba267"} Dec 06 09:33:18 crc kubenswrapper[4895]: I1206 09:33:18.641580 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-5mxxx" event={"ID":"7c2ff2d2-786f-4187-b35f-ba09d517cbf8","Type":"ContainerStarted","Data":"eb9460dd38e74561a87138ddebc49f2237622e9e45c888585e2a8111ef093aff"} Dec 06 09:33:18 crc kubenswrapper[4895]: I1206 09:33:18.659623 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-5mxxx" podStartSLOduration=2.27728304 podStartE2EDuration="2.659604145s" podCreationTimestamp="2025-12-06 09:33:16 +0000 UTC" firstStartedPulling="2025-12-06 09:33:17.801546182 +0000 UTC m=+9360.202935042" lastFinishedPulling="2025-12-06 09:33:18.183867267 +0000 UTC m=+9360.585256147" observedRunningTime="2025-12-06 09:33:18.658771372 +0000 UTC m=+9361.060160262" watchObservedRunningTime="2025-12-06 09:33:18.659604145 +0000 UTC m=+9361.060993015" Dec 06 09:33:25 crc kubenswrapper[4895]: I1206 09:33:25.714225 4895 generic.go:334] "Generic (PLEG): container finished" podID="67c7ea3d-195b-4c75-9f33-ff6bc417147f" containerID="7fba56b235fd5e3806d9112551c93e5eb5a2edfed58c6bc87e642f073191e825" exitCode=0 Dec 06 09:33:25 crc kubenswrapper[4895]: I1206 09:33:25.714323 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-networker-64l8l" event={"ID":"67c7ea3d-195b-4c75-9f33-ff6bc417147f","Type":"ContainerDied","Data":"7fba56b235fd5e3806d9112551c93e5eb5a2edfed58c6bc87e642f073191e825"} Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.210199 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.348031 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-bootstrap-combined-ca-bundle\") pod \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.348103 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-neutron-metadata-combined-ca-bundle\") pod \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.348201 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-ssh-key\") pod \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.348262 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-ovn-combined-ca-bundle\") pod \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.348332 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-inventory\") pod \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.348370 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dvbs\" (UniqueName: \"kubernetes.io/projected/67c7ea3d-195b-4c75-9f33-ff6bc417147f-kube-api-access-5dvbs\") pod \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\" (UID: \"67c7ea3d-195b-4c75-9f33-ff6bc417147f\") " Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.354622 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "67c7ea3d-195b-4c75-9f33-ff6bc417147f" (UID: "67c7ea3d-195b-4c75-9f33-ff6bc417147f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.355126 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c7ea3d-195b-4c75-9f33-ff6bc417147f-kube-api-access-5dvbs" (OuterVolumeSpecName: "kube-api-access-5dvbs") pod "67c7ea3d-195b-4c75-9f33-ff6bc417147f" (UID: "67c7ea3d-195b-4c75-9f33-ff6bc417147f"). InnerVolumeSpecName "kube-api-access-5dvbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.355270 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "67c7ea3d-195b-4c75-9f33-ff6bc417147f" (UID: "67c7ea3d-195b-4c75-9f33-ff6bc417147f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.355521 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "67c7ea3d-195b-4c75-9f33-ff6bc417147f" (UID: "67c7ea3d-195b-4c75-9f33-ff6bc417147f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.381336 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "67c7ea3d-195b-4c75-9f33-ff6bc417147f" (UID: "67c7ea3d-195b-4c75-9f33-ff6bc417147f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.403076 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-inventory" (OuterVolumeSpecName: "inventory") pod "67c7ea3d-195b-4c75-9f33-ff6bc417147f" (UID: "67c7ea3d-195b-4c75-9f33-ff6bc417147f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.451879 4895 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.451986 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.451998 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dvbs\" (UniqueName: \"kubernetes.io/projected/67c7ea3d-195b-4c75-9f33-ff6bc417147f-kube-api-access-5dvbs\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.452008 4895 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.452019 4895 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.452032 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67c7ea3d-195b-4c75-9f33-ff6bc417147f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.740186 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-networker-64l8l" event={"ID":"67c7ea3d-195b-4c75-9f33-ff6bc417147f","Type":"ContainerDied","Data":"ab7ccd954a5184787e48159b0aa6b65ea543482e400d4525fa59962dfc81a766"} Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.740255 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-networker-64l8l" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.740268 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab7ccd954a5184787e48159b0aa6b65ea543482e400d4525fa59962dfc81a766" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.825078 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-networker-8gtcz"] Dec 06 09:33:27 crc kubenswrapper[4895]: E1206 09:33:27.826947 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c7ea3d-195b-4c75-9f33-ff6bc417147f" containerName="install-certs-openstack-openstack-networker" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.826970 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c7ea3d-195b-4c75-9f33-ff6bc417147f" containerName="install-certs-openstack-openstack-networker" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.827226 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c7ea3d-195b-4c75-9f33-ff6bc417147f" containerName="install-certs-openstack-openstack-networker" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.828452 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-networker-8gtcz" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.830922 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-vvkpz" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.836007 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.836143 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.858988 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-networker-8gtcz"] Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.961784 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1d9533-8ad3-478c-a262-37339034bc89-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-networker-8gtcz\" (UID: \"ac1d9533-8ad3-478c-a262-37339034bc89\") " pod="openstack/ovn-openstack-openstack-networker-8gtcz" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.961882 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ac1d9533-8ad3-478c-a262-37339034bc89-ovncontroller-config-0\") pod \"ovn-openstack-openstack-networker-8gtcz\" (UID: \"ac1d9533-8ad3-478c-a262-37339034bc89\") " pod="openstack/ovn-openstack-openstack-networker-8gtcz" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.961918 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac1d9533-8ad3-478c-a262-37339034bc89-inventory\") pod \"ovn-openstack-openstack-networker-8gtcz\" (UID: \"ac1d9533-8ad3-478c-a262-37339034bc89\") " pod="openstack/ovn-openstack-openstack-networker-8gtcz" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.962147 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w75cf\" (UniqueName: \"kubernetes.io/projected/ac1d9533-8ad3-478c-a262-37339034bc89-kube-api-access-w75cf\") pod \"ovn-openstack-openstack-networker-8gtcz\" (UID: \"ac1d9533-8ad3-478c-a262-37339034bc89\") " pod="openstack/ovn-openstack-openstack-networker-8gtcz" Dec 06 09:33:27 crc kubenswrapper[4895]: I1206 09:33:27.962233 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac1d9533-8ad3-478c-a262-37339034bc89-ssh-key\") pod \"ovn-openstack-openstack-networker-8gtcz\" (UID: \"ac1d9533-8ad3-478c-a262-37339034bc89\") " pod="openstack/ovn-openstack-openstack-networker-8gtcz" Dec 06 09:33:28 crc kubenswrapper[4895]: I1206 09:33:28.063732 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1d9533-8ad3-478c-a262-37339034bc89-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-networker-8gtcz\" (UID: \"ac1d9533-8ad3-478c-a262-37339034bc89\") " pod="openstack/ovn-openstack-openstack-networker-8gtcz" Dec 06 09:33:28 crc kubenswrapper[4895]: I1206 09:33:28.063797 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" 
(UniqueName: \"kubernetes.io/configmap/ac1d9533-8ad3-478c-a262-37339034bc89-ovncontroller-config-0\") pod \"ovn-openstack-openstack-networker-8gtcz\" (UID: \"ac1d9533-8ad3-478c-a262-37339034bc89\") " pod="openstack/ovn-openstack-openstack-networker-8gtcz" Dec 06 09:33:28 crc kubenswrapper[4895]: I1206 09:33:28.063847 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac1d9533-8ad3-478c-a262-37339034bc89-inventory\") pod \"ovn-openstack-openstack-networker-8gtcz\" (UID: \"ac1d9533-8ad3-478c-a262-37339034bc89\") " pod="openstack/ovn-openstack-openstack-networker-8gtcz" Dec 06 09:33:28 crc kubenswrapper[4895]: I1206 09:33:28.063918 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w75cf\" (UniqueName: \"kubernetes.io/projected/ac1d9533-8ad3-478c-a262-37339034bc89-kube-api-access-w75cf\") pod \"ovn-openstack-openstack-networker-8gtcz\" (UID: \"ac1d9533-8ad3-478c-a262-37339034bc89\") " pod="openstack/ovn-openstack-openstack-networker-8gtcz" Dec 06 09:33:28 crc kubenswrapper[4895]: I1206 09:33:28.063950 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac1d9533-8ad3-478c-a262-37339034bc89-ssh-key\") pod \"ovn-openstack-openstack-networker-8gtcz\" (UID: \"ac1d9533-8ad3-478c-a262-37339034bc89\") " pod="openstack/ovn-openstack-openstack-networker-8gtcz" Dec 06 09:33:28 crc kubenswrapper[4895]: I1206 09:33:28.065128 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ac1d9533-8ad3-478c-a262-37339034bc89-ovncontroller-config-0\") pod \"ovn-openstack-openstack-networker-8gtcz\" (UID: \"ac1d9533-8ad3-478c-a262-37339034bc89\") " pod="openstack/ovn-openstack-openstack-networker-8gtcz" Dec 06 09:33:28 crc kubenswrapper[4895]: I1206 09:33:28.070433 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac1d9533-8ad3-478c-a262-37339034bc89-inventory\") pod \"ovn-openstack-openstack-networker-8gtcz\" (UID: \"ac1d9533-8ad3-478c-a262-37339034bc89\") " pod="openstack/ovn-openstack-openstack-networker-8gtcz" Dec 06 09:33:28 crc kubenswrapper[4895]: I1206 09:33:28.070797 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1d9533-8ad3-478c-a262-37339034bc89-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-networker-8gtcz\" (UID: \"ac1d9533-8ad3-478c-a262-37339034bc89\") " pod="openstack/ovn-openstack-openstack-networker-8gtcz" Dec 06 09:33:28 crc kubenswrapper[4895]: I1206 09:33:28.075630 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac1d9533-8ad3-478c-a262-37339034bc89-ssh-key\") pod \"ovn-openstack-openstack-networker-8gtcz\" (UID: \"ac1d9533-8ad3-478c-a262-37339034bc89\") " pod="openstack/ovn-openstack-openstack-networker-8gtcz" Dec 06 09:33:28 crc kubenswrapper[4895]: I1206 09:33:28.084130 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w75cf\" (UniqueName: \"kubernetes.io/projected/ac1d9533-8ad3-478c-a262-37339034bc89-kube-api-access-w75cf\") pod \"ovn-openstack-openstack-networker-8gtcz\" (UID: \"ac1d9533-8ad3-478c-a262-37339034bc89\") " pod="openstack/ovn-openstack-openstack-networker-8gtcz" Dec 06 09:33:28 crc kubenswrapper[4895]: I1206 
09:33:28.152884 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-networker-8gtcz" Dec 06 09:33:28 crc kubenswrapper[4895]: I1206 09:33:28.770619 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-5mxxx" event={"ID":"7c2ff2d2-786f-4187-b35f-ba09d517cbf8","Type":"ContainerDied","Data":"67ecf7b97fa686cbd694c2b45be585bc37088957305ca94a44214e24cddba267"} Dec 06 09:33:28 crc kubenswrapper[4895]: I1206 09:33:28.770532 4895 generic.go:334] "Generic (PLEG): container finished" podID="7c2ff2d2-786f-4187-b35f-ba09d517cbf8" containerID="67ecf7b97fa686cbd694c2b45be585bc37088957305ca94a44214e24cddba267" exitCode=0 Dec 06 09:33:28 crc kubenswrapper[4895]: I1206 09:33:28.787278 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-networker-8gtcz"] Dec 06 09:33:29 crc kubenswrapper[4895]: I1206 09:33:29.695870 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:33:29 crc kubenswrapper[4895]: I1206 09:33:29.696295 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:33:29 crc kubenswrapper[4895]: I1206 09:33:29.788438 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-networker-8gtcz" event={"ID":"ac1d9533-8ad3-478c-a262-37339034bc89","Type":"ContainerStarted","Data":"bd6481906a97323ec77d0534707e69113a2ed920a09aedda8d69abb32af4cd27"} Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.265335 4895 util.go:48] "No ready sandbox for pod can be found. 
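
The Liveness probe failure above is nothing more exotic than an HTTP GET against http://127.0.0.1:8798/health (the URL is verbatim from the probe output) being refused because nothing is listening on that port. The check can be reproduced from the node with a few lines of Go; this is a sketch, not the kubelet prober itself:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 2 * time.Second}
    	resp, err := client.Get("http://127.0.0.1:8798/health")
    	if err != nil {
    		// Matches the failure above: dial tcp 127.0.0.1:8798: connect: connection refused
    		fmt.Println("probe failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("probe status:", resp.Status)
    }

A connection refused on a loopback health port usually means the probed container is restarting or not yet listening; here a single probe fails with no further fallout visible in this section of the log.
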
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-5mxxx" Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.317632 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7k96\" (UniqueName: \"kubernetes.io/projected/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-kube-api-access-r7k96\") pod \"7c2ff2d2-786f-4187-b35f-ba09d517cbf8\" (UID: \"7c2ff2d2-786f-4187-b35f-ba09d517cbf8\") " Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.317744 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-ceph\") pod \"7c2ff2d2-786f-4187-b35f-ba09d517cbf8\" (UID: \"7c2ff2d2-786f-4187-b35f-ba09d517cbf8\") " Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.317979 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-inventory\") pod \"7c2ff2d2-786f-4187-b35f-ba09d517cbf8\" (UID: \"7c2ff2d2-786f-4187-b35f-ba09d517cbf8\") " Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.318074 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-ssh-key\") pod \"7c2ff2d2-786f-4187-b35f-ba09d517cbf8\" (UID: \"7c2ff2d2-786f-4187-b35f-ba09d517cbf8\") " Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.323412 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-ceph" (OuterVolumeSpecName: "ceph") pod "7c2ff2d2-786f-4187-b35f-ba09d517cbf8" (UID: "7c2ff2d2-786f-4187-b35f-ba09d517cbf8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.323514 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-kube-api-access-r7k96" (OuterVolumeSpecName: "kube-api-access-r7k96") pod "7c2ff2d2-786f-4187-b35f-ba09d517cbf8" (UID: "7c2ff2d2-786f-4187-b35f-ba09d517cbf8"). InnerVolumeSpecName "kube-api-access-r7k96". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.348800 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-inventory" (OuterVolumeSpecName: "inventory") pod "7c2ff2d2-786f-4187-b35f-ba09d517cbf8" (UID: "7c2ff2d2-786f-4187-b35f-ba09d517cbf8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.350193 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7c2ff2d2-786f-4187-b35f-ba09d517cbf8" (UID: "7c2ff2d2-786f-4187-b35f-ba09d517cbf8"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.420694 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7k96\" (UniqueName: \"kubernetes.io/projected/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-kube-api-access-r7k96\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.420724 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.420736 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.420746 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7c2ff2d2-786f-4187-b35f-ba09d517cbf8-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.805139 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-networker-8gtcz" event={"ID":"ac1d9533-8ad3-478c-a262-37339034bc89","Type":"ContainerStarted","Data":"e050ae0d14a20effb692dd3b0d2f63da71107fb6b33a28472cd51dd93c70c7bf"} Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.809395 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-5mxxx" event={"ID":"7c2ff2d2-786f-4187-b35f-ba09d517cbf8","Type":"ContainerDied","Data":"eb9460dd38e74561a87138ddebc49f2237622e9e45c888585e2a8111ef093aff"} Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.809437 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb9460dd38e74561a87138ddebc49f2237622e9e45c888585e2a8111ef093aff" Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.809517 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-5mxxx" Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.853793 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-networker-8gtcz" podStartSLOduration=3.4463373649999998 podStartE2EDuration="3.853763516s" podCreationTimestamp="2025-12-06 09:33:27 +0000 UTC" firstStartedPulling="2025-12-06 09:33:29.100806796 +0000 UTC m=+9371.502195676" lastFinishedPulling="2025-12-06 09:33:29.508232947 +0000 UTC m=+9371.909621827" observedRunningTime="2025-12-06 09:33:30.838048227 +0000 UTC m=+9373.239437137" watchObservedRunningTime="2025-12-06 09:33:30.853763516 +0000 UTC m=+9373.255152406" Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.881048 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-npp4n"] Dec 06 09:33:30 crc kubenswrapper[4895]: E1206 09:33:30.881688 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c2ff2d2-786f-4187-b35f-ba09d517cbf8" containerName="run-os-openstack-openstack-cell1" Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.881713 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c2ff2d2-786f-4187-b35f-ba09d517cbf8" containerName="run-os-openstack-openstack-cell1" Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.881960 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c2ff2d2-786f-4187-b35f-ba09d517cbf8" containerName="run-os-openstack-openstack-cell1" Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.882821 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.886245 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-wfk68" Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.887491 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 06 09:33:30 crc kubenswrapper[4895]: I1206 09:33:30.901367 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-npp4n"] Dec 06 09:33:31 crc kubenswrapper[4895]: I1206 09:33:31.036855 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7493d5f0-e00c-4ead-ac85-02955a68017b-inventory\") pod \"reboot-os-openstack-openstack-cell1-npp4n\" (UID: \"7493d5f0-e00c-4ead-ac85-02955a68017b\") " pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" Dec 06 09:33:31 crc kubenswrapper[4895]: I1206 09:33:31.036935 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7493d5f0-e00c-4ead-ac85-02955a68017b-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-npp4n\" (UID: \"7493d5f0-e00c-4ead-ac85-02955a68017b\") " pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" Dec 06 09:33:31 crc kubenswrapper[4895]: I1206 09:33:31.036981 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5f78\" (UniqueName: \"kubernetes.io/projected/7493d5f0-e00c-4ead-ac85-02955a68017b-kube-api-access-j5f78\") pod \"reboot-os-openstack-openstack-cell1-npp4n\" (UID: \"7493d5f0-e00c-4ead-ac85-02955a68017b\") " 
pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" Dec 06 09:33:31 crc kubenswrapper[4895]: I1206 09:33:31.037121 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7493d5f0-e00c-4ead-ac85-02955a68017b-ceph\") pod \"reboot-os-openstack-openstack-cell1-npp4n\" (UID: \"7493d5f0-e00c-4ead-ac85-02955a68017b\") " pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" Dec 06 09:33:31 crc kubenswrapper[4895]: I1206 09:33:31.138689 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7493d5f0-e00c-4ead-ac85-02955a68017b-inventory\") pod \"reboot-os-openstack-openstack-cell1-npp4n\" (UID: \"7493d5f0-e00c-4ead-ac85-02955a68017b\") " pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" Dec 06 09:33:31 crc kubenswrapper[4895]: I1206 09:33:31.138753 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7493d5f0-e00c-4ead-ac85-02955a68017b-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-npp4n\" (UID: \"7493d5f0-e00c-4ead-ac85-02955a68017b\") " pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" Dec 06 09:33:31 crc kubenswrapper[4895]: I1206 09:33:31.138787 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5f78\" (UniqueName: \"kubernetes.io/projected/7493d5f0-e00c-4ead-ac85-02955a68017b-kube-api-access-j5f78\") pod \"reboot-os-openstack-openstack-cell1-npp4n\" (UID: \"7493d5f0-e00c-4ead-ac85-02955a68017b\") " pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" Dec 06 09:33:31 crc kubenswrapper[4895]: I1206 09:33:31.138833 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7493d5f0-e00c-4ead-ac85-02955a68017b-ceph\") pod \"reboot-os-openstack-openstack-cell1-npp4n\" (UID: \"7493d5f0-e00c-4ead-ac85-02955a68017b\") " pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" Dec 06 09:33:31 crc kubenswrapper[4895]: I1206 09:33:31.145336 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7493d5f0-e00c-4ead-ac85-02955a68017b-inventory\") pod \"reboot-os-openstack-openstack-cell1-npp4n\" (UID: \"7493d5f0-e00c-4ead-ac85-02955a68017b\") " pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" Dec 06 09:33:31 crc kubenswrapper[4895]: I1206 09:33:31.145643 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7493d5f0-e00c-4ead-ac85-02955a68017b-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-npp4n\" (UID: \"7493d5f0-e00c-4ead-ac85-02955a68017b\") " pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" Dec 06 09:33:31 crc kubenswrapper[4895]: I1206 09:33:31.145857 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7493d5f0-e00c-4ead-ac85-02955a68017b-ceph\") pod \"reboot-os-openstack-openstack-cell1-npp4n\" (UID: \"7493d5f0-e00c-4ead-ac85-02955a68017b\") " pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" Dec 06 09:33:31 crc kubenswrapper[4895]: I1206 09:33:31.158314 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5f78\" (UniqueName: \"kubernetes.io/projected/7493d5f0-e00c-4ead-ac85-02955a68017b-kube-api-access-j5f78\") pod 
\"reboot-os-openstack-openstack-cell1-npp4n\" (UID: \"7493d5f0-e00c-4ead-ac85-02955a68017b\") " pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" Dec 06 09:33:31 crc kubenswrapper[4895]: I1206 09:33:31.267198 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" Dec 06 09:33:31 crc kubenswrapper[4895]: I1206 09:33:31.893975 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-npp4n"] Dec 06 09:33:31 crc kubenswrapper[4895]: W1206 09:33:31.898140 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7493d5f0_e00c_4ead_ac85_02955a68017b.slice/crio-4adcaeb3e018719cab555f7a28d2ece007b2ca5f96009bdc9af88bdee4c79545 WatchSource:0}: Error finding container 4adcaeb3e018719cab555f7a28d2ece007b2ca5f96009bdc9af88bdee4c79545: Status 404 returned error can't find the container with id 4adcaeb3e018719cab555f7a28d2ece007b2ca5f96009bdc9af88bdee4c79545 Dec 06 09:33:32 crc kubenswrapper[4895]: I1206 09:33:32.833891 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" event={"ID":"7493d5f0-e00c-4ead-ac85-02955a68017b","Type":"ContainerStarted","Data":"7046021a92560591cb74466df0fa7759944b52f4db1369f6d6c9bd8c70dfd919"} Dec 06 09:33:32 crc kubenswrapper[4895]: I1206 09:33:32.833950 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" event={"ID":"7493d5f0-e00c-4ead-ac85-02955a68017b","Type":"ContainerStarted","Data":"4adcaeb3e018719cab555f7a28d2ece007b2ca5f96009bdc9af88bdee4c79545"} Dec 06 09:33:32 crc kubenswrapper[4895]: I1206 09:33:32.854952 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" podStartSLOduration=2.460217875 podStartE2EDuration="2.854931929s" podCreationTimestamp="2025-12-06 09:33:30 +0000 UTC" firstStartedPulling="2025-12-06 09:33:31.901440089 +0000 UTC m=+9374.302828959" lastFinishedPulling="2025-12-06 09:33:32.296154143 +0000 UTC m=+9374.697543013" observedRunningTime="2025-12-06 09:33:32.852205094 +0000 UTC m=+9375.253593964" watchObservedRunningTime="2025-12-06 09:33:32.854931929 +0000 UTC m=+9375.256320799" Dec 06 09:33:47 crc kubenswrapper[4895]: I1206 09:33:47.994321 4895 generic.go:334] "Generic (PLEG): container finished" podID="7493d5f0-e00c-4ead-ac85-02955a68017b" containerID="7046021a92560591cb74466df0fa7759944b52f4db1369f6d6c9bd8c70dfd919" exitCode=0 Dec 06 09:33:47 crc kubenswrapper[4895]: I1206 09:33:47.994373 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" event={"ID":"7493d5f0-e00c-4ead-ac85-02955a68017b","Type":"ContainerDied","Data":"7046021a92560591cb74466df0fa7759944b52f4db1369f6d6c9bd8c70dfd919"} Dec 06 09:33:49 crc kubenswrapper[4895]: I1206 09:33:49.586957 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" Dec 06 09:33:49 crc kubenswrapper[4895]: I1206 09:33:49.699451 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7493d5f0-e00c-4ead-ac85-02955a68017b-inventory\") pod \"7493d5f0-e00c-4ead-ac85-02955a68017b\" (UID: \"7493d5f0-e00c-4ead-ac85-02955a68017b\") " Dec 06 09:33:49 crc kubenswrapper[4895]: I1206 09:33:49.699519 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5f78\" (UniqueName: \"kubernetes.io/projected/7493d5f0-e00c-4ead-ac85-02955a68017b-kube-api-access-j5f78\") pod \"7493d5f0-e00c-4ead-ac85-02955a68017b\" (UID: \"7493d5f0-e00c-4ead-ac85-02955a68017b\") " Dec 06 09:33:49 crc kubenswrapper[4895]: I1206 09:33:49.699597 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7493d5f0-e00c-4ead-ac85-02955a68017b-ceph\") pod \"7493d5f0-e00c-4ead-ac85-02955a68017b\" (UID: \"7493d5f0-e00c-4ead-ac85-02955a68017b\") " Dec 06 09:33:49 crc kubenswrapper[4895]: I1206 09:33:49.699812 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7493d5f0-e00c-4ead-ac85-02955a68017b-ssh-key\") pod \"7493d5f0-e00c-4ead-ac85-02955a68017b\" (UID: \"7493d5f0-e00c-4ead-ac85-02955a68017b\") " Dec 06 09:33:49 crc kubenswrapper[4895]: I1206 09:33:49.706333 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7493d5f0-e00c-4ead-ac85-02955a68017b-kube-api-access-j5f78" (OuterVolumeSpecName: "kube-api-access-j5f78") pod "7493d5f0-e00c-4ead-ac85-02955a68017b" (UID: "7493d5f0-e00c-4ead-ac85-02955a68017b"). InnerVolumeSpecName "kube-api-access-j5f78". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:33:49 crc kubenswrapper[4895]: I1206 09:33:49.707600 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7493d5f0-e00c-4ead-ac85-02955a68017b-ceph" (OuterVolumeSpecName: "ceph") pod "7493d5f0-e00c-4ead-ac85-02955a68017b" (UID: "7493d5f0-e00c-4ead-ac85-02955a68017b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:33:49 crc kubenswrapper[4895]: I1206 09:33:49.733948 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7493d5f0-e00c-4ead-ac85-02955a68017b-inventory" (OuterVolumeSpecName: "inventory") pod "7493d5f0-e00c-4ead-ac85-02955a68017b" (UID: "7493d5f0-e00c-4ead-ac85-02955a68017b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:33:49 crc kubenswrapper[4895]: I1206 09:33:49.741181 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7493d5f0-e00c-4ead-ac85-02955a68017b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7493d5f0-e00c-4ead-ac85-02955a68017b" (UID: "7493d5f0-e00c-4ead-ac85-02955a68017b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:33:49 crc kubenswrapper[4895]: I1206 09:33:49.802030 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7493d5f0-e00c-4ead-ac85-02955a68017b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:49 crc kubenswrapper[4895]: I1206 09:33:49.802060 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7493d5f0-e00c-4ead-ac85-02955a68017b-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:49 crc kubenswrapper[4895]: I1206 09:33:49.802071 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5f78\" (UniqueName: \"kubernetes.io/projected/7493d5f0-e00c-4ead-ac85-02955a68017b-kube-api-access-j5f78\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:49 crc kubenswrapper[4895]: I1206 09:33:49.802082 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7493d5f0-e00c-4ead-ac85-02955a68017b-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.016125 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" event={"ID":"7493d5f0-e00c-4ead-ac85-02955a68017b","Type":"ContainerDied","Data":"4adcaeb3e018719cab555f7a28d2ece007b2ca5f96009bdc9af88bdee4c79545"} Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.016171 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4adcaeb3e018719cab555f7a28d2ece007b2ca5f96009bdc9af88bdee4c79545" Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.016191 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-npp4n" Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.141457 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-khl26"] Dec 06 09:33:50 crc kubenswrapper[4895]: E1206 09:33:50.142237 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7493d5f0-e00c-4ead-ac85-02955a68017b" containerName="reboot-os-openstack-openstack-cell1" Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.142249 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7493d5f0-e00c-4ead-ac85-02955a68017b" containerName="reboot-os-openstack-openstack-cell1" Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.142459 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7493d5f0-e00c-4ead-ac85-02955a68017b" containerName="reboot-os-openstack-openstack-cell1" Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.143214 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.146753 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-wfk68"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.146980 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.161843 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-khl26"]
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.311724 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.311797 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-ceph\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.311851 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.311878 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.311898 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-inventory\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.311920 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.311954 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk4tv\" (UniqueName: \"kubernetes.io/projected/eb2692b9-1e20-4d6f-87bc-b69888745b52-kube-api-access-pk4tv\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.311987 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.312006 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.312024 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-ssh-key\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.312058 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.312090 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.413429 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.413530 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.413578 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-ceph\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.413631 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.413656 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.413678 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-inventory\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.413700 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.413741 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk4tv\" (UniqueName: \"kubernetes.io/projected/eb2692b9-1e20-4d6f-87bc-b69888745b52-kube-api-access-pk4tv\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.413773 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.413796 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.413823 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-ssh-key\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.413864 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.992902 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-inventory\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.992998 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.993687 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.994202 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-ssh-key\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.994768 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.995310 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.995764 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-ceph\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.995789 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.995937 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.996502 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.996799 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:50 crc kubenswrapper[4895]: I1206 09:33:50.997690 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk4tv\" (UniqueName: \"kubernetes.io/projected/eb2692b9-1e20-4d6f-87bc-b69888745b52-kube-api-access-pk4tv\") pod \"install-certs-openstack-openstack-cell1-khl26\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") " pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:51 crc kubenswrapper[4895]: I1206 09:33:51.098381 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:33:51 crc kubenswrapper[4895]: I1206 09:33:51.721670 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-khl26"]
Dec 06 09:33:51 crc kubenswrapper[4895]: W1206 09:33:51.726552 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb2692b9_1e20_4d6f_87bc_b69888745b52.slice/crio-e13a065ed94f85fa8e9cbd385277239ea9017b287f3ac01689fb47dd349f76ae WatchSource:0}: Error finding container e13a065ed94f85fa8e9cbd385277239ea9017b287f3ac01689fb47dd349f76ae: Status 404 returned error can't find the container with id e13a065ed94f85fa8e9cbd385277239ea9017b287f3ac01689fb47dd349f76ae
Dec 06 09:33:52 crc kubenswrapper[4895]: I1206 09:33:52.034832 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-khl26" event={"ID":"eb2692b9-1e20-4d6f-87bc-b69888745b52","Type":"ContainerStarted","Data":"e13a065ed94f85fa8e9cbd385277239ea9017b287f3ac01689fb47dd349f76ae"}
Dec 06 09:33:53 crc kubenswrapper[4895]: I1206 09:33:53.051586 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-khl26" event={"ID":"eb2692b9-1e20-4d6f-87bc-b69888745b52","Type":"ContainerStarted","Data":"ae3d5dbb01dd63cb9adeef15ebe29c176661d293c6ea06dedb1b299a57b352a4"}
Dec 06 09:33:53 crc kubenswrapper[4895]: I1206 09:33:53.074360 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-khl26" podStartSLOduration=2.667055784 podStartE2EDuration="3.07433238s" podCreationTimestamp="2025-12-06 09:33:50 +0000 UTC" firstStartedPulling="2025-12-06 09:33:51.729588012 +0000 UTC m=+9394.130976882" lastFinishedPulling="2025-12-06 09:33:52.136864608 +0000 UTC m=+9394.538253478" observedRunningTime="2025-12-06 09:33:53.073255131 +0000 UTC m=+9395.474644001" watchObservedRunningTime="2025-12-06 09:33:53.07433238 +0000 UTC m=+9395.475721280"
Dec 06 09:33:59 crc kubenswrapper[4895]: I1206 09:33:59.696232 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 09:33:59 crc kubenswrapper[4895]: I1206 09:33:59.697601 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 09:34:16 crc kubenswrapper[4895]: I1206 09:34:16.294917 4895 generic.go:334] "Generic (PLEG): container finished" podID="eb2692b9-1e20-4d6f-87bc-b69888745b52" containerID="ae3d5dbb01dd63cb9adeef15ebe29c176661d293c6ea06dedb1b299a57b352a4" exitCode=0
Dec 06 09:34:16 crc kubenswrapper[4895]: I1206 09:34:16.294955 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-khl26" event={"ID":"eb2692b9-1e20-4d6f-87bc-b69888745b52","Type":"ContainerDied","Data":"ae3d5dbb01dd63cb9adeef15ebe29c176661d293c6ea06dedb1b299a57b352a4"}
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.793435 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.918646 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk4tv\" (UniqueName: \"kubernetes.io/projected/eb2692b9-1e20-4d6f-87bc-b69888745b52-kube-api-access-pk4tv\") pod \"eb2692b9-1e20-4d6f-87bc-b69888745b52\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") "
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.918728 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-ovn-combined-ca-bundle\") pod \"eb2692b9-1e20-4d6f-87bc-b69888745b52\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") "
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.918754 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-inventory\") pod \"eb2692b9-1e20-4d6f-87bc-b69888745b52\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") "
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.918802 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-nova-combined-ca-bundle\") pod \"eb2692b9-1e20-4d6f-87bc-b69888745b52\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") "
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.918818 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-ssh-key\") pod \"eb2692b9-1e20-4d6f-87bc-b69888745b52\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") "
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.918870 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-telemetry-combined-ca-bundle\") pod \"eb2692b9-1e20-4d6f-87bc-b69888745b52\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") "
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.918886 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-bootstrap-combined-ca-bundle\") pod \"eb2692b9-1e20-4d6f-87bc-b69888745b52\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") "
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.918931 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-libvirt-combined-ca-bundle\") pod \"eb2692b9-1e20-4d6f-87bc-b69888745b52\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") "
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.919040 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-neutron-metadata-combined-ca-bundle\") pod \"eb2692b9-1e20-4d6f-87bc-b69888745b52\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") "
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.919165 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-neutron-sriov-combined-ca-bundle\") pod \"eb2692b9-1e20-4d6f-87bc-b69888745b52\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") "
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.919254 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-neutron-dhcp-combined-ca-bundle\") pod \"eb2692b9-1e20-4d6f-87bc-b69888745b52\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") "
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.919296 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-ceph\") pod \"eb2692b9-1e20-4d6f-87bc-b69888745b52\" (UID: \"eb2692b9-1e20-4d6f-87bc-b69888745b52\") "
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.927307 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-ceph" (OuterVolumeSpecName: "ceph") pod "eb2692b9-1e20-4d6f-87bc-b69888745b52" (UID: "eb2692b9-1e20-4d6f-87bc-b69888745b52"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.927347 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "eb2692b9-1e20-4d6f-87bc-b69888745b52" (UID: "eb2692b9-1e20-4d6f-87bc-b69888745b52"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.927649 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "eb2692b9-1e20-4d6f-87bc-b69888745b52" (UID: "eb2692b9-1e20-4d6f-87bc-b69888745b52"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.927950 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "eb2692b9-1e20-4d6f-87bc-b69888745b52" (UID: "eb2692b9-1e20-4d6f-87bc-b69888745b52"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.928028 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "eb2692b9-1e20-4d6f-87bc-b69888745b52" (UID: "eb2692b9-1e20-4d6f-87bc-b69888745b52"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.928053 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "eb2692b9-1e20-4d6f-87bc-b69888745b52" (UID: "eb2692b9-1e20-4d6f-87bc-b69888745b52"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.928241 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb2692b9-1e20-4d6f-87bc-b69888745b52-kube-api-access-pk4tv" (OuterVolumeSpecName: "kube-api-access-pk4tv") pod "eb2692b9-1e20-4d6f-87bc-b69888745b52" (UID: "eb2692b9-1e20-4d6f-87bc-b69888745b52"). InnerVolumeSpecName "kube-api-access-pk4tv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.928430 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "eb2692b9-1e20-4d6f-87bc-b69888745b52" (UID: "eb2692b9-1e20-4d6f-87bc-b69888745b52"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.928903 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "eb2692b9-1e20-4d6f-87bc-b69888745b52" (UID: "eb2692b9-1e20-4d6f-87bc-b69888745b52"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.935170 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "eb2692b9-1e20-4d6f-87bc-b69888745b52" (UID: "eb2692b9-1e20-4d6f-87bc-b69888745b52"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.960047 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-inventory" (OuterVolumeSpecName: "inventory") pod "eb2692b9-1e20-4d6f-87bc-b69888745b52" (UID: "eb2692b9-1e20-4d6f-87bc-b69888745b52"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:34:17 crc kubenswrapper[4895]: I1206 09:34:17.964000 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eb2692b9-1e20-4d6f-87bc-b69888745b52" (UID: "eb2692b9-1e20-4d6f-87bc-b69888745b52"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.024154 4895 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.024201 4895 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.024214 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-ceph\") on node \"crc\" DevicePath \"\""
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.024224 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk4tv\" (UniqueName: \"kubernetes.io/projected/eb2692b9-1e20-4d6f-87bc-b69888745b52-kube-api-access-pk4tv\") on node \"crc\" DevicePath \"\""
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.024236 4895 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.024248 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-inventory\") on node \"crc\" DevicePath \"\""
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.024256 4895 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.024265 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.024274 4895 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.024284 4895 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.024295 4895 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.024308 4895 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2692b9-1e20-4d6f-87bc-b69888745b52-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.315974 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-khl26" event={"ID":"eb2692b9-1e20-4d6f-87bc-b69888745b52","Type":"ContainerDied","Data":"e13a065ed94f85fa8e9cbd385277239ea9017b287f3ac01689fb47dd349f76ae"}
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.316370 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e13a065ed94f85fa8e9cbd385277239ea9017b287f3ac01689fb47dd349f76ae"
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.316422 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-khl26"
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.417451 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-9b6hl"]
Dec 06 09:34:18 crc kubenswrapper[4895]: E1206 09:34:18.417925 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb2692b9-1e20-4d6f-87bc-b69888745b52" containerName="install-certs-openstack-openstack-cell1"
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.417940 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb2692b9-1e20-4d6f-87bc-b69888745b52" containerName="install-certs-openstack-openstack-cell1"
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.418119 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb2692b9-1e20-4d6f-87bc-b69888745b52" containerName="install-certs-openstack-openstack-cell1"
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.418906 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl"
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.421917 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.422130 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-wfk68"
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.446274 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-9b6hl"]
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.534463 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/921a9de4-8fb0-4bd2-a012-26a74c11c465-ceph\") pod \"ceph-client-openstack-openstack-cell1-9b6hl\" (UID: \"921a9de4-8fb0-4bd2-a012-26a74c11c465\") " pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl"
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.534548 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/921a9de4-8fb0-4bd2-a012-26a74c11c465-inventory\") pod \"ceph-client-openstack-openstack-cell1-9b6hl\" (UID: \"921a9de4-8fb0-4bd2-a012-26a74c11c465\") " pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl"
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.534623 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nq45\" (UniqueName: \"kubernetes.io/projected/921a9de4-8fb0-4bd2-a012-26a74c11c465-kube-api-access-7nq45\") pod \"ceph-client-openstack-openstack-cell1-9b6hl\" (UID: \"921a9de4-8fb0-4bd2-a012-26a74c11c465\") " pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl"
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.534662 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/921a9de4-8fb0-4bd2-a012-26a74c11c465-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-9b6hl\" (UID: \"921a9de4-8fb0-4bd2-a012-26a74c11c465\") " pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl"
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.636711 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/921a9de4-8fb0-4bd2-a012-26a74c11c465-ceph\") pod \"ceph-client-openstack-openstack-cell1-9b6hl\" (UID: \"921a9de4-8fb0-4bd2-a012-26a74c11c465\") " pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl"
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.636772 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/921a9de4-8fb0-4bd2-a012-26a74c11c465-inventory\") pod \"ceph-client-openstack-openstack-cell1-9b6hl\" (UID: \"921a9de4-8fb0-4bd2-a012-26a74c11c465\") " pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl"
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.636868 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nq45\" (UniqueName: \"kubernetes.io/projected/921a9de4-8fb0-4bd2-a012-26a74c11c465-kube-api-access-7nq45\") pod \"ceph-client-openstack-openstack-cell1-9b6hl\" (UID: \"921a9de4-8fb0-4bd2-a012-26a74c11c465\") " pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl"
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.636904 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/921a9de4-8fb0-4bd2-a012-26a74c11c465-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-9b6hl\" (UID: \"921a9de4-8fb0-4bd2-a012-26a74c11c465\") " pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl"
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.642447 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/921a9de4-8fb0-4bd2-a012-26a74c11c465-inventory\") pod \"ceph-client-openstack-openstack-cell1-9b6hl\" (UID: \"921a9de4-8fb0-4bd2-a012-26a74c11c465\") " pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl"
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.646162 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/921a9de4-8fb0-4bd2-a012-26a74c11c465-ceph\") pod \"ceph-client-openstack-openstack-cell1-9b6hl\" (UID: \"921a9de4-8fb0-4bd2-a012-26a74c11c465\") " pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl"
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.649222 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/921a9de4-8fb0-4bd2-a012-26a74c11c465-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-9b6hl\" (UID: \"921a9de4-8fb0-4bd2-a012-26a74c11c465\") " pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl"
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.657430 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nq45\" (UniqueName: \"kubernetes.io/projected/921a9de4-8fb0-4bd2-a012-26a74c11c465-kube-api-access-7nq45\") pod \"ceph-client-openstack-openstack-cell1-9b6hl\" (UID: \"921a9de4-8fb0-4bd2-a012-26a74c11c465\") " pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl"
Dec 06 09:34:18 crc kubenswrapper[4895]: I1206 09:34:18.747847 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl"
Dec 06 09:34:19 crc kubenswrapper[4895]: W1206 09:34:19.312377 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod921a9de4_8fb0_4bd2_a012_26a74c11c465.slice/crio-d315501d53648db6cf435e4a316a21cf2ab049a7ac6a1977ab62ff9ab3090808 WatchSource:0}: Error finding container d315501d53648db6cf435e4a316a21cf2ab049a7ac6a1977ab62ff9ab3090808: Status 404 returned error can't find the container with id d315501d53648db6cf435e4a316a21cf2ab049a7ac6a1977ab62ff9ab3090808
Dec 06 09:34:19 crc kubenswrapper[4895]: I1206 09:34:19.316052 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-9b6hl"]
Dec 06 09:34:19 crc kubenswrapper[4895]: I1206 09:34:19.329752 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl" event={"ID":"921a9de4-8fb0-4bd2-a012-26a74c11c465","Type":"ContainerStarted","Data":"d315501d53648db6cf435e4a316a21cf2ab049a7ac6a1977ab62ff9ab3090808"}
Dec 06 09:34:21 crc kubenswrapper[4895]: I1206 09:34:21.352592 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl" event={"ID":"921a9de4-8fb0-4bd2-a012-26a74c11c465","Type":"ContainerStarted","Data":"e0a00fbe775964795b2c4ead1d11376bffd53519dc0272a2803ed7cf4093ee1f"}
Dec 06 09:34:21 crc kubenswrapper[4895]: I1206 09:34:21.370721 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl" podStartSLOduration=2.588277285 podStartE2EDuration="3.370699991s" podCreationTimestamp="2025-12-06 09:34:18 +0000 UTC" firstStartedPulling="2025-12-06 09:34:19.315201555 +0000 UTC m=+9421.716590425" lastFinishedPulling="2025-12-06 09:34:20.097624261 +0000 UTC m=+9422.499013131" observedRunningTime="2025-12-06 09:34:21.367770541 +0000 UTC m=+9423.769159411" watchObservedRunningTime="2025-12-06 09:34:21.370699991 +0000 UTC m=+9423.772088861"
Dec 06 09:34:24 crc kubenswrapper[4895]: E1206 09:34:24.863172 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb2692b9_1e20_4d6f_87bc_b69888745b52.slice/crio-e13a065ed94f85fa8e9cbd385277239ea9017b287f3ac01689fb47dd349f76ae\": RecentStats: unable to find data in memory cache]"
Dec 06 09:34:26 crc kubenswrapper[4895]: I1206 09:34:26.400228 4895 generic.go:334] "Generic (PLEG): container finished" podID="921a9de4-8fb0-4bd2-a012-26a74c11c465" containerID="e0a00fbe775964795b2c4ead1d11376bffd53519dc0272a2803ed7cf4093ee1f" exitCode=0
Dec 06 09:34:26 crc kubenswrapper[4895]: I1206 09:34:26.400329 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl" event={"ID":"921a9de4-8fb0-4bd2-a012-26a74c11c465","Type":"ContainerDied","Data":"e0a00fbe775964795b2c4ead1d11376bffd53519dc0272a2803ed7cf4093ee1f"}
Dec 06 09:34:27 crc kubenswrapper[4895]: I1206 09:34:27.913204 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.054086 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nq45\" (UniqueName: \"kubernetes.io/projected/921a9de4-8fb0-4bd2-a012-26a74c11c465-kube-api-access-7nq45\") pod \"921a9de4-8fb0-4bd2-a012-26a74c11c465\" (UID: \"921a9de4-8fb0-4bd2-a012-26a74c11c465\") "
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.054145 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/921a9de4-8fb0-4bd2-a012-26a74c11c465-ssh-key\") pod \"921a9de4-8fb0-4bd2-a012-26a74c11c465\" (UID: \"921a9de4-8fb0-4bd2-a012-26a74c11c465\") "
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.054237 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/921a9de4-8fb0-4bd2-a012-26a74c11c465-ceph\") pod \"921a9de4-8fb0-4bd2-a012-26a74c11c465\" (UID: \"921a9de4-8fb0-4bd2-a012-26a74c11c465\") "
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.054384 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/921a9de4-8fb0-4bd2-a012-26a74c11c465-inventory\") pod \"921a9de4-8fb0-4bd2-a012-26a74c11c465\" (UID: \"921a9de4-8fb0-4bd2-a012-26a74c11c465\") "
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.065024 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/921a9de4-8fb0-4bd2-a012-26a74c11c465-kube-api-access-7nq45" (OuterVolumeSpecName: "kube-api-access-7nq45") pod "921a9de4-8fb0-4bd2-a012-26a74c11c465" (UID: "921a9de4-8fb0-4bd2-a012-26a74c11c465"). InnerVolumeSpecName "kube-api-access-7nq45". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.065141 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921a9de4-8fb0-4bd2-a012-26a74c11c465-ceph" (OuterVolumeSpecName: "ceph") pod "921a9de4-8fb0-4bd2-a012-26a74c11c465" (UID: "921a9de4-8fb0-4bd2-a012-26a74c11c465"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.095591 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921a9de4-8fb0-4bd2-a012-26a74c11c465-inventory" (OuterVolumeSpecName: "inventory") pod "921a9de4-8fb0-4bd2-a012-26a74c11c465" (UID: "921a9de4-8fb0-4bd2-a012-26a74c11c465"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.096578 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921a9de4-8fb0-4bd2-a012-26a74c11c465-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "921a9de4-8fb0-4bd2-a012-26a74c11c465" (UID: "921a9de4-8fb0-4bd2-a012-26a74c11c465"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.156296 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nq45\" (UniqueName: \"kubernetes.io/projected/921a9de4-8fb0-4bd2-a012-26a74c11c465-kube-api-access-7nq45\") on node \"crc\" DevicePath \"\""
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.156329 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/921a9de4-8fb0-4bd2-a012-26a74c11c465-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.156341 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/921a9de4-8fb0-4bd2-a012-26a74c11c465-ceph\") on node \"crc\" DevicePath \"\""
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.156349 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/921a9de4-8fb0-4bd2-a012-26a74c11c465-inventory\") on node \"crc\" DevicePath \"\""
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.427629 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl" event={"ID":"921a9de4-8fb0-4bd2-a012-26a74c11c465","Type":"ContainerDied","Data":"d315501d53648db6cf435e4a316a21cf2ab049a7ac6a1977ab62ff9ab3090808"}
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.427911 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d315501d53648db6cf435e4a316a21cf2ab049a7ac6a1977ab62ff9ab3090808"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.427750 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-9b6hl"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.503326 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-xbcjg"]
Dec 06 09:34:28 crc kubenswrapper[4895]: E1206 09:34:28.503986 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921a9de4-8fb0-4bd2-a012-26a74c11c465" containerName="ceph-client-openstack-openstack-cell1"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.504011 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="921a9de4-8fb0-4bd2-a012-26a74c11c465" containerName="ceph-client-openstack-openstack-cell1"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.504382 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="921a9de4-8fb0-4bd2-a012-26a74c11c465" containerName="ceph-client-openstack-openstack-cell1"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.505658 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-xbcjg"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.509233 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.509441 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-wfk68"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.513419 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-xbcjg"]
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.668660 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ssh-key\") pod \"ovn-openstack-openstack-cell1-xbcjg\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " pod="openstack/ovn-openstack-openstack-cell1-xbcjg"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.668706 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-inventory\") pod \"ovn-openstack-openstack-cell1-xbcjg\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " pod="openstack/ovn-openstack-openstack-cell1-xbcjg"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.668847 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ceph\") pod \"ovn-openstack-openstack-cell1-xbcjg\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " pod="openstack/ovn-openstack-openstack-cell1-xbcjg"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.669169 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-xbcjg\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " pod="openstack/ovn-openstack-openstack-cell1-xbcjg"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.669265 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-xbcjg\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " pod="openstack/ovn-openstack-openstack-cell1-xbcjg"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.669386 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl9c5\" (UniqueName: \"kubernetes.io/projected/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-kube-api-access-kl9c5\") pod \"ovn-openstack-openstack-cell1-xbcjg\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " pod="openstack/ovn-openstack-openstack-cell1-xbcjg"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.771287 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ceph\") pod \"ovn-openstack-openstack-cell1-xbcjg\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " pod="openstack/ovn-openstack-openstack-cell1-xbcjg"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.771393 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-xbcjg\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " pod="openstack/ovn-openstack-openstack-cell1-xbcjg"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.771423 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-xbcjg\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " pod="openstack/ovn-openstack-openstack-cell1-xbcjg"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.771498 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl9c5\" (UniqueName: \"kubernetes.io/projected/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-kube-api-access-kl9c5\") pod \"ovn-openstack-openstack-cell1-xbcjg\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " pod="openstack/ovn-openstack-openstack-cell1-xbcjg"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.771541 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ssh-key\") pod \"ovn-openstack-openstack-cell1-xbcjg\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " pod="openstack/ovn-openstack-openstack-cell1-xbcjg"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.771561 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-inventory\") pod \"ovn-openstack-openstack-cell1-xbcjg\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " pod="openstack/ovn-openstack-openstack-cell1-xbcjg"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.773385 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-xbcjg\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " pod="openstack/ovn-openstack-openstack-cell1-xbcjg"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.775998 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-inventory\") pod \"ovn-openstack-openstack-cell1-xbcjg\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " pod="openstack/ovn-openstack-openstack-cell1-xbcjg"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.776616 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ssh-key\") pod \"ovn-openstack-openstack-cell1-xbcjg\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " pod="openstack/ovn-openstack-openstack-cell1-xbcjg"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.777456 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-xbcjg\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " pod="openstack/ovn-openstack-openstack-cell1-xbcjg"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.778272 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ceph\") pod \"ovn-openstack-openstack-cell1-xbcjg\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " pod="openstack/ovn-openstack-openstack-cell1-xbcjg"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.788633 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl9c5\" (UniqueName: \"kubernetes.io/projected/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-kube-api-access-kl9c5\") pod \"ovn-openstack-openstack-cell1-xbcjg\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " pod="openstack/ovn-openstack-openstack-cell1-xbcjg"
Dec 06 09:34:28 crc kubenswrapper[4895]: I1206 09:34:28.842161 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-xbcjg"
Dec 06 09:34:29 crc kubenswrapper[4895]: I1206 09:34:29.441623 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-xbcjg"]
Dec 06 09:34:29 crc kubenswrapper[4895]: I1206 09:34:29.695967 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 09:34:29 crc kubenswrapper[4895]: I1206 09:34:29.696388 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 09:34:29 crc kubenswrapper[4895]: I1206 09:34:29.696436 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2"
Dec 06 09:34:29 crc kubenswrapper[4895]: I1206 09:34:29.697498 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 09:34:29 crc kubenswrapper[4895]: I1206 09:34:29.697575 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992" gracePeriod=600
Dec 06 09:34:29 crc kubenswrapper[4895]: E1206 09:34:29.816739 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:34:30 crc kubenswrapper[4895]: I1206 09:34:30.457386 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-xbcjg" event={"ID":"8b86acff-6f1c-4129-819e-5bd2cd6b3c83","Type":"ContainerStarted","Data":"fd8836ac9a66dfb9183f30f7e0357d7bc70f64d890a09fe3ec50e4901e3d6901"}
Dec 06 09:34:30 crc kubenswrapper[4895]: I1206 09:34:30.457693 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-xbcjg" event={"ID":"8b86acff-6f1c-4129-819e-5bd2cd6b3c83","Type":"ContainerStarted","Data":"ebc0a5b34d4c13e8d7f22f65dbc329f15fb3c40ec0044e429148ff22b661e3c0"}
Dec 06 09:34:30 crc kubenswrapper[4895]: I1206 09:34:30.461759 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992" exitCode=0
Dec 06 09:34:30 crc kubenswrapper[4895]: I1206 09:34:30.461813 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992"}
Dec 06 09:34:30 crc kubenswrapper[4895]: I1206 09:34:30.461852 4895 scope.go:117] "RemoveContainer" containerID="ffdc2a939a93f933962849d90f631256edf91007e25ce39191e7dad4620ed7f2"
Dec 06 09:34:30 crc kubenswrapper[4895]: I1206 09:34:30.462704 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992"
Dec 06 09:34:30 crc kubenswrapper[4895]: E1206 09:34:30.463044 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:34:30 crc kubenswrapper[4895]: I1206 09:34:30.523727 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-xbcjg" podStartSLOduration=2.158701759 podStartE2EDuration="2.523707611s" podCreationTimestamp="2025-12-06 09:34:28 +0000 UTC" firstStartedPulling="2025-12-06 09:34:29.452693781 +0000 UTC m=+9431.854082651" lastFinishedPulling="2025-12-06 09:34:29.817699633 +0000 UTC m=+9432.219088503" observedRunningTime="2025-12-06 09:34:30.480748178 +0000 UTC m=+9432.882137088" watchObservedRunningTime="2025-12-06 09:34:30.523707611 +0000 UTC m=+9432.925096481"
Dec 06 09:34:35 crc kubenswrapper[4895]: E1206 09:34:35.101797 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb2692b9_1e20_4d6f_87bc_b69888745b52.slice/crio-e13a065ed94f85fa8e9cbd385277239ea9017b287f3ac01689fb47dd349f76ae\": RecentStats: unable to find data in memory cache]"
Dec 06 09:34:41 crc kubenswrapper[4895]: I1206 09:34:41.050691 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992"
Dec 06 09:34:41 crc kubenswrapper[4895]: E1206 09:34:41.051499 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:34:45 crc kubenswrapper[4895]: E1206 09:34:45.370281 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb2692b9_1e20_4d6f_87bc_b69888745b52.slice/crio-e13a065ed94f85fa8e9cbd385277239ea9017b287f3ac01689fb47dd349f76ae\": RecentStats: unable to find data in memory cache]"
Dec 06 09:34:52 crc kubenswrapper[4895]: I1206 09:34:52.722977 4895 generic.go:334] "Generic (PLEG): container finished" podID="ac1d9533-8ad3-478c-a262-37339034bc89" containerID="e050ae0d14a20effb692dd3b0d2f63da71107fb6b33a28472cd51dd93c70c7bf" exitCode=0
Dec 06 09:34:52 crc kubenswrapper[4895]: I1206 09:34:52.723085 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-networker-8gtcz" event={"ID":"ac1d9533-8ad3-478c-a262-37339034bc89","Type":"ContainerDied","Data":"e050ae0d14a20effb692dd3b0d2f63da71107fb6b33a28472cd51dd93c70c7bf"}
Dec 06 09:34:53 crc kubenswrapper[4895]: I1206 09:34:53.051441 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992"
Dec 06 09:34:53 crc kubenswrapper[4895]: E1206 09:34:53.051791 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.204965 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-networker-8gtcz"
Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.222893 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac1d9533-8ad3-478c-a262-37339034bc89-ssh-key\") pod \"ac1d9533-8ad3-478c-a262-37339034bc89\" (UID: \"ac1d9533-8ad3-478c-a262-37339034bc89\") "
Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.253886 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1d9533-8ad3-478c-a262-37339034bc89-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ac1d9533-8ad3-478c-a262-37339034bc89" (UID: "ac1d9533-8ad3-478c-a262-37339034bc89"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.324703 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac1d9533-8ad3-478c-a262-37339034bc89-inventory\") pod \"ac1d9533-8ad3-478c-a262-37339034bc89\" (UID: \"ac1d9533-8ad3-478c-a262-37339034bc89\") "
Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.324753 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w75cf\" (UniqueName: \"kubernetes.io/projected/ac1d9533-8ad3-478c-a262-37339034bc89-kube-api-access-w75cf\") pod \"ac1d9533-8ad3-478c-a262-37339034bc89\" (UID: \"ac1d9533-8ad3-478c-a262-37339034bc89\") "
Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.324943 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ac1d9533-8ad3-478c-a262-37339034bc89-ovncontroller-config-0\") pod \"ac1d9533-8ad3-478c-a262-37339034bc89\" (UID: \"ac1d9533-8ad3-478c-a262-37339034bc89\") "
Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.325033 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1d9533-8ad3-478c-a262-37339034bc89-ovn-combined-ca-bundle\") pod \"ac1d9533-8ad3-478c-a262-37339034bc89\" (UID: \"ac1d9533-8ad3-478c-a262-37339034bc89\") "
Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.325594 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac1d9533-8ad3-478c-a262-37339034bc89-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.328036 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1d9533-8ad3-478c-a262-37339034bc89-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ac1d9533-8ad3-478c-a262-37339034bc89" (UID: "ac1d9533-8ad3-478c-a262-37339034bc89"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.329063 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac1d9533-8ad3-478c-a262-37339034bc89-kube-api-access-w75cf" (OuterVolumeSpecName: "kube-api-access-w75cf") pod "ac1d9533-8ad3-478c-a262-37339034bc89" (UID: "ac1d9533-8ad3-478c-a262-37339034bc89"). InnerVolumeSpecName "kube-api-access-w75cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.348149 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1d9533-8ad3-478c-a262-37339034bc89-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "ac1d9533-8ad3-478c-a262-37339034bc89" (UID: "ac1d9533-8ad3-478c-a262-37339034bc89"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.350852 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1d9533-8ad3-478c-a262-37339034bc89-inventory" (OuterVolumeSpecName: "inventory") pod "ac1d9533-8ad3-478c-a262-37339034bc89" (UID: "ac1d9533-8ad3-478c-a262-37339034bc89"). InnerVolumeSpecName "inventory".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.427543 4895 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1d9533-8ad3-478c-a262-37339034bc89-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.427576 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac1d9533-8ad3-478c-a262-37339034bc89-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.427586 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w75cf\" (UniqueName: \"kubernetes.io/projected/ac1d9533-8ad3-478c-a262-37339034bc89-kube-api-access-w75cf\") on node \"crc\" DevicePath \"\"" Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.427595 4895 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ac1d9533-8ad3-478c-a262-37339034bc89-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.746508 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-networker-8gtcz" event={"ID":"ac1d9533-8ad3-478c-a262-37339034bc89","Type":"ContainerDied","Data":"bd6481906a97323ec77d0534707e69113a2ed920a09aedda8d69abb32af4cd27"} Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.746549 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd6481906a97323ec77d0534707e69113a2ed920a09aedda8d69abb32af4cd27" Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.746579 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-networker-8gtcz" Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.847808 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-networker-47bmh"] Dec 06 09:34:54 crc kubenswrapper[4895]: E1206 09:34:54.848381 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1d9533-8ad3-478c-a262-37339034bc89" containerName="ovn-openstack-openstack-networker" Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.848403 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1d9533-8ad3-478c-a262-37339034bc89" containerName="ovn-openstack-openstack-networker" Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.848736 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac1d9533-8ad3-478c-a262-37339034bc89" containerName="ovn-openstack-openstack-networker" Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.849701 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.855966 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.856347 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-vvkpz" Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.856632 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.856806 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.858961 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-networker-47bmh"] Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.939427 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj6pc\" (UniqueName: \"kubernetes.io/projected/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-kube-api-access-gj6pc\") pod \"neutron-metadata-openstack-openstack-networker-47bmh\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.939520 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-networker-47bmh\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.939622 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-ssh-key\") pod \"neutron-metadata-openstack-openstack-networker-47bmh\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.939655 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-inventory\") pod \"neutron-metadata-openstack-openstack-networker-47bmh\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.939707 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-networker-47bmh\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:34:54 crc kubenswrapper[4895]: I1206 09:34:54.939765 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-networker-47bmh\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:34:55 crc kubenswrapper[4895]: I1206 09:34:55.041416 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-inventory\") pod \"neutron-metadata-openstack-openstack-networker-47bmh\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:34:55 crc kubenswrapper[4895]: I1206 09:34:55.041520 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-networker-47bmh\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:34:55 crc kubenswrapper[4895]: I1206 09:34:55.041589 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-networker-47bmh\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:34:55 crc kubenswrapper[4895]: I1206 09:34:55.041665 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj6pc\" (UniqueName: \"kubernetes.io/projected/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-kube-api-access-gj6pc\") pod \"neutron-metadata-openstack-openstack-networker-47bmh\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:34:55 crc kubenswrapper[4895]: I1206 09:34:55.041711 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-networker-47bmh\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:34:55 crc kubenswrapper[4895]: I1206 09:34:55.041783 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-ssh-key\") pod \"neutron-metadata-openstack-openstack-networker-47bmh\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:34:55 crc kubenswrapper[4895]: I1206 09:34:55.045446 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-inventory\") pod \"neutron-metadata-openstack-openstack-networker-47bmh\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:34:55 crc kubenswrapper[4895]: I1206 09:34:55.045825 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-ssh-key\") pod \"neutron-metadata-openstack-openstack-networker-47bmh\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:34:55 crc kubenswrapper[4895]: I1206 09:34:55.046200 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-networker-47bmh\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:34:55 crc kubenswrapper[4895]: I1206 09:34:55.046265 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-networker-47bmh\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:34:55 crc kubenswrapper[4895]: I1206 09:34:55.048276 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-networker-47bmh\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:34:55 crc kubenswrapper[4895]: I1206 09:34:55.063574 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj6pc\" (UniqueName: \"kubernetes.io/projected/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-kube-api-access-gj6pc\") pod \"neutron-metadata-openstack-openstack-networker-47bmh\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:34:55 crc kubenswrapper[4895]: I1206 09:34:55.176737 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:34:55 crc kubenswrapper[4895]: E1206 09:34:55.635449 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb2692b9_1e20_4d6f_87bc_b69888745b52.slice/crio-e13a065ed94f85fa8e9cbd385277239ea9017b287f3ac01689fb47dd349f76ae\": RecentStats: unable to find data in memory cache]" Dec 06 09:34:55 crc kubenswrapper[4895]: I1206 09:34:55.740809 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-networker-47bmh"] Dec 06 09:34:55 crc kubenswrapper[4895]: I1206 09:34:55.761507 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" event={"ID":"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899","Type":"ContainerStarted","Data":"4dc4ef8538090c240641a076f4510fa2792297c34c13db8cdea23e60330901c0"} Dec 06 09:34:56 crc kubenswrapper[4895]: I1206 09:34:56.777848 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" event={"ID":"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899","Type":"ContainerStarted","Data":"c15acac23920cc9b172325f9accdb243ff8405aebd3c902e2923d759274cf8a3"} Dec 06 09:34:56 crc kubenswrapper[4895]: I1206 09:34:56.810889 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" podStartSLOduration=2.403324187 podStartE2EDuration="2.810868631s" podCreationTimestamp="2025-12-06 09:34:54 +0000 UTC" firstStartedPulling="2025-12-06 09:34:55.749039883 +0000 UTC m=+9458.150428753" lastFinishedPulling="2025-12-06 09:34:56.156584317 +0000 UTC m=+9458.557973197" observedRunningTime="2025-12-06 09:34:56.794586176 +0000 UTC m=+9459.195975066" watchObservedRunningTime="2025-12-06 09:34:56.810868631 +0000 UTC m=+9459.212257511" Dec 06 09:35:05 crc kubenswrapper[4895]: I1206 09:35:05.051584 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992" Dec 06 09:35:05 crc kubenswrapper[4895]: E1206 09:35:05.052868 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:35:05 crc kubenswrapper[4895]: E1206 09:35:05.968407 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb2692b9_1e20_4d6f_87bc_b69888745b52.slice/crio-e13a065ed94f85fa8e9cbd385277239ea9017b287f3ac01689fb47dd349f76ae\": RecentStats: unable to find data in memory cache]" Dec 06 09:35:16 crc kubenswrapper[4895]: E1206 09:35:16.268014 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb2692b9_1e20_4d6f_87bc_b69888745b52.slice/crio-e13a065ed94f85fa8e9cbd385277239ea9017b287f3ac01689fb47dd349f76ae\": RecentStats: unable to find data in memory cache]" Dec 06 09:35:19 crc kubenswrapper[4895]: 
I1206 09:35:19.051440 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992" Dec 06 09:35:19 crc kubenswrapper[4895]: E1206 09:35:19.052080 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:35:33 crc kubenswrapper[4895]: I1206 09:35:33.051151 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992" Dec 06 09:35:33 crc kubenswrapper[4895]: E1206 09:35:33.051962 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:35:45 crc kubenswrapper[4895]: I1206 09:35:45.051014 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992" Dec 06 09:35:45 crc kubenswrapper[4895]: E1206 09:35:45.051965 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:35:47 crc kubenswrapper[4895]: I1206 09:35:47.343234 4895 generic.go:334] "Generic (PLEG): container finished" podID="8b86acff-6f1c-4129-819e-5bd2cd6b3c83" containerID="fd8836ac9a66dfb9183f30f7e0357d7bc70f64d890a09fe3ec50e4901e3d6901" exitCode=0 Dec 06 09:35:47 crc kubenswrapper[4895]: I1206 09:35:47.343321 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-xbcjg" event={"ID":"8b86acff-6f1c-4129-819e-5bd2cd6b3c83","Type":"ContainerDied","Data":"fd8836ac9a66dfb9183f30f7e0357d7bc70f64d890a09fe3ec50e4901e3d6901"} Dec 06 09:35:48 crc kubenswrapper[4895]: I1206 09:35:48.866540 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-xbcjg" Dec 06 09:35:48 crc kubenswrapper[4895]: I1206 09:35:48.964717 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ovn-combined-ca-bundle\") pod \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " Dec 06 09:35:48 crc kubenswrapper[4895]: I1206 09:35:48.964796 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ssh-key\") pod \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " Dec 06 09:35:48 crc kubenswrapper[4895]: I1206 09:35:48.965538 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ovncontroller-config-0\") pod \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " Dec 06 09:35:48 crc kubenswrapper[4895]: I1206 09:35:48.965882 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ceph\") pod \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " Dec 06 09:35:48 crc kubenswrapper[4895]: I1206 09:35:48.965942 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-inventory\") pod \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " Dec 06 09:35:48 crc kubenswrapper[4895]: I1206 09:35:48.965988 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl9c5\" (UniqueName: \"kubernetes.io/projected/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-kube-api-access-kl9c5\") pod \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\" (UID: \"8b86acff-6f1c-4129-819e-5bd2cd6b3c83\") " Dec 06 09:35:48 crc kubenswrapper[4895]: I1206 09:35:48.974698 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ceph" (OuterVolumeSpecName: "ceph") pod "8b86acff-6f1c-4129-819e-5bd2cd6b3c83" (UID: "8b86acff-6f1c-4129-819e-5bd2cd6b3c83"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:35:48 crc kubenswrapper[4895]: I1206 09:35:48.974995 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-kube-api-access-kl9c5" (OuterVolumeSpecName: "kube-api-access-kl9c5") pod "8b86acff-6f1c-4129-819e-5bd2cd6b3c83" (UID: "8b86acff-6f1c-4129-819e-5bd2cd6b3c83"). InnerVolumeSpecName "kube-api-access-kl9c5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:35:48 crc kubenswrapper[4895]: I1206 09:35:48.976943 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "8b86acff-6f1c-4129-819e-5bd2cd6b3c83" (UID: "8b86acff-6f1c-4129-819e-5bd2cd6b3c83"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.002638 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8b86acff-6f1c-4129-819e-5bd2cd6b3c83" (UID: "8b86acff-6f1c-4129-819e-5bd2cd6b3c83"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.003713 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "8b86acff-6f1c-4129-819e-5bd2cd6b3c83" (UID: "8b86acff-6f1c-4129-819e-5bd2cd6b3c83"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.007258 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-inventory" (OuterVolumeSpecName: "inventory") pod "8b86acff-6f1c-4129-819e-5bd2cd6b3c83" (UID: "8b86acff-6f1c-4129-819e-5bd2cd6b3c83"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.068457 4895 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.068513 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.068528 4895 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.068540 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.068550 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.068562 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl9c5\" (UniqueName: \"kubernetes.io/projected/8b86acff-6f1c-4129-819e-5bd2cd6b3c83-kube-api-access-kl9c5\") on node \"crc\" DevicePath \"\"" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.363816 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-xbcjg" event={"ID":"8b86acff-6f1c-4129-819e-5bd2cd6b3c83","Type":"ContainerDied","Data":"ebc0a5b34d4c13e8d7f22f65dbc329f15fb3c40ec0044e429148ff22b661e3c0"} Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.364136 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebc0a5b34d4c13e8d7f22f65dbc329f15fb3c40ec0044e429148ff22b661e3c0" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.364016 
4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-xbcjg" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.478580 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-8vldk"] Dec 06 09:35:49 crc kubenswrapper[4895]: E1206 09:35:49.479097 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b86acff-6f1c-4129-819e-5bd2cd6b3c83" containerName="ovn-openstack-openstack-cell1" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.479115 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b86acff-6f1c-4129-819e-5bd2cd6b3c83" containerName="ovn-openstack-openstack-cell1" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.479344 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b86acff-6f1c-4129-819e-5bd2cd6b3c83" containerName="ovn-openstack-openstack-cell1" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.480139 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.487178 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.487419 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-wfk68" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.508921 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-8vldk"] Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.594245 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sml2\" (UniqueName: \"kubernetes.io/projected/b26c3808-8042-49ae-a734-689ec87ec5ed-kube-api-access-9sml2\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.594391 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.594512 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.594622 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " 
pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.594650 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.594682 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.594856 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.697958 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.698019 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.698057 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.698156 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.698239 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sml2\" (UniqueName: \"kubernetes.io/projected/b26c3808-8042-49ae-a734-689ec87ec5ed-kube-api-access-9sml2\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " 
pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.698273 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.698310 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.702974 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.705167 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.706116 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.707194 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.707930 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.709828 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 
06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.719343 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sml2\" (UniqueName: \"kubernetes.io/projected/b26c3808-8042-49ae-a734-689ec87ec5ed-kube-api-access-9sml2\") pod \"neutron-metadata-openstack-openstack-cell1-8vldk\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:49 crc kubenswrapper[4895]: I1206 09:35:49.824265 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:35:50 crc kubenswrapper[4895]: I1206 09:35:50.416639 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-8vldk"] Dec 06 09:35:50 crc kubenswrapper[4895]: W1206 09:35:50.808531 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb26c3808_8042_49ae_a734_689ec87ec5ed.slice/crio-cdbd2ed78e548b96e55225981d849d1471161782347b83d19d6e25eb5462d50a WatchSource:0}: Error finding container cdbd2ed78e548b96e55225981d849d1471161782347b83d19d6e25eb5462d50a: Status 404 returned error can't find the container with id cdbd2ed78e548b96e55225981d849d1471161782347b83d19d6e25eb5462d50a Dec 06 09:35:50 crc kubenswrapper[4895]: I1206 09:35:50.815326 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 09:35:51 crc kubenswrapper[4895]: I1206 09:35:51.385786 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" event={"ID":"b26c3808-8042-49ae-a734-689ec87ec5ed","Type":"ContainerStarted","Data":"cdbd2ed78e548b96e55225981d849d1471161782347b83d19d6e25eb5462d50a"} Dec 06 09:35:52 crc kubenswrapper[4895]: I1206 09:35:52.405279 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" event={"ID":"b26c3808-8042-49ae-a734-689ec87ec5ed","Type":"ContainerStarted","Data":"d93004fe03106e66d55661e8b0747abb5256ce8d16a291b71c9cb08dc710d8fd"} Dec 06 09:35:52 crc kubenswrapper[4895]: I1206 09:35:52.430813 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" podStartSLOduration=3.0445853449999998 podStartE2EDuration="3.430793511s" podCreationTimestamp="2025-12-06 09:35:49 +0000 UTC" firstStartedPulling="2025-12-06 09:35:50.815065311 +0000 UTC m=+9513.216454181" lastFinishedPulling="2025-12-06 09:35:51.201273477 +0000 UTC m=+9513.602662347" observedRunningTime="2025-12-06 09:35:52.428075458 +0000 UTC m=+9514.829464358" watchObservedRunningTime="2025-12-06 09:35:52.430793511 +0000 UTC m=+9514.832182381" Dec 06 09:35:57 crc kubenswrapper[4895]: I1206 09:35:57.050414 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992" Dec 06 09:35:57 crc kubenswrapper[4895]: E1206 09:35:57.051310 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:36:03 crc 
kubenswrapper[4895]: I1206 09:36:03.540108 4895 generic.go:334] "Generic (PLEG): container finished" podID="bdeb0fdb-eec6-4566-8c47-fbeda5f3c899" containerID="c15acac23920cc9b172325f9accdb243ff8405aebd3c902e2923d759274cf8a3" exitCode=0 Dec 06 09:36:03 crc kubenswrapper[4895]: I1206 09:36:03.540977 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" event={"ID":"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899","Type":"ContainerDied","Data":"c15acac23920cc9b172325f9accdb243ff8405aebd3c902e2923d759274cf8a3"} Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.075943 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.173853 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-neutron-metadata-combined-ca-bundle\") pod \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.174155 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-inventory\") pod \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.174192 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-neutron-ovn-metadata-agent-neutron-config-0\") pod \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.174278 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-ssh-key\") pod \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.174382 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj6pc\" (UniqueName: \"kubernetes.io/projected/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-kube-api-access-gj6pc\") pod \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.174418 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-nova-metadata-neutron-config-0\") pod \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\" (UID: \"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899\") " Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.179970 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-kube-api-access-gj6pc" (OuterVolumeSpecName: "kube-api-access-gj6pc") pod "bdeb0fdb-eec6-4566-8c47-fbeda5f3c899" (UID: "bdeb0fdb-eec6-4566-8c47-fbeda5f3c899"). InnerVolumeSpecName "kube-api-access-gj6pc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.181342 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "bdeb0fdb-eec6-4566-8c47-fbeda5f3c899" (UID: "bdeb0fdb-eec6-4566-8c47-fbeda5f3c899"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.209517 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-inventory" (OuterVolumeSpecName: "inventory") pod "bdeb0fdb-eec6-4566-8c47-fbeda5f3c899" (UID: "bdeb0fdb-eec6-4566-8c47-fbeda5f3c899"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.213946 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bdeb0fdb-eec6-4566-8c47-fbeda5f3c899" (UID: "bdeb0fdb-eec6-4566-8c47-fbeda5f3c899"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.214568 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "bdeb0fdb-eec6-4566-8c47-fbeda5f3c899" (UID: "bdeb0fdb-eec6-4566-8c47-fbeda5f3c899"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.217454 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "bdeb0fdb-eec6-4566-8c47-fbeda5f3c899" (UID: "bdeb0fdb-eec6-4566-8c47-fbeda5f3c899"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.277735 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.277930 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj6pc\" (UniqueName: \"kubernetes.io/projected/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-kube-api-access-gj6pc\") on node \"crc\" DevicePath \"\"" Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.277996 4895 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.278052 4895 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.278116 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.278202 4895 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bdeb0fdb-eec6-4566-8c47-fbeda5f3c899-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.565113 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" event={"ID":"bdeb0fdb-eec6-4566-8c47-fbeda5f3c899","Type":"ContainerDied","Data":"4dc4ef8538090c240641a076f4510fa2792297c34c13db8cdea23e60330901c0"} Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.565153 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dc4ef8538090c240641a076f4510fa2792297c34c13db8cdea23e60330901c0" Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.565199 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-networker-47bmh" Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.970838 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zzn6z"] Dec 06 09:36:05 crc kubenswrapper[4895]: E1206 09:36:05.971277 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdeb0fdb-eec6-4566-8c47-fbeda5f3c899" containerName="neutron-metadata-openstack-openstack-networker" Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.971293 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdeb0fdb-eec6-4566-8c47-fbeda5f3c899" containerName="neutron-metadata-openstack-openstack-networker" Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.971531 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdeb0fdb-eec6-4566-8c47-fbeda5f3c899" containerName="neutron-metadata-openstack-openstack-networker" Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.973011 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zzn6z" Dec 06 09:36:05 crc kubenswrapper[4895]: I1206 09:36:05.983590 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zzn6z"] Dec 06 09:36:06 crc kubenswrapper[4895]: I1206 09:36:06.094516 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkxcc\" (UniqueName: \"kubernetes.io/projected/3a9d56b4-860e-45db-8dfc-7aa8b1991dca-kube-api-access-fkxcc\") pod \"community-operators-zzn6z\" (UID: \"3a9d56b4-860e-45db-8dfc-7aa8b1991dca\") " pod="openshift-marketplace/community-operators-zzn6z" Dec 06 09:36:06 crc kubenswrapper[4895]: I1206 09:36:06.094601 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a9d56b4-860e-45db-8dfc-7aa8b1991dca-catalog-content\") pod \"community-operators-zzn6z\" (UID: \"3a9d56b4-860e-45db-8dfc-7aa8b1991dca\") " pod="openshift-marketplace/community-operators-zzn6z" Dec 06 09:36:06 crc kubenswrapper[4895]: I1206 09:36:06.095076 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a9d56b4-860e-45db-8dfc-7aa8b1991dca-utilities\") pod \"community-operators-zzn6z\" (UID: \"3a9d56b4-860e-45db-8dfc-7aa8b1991dca\") " pod="openshift-marketplace/community-operators-zzn6z" Dec 06 09:36:06 crc kubenswrapper[4895]: I1206 09:36:06.196692 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a9d56b4-860e-45db-8dfc-7aa8b1991dca-utilities\") pod \"community-operators-zzn6z\" (UID: \"3a9d56b4-860e-45db-8dfc-7aa8b1991dca\") " pod="openshift-marketplace/community-operators-zzn6z" Dec 06 09:36:06 crc kubenswrapper[4895]: I1206 09:36:06.197103 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkxcc\" (UniqueName: \"kubernetes.io/projected/3a9d56b4-860e-45db-8dfc-7aa8b1991dca-kube-api-access-fkxcc\") pod \"community-operators-zzn6z\" (UID: \"3a9d56b4-860e-45db-8dfc-7aa8b1991dca\") " pod="openshift-marketplace/community-operators-zzn6z" Dec 06 09:36:06 crc kubenswrapper[4895]: I1206 09:36:06.197152 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a9d56b4-860e-45db-8dfc-7aa8b1991dca-catalog-content\") pod \"community-operators-zzn6z\" (UID: \"3a9d56b4-860e-45db-8dfc-7aa8b1991dca\") " pod="openshift-marketplace/community-operators-zzn6z" Dec 06 09:36:06 crc kubenswrapper[4895]: I1206 09:36:06.197227 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a9d56b4-860e-45db-8dfc-7aa8b1991dca-utilities\") pod \"community-operators-zzn6z\" (UID: \"3a9d56b4-860e-45db-8dfc-7aa8b1991dca\") " pod="openshift-marketplace/community-operators-zzn6z" Dec 06 09:36:06 crc kubenswrapper[4895]: I1206 09:36:06.197594 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a9d56b4-860e-45db-8dfc-7aa8b1991dca-catalog-content\") pod \"community-operators-zzn6z\" (UID: \"3a9d56b4-860e-45db-8dfc-7aa8b1991dca\") " pod="openshift-marketplace/community-operators-zzn6z" Dec 06 09:36:06 crc kubenswrapper[4895]: I1206 09:36:06.217680 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fkxcc\" (UniqueName: \"kubernetes.io/projected/3a9d56b4-860e-45db-8dfc-7aa8b1991dca-kube-api-access-fkxcc\") pod \"community-operators-zzn6z\" (UID: \"3a9d56b4-860e-45db-8dfc-7aa8b1991dca\") " pod="openshift-marketplace/community-operators-zzn6z" Dec 06 09:36:06 crc kubenswrapper[4895]: I1206 09:36:06.292355 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zzn6z" Dec 06 09:36:07 crc kubenswrapper[4895]: I1206 09:36:07.022049 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zzn6z"] Dec 06 09:36:07 crc kubenswrapper[4895]: W1206 09:36:07.031056 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a9d56b4_860e_45db_8dfc_7aa8b1991dca.slice/crio-f357880a9d10ef99d63aab19b0af6d536611c7aed1bad85e56a2f20773972c30 WatchSource:0}: Error finding container f357880a9d10ef99d63aab19b0af6d536611c7aed1bad85e56a2f20773972c30: Status 404 returned error can't find the container with id f357880a9d10ef99d63aab19b0af6d536611c7aed1bad85e56a2f20773972c30 Dec 06 09:36:07 crc kubenswrapper[4895]: I1206 09:36:07.593350 4895 generic.go:334] "Generic (PLEG): container finished" podID="3a9d56b4-860e-45db-8dfc-7aa8b1991dca" containerID="de69de928d1a01d919ac2c915a140253234e1ee23af9ba4ca95c764cbf078834" exitCode=0 Dec 06 09:36:07 crc kubenswrapper[4895]: I1206 09:36:07.593720 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzn6z" event={"ID":"3a9d56b4-860e-45db-8dfc-7aa8b1991dca","Type":"ContainerDied","Data":"de69de928d1a01d919ac2c915a140253234e1ee23af9ba4ca95c764cbf078834"} Dec 06 09:36:07 crc kubenswrapper[4895]: I1206 09:36:07.593753 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzn6z" event={"ID":"3a9d56b4-860e-45db-8dfc-7aa8b1991dca","Type":"ContainerStarted","Data":"f357880a9d10ef99d63aab19b0af6d536611c7aed1bad85e56a2f20773972c30"} Dec 06 09:36:08 crc kubenswrapper[4895]: I1206 09:36:08.061982 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992" Dec 06 09:36:08 crc kubenswrapper[4895]: E1206 09:36:08.062662 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:36:08 crc kubenswrapper[4895]: I1206 09:36:08.605206 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzn6z" event={"ID":"3a9d56b4-860e-45db-8dfc-7aa8b1991dca","Type":"ContainerStarted","Data":"98488359f2526543ae24e307ba66623172f3c0104adadfe740d9ea1ee92f5e83"} Dec 06 09:36:09 crc kubenswrapper[4895]: I1206 09:36:09.618301 4895 generic.go:334] "Generic (PLEG): container finished" podID="3a9d56b4-860e-45db-8dfc-7aa8b1991dca" containerID="98488359f2526543ae24e307ba66623172f3c0104adadfe740d9ea1ee92f5e83" exitCode=0 Dec 06 09:36:09 crc kubenswrapper[4895]: I1206 09:36:09.618410 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzn6z" 
event={"ID":"3a9d56b4-860e-45db-8dfc-7aa8b1991dca","Type":"ContainerDied","Data":"98488359f2526543ae24e307ba66623172f3c0104adadfe740d9ea1ee92f5e83"} Dec 06 09:36:10 crc kubenswrapper[4895]: I1206 09:36:10.637226 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzn6z" event={"ID":"3a9d56b4-860e-45db-8dfc-7aa8b1991dca","Type":"ContainerStarted","Data":"86b64fe8b006ee79a1853709ab4b8a28e3406cccd95bcf2910a71211f27e545b"} Dec 06 09:36:10 crc kubenswrapper[4895]: I1206 09:36:10.673689 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zzn6z" podStartSLOduration=3.189098913 podStartE2EDuration="5.673664577s" podCreationTimestamp="2025-12-06 09:36:05 +0000 UTC" firstStartedPulling="2025-12-06 09:36:07.595979343 +0000 UTC m=+9529.997368213" lastFinishedPulling="2025-12-06 09:36:10.080545007 +0000 UTC m=+9532.481933877" observedRunningTime="2025-12-06 09:36:10.658273333 +0000 UTC m=+9533.059662233" watchObservedRunningTime="2025-12-06 09:36:10.673664577 +0000 UTC m=+9533.075053457" Dec 06 09:36:16 crc kubenswrapper[4895]: I1206 09:36:16.293373 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zzn6z" Dec 06 09:36:16 crc kubenswrapper[4895]: I1206 09:36:16.294138 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zzn6z" Dec 06 09:36:16 crc kubenswrapper[4895]: I1206 09:36:16.368651 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zzn6z" Dec 06 09:36:16 crc kubenswrapper[4895]: I1206 09:36:16.769286 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zzn6z" Dec 06 09:36:16 crc kubenswrapper[4895]: I1206 09:36:16.827940 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zzn6z"] Dec 06 09:36:18 crc kubenswrapper[4895]: I1206 09:36:18.744887 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zzn6z" podUID="3a9d56b4-860e-45db-8dfc-7aa8b1991dca" containerName="registry-server" containerID="cri-o://86b64fe8b006ee79a1853709ab4b8a28e3406cccd95bcf2910a71211f27e545b" gracePeriod=2 Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.229784 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zzn6z" Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.269935 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkxcc\" (UniqueName: \"kubernetes.io/projected/3a9d56b4-860e-45db-8dfc-7aa8b1991dca-kube-api-access-fkxcc\") pod \"3a9d56b4-860e-45db-8dfc-7aa8b1991dca\" (UID: \"3a9d56b4-860e-45db-8dfc-7aa8b1991dca\") " Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.269997 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a9d56b4-860e-45db-8dfc-7aa8b1991dca-utilities\") pod \"3a9d56b4-860e-45db-8dfc-7aa8b1991dca\" (UID: \"3a9d56b4-860e-45db-8dfc-7aa8b1991dca\") " Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.270049 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a9d56b4-860e-45db-8dfc-7aa8b1991dca-catalog-content\") pod \"3a9d56b4-860e-45db-8dfc-7aa8b1991dca\" (UID: \"3a9d56b4-860e-45db-8dfc-7aa8b1991dca\") " Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.271753 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a9d56b4-860e-45db-8dfc-7aa8b1991dca-utilities" (OuterVolumeSpecName: "utilities") pod "3a9d56b4-860e-45db-8dfc-7aa8b1991dca" (UID: "3a9d56b4-860e-45db-8dfc-7aa8b1991dca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.277896 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a9d56b4-860e-45db-8dfc-7aa8b1991dca-kube-api-access-fkxcc" (OuterVolumeSpecName: "kube-api-access-fkxcc") pod "3a9d56b4-860e-45db-8dfc-7aa8b1991dca" (UID: "3a9d56b4-860e-45db-8dfc-7aa8b1991dca"). InnerVolumeSpecName "kube-api-access-fkxcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.328364 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a9d56b4-860e-45db-8dfc-7aa8b1991dca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a9d56b4-860e-45db-8dfc-7aa8b1991dca" (UID: "3a9d56b4-860e-45db-8dfc-7aa8b1991dca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.372663 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a9d56b4-860e-45db-8dfc-7aa8b1991dca-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.372699 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkxcc\" (UniqueName: \"kubernetes.io/projected/3a9d56b4-860e-45db-8dfc-7aa8b1991dca-kube-api-access-fkxcc\") on node \"crc\" DevicePath \"\"" Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.372710 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a9d56b4-860e-45db-8dfc-7aa8b1991dca-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.764269 4895 generic.go:334] "Generic (PLEG): container finished" podID="3a9d56b4-860e-45db-8dfc-7aa8b1991dca" containerID="86b64fe8b006ee79a1853709ab4b8a28e3406cccd95bcf2910a71211f27e545b" exitCode=0 Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.764340 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzn6z" event={"ID":"3a9d56b4-860e-45db-8dfc-7aa8b1991dca","Type":"ContainerDied","Data":"86b64fe8b006ee79a1853709ab4b8a28e3406cccd95bcf2910a71211f27e545b"} Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.764777 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzn6z" event={"ID":"3a9d56b4-860e-45db-8dfc-7aa8b1991dca","Type":"ContainerDied","Data":"f357880a9d10ef99d63aab19b0af6d536611c7aed1bad85e56a2f20773972c30"} Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.764366 4895 util.go:48] "No ready sandbox for pod can be found. 
Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.764857 4895 scope.go:117] "RemoveContainer" containerID="86b64fe8b006ee79a1853709ab4b8a28e3406cccd95bcf2910a71211f27e545b"
Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.798745 4895 scope.go:117] "RemoveContainer" containerID="98488359f2526543ae24e307ba66623172f3c0104adadfe740d9ea1ee92f5e83"
Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.819610 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zzn6z"]
Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.829462 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zzn6z"]
Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.833062 4895 scope.go:117] "RemoveContainer" containerID="de69de928d1a01d919ac2c915a140253234e1ee23af9ba4ca95c764cbf078834"
Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.882114 4895 scope.go:117] "RemoveContainer" containerID="86b64fe8b006ee79a1853709ab4b8a28e3406cccd95bcf2910a71211f27e545b"
Dec 06 09:36:19 crc kubenswrapper[4895]: E1206 09:36:19.882894 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86b64fe8b006ee79a1853709ab4b8a28e3406cccd95bcf2910a71211f27e545b\": container with ID starting with 86b64fe8b006ee79a1853709ab4b8a28e3406cccd95bcf2910a71211f27e545b not found: ID does not exist" containerID="86b64fe8b006ee79a1853709ab4b8a28e3406cccd95bcf2910a71211f27e545b"
Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.882974 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86b64fe8b006ee79a1853709ab4b8a28e3406cccd95bcf2910a71211f27e545b"} err="failed to get container status \"86b64fe8b006ee79a1853709ab4b8a28e3406cccd95bcf2910a71211f27e545b\": rpc error: code = NotFound desc = could not find container \"86b64fe8b006ee79a1853709ab4b8a28e3406cccd95bcf2910a71211f27e545b\": container with ID starting with 86b64fe8b006ee79a1853709ab4b8a28e3406cccd95bcf2910a71211f27e545b not found: ID does not exist"
Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.883022 4895 scope.go:117] "RemoveContainer" containerID="98488359f2526543ae24e307ba66623172f3c0104adadfe740d9ea1ee92f5e83"
Dec 06 09:36:19 crc kubenswrapper[4895]: E1206 09:36:19.883462 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98488359f2526543ae24e307ba66623172f3c0104adadfe740d9ea1ee92f5e83\": container with ID starting with 98488359f2526543ae24e307ba66623172f3c0104adadfe740d9ea1ee92f5e83 not found: ID does not exist" containerID="98488359f2526543ae24e307ba66623172f3c0104adadfe740d9ea1ee92f5e83"
Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.883542 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98488359f2526543ae24e307ba66623172f3c0104adadfe740d9ea1ee92f5e83"} err="failed to get container status \"98488359f2526543ae24e307ba66623172f3c0104adadfe740d9ea1ee92f5e83\": rpc error: code = NotFound desc = could not find container \"98488359f2526543ae24e307ba66623172f3c0104adadfe740d9ea1ee92f5e83\": container with ID starting with 98488359f2526543ae24e307ba66623172f3c0104adadfe740d9ea1ee92f5e83 not found: ID does not exist"
Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.883571 4895 scope.go:117] "RemoveContainer" containerID="de69de928d1a01d919ac2c915a140253234e1ee23af9ba4ca95c764cbf078834"
Dec 06 09:36:19 crc kubenswrapper[4895]: E1206 09:36:19.883939 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de69de928d1a01d919ac2c915a140253234e1ee23af9ba4ca95c764cbf078834\": container with ID starting with de69de928d1a01d919ac2c915a140253234e1ee23af9ba4ca95c764cbf078834 not found: ID does not exist" containerID="de69de928d1a01d919ac2c915a140253234e1ee23af9ba4ca95c764cbf078834"
Dec 06 09:36:19 crc kubenswrapper[4895]: I1206 09:36:19.883984 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de69de928d1a01d919ac2c915a140253234e1ee23af9ba4ca95c764cbf078834"} err="failed to get container status \"de69de928d1a01d919ac2c915a140253234e1ee23af9ba4ca95c764cbf078834\": rpc error: code = NotFound desc = could not find container \"de69de928d1a01d919ac2c915a140253234e1ee23af9ba4ca95c764cbf078834\": container with ID starting with de69de928d1a01d919ac2c915a140253234e1ee23af9ba4ca95c764cbf078834 not found: ID does not exist"
Dec 06 09:36:20 crc kubenswrapper[4895]: I1206 09:36:20.064434 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a9d56b4-860e-45db-8dfc-7aa8b1991dca" path="/var/lib/kubelet/pods/3a9d56b4-860e-45db-8dfc-7aa8b1991dca/volumes"
Dec 06 09:36:21 crc kubenswrapper[4895]: I1206 09:36:21.050235 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992"
Dec 06 09:36:21 crc kubenswrapper[4895]: E1206 09:36:21.050582 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:36:34 crc kubenswrapper[4895]: I1206 09:36:34.052312 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992"
Dec 06 09:36:34 crc kubenswrapper[4895]: E1206 09:36:34.053903 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:36:47 crc kubenswrapper[4895]: I1206 09:36:47.051052 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992"
Dec 06 09:36:47 crc kubenswrapper[4895]: E1206 09:36:47.052000 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:36:48 crc kubenswrapper[4895]: I1206 09:36:48.097534 4895 generic.go:334] "Generic (PLEG): container finished" podID="b26c3808-8042-49ae-a734-689ec87ec5ed" containerID="d93004fe03106e66d55661e8b0747abb5256ce8d16a291b71c9cb08dc710d8fd" exitCode=0
podID="b26c3808-8042-49ae-a734-689ec87ec5ed" containerID="d93004fe03106e66d55661e8b0747abb5256ce8d16a291b71c9cb08dc710d8fd" exitCode=0 Dec 06 09:36:48 crc kubenswrapper[4895]: I1206 09:36:48.097599 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" event={"ID":"b26c3808-8042-49ae-a734-689ec87ec5ed","Type":"ContainerDied","Data":"d93004fe03106e66d55661e8b0747abb5256ce8d16a291b71c9cb08dc710d8fd"} Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.597814 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.697298 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-ssh-key\") pod \"b26c3808-8042-49ae-a734-689ec87ec5ed\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.697345 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sml2\" (UniqueName: \"kubernetes.io/projected/b26c3808-8042-49ae-a734-689ec87ec5ed-kube-api-access-9sml2\") pod \"b26c3808-8042-49ae-a734-689ec87ec5ed\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.697370 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-inventory\") pod \"b26c3808-8042-49ae-a734-689ec87ec5ed\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.697519 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-neutron-metadata-combined-ca-bundle\") pod \"b26c3808-8042-49ae-a734-689ec87ec5ed\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.697557 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-ceph\") pod \"b26c3808-8042-49ae-a734-689ec87ec5ed\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.697628 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-nova-metadata-neutron-config-0\") pod \"b26c3808-8042-49ae-a734-689ec87ec5ed\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.697750 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-neutron-ovn-metadata-agent-neutron-config-0\") pod \"b26c3808-8042-49ae-a734-689ec87ec5ed\" (UID: \"b26c3808-8042-49ae-a734-689ec87ec5ed\") " Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.702963 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-ceph" (OuterVolumeSpecName: "ceph") pod "b26c3808-8042-49ae-a734-689ec87ec5ed" (UID: 
"b26c3808-8042-49ae-a734-689ec87ec5ed"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.705787 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "b26c3808-8042-49ae-a734-689ec87ec5ed" (UID: "b26c3808-8042-49ae-a734-689ec87ec5ed"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.706866 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b26c3808-8042-49ae-a734-689ec87ec5ed-kube-api-access-9sml2" (OuterVolumeSpecName: "kube-api-access-9sml2") pod "b26c3808-8042-49ae-a734-689ec87ec5ed" (UID: "b26c3808-8042-49ae-a734-689ec87ec5ed"). InnerVolumeSpecName "kube-api-access-9sml2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.725276 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "b26c3808-8042-49ae-a734-689ec87ec5ed" (UID: "b26c3808-8042-49ae-a734-689ec87ec5ed"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.729525 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "b26c3808-8042-49ae-a734-689ec87ec5ed" (UID: "b26c3808-8042-49ae-a734-689ec87ec5ed"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.730693 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b26c3808-8042-49ae-a734-689ec87ec5ed" (UID: "b26c3808-8042-49ae-a734-689ec87ec5ed"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.746530 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-inventory" (OuterVolumeSpecName: "inventory") pod "b26c3808-8042-49ae-a734-689ec87ec5ed" (UID: "b26c3808-8042-49ae-a734-689ec87ec5ed"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.805274 4895 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.805561 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.805666 4895 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.805760 4895 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.805839 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.805915 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sml2\" (UniqueName: \"kubernetes.io/projected/b26c3808-8042-49ae-a734-689ec87ec5ed-kube-api-access-9sml2\") on node \"crc\" DevicePath \"\"" Dec 06 09:36:49 crc kubenswrapper[4895]: I1206 09:36:49.805999 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b26c3808-8042-49ae-a734-689ec87ec5ed-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.121832 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" event={"ID":"b26c3808-8042-49ae-a734-689ec87ec5ed","Type":"ContainerDied","Data":"cdbd2ed78e548b96e55225981d849d1471161782347b83d19d6e25eb5462d50a"} Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.121879 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdbd2ed78e548b96e55225981d849d1471161782347b83d19d6e25eb5462d50a" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.121906 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-8vldk" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.260398 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-m2m7q"] Dec 06 09:36:50 crc kubenswrapper[4895]: E1206 09:36:50.260938 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a9d56b4-860e-45db-8dfc-7aa8b1991dca" containerName="extract-utilities" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.260955 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a9d56b4-860e-45db-8dfc-7aa8b1991dca" containerName="extract-utilities" Dec 06 09:36:50 crc kubenswrapper[4895]: E1206 09:36:50.260977 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a9d56b4-860e-45db-8dfc-7aa8b1991dca" containerName="registry-server" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.260983 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a9d56b4-860e-45db-8dfc-7aa8b1991dca" containerName="registry-server" Dec 06 09:36:50 crc kubenswrapper[4895]: E1206 09:36:50.261003 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26c3808-8042-49ae-a734-689ec87ec5ed" containerName="neutron-metadata-openstack-openstack-cell1" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.261010 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26c3808-8042-49ae-a734-689ec87ec5ed" containerName="neutron-metadata-openstack-openstack-cell1" Dec 06 09:36:50 crc kubenswrapper[4895]: E1206 09:36:50.261020 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a9d56b4-860e-45db-8dfc-7aa8b1991dca" containerName="extract-content" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.261026 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a9d56b4-860e-45db-8dfc-7aa8b1991dca" containerName="extract-content" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.261237 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b26c3808-8042-49ae-a734-689ec87ec5ed" containerName="neutron-metadata-openstack-openstack-cell1" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.261280 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a9d56b4-860e-45db-8dfc-7aa8b1991dca" containerName="registry-server" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.262310 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.265728 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.265909 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.266129 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.266185 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-wfk68" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.266357 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.270067 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-m2m7q"] Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.316305 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-inventory\") pod \"libvirt-openstack-openstack-cell1-m2m7q\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.316451 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-ssh-key\") pod \"libvirt-openstack-openstack-cell1-m2m7q\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.316541 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6g9h\" (UniqueName: \"kubernetes.io/projected/64076080-572e-4d67-af02-2cdeb8113b9f-kube-api-access-v6g9h\") pod \"libvirt-openstack-openstack-cell1-m2m7q\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.316588 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-m2m7q\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.316741 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-ceph\") pod \"libvirt-openstack-openstack-cell1-m2m7q\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.316775 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-libvirt-secret-0\") pod 
\"libvirt-openstack-openstack-cell1-m2m7q\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.418716 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-ceph\") pod \"libvirt-openstack-openstack-cell1-m2m7q\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.418772 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-m2m7q\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.418869 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-inventory\") pod \"libvirt-openstack-openstack-cell1-m2m7q\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.418957 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-ssh-key\") pod \"libvirt-openstack-openstack-cell1-m2m7q\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.418987 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6g9h\" (UniqueName: \"kubernetes.io/projected/64076080-572e-4d67-af02-2cdeb8113b9f-kube-api-access-v6g9h\") pod \"libvirt-openstack-openstack-cell1-m2m7q\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.419026 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-m2m7q\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.593556 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-inventory\") pod \"libvirt-openstack-openstack-cell1-m2m7q\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.593781 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-m2m7q\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.594312 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-m2m7q\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.594362 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-ssh-key\") pod \"libvirt-openstack-openstack-cell1-m2m7q\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.594394 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-ceph\") pod \"libvirt-openstack-openstack-cell1-m2m7q\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.594489 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6g9h\" (UniqueName: \"kubernetes.io/projected/64076080-572e-4d67-af02-2cdeb8113b9f-kube-api-access-v6g9h\") pod \"libvirt-openstack-openstack-cell1-m2m7q\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:36:50 crc kubenswrapper[4895]: I1206 09:36:50.893582 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:36:51 crc kubenswrapper[4895]: I1206 09:36:51.458351 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-m2m7q"] Dec 06 09:36:52 crc kubenswrapper[4895]: I1206 09:36:52.180659 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" event={"ID":"64076080-572e-4d67-af02-2cdeb8113b9f","Type":"ContainerStarted","Data":"012705675eabbb5dd4fae3a2366b6e00fef872add783d2f570073f5799497168"} Dec 06 09:36:53 crc kubenswrapper[4895]: I1206 09:36:53.193881 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" event={"ID":"64076080-572e-4d67-af02-2cdeb8113b9f","Type":"ContainerStarted","Data":"40e0286a9244ed105106a1c1bcba10023f7a97bd156f3b47f0f283c9613d2698"} Dec 06 09:36:53 crc kubenswrapper[4895]: I1206 09:36:53.218063 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" podStartSLOduration=2.724592309 podStartE2EDuration="3.218035065s" podCreationTimestamp="2025-12-06 09:36:50 +0000 UTC" firstStartedPulling="2025-12-06 09:36:51.475356538 +0000 UTC m=+9573.876745398" lastFinishedPulling="2025-12-06 09:36:51.968799284 +0000 UTC m=+9574.370188154" observedRunningTime="2025-12-06 09:36:53.21300867 +0000 UTC m=+9575.614397540" watchObservedRunningTime="2025-12-06 09:36:53.218035065 +0000 UTC m=+9575.619423935" Dec 06 09:37:02 crc kubenswrapper[4895]: I1206 09:37:02.050923 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992" Dec 06 09:37:02 crc kubenswrapper[4895]: E1206 09:37:02.051730 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Dec 06 09:37:13 crc kubenswrapper[4895]: I1206 09:37:13.051030 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992"
Dec 06 09:37:13 crc kubenswrapper[4895]: E1206 09:37:13.051804 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:37:25 crc kubenswrapper[4895]: I1206 09:37:25.052658 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992"
Dec 06 09:37:25 crc kubenswrapper[4895]: E1206 09:37:25.053378 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:37:36 crc kubenswrapper[4895]: I1206 09:37:36.051082 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992"
Dec 06 09:37:36 crc kubenswrapper[4895]: E1206 09:37:36.051972 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:37:48 crc kubenswrapper[4895]: I1206 09:37:48.061172 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992"
Dec 06 09:37:48 crc kubenswrapper[4895]: E1206 09:37:48.062219 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:37:58 crc kubenswrapper[4895]: I1206 09:37:58.338612 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5f266"]
Dec 06 09:37:58 crc kubenswrapper[4895]: I1206 09:37:58.341903 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5f266"
Dec 06 09:37:58 crc kubenswrapper[4895]: I1206 09:37:58.348571 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5f266"]
Dec 06 09:37:58 crc kubenswrapper[4895]: I1206 09:37:58.448355 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9ddh\" (UniqueName: \"kubernetes.io/projected/29f9a266-c01c-41fe-b774-563224ea457f-kube-api-access-j9ddh\") pod \"certified-operators-5f266\" (UID: \"29f9a266-c01c-41fe-b774-563224ea457f\") " pod="openshift-marketplace/certified-operators-5f266"
Dec 06 09:37:58 crc kubenswrapper[4895]: I1206 09:37:58.448509 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29f9a266-c01c-41fe-b774-563224ea457f-utilities\") pod \"certified-operators-5f266\" (UID: \"29f9a266-c01c-41fe-b774-563224ea457f\") " pod="openshift-marketplace/certified-operators-5f266"
Dec 06 09:37:58 crc kubenswrapper[4895]: I1206 09:37:58.448607 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29f9a266-c01c-41fe-b774-563224ea457f-catalog-content\") pod \"certified-operators-5f266\" (UID: \"29f9a266-c01c-41fe-b774-563224ea457f\") " pod="openshift-marketplace/certified-operators-5f266"
Dec 06 09:37:58 crc kubenswrapper[4895]: I1206 09:37:58.551108 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29f9a266-c01c-41fe-b774-563224ea457f-utilities\") pod \"certified-operators-5f266\" (UID: \"29f9a266-c01c-41fe-b774-563224ea457f\") " pod="openshift-marketplace/certified-operators-5f266"
Dec 06 09:37:58 crc kubenswrapper[4895]: I1206 09:37:58.551606 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29f9a266-c01c-41fe-b774-563224ea457f-utilities\") pod \"certified-operators-5f266\" (UID: \"29f9a266-c01c-41fe-b774-563224ea457f\") " pod="openshift-marketplace/certified-operators-5f266"
Dec 06 09:37:58 crc kubenswrapper[4895]: I1206 09:37:58.551690 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29f9a266-c01c-41fe-b774-563224ea457f-catalog-content\") pod \"certified-operators-5f266\" (UID: \"29f9a266-c01c-41fe-b774-563224ea457f\") " pod="openshift-marketplace/certified-operators-5f266"
Dec 06 09:37:58 crc kubenswrapper[4895]: I1206 09:37:58.551747 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9ddh\" (UniqueName: \"kubernetes.io/projected/29f9a266-c01c-41fe-b774-563224ea457f-kube-api-access-j9ddh\") pod \"certified-operators-5f266\" (UID: \"29f9a266-c01c-41fe-b774-563224ea457f\") " pod="openshift-marketplace/certified-operators-5f266"
Dec 06 09:37:58 crc kubenswrapper[4895]: I1206 09:37:58.552642 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29f9a266-c01c-41fe-b774-563224ea457f-catalog-content\") pod \"certified-operators-5f266\" (UID: \"29f9a266-c01c-41fe-b774-563224ea457f\") " pod="openshift-marketplace/certified-operators-5f266"
Dec 06 09:37:58 crc kubenswrapper[4895]: I1206 09:37:58.577751 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9ddh\" (UniqueName: \"kubernetes.io/projected/29f9a266-c01c-41fe-b774-563224ea457f-kube-api-access-j9ddh\") pod \"certified-operators-5f266\" (UID: \"29f9a266-c01c-41fe-b774-563224ea457f\") " pod="openshift-marketplace/certified-operators-5f266"
Dec 06 09:37:58 crc kubenswrapper[4895]: I1206 09:37:58.685095 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5f266"
Dec 06 09:37:59 crc kubenswrapper[4895]: I1206 09:37:59.050290 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992"
Dec 06 09:37:59 crc kubenswrapper[4895]: E1206 09:37:59.050891 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:37:59 crc kubenswrapper[4895]: I1206 09:37:59.183595 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5f266"]
Dec 06 09:37:59 crc kubenswrapper[4895]: I1206 09:37:59.928547 4895 generic.go:334] "Generic (PLEG): container finished" podID="29f9a266-c01c-41fe-b774-563224ea457f" containerID="31f01a3dd63e7dca5719dece3adab4aa9d1f21c1632652f7476cdc5f00ee1f88" exitCode=0
Dec 06 09:37:59 crc kubenswrapper[4895]: I1206 09:37:59.928634 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5f266" event={"ID":"29f9a266-c01c-41fe-b774-563224ea457f","Type":"ContainerDied","Data":"31f01a3dd63e7dca5719dece3adab4aa9d1f21c1632652f7476cdc5f00ee1f88"}
Dec 06 09:37:59 crc kubenswrapper[4895]: I1206 09:37:59.928923 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5f266" event={"ID":"29f9a266-c01c-41fe-b774-563224ea457f","Type":"ContainerStarted","Data":"8c9f9d224783b6b7f608b0542d1cf2f413670ab98f0a38c6bf5e980600c7b0f6"}
Dec 06 09:38:00 crc kubenswrapper[4895]: I1206 09:38:00.943379 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5f266" event={"ID":"29f9a266-c01c-41fe-b774-563224ea457f","Type":"ContainerStarted","Data":"0e0fcc3c15a3a74bc143c2927b1089dd5318fd649f329dd0c8a81076d19fffad"}
Dec 06 09:38:01 crc kubenswrapper[4895]: I1206 09:38:01.956434 4895 generic.go:334] "Generic (PLEG): container finished" podID="29f9a266-c01c-41fe-b774-563224ea457f" containerID="0e0fcc3c15a3a74bc143c2927b1089dd5318fd649f329dd0c8a81076d19fffad" exitCode=0
Dec 06 09:38:01 crc kubenswrapper[4895]: I1206 09:38:01.956509 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5f266" event={"ID":"29f9a266-c01c-41fe-b774-563224ea457f","Type":"ContainerDied","Data":"0e0fcc3c15a3a74bc143c2927b1089dd5318fd649f329dd0c8a81076d19fffad"}
Dec 06 09:38:02 crc kubenswrapper[4895]: I1206 09:38:02.969539 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5f266" event={"ID":"29f9a266-c01c-41fe-b774-563224ea457f","Type":"ContainerStarted","Data":"c5b00b6592d2e03d3b287a01bb4bbed3e43b0cde7b52b3be07378eb08c97c686"}
Dec 06 09:38:02 crc kubenswrapper[4895]: I1206 09:38:02.998698 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5f266" podStartSLOduration=2.345094282 podStartE2EDuration="4.998677674s" podCreationTimestamp="2025-12-06 09:37:58 +0000 UTC" firstStartedPulling="2025-12-06 09:37:59.930280634 +0000 UTC m=+9642.331669504" lastFinishedPulling="2025-12-06 09:38:02.583864036 +0000 UTC m=+9644.985252896" observedRunningTime="2025-12-06 09:38:02.991015987 +0000 UTC m=+9645.392404867" watchObservedRunningTime="2025-12-06 09:38:02.998677674 +0000 UTC m=+9645.400066544"
Dec 06 09:38:08 crc kubenswrapper[4895]: I1206 09:38:08.685868 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5f266"
Dec 06 09:38:08 crc kubenswrapper[4895]: I1206 09:38:08.687188 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5f266"
Dec 06 09:38:08 crc kubenswrapper[4895]: I1206 09:38:08.745136 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5f266"
Dec 06 09:38:09 crc kubenswrapper[4895]: I1206 09:38:09.101925 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5f266"
Dec 06 09:38:09 crc kubenswrapper[4895]: I1206 09:38:09.148889 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5f266"]
Dec 06 09:38:10 crc kubenswrapper[4895]: I1206 09:38:10.051018 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992"
Dec 06 09:38:10 crc kubenswrapper[4895]: E1206 09:38:10.051331 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:38:11 crc kubenswrapper[4895]: I1206 09:38:11.044162 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5f266" podUID="29f9a266-c01c-41fe-b774-563224ea457f" containerName="registry-server" containerID="cri-o://c5b00b6592d2e03d3b287a01bb4bbed3e43b0cde7b52b3be07378eb08c97c686" gracePeriod=2
Dec 06 09:38:11 crc kubenswrapper[4895]: E1206 09:38:11.155229 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29f9a266_c01c_41fe_b774_563224ea457f.slice/crio-c5b00b6592d2e03d3b287a01bb4bbed3e43b0cde7b52b3be07378eb08c97c686.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29f9a266_c01c_41fe_b774_563224ea457f.slice/crio-conmon-c5b00b6592d2e03d3b287a01bb4bbed3e43b0cde7b52b3be07378eb08c97c686.scope\": RecentStats: unable to find data in memory cache]"
Dec 06 09:38:12 crc kubenswrapper[4895]: I1206 09:38:12.073597 4895 generic.go:334] "Generic (PLEG): container finished" podID="29f9a266-c01c-41fe-b774-563224ea457f" containerID="c5b00b6592d2e03d3b287a01bb4bbed3e43b0cde7b52b3be07378eb08c97c686" exitCode=0
Dec 06 09:38:12 crc kubenswrapper[4895]: I1206 09:38:12.073661 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5f266" event={"ID":"29f9a266-c01c-41fe-b774-563224ea457f","Type":"ContainerDied","Data":"c5b00b6592d2e03d3b287a01bb4bbed3e43b0cde7b52b3be07378eb08c97c686"}
Dec 06 09:38:12 crc kubenswrapper[4895]: I1206 09:38:12.073992 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5f266" event={"ID":"29f9a266-c01c-41fe-b774-563224ea457f","Type":"ContainerDied","Data":"8c9f9d224783b6b7f608b0542d1cf2f413670ab98f0a38c6bf5e980600c7b0f6"}
Dec 06 09:38:12 crc kubenswrapper[4895]: I1206 09:38:12.074033 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c9f9d224783b6b7f608b0542d1cf2f413670ab98f0a38c6bf5e980600c7b0f6"
Dec 06 09:38:12 crc kubenswrapper[4895]: I1206 09:38:12.123886 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5f266"
Dec 06 09:38:12 crc kubenswrapper[4895]: I1206 09:38:12.257006 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29f9a266-c01c-41fe-b774-563224ea457f-utilities\") pod \"29f9a266-c01c-41fe-b774-563224ea457f\" (UID: \"29f9a266-c01c-41fe-b774-563224ea457f\") "
Dec 06 09:38:12 crc kubenswrapper[4895]: I1206 09:38:12.257401 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9ddh\" (UniqueName: \"kubernetes.io/projected/29f9a266-c01c-41fe-b774-563224ea457f-kube-api-access-j9ddh\") pod \"29f9a266-c01c-41fe-b774-563224ea457f\" (UID: \"29f9a266-c01c-41fe-b774-563224ea457f\") "
Dec 06 09:38:12 crc kubenswrapper[4895]: I1206 09:38:12.257487 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29f9a266-c01c-41fe-b774-563224ea457f-catalog-content\") pod \"29f9a266-c01c-41fe-b774-563224ea457f\" (UID: \"29f9a266-c01c-41fe-b774-563224ea457f\") "
Dec 06 09:38:12 crc kubenswrapper[4895]: I1206 09:38:12.257983 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29f9a266-c01c-41fe-b774-563224ea457f-utilities" (OuterVolumeSpecName: "utilities") pod "29f9a266-c01c-41fe-b774-563224ea457f" (UID: "29f9a266-c01c-41fe-b774-563224ea457f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 09:38:12 crc kubenswrapper[4895]: I1206 09:38:12.262051 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29f9a266-c01c-41fe-b774-563224ea457f-kube-api-access-j9ddh" (OuterVolumeSpecName: "kube-api-access-j9ddh") pod "29f9a266-c01c-41fe-b774-563224ea457f" (UID: "29f9a266-c01c-41fe-b774-563224ea457f"). InnerVolumeSpecName "kube-api-access-j9ddh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:38:12 crc kubenswrapper[4895]: I1206 09:38:12.306276 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29f9a266-c01c-41fe-b774-563224ea457f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29f9a266-c01c-41fe-b774-563224ea457f" (UID: "29f9a266-c01c-41fe-b774-563224ea457f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 09:38:12 crc kubenswrapper[4895]: I1206 09:38:12.360273 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29f9a266-c01c-41fe-b774-563224ea457f-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 09:38:12 crc kubenswrapper[4895]: I1206 09:38:12.360324 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9ddh\" (UniqueName: \"kubernetes.io/projected/29f9a266-c01c-41fe-b774-563224ea457f-kube-api-access-j9ddh\") on node \"crc\" DevicePath \"\""
Dec 06 09:38:12 crc kubenswrapper[4895]: I1206 09:38:12.360335 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29f9a266-c01c-41fe-b774-563224ea457f-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 09:38:13 crc kubenswrapper[4895]: I1206 09:38:13.086368 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5f266"
Dec 06 09:38:13 crc kubenswrapper[4895]: I1206 09:38:13.125163 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5f266"]
Dec 06 09:38:13 crc kubenswrapper[4895]: I1206 09:38:13.134986 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5f266"]
Dec 06 09:38:14 crc kubenswrapper[4895]: I1206 09:38:14.064789 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29f9a266-c01c-41fe-b774-563224ea457f" path="/var/lib/kubelet/pods/29f9a266-c01c-41fe-b774-563224ea457f/volumes"
Dec 06 09:38:23 crc kubenswrapper[4895]: I1206 09:38:23.050574 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992"
Dec 06 09:38:23 crc kubenswrapper[4895]: E1206 09:38:23.051227 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:38:38 crc kubenswrapper[4895]: I1206 09:38:38.065270 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992"
Dec 06 09:38:38 crc kubenswrapper[4895]: E1206 09:38:38.066359 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:38:50 crc kubenswrapper[4895]: I1206 09:38:50.051335 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992"
Dec 06 09:38:50 crc kubenswrapper[4895]: E1206 09:38:50.052202 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:38:50 crc kubenswrapper[4895]: I1206 09:38:50.051335 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992" Dec 06 09:38:50 crc kubenswrapper[4895]: E1206 09:38:50.052202 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:39:01 crc kubenswrapper[4895]: I1206 09:39:01.051286 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992" Dec 06 09:39:01 crc kubenswrapper[4895]: E1206 09:39:01.052021 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:39:14 crc kubenswrapper[4895]: I1206 09:39:14.051686 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992" Dec 06 09:39:14 crc kubenswrapper[4895]: E1206 09:39:14.053096 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:39:29 crc kubenswrapper[4895]: I1206 09:39:29.051548 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992" Dec 06 09:39:29 crc kubenswrapper[4895]: E1206 09:39:29.052543 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:39:43 crc kubenswrapper[4895]: I1206 09:39:43.051288 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992" Dec 06 09:39:44 crc kubenswrapper[4895]: I1206 09:39:44.134171 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"f2cdac50a6b93807e360cc9e6793f3ec3862d9912b78e2d09e34acdc1f9bc06d"} Dec 06 09:40:27 crc kubenswrapper[4895]: I1206 09:40:27.806851 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-45sxb"] Dec 06 09:40:27 crc kubenswrapper[4895]: E1206 09:40:27.814992 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f9a266-c01c-41fe-b774-563224ea457f" containerName="extract-utilities" Dec 06 09:40:27 crc kubenswrapper[4895]: I1206 09:40:27.815034 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f9a266-c01c-41fe-b774-563224ea457f" containerName="extract-utilities" Dec 06 09:40:27 crc kubenswrapper[4895]: E1206 09:40:27.815050 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f9a266-c01c-41fe-b774-563224ea457f" containerName="registry-server" Dec 06 09:40:27 crc 
kubenswrapper[4895]: I1206 09:40:27.815061 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f9a266-c01c-41fe-b774-563224ea457f" containerName="registry-server" Dec 06 09:40:27 crc kubenswrapper[4895]: E1206 09:40:27.815091 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f9a266-c01c-41fe-b774-563224ea457f" containerName="extract-content" Dec 06 09:40:27 crc kubenswrapper[4895]: I1206 09:40:27.815100 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f9a266-c01c-41fe-b774-563224ea457f" containerName="extract-content" Dec 06 09:40:27 crc kubenswrapper[4895]: I1206 09:40:27.815377 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="29f9a266-c01c-41fe-b774-563224ea457f" containerName="registry-server" Dec 06 09:40:27 crc kubenswrapper[4895]: I1206 09:40:27.817529 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-45sxb"] Dec 06 09:40:27 crc kubenswrapper[4895]: I1206 09:40:27.817624 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-45sxb" Dec 06 09:40:27 crc kubenswrapper[4895]: I1206 09:40:27.951234 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f-catalog-content\") pod \"redhat-operators-45sxb\" (UID: \"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f\") " pod="openshift-marketplace/redhat-operators-45sxb" Dec 06 09:40:27 crc kubenswrapper[4895]: I1206 09:40:27.951434 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f-utilities\") pod \"redhat-operators-45sxb\" (UID: \"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f\") " pod="openshift-marketplace/redhat-operators-45sxb" Dec 06 09:40:27 crc kubenswrapper[4895]: I1206 09:40:27.951494 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h65sh\" (UniqueName: \"kubernetes.io/projected/61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f-kube-api-access-h65sh\") pod \"redhat-operators-45sxb\" (UID: \"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f\") " pod="openshift-marketplace/redhat-operators-45sxb" Dec 06 09:40:28 crc kubenswrapper[4895]: I1206 09:40:28.053727 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f-catalog-content\") pod \"redhat-operators-45sxb\" (UID: \"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f\") " pod="openshift-marketplace/redhat-operators-45sxb" Dec 06 09:40:28 crc kubenswrapper[4895]: I1206 09:40:28.053838 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f-utilities\") pod \"redhat-operators-45sxb\" (UID: \"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f\") " pod="openshift-marketplace/redhat-operators-45sxb" Dec 06 09:40:28 crc kubenswrapper[4895]: I1206 09:40:28.053861 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h65sh\" (UniqueName: \"kubernetes.io/projected/61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f-kube-api-access-h65sh\") pod \"redhat-operators-45sxb\" (UID: \"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f\") " pod="openshift-marketplace/redhat-operators-45sxb" Dec 06 09:40:28 crc 
kubenswrapper[4895]: I1206 09:40:28.058222 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f-catalog-content\") pod \"redhat-operators-45sxb\" (UID: \"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f\") " pod="openshift-marketplace/redhat-operators-45sxb" Dec 06 09:40:28 crc kubenswrapper[4895]: I1206 09:40:28.060827 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f-utilities\") pod \"redhat-operators-45sxb\" (UID: \"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f\") " pod="openshift-marketplace/redhat-operators-45sxb" Dec 06 09:40:28 crc kubenswrapper[4895]: I1206 09:40:28.112425 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h65sh\" (UniqueName: \"kubernetes.io/projected/61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f-kube-api-access-h65sh\") pod \"redhat-operators-45sxb\" (UID: \"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f\") " pod="openshift-marketplace/redhat-operators-45sxb" Dec 06 09:40:28 crc kubenswrapper[4895]: I1206 09:40:28.143899 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-45sxb" Dec 06 09:40:28 crc kubenswrapper[4895]: I1206 09:40:28.730697 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-45sxb"] Dec 06 09:40:29 crc kubenswrapper[4895]: I1206 09:40:29.654800 4895 generic.go:334] "Generic (PLEG): container finished" podID="61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f" containerID="b0cd4e3f5e1197f7e66cb50b8d0e6170d7b15a6c9787f4cc4d6ef0c3d23e7aa6" exitCode=0 Dec 06 09:40:29 crc kubenswrapper[4895]: I1206 09:40:29.654918 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45sxb" event={"ID":"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f","Type":"ContainerDied","Data":"b0cd4e3f5e1197f7e66cb50b8d0e6170d7b15a6c9787f4cc4d6ef0c3d23e7aa6"} Dec 06 09:40:29 crc kubenswrapper[4895]: I1206 09:40:29.655144 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45sxb" event={"ID":"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f","Type":"ContainerStarted","Data":"f3cb5478529e7ec9f72ef00b53610a26d6b06d13356bed626ed47423cbcd1538"} Dec 06 09:40:30 crc kubenswrapper[4895]: I1206 09:40:30.668599 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45sxb" event={"ID":"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f","Type":"ContainerStarted","Data":"4470f9735da4a22148f433822e7594b95c1abdde8db1409f0eeb09717f754a20"} Dec 06 09:40:33 crc kubenswrapper[4895]: I1206 09:40:33.713014 4895 generic.go:334] "Generic (PLEG): container finished" podID="61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f" containerID="4470f9735da4a22148f433822e7594b95c1abdde8db1409f0eeb09717f754a20" exitCode=0 Dec 06 09:40:33 crc kubenswrapper[4895]: I1206 09:40:33.713111 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45sxb" event={"ID":"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f","Type":"ContainerDied","Data":"4470f9735da4a22148f433822e7594b95c1abdde8db1409f0eeb09717f754a20"} Dec 06 09:40:34 crc kubenswrapper[4895]: I1206 09:40:34.727969 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45sxb" 
event={"ID":"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f","Type":"ContainerStarted","Data":"dd29586613494f763c555cf2205ab15e6c63038d186dca0db6bc8c15c9ea74cb"} Dec 06 09:40:34 crc kubenswrapper[4895]: I1206 09:40:34.752884 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-45sxb" podStartSLOduration=2.991483928 podStartE2EDuration="7.752862714s" podCreationTimestamp="2025-12-06 09:40:27 +0000 UTC" firstStartedPulling="2025-12-06 09:40:29.658646269 +0000 UTC m=+9792.060035139" lastFinishedPulling="2025-12-06 09:40:34.420025055 +0000 UTC m=+9796.821413925" observedRunningTime="2025-12-06 09:40:34.750739387 +0000 UTC m=+9797.152128347" watchObservedRunningTime="2025-12-06 09:40:34.752862714 +0000 UTC m=+9797.154251584" Dec 06 09:40:38 crc kubenswrapper[4895]: I1206 09:40:38.144557 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-45sxb" Dec 06 09:40:38 crc kubenswrapper[4895]: I1206 09:40:38.145819 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-45sxb" Dec 06 09:40:39 crc kubenswrapper[4895]: I1206 09:40:39.190825 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-45sxb" podUID="61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f" containerName="registry-server" probeResult="failure" output=< Dec 06 09:40:39 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Dec 06 09:40:39 crc kubenswrapper[4895]: > Dec 06 09:40:48 crc kubenswrapper[4895]: I1206 09:40:48.191078 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-45sxb" Dec 06 09:40:48 crc kubenswrapper[4895]: I1206 09:40:48.244873 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-45sxb" Dec 06 09:40:48 crc kubenswrapper[4895]: I1206 09:40:48.425504 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-45sxb"] Dec 06 09:40:49 crc kubenswrapper[4895]: I1206 09:40:49.914447 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-45sxb" podUID="61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f" containerName="registry-server" containerID="cri-o://dd29586613494f763c555cf2205ab15e6c63038d186dca0db6bc8c15c9ea74cb" gracePeriod=2 Dec 06 09:40:50 crc kubenswrapper[4895]: I1206 09:40:50.405206 4895 util.go:48] "No ready sandbox for pod can be found. 
Dec 06 09:40:50 crc kubenswrapper[4895]: I1206 09:40:50.405206 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-45sxb" Dec 06 09:40:50 crc kubenswrapper[4895]: I1206 09:40:50.578586 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h65sh\" (UniqueName: \"kubernetes.io/projected/61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f-kube-api-access-h65sh\") pod \"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f\" (UID: \"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f\") " Dec 06 09:40:50 crc kubenswrapper[4895]: I1206 09:40:50.578686 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f-utilities\") pod \"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f\" (UID: \"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f\") " Dec 06 09:40:50 crc kubenswrapper[4895]: I1206 09:40:50.578828 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f-catalog-content\") pod \"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f\" (UID: \"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f\") " Dec 06 09:40:50 crc kubenswrapper[4895]: I1206 09:40:50.579931 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f-utilities" (OuterVolumeSpecName: "utilities") pod "61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f" (UID: "61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:40:50 crc kubenswrapper[4895]: I1206 09:40:50.588686 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f-kube-api-access-h65sh" (OuterVolumeSpecName: "kube-api-access-h65sh") pod "61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f" (UID: "61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f"). InnerVolumeSpecName "kube-api-access-h65sh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:40:50 crc kubenswrapper[4895]: I1206 09:40:50.686181 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h65sh\" (UniqueName: \"kubernetes.io/projected/61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f-kube-api-access-h65sh\") on node \"crc\" DevicePath \"\"" Dec 06 09:40:50 crc kubenswrapper[4895]: I1206 09:40:50.686231 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 09:40:50 crc kubenswrapper[4895]: I1206 09:40:50.712070 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f" (UID: "61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:40:50 crc kubenswrapper[4895]: I1206 09:40:50.787826 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 09:40:50 crc kubenswrapper[4895]: I1206 09:40:50.928548 4895 generic.go:334] "Generic (PLEG): container finished" podID="61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f" containerID="dd29586613494f763c555cf2205ab15e6c63038d186dca0db6bc8c15c9ea74cb" exitCode=0 Dec 06 09:40:50 crc kubenswrapper[4895]: I1206 09:40:50.928627 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-45sxb" Dec 06 09:40:50 crc kubenswrapper[4895]: I1206 09:40:50.928654 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45sxb" event={"ID":"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f","Type":"ContainerDied","Data":"dd29586613494f763c555cf2205ab15e6c63038d186dca0db6bc8c15c9ea74cb"} Dec 06 09:40:50 crc kubenswrapper[4895]: I1206 09:40:50.929721 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45sxb" event={"ID":"61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f","Type":"ContainerDied","Data":"f3cb5478529e7ec9f72ef00b53610a26d6b06d13356bed626ed47423cbcd1538"} Dec 06 09:40:50 crc kubenswrapper[4895]: I1206 09:40:50.929748 4895 scope.go:117] "RemoveContainer" containerID="dd29586613494f763c555cf2205ab15e6c63038d186dca0db6bc8c15c9ea74cb" Dec 06 09:40:50 crc kubenswrapper[4895]: I1206 09:40:50.970058 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-45sxb"] Dec 06 09:40:50 crc kubenswrapper[4895]: I1206 09:40:50.976814 4895 scope.go:117] "RemoveContainer" containerID="4470f9735da4a22148f433822e7594b95c1abdde8db1409f0eeb09717f754a20" Dec 06 09:40:50 crc kubenswrapper[4895]: I1206 09:40:50.982021 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-45sxb"] Dec 06 09:40:51 crc kubenswrapper[4895]: I1206 09:40:51.011237 4895 scope.go:117] "RemoveContainer" containerID="b0cd4e3f5e1197f7e66cb50b8d0e6170d7b15a6c9787f4cc4d6ef0c3d23e7aa6" Dec 06 09:40:51 crc kubenswrapper[4895]: I1206 09:40:51.054039 4895 scope.go:117] "RemoveContainer" containerID="dd29586613494f763c555cf2205ab15e6c63038d186dca0db6bc8c15c9ea74cb" Dec 06 09:40:51 crc kubenswrapper[4895]: E1206 09:40:51.054559 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd29586613494f763c555cf2205ab15e6c63038d186dca0db6bc8c15c9ea74cb\": container with ID starting with dd29586613494f763c555cf2205ab15e6c63038d186dca0db6bc8c15c9ea74cb not found: ID does not exist" containerID="dd29586613494f763c555cf2205ab15e6c63038d186dca0db6bc8c15c9ea74cb" Dec 06 09:40:51 crc kubenswrapper[4895]: I1206 09:40:51.054598 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd29586613494f763c555cf2205ab15e6c63038d186dca0db6bc8c15c9ea74cb"} err="failed to get container status \"dd29586613494f763c555cf2205ab15e6c63038d186dca0db6bc8c15c9ea74cb\": rpc error: code = NotFound desc = could not find container \"dd29586613494f763c555cf2205ab15e6c63038d186dca0db6bc8c15c9ea74cb\": container with ID starting with dd29586613494f763c555cf2205ab15e6c63038d186dca0db6bc8c15c9ea74cb not found: ID does not exist" Dec 06 09:40:51 crc 
Dec 06 09:40:51 crc kubenswrapper[4895]: I1206 09:40:51.054619 4895 scope.go:117] "RemoveContainer" containerID="4470f9735da4a22148f433822e7594b95c1abdde8db1409f0eeb09717f754a20" Dec 06 09:40:51 crc kubenswrapper[4895]: E1206 09:40:51.056639 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4470f9735da4a22148f433822e7594b95c1abdde8db1409f0eeb09717f754a20\": container with ID starting with 4470f9735da4a22148f433822e7594b95c1abdde8db1409f0eeb09717f754a20 not found: ID does not exist" containerID="4470f9735da4a22148f433822e7594b95c1abdde8db1409f0eeb09717f754a20" Dec 06 09:40:51 crc kubenswrapper[4895]: I1206 09:40:51.056796 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4470f9735da4a22148f433822e7594b95c1abdde8db1409f0eeb09717f754a20"} err="failed to get container status \"4470f9735da4a22148f433822e7594b95c1abdde8db1409f0eeb09717f754a20\": rpc error: code = NotFound desc = could not find container \"4470f9735da4a22148f433822e7594b95c1abdde8db1409f0eeb09717f754a20\": container with ID starting with 4470f9735da4a22148f433822e7594b95c1abdde8db1409f0eeb09717f754a20 not found: ID does not exist" Dec 06 09:40:51 crc kubenswrapper[4895]: I1206 09:40:51.056898 4895 scope.go:117] "RemoveContainer" containerID="b0cd4e3f5e1197f7e66cb50b8d0e6170d7b15a6c9787f4cc4d6ef0c3d23e7aa6" Dec 06 09:40:51 crc kubenswrapper[4895]: E1206 09:40:51.063583 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0cd4e3f5e1197f7e66cb50b8d0e6170d7b15a6c9787f4cc4d6ef0c3d23e7aa6\": container with ID starting with b0cd4e3f5e1197f7e66cb50b8d0e6170d7b15a6c9787f4cc4d6ef0c3d23e7aa6 not found: ID does not exist" containerID="b0cd4e3f5e1197f7e66cb50b8d0e6170d7b15a6c9787f4cc4d6ef0c3d23e7aa6" Dec 06 09:40:51 crc kubenswrapper[4895]: I1206 09:40:51.063663 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0cd4e3f5e1197f7e66cb50b8d0e6170d7b15a6c9787f4cc4d6ef0c3d23e7aa6"} err="failed to get container status \"b0cd4e3f5e1197f7e66cb50b8d0e6170d7b15a6c9787f4cc4d6ef0c3d23e7aa6\": rpc error: code = NotFound desc = could not find container \"b0cd4e3f5e1197f7e66cb50b8d0e6170d7b15a6c9787f4cc4d6ef0c3d23e7aa6\": container with ID starting with b0cd4e3f5e1197f7e66cb50b8d0e6170d7b15a6c9787f4cc4d6ef0c3d23e7aa6 not found: ID does not exist" Dec 06 09:40:52 crc kubenswrapper[4895]: I1206 09:40:52.066170 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f" path="/var/lib/kubelet/pods/61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f/volumes" Dec 06 09:41:59 crc kubenswrapper[4895]: I1206 09:41:59.695973 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:41:59 crc kubenswrapper[4895]: I1206 09:41:59.696854 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:42:04 crc kubenswrapper[4895]: I1206 09:42:04.792743 4895 generic.go:334] "Generic (PLEG): 
container finished" podID="64076080-572e-4d67-af02-2cdeb8113b9f" containerID="40e0286a9244ed105106a1c1bcba10023f7a97bd156f3b47f0f283c9613d2698" exitCode=0 Dec 06 09:42:04 crc kubenswrapper[4895]: I1206 09:42:04.792920 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" event={"ID":"64076080-572e-4d67-af02-2cdeb8113b9f","Type":"ContainerDied","Data":"40e0286a9244ed105106a1c1bcba10023f7a97bd156f3b47f0f283c9613d2698"} Dec 06 09:42:07 crc kubenswrapper[4895]: I1206 09:42:07.161387 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:42:07 crc kubenswrapper[4895]: I1206 09:42:07.284526 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-ssh-key\") pod \"64076080-572e-4d67-af02-2cdeb8113b9f\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " Dec 06 09:42:07 crc kubenswrapper[4895]: I1206 09:42:07.284975 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-ceph\") pod \"64076080-572e-4d67-af02-2cdeb8113b9f\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " Dec 06 09:42:07 crc kubenswrapper[4895]: I1206 09:42:07.285224 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-libvirt-secret-0\") pod \"64076080-572e-4d67-af02-2cdeb8113b9f\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " Dec 06 09:42:07 crc kubenswrapper[4895]: I1206 09:42:07.285270 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6g9h\" (UniqueName: \"kubernetes.io/projected/64076080-572e-4d67-af02-2cdeb8113b9f-kube-api-access-v6g9h\") pod \"64076080-572e-4d67-af02-2cdeb8113b9f\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " Dec 06 09:42:07 crc kubenswrapper[4895]: I1206 09:42:07.285309 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-libvirt-combined-ca-bundle\") pod \"64076080-572e-4d67-af02-2cdeb8113b9f\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " Dec 06 09:42:07 crc kubenswrapper[4895]: I1206 09:42:07.285374 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-inventory\") pod \"64076080-572e-4d67-af02-2cdeb8113b9f\" (UID: \"64076080-572e-4d67-af02-2cdeb8113b9f\") " Dec 06 09:42:07 crc kubenswrapper[4895]: I1206 09:42:07.291790 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-ceph" (OuterVolumeSpecName: "ceph") pod "64076080-572e-4d67-af02-2cdeb8113b9f" (UID: "64076080-572e-4d67-af02-2cdeb8113b9f"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:42:07 crc kubenswrapper[4895]: I1206 09:42:07.291949 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "64076080-572e-4d67-af02-2cdeb8113b9f" (UID: "64076080-572e-4d67-af02-2cdeb8113b9f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:42:07 crc kubenswrapper[4895]: I1206 09:42:07.292009 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64076080-572e-4d67-af02-2cdeb8113b9f-kube-api-access-v6g9h" (OuterVolumeSpecName: "kube-api-access-v6g9h") pod "64076080-572e-4d67-af02-2cdeb8113b9f" (UID: "64076080-572e-4d67-af02-2cdeb8113b9f"). InnerVolumeSpecName "kube-api-access-v6g9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:42:07 crc kubenswrapper[4895]: I1206 09:42:07.388489 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 09:42:07 crc kubenswrapper[4895]: I1206 09:42:07.388523 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6g9h\" (UniqueName: \"kubernetes.io/projected/64076080-572e-4d67-af02-2cdeb8113b9f-kube-api-access-v6g9h\") on node \"crc\" DevicePath \"\"" Dec 06 09:42:07 crc kubenswrapper[4895]: I1206 09:42:07.388537 4895 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:42:07 crc kubenswrapper[4895]: I1206 09:42:07.887200 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" event={"ID":"64076080-572e-4d67-af02-2cdeb8113b9f","Type":"ContainerDied","Data":"012705675eabbb5dd4fae3a2366b6e00fef872add783d2f570073f5799497168"} Dec 06 09:42:07 crc kubenswrapper[4895]: I1206 09:42:07.887541 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="012705675eabbb5dd4fae3a2366b6e00fef872add783d2f570073f5799497168" Dec 06 09:42:07 crc kubenswrapper[4895]: I1206 09:42:07.887718 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-m2m7q" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.118684 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "64076080-572e-4d67-af02-2cdeb8113b9f" (UID: "64076080-572e-4d67-af02-2cdeb8113b9f"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.215434 4895 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.241389 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-inventory" (OuterVolumeSpecName: "inventory") pod "64076080-572e-4d67-af02-2cdeb8113b9f" (UID: "64076080-572e-4d67-af02-2cdeb8113b9f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.245203 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "64076080-572e-4d67-af02-2cdeb8113b9f" (UID: "64076080-572e-4d67-af02-2cdeb8113b9f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.300964 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-gq6jn"] Dec 06 09:42:08 crc kubenswrapper[4895]: E1206 09:42:08.301575 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64076080-572e-4d67-af02-2cdeb8113b9f" containerName="libvirt-openstack-openstack-cell1" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.301601 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="64076080-572e-4d67-af02-2cdeb8113b9f" containerName="libvirt-openstack-openstack-cell1" Dec 06 09:42:08 crc kubenswrapper[4895]: E1206 09:42:08.301621 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f" containerName="extract-utilities" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.301631 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f" containerName="extract-utilities" Dec 06 09:42:08 crc kubenswrapper[4895]: E1206 09:42:08.301663 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f" containerName="registry-server" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.301671 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f" containerName="registry-server" Dec 06 09:42:08 crc kubenswrapper[4895]: E1206 09:42:08.301685 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f" containerName="extract-content" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.301694 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f" containerName="extract-content" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.301967 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="61fa94f2-d690-4d26-b7ee-5bb4fd4ee93f" containerName="registry-server" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.302015 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="64076080-572e-4d67-af02-2cdeb8113b9f" containerName="libvirt-openstack-openstack-cell1" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.303010 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.305120 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-gq6jn"] Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.308775 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.309140 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.309311 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.318018 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.318053 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/64076080-572e-4d67-af02-2cdeb8113b9f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.419920 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-ceph\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.419990 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.420277 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/17bdd4b3-f731-4190-9958-689486a88f30-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.420533 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxn9c\" (UniqueName: \"kubernetes.io/projected/17bdd4b3-f731-4190-9958-689486a88f30-kube-api-access-kxn9c\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.420751 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.420830 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.420906 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-inventory\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.420946 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.420993 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/17bdd4b3-f731-4190-9958-689486a88f30-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.421018 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.421047 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.523104 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/17bdd4b3-f731-4190-9958-689486a88f30-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.523981 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/17bdd4b3-f731-4190-9958-689486a88f30-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc 
kubenswrapper[4895]: I1206 09:42:08.524126 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxn9c\" (UniqueName: \"kubernetes.io/projected/17bdd4b3-f731-4190-9958-689486a88f30-kube-api-access-kxn9c\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.524198 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.524250 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.524686 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-inventory\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.524713 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.524738 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/17bdd4b3-f731-4190-9958-689486a88f30-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.524757 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.524778 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.524796 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ceph\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-ceph\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.524816 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.525692 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/17bdd4b3-f731-4190-9958-689486a88f30-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.529003 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.529193 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.537044 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.537197 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.537516 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.538471 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-ceph\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" 
(UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.540899 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-inventory\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.542078 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.544571 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxn9c\" (UniqueName: \"kubernetes.io/projected/17bdd4b3-f731-4190-9958-689486a88f30-kube-api-access-kxn9c\") pod \"nova-cell1-openstack-openstack-cell1-gq6jn\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") " pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:08 crc kubenswrapper[4895]: I1206 09:42:08.624973 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" Dec 06 09:42:09 crc kubenswrapper[4895]: I1206 09:42:09.269193 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 09:42:09 crc kubenswrapper[4895]: I1206 09:42:09.283379 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-gq6jn"] Dec 06 09:42:09 crc kubenswrapper[4895]: I1206 09:42:09.914228 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" event={"ID":"17bdd4b3-f731-4190-9958-689486a88f30","Type":"ContainerStarted","Data":"e325068311f24e6c61eb2deb9a1804287381255ce3c9faf7b0894b526e560acd"} Dec 06 09:42:10 crc kubenswrapper[4895]: I1206 09:42:10.931084 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" event={"ID":"17bdd4b3-f731-4190-9958-689486a88f30","Type":"ContainerStarted","Data":"a6db31438557a5e3e5c48bd4ac3fe6219bed998c461011978cbe80f1360db0d3"} Dec 06 09:42:10 crc kubenswrapper[4895]: I1206 09:42:10.969889 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" podStartSLOduration=2.580083749 podStartE2EDuration="2.969856811s" podCreationTimestamp="2025-12-06 09:42:08 +0000 UTC" firstStartedPulling="2025-12-06 09:42:09.268863547 +0000 UTC m=+9891.670252417" lastFinishedPulling="2025-12-06 09:42:09.658636579 +0000 UTC m=+9892.060025479" observedRunningTime="2025-12-06 09:42:10.957062276 +0000 UTC m=+9893.358451186" watchObservedRunningTime="2025-12-06 09:42:10.969856811 +0000 UTC m=+9893.371245681" Dec 06 09:42:29 crc kubenswrapper[4895]: I1206 09:42:29.696549 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:42:29 crc kubenswrapper[4895]: 
I1206 09:42:29.697062 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:42:59 crc kubenswrapper[4895]: I1206 09:42:59.695997 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:42:59 crc kubenswrapper[4895]: I1206 09:42:59.696658 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:42:59 crc kubenswrapper[4895]: I1206 09:42:59.696710 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 09:42:59 crc kubenswrapper[4895]: I1206 09:42:59.697558 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2cdac50a6b93807e360cc9e6793f3ec3862d9912b78e2d09e34acdc1f9bc06d"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 09:42:59 crc kubenswrapper[4895]: I1206 09:42:59.697600 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://f2cdac50a6b93807e360cc9e6793f3ec3862d9912b78e2d09e34acdc1f9bc06d" gracePeriod=600 Dec 06 09:43:00 crc kubenswrapper[4895]: I1206 09:43:00.529716 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="f2cdac50a6b93807e360cc9e6793f3ec3862d9912b78e2d09e34acdc1f9bc06d" exitCode=0 Dec 06 09:43:00 crc kubenswrapper[4895]: I1206 09:43:00.529869 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"f2cdac50a6b93807e360cc9e6793f3ec3862d9912b78e2d09e34acdc1f9bc06d"} Dec 06 09:43:00 crc kubenswrapper[4895]: I1206 09:43:00.530244 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e"} Dec 06 09:43:00 crc kubenswrapper[4895]: I1206 09:43:00.530266 4895 scope.go:117] "RemoveContainer" containerID="38039b33ef8f708726a2d804e1fc16a2c0c94a4f81bc2dd01d8910ff6ab6b992" Dec 06 09:43:48 crc kubenswrapper[4895]: I1206 09:43:48.378383 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zmt42"] Dec 06 09:43:48 crc kubenswrapper[4895]: I1206 09:43:48.381248 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmt42"
Dec 06 09:43:48 crc kubenswrapper[4895]: I1206 09:43:48.387766 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmt42"]
Dec 06 09:43:48 crc kubenswrapper[4895]: I1206 09:43:48.561985 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544859ab-a61b-4b86-ae45-462af1861b7c-catalog-content\") pod \"redhat-marketplace-zmt42\" (UID: \"544859ab-a61b-4b86-ae45-462af1861b7c\") " pod="openshift-marketplace/redhat-marketplace-zmt42"
Dec 06 09:43:48 crc kubenswrapper[4895]: I1206 09:43:48.562245 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544859ab-a61b-4b86-ae45-462af1861b7c-utilities\") pod \"redhat-marketplace-zmt42\" (UID: \"544859ab-a61b-4b86-ae45-462af1861b7c\") " pod="openshift-marketplace/redhat-marketplace-zmt42"
Dec 06 09:43:48 crc kubenswrapper[4895]: I1206 09:43:48.562350 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhjqq\" (UniqueName: \"kubernetes.io/projected/544859ab-a61b-4b86-ae45-462af1861b7c-kube-api-access-qhjqq\") pod \"redhat-marketplace-zmt42\" (UID: \"544859ab-a61b-4b86-ae45-462af1861b7c\") " pod="openshift-marketplace/redhat-marketplace-zmt42"
Dec 06 09:43:48 crc kubenswrapper[4895]: I1206 09:43:48.664554 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544859ab-a61b-4b86-ae45-462af1861b7c-utilities\") pod \"redhat-marketplace-zmt42\" (UID: \"544859ab-a61b-4b86-ae45-462af1861b7c\") " pod="openshift-marketplace/redhat-marketplace-zmt42"
Dec 06 09:43:48 crc kubenswrapper[4895]: I1206 09:43:48.664632 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhjqq\" (UniqueName: \"kubernetes.io/projected/544859ab-a61b-4b86-ae45-462af1861b7c-kube-api-access-qhjqq\") pod \"redhat-marketplace-zmt42\" (UID: \"544859ab-a61b-4b86-ae45-462af1861b7c\") " pod="openshift-marketplace/redhat-marketplace-zmt42"
Dec 06 09:43:48 crc kubenswrapper[4895]: I1206 09:43:48.664662 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544859ab-a61b-4b86-ae45-462af1861b7c-catalog-content\") pod \"redhat-marketplace-zmt42\" (UID: \"544859ab-a61b-4b86-ae45-462af1861b7c\") " pod="openshift-marketplace/redhat-marketplace-zmt42"
Dec 06 09:43:48 crc kubenswrapper[4895]: I1206 09:43:48.665118 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544859ab-a61b-4b86-ae45-462af1861b7c-catalog-content\") pod \"redhat-marketplace-zmt42\" (UID: \"544859ab-a61b-4b86-ae45-462af1861b7c\") " pod="openshift-marketplace/redhat-marketplace-zmt42"
Dec 06 09:43:48 crc kubenswrapper[4895]: I1206 09:43:48.665361 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544859ab-a61b-4b86-ae45-462af1861b7c-utilities\") pod \"redhat-marketplace-zmt42\" (UID: \"544859ab-a61b-4b86-ae45-462af1861b7c\") " pod="openshift-marketplace/redhat-marketplace-zmt42"
Dec 06 09:43:48 crc kubenswrapper[4895]: I1206 09:43:48.703379 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhjqq\" (UniqueName: \"kubernetes.io/projected/544859ab-a61b-4b86-ae45-462af1861b7c-kube-api-access-qhjqq\") pod \"redhat-marketplace-zmt42\" (UID: \"544859ab-a61b-4b86-ae45-462af1861b7c\") " pod="openshift-marketplace/redhat-marketplace-zmt42"
Dec 06 09:43:48 crc kubenswrapper[4895]: I1206 09:43:48.711876 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmt42"
Dec 06 09:43:49 crc kubenswrapper[4895]: I1206 09:43:49.274180 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmt42"]
Dec 06 09:43:50 crc kubenswrapper[4895]: I1206 09:43:50.161177 4895 generic.go:334] "Generic (PLEG): container finished" podID="544859ab-a61b-4b86-ae45-462af1861b7c" containerID="06a748028e8fe1549b9dee68ac9286f4955ef4e3214b74787489bd23280d05d1" exitCode=0
Dec 06 09:43:50 crc kubenswrapper[4895]: I1206 09:43:50.161240 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmt42" event={"ID":"544859ab-a61b-4b86-ae45-462af1861b7c","Type":"ContainerDied","Data":"06a748028e8fe1549b9dee68ac9286f4955ef4e3214b74787489bd23280d05d1"}
Dec 06 09:43:50 crc kubenswrapper[4895]: I1206 09:43:50.161530 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmt42" event={"ID":"544859ab-a61b-4b86-ae45-462af1861b7c","Type":"ContainerStarted","Data":"c6dfae45dddbe3e6c954fed642768766090f5e71fd88567beda94a5423dcdc1d"}
Dec 06 09:43:51 crc kubenswrapper[4895]: I1206 09:43:51.171893 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmt42" event={"ID":"544859ab-a61b-4b86-ae45-462af1861b7c","Type":"ContainerStarted","Data":"67b321178fd1f923774be479b44b279d0215bf9f2200dd85fd40ded1de7fc579"}
Dec 06 09:43:52 crc kubenswrapper[4895]: I1206 09:43:52.186040 4895 generic.go:334] "Generic (PLEG): container finished" podID="544859ab-a61b-4b86-ae45-462af1861b7c" containerID="67b321178fd1f923774be479b44b279d0215bf9f2200dd85fd40ded1de7fc579" exitCode=0
Dec 06 09:43:52 crc kubenswrapper[4895]: I1206 09:43:52.186136 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmt42" event={"ID":"544859ab-a61b-4b86-ae45-462af1861b7c","Type":"ContainerDied","Data":"67b321178fd1f923774be479b44b279d0215bf9f2200dd85fd40ded1de7fc579"}
Dec 06 09:43:53 crc kubenswrapper[4895]: I1206 09:43:53.198324 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmt42" event={"ID":"544859ab-a61b-4b86-ae45-462af1861b7c","Type":"ContainerStarted","Data":"d8eeca582937ae1a5367c505a2c1e87b4ceb3e6ecb59b6ceee701d0fbcd12a40"}
Dec 06 09:43:53 crc kubenswrapper[4895]: I1206 09:43:53.220128 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zmt42" podStartSLOduration=2.823258653 podStartE2EDuration="5.220100497s" podCreationTimestamp="2025-12-06 09:43:48 +0000 UTC" firstStartedPulling="2025-12-06 09:43:50.165410067 +0000 UTC m=+9992.566798937" lastFinishedPulling="2025-12-06 09:43:52.562251881 +0000 UTC m=+9994.963640781" observedRunningTime="2025-12-06 09:43:53.215204655 +0000 UTC m=+9995.616593525" watchObservedRunningTime="2025-12-06 09:43:53.220100497 +0000 UTC m=+9995.621489387"
Dec 06 09:43:58 crc kubenswrapper[4895]: I1206 09:43:58.712694 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zmt42"
Dec 06 09:43:58 crc kubenswrapper[4895]: I1206 09:43:58.713340 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zmt42"
Dec 06 09:43:58 crc kubenswrapper[4895]: I1206 09:43:58.761613 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zmt42"
Dec 06 09:43:59 crc kubenswrapper[4895]: I1206 09:43:59.340262 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zmt42"
Dec 06 09:43:59 crc kubenswrapper[4895]: I1206 09:43:59.404378 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmt42"]
Dec 06 09:44:01 crc kubenswrapper[4895]: I1206 09:44:01.297558 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zmt42" podUID="544859ab-a61b-4b86-ae45-462af1861b7c" containerName="registry-server" containerID="cri-o://d8eeca582937ae1a5367c505a2c1e87b4ceb3e6ecb59b6ceee701d0fbcd12a40" gracePeriod=2
Dec 06 09:44:01 crc kubenswrapper[4895]: I1206 09:44:01.905588 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmt42"
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.048621 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhjqq\" (UniqueName: \"kubernetes.io/projected/544859ab-a61b-4b86-ae45-462af1861b7c-kube-api-access-qhjqq\") pod \"544859ab-a61b-4b86-ae45-462af1861b7c\" (UID: \"544859ab-a61b-4b86-ae45-462af1861b7c\") "
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.048814 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544859ab-a61b-4b86-ae45-462af1861b7c-utilities\") pod \"544859ab-a61b-4b86-ae45-462af1861b7c\" (UID: \"544859ab-a61b-4b86-ae45-462af1861b7c\") "
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.048840 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544859ab-a61b-4b86-ae45-462af1861b7c-catalog-content\") pod \"544859ab-a61b-4b86-ae45-462af1861b7c\" (UID: \"544859ab-a61b-4b86-ae45-462af1861b7c\") "
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.049769 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/544859ab-a61b-4b86-ae45-462af1861b7c-utilities" (OuterVolumeSpecName: "utilities") pod "544859ab-a61b-4b86-ae45-462af1861b7c" (UID: "544859ab-a61b-4b86-ae45-462af1861b7c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.057498 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/544859ab-a61b-4b86-ae45-462af1861b7c-kube-api-access-qhjqq" (OuterVolumeSpecName: "kube-api-access-qhjqq") pod "544859ab-a61b-4b86-ae45-462af1861b7c" (UID: "544859ab-a61b-4b86-ae45-462af1861b7c"). InnerVolumeSpecName "kube-api-access-qhjqq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.069450 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/544859ab-a61b-4b86-ae45-462af1861b7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "544859ab-a61b-4b86-ae45-462af1861b7c" (UID: "544859ab-a61b-4b86-ae45-462af1861b7c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.151943 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544859ab-a61b-4b86-ae45-462af1861b7c-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.151984 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544859ab-a61b-4b86-ae45-462af1861b7c-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.152003 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhjqq\" (UniqueName: \"kubernetes.io/projected/544859ab-a61b-4b86-ae45-462af1861b7c-kube-api-access-qhjqq\") on node \"crc\" DevicePath \"\""
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.313143 4895 generic.go:334] "Generic (PLEG): container finished" podID="544859ab-a61b-4b86-ae45-462af1861b7c" containerID="d8eeca582937ae1a5367c505a2c1e87b4ceb3e6ecb59b6ceee701d0fbcd12a40" exitCode=0
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.313245 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmt42" event={"ID":"544859ab-a61b-4b86-ae45-462af1861b7c","Type":"ContainerDied","Data":"d8eeca582937ae1a5367c505a2c1e87b4ceb3e6ecb59b6ceee701d0fbcd12a40"}
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.313275 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmt42"
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.313380 4895 scope.go:117] "RemoveContainer" containerID="d8eeca582937ae1a5367c505a2c1e87b4ceb3e6ecb59b6ceee701d0fbcd12a40"
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.313335 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmt42" event={"ID":"544859ab-a61b-4b86-ae45-462af1861b7c","Type":"ContainerDied","Data":"c6dfae45dddbe3e6c954fed642768766090f5e71fd88567beda94a5423dcdc1d"}
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.338716 4895 scope.go:117] "RemoveContainer" containerID="67b321178fd1f923774be479b44b279d0215bf9f2200dd85fd40ded1de7fc579"
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.367151 4895 scope.go:117] "RemoveContainer" containerID="06a748028e8fe1549b9dee68ac9286f4955ef4e3214b74787489bd23280d05d1"
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.374309 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmt42"]
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.384227 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmt42"]
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.404070 4895 scope.go:117] "RemoveContainer" containerID="d8eeca582937ae1a5367c505a2c1e87b4ceb3e6ecb59b6ceee701d0fbcd12a40"
Dec 06 09:44:02 crc kubenswrapper[4895]: E1206 09:44:02.404561 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8eeca582937ae1a5367c505a2c1e87b4ceb3e6ecb59b6ceee701d0fbcd12a40\": container with ID starting with d8eeca582937ae1a5367c505a2c1e87b4ceb3e6ecb59b6ceee701d0fbcd12a40 not found: ID does not exist" containerID="d8eeca582937ae1a5367c505a2c1e87b4ceb3e6ecb59b6ceee701d0fbcd12a40"
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.404602 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8eeca582937ae1a5367c505a2c1e87b4ceb3e6ecb59b6ceee701d0fbcd12a40"} err="failed to get container status \"d8eeca582937ae1a5367c505a2c1e87b4ceb3e6ecb59b6ceee701d0fbcd12a40\": rpc error: code = NotFound desc = could not find container \"d8eeca582937ae1a5367c505a2c1e87b4ceb3e6ecb59b6ceee701d0fbcd12a40\": container with ID starting with d8eeca582937ae1a5367c505a2c1e87b4ceb3e6ecb59b6ceee701d0fbcd12a40 not found: ID does not exist"
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.404624 4895 scope.go:117] "RemoveContainer" containerID="67b321178fd1f923774be479b44b279d0215bf9f2200dd85fd40ded1de7fc579"
Dec 06 09:44:02 crc kubenswrapper[4895]: E1206 09:44:02.404891 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67b321178fd1f923774be479b44b279d0215bf9f2200dd85fd40ded1de7fc579\": container with ID starting with 67b321178fd1f923774be479b44b279d0215bf9f2200dd85fd40ded1de7fc579 not found: ID does not exist" containerID="67b321178fd1f923774be479b44b279d0215bf9f2200dd85fd40ded1de7fc579"
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.404916 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67b321178fd1f923774be479b44b279d0215bf9f2200dd85fd40ded1de7fc579"} err="failed to get container status \"67b321178fd1f923774be479b44b279d0215bf9f2200dd85fd40ded1de7fc579\": rpc error: code = NotFound desc = could not find container \"67b321178fd1f923774be479b44b279d0215bf9f2200dd85fd40ded1de7fc579\": container with ID starting with 67b321178fd1f923774be479b44b279d0215bf9f2200dd85fd40ded1de7fc579 not found: ID does not exist"
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.404933 4895 scope.go:117] "RemoveContainer" containerID="06a748028e8fe1549b9dee68ac9286f4955ef4e3214b74787489bd23280d05d1"
Dec 06 09:44:02 crc kubenswrapper[4895]: E1206 09:44:02.405175 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06a748028e8fe1549b9dee68ac9286f4955ef4e3214b74787489bd23280d05d1\": container with ID starting with 06a748028e8fe1549b9dee68ac9286f4955ef4e3214b74787489bd23280d05d1 not found: ID does not exist" containerID="06a748028e8fe1549b9dee68ac9286f4955ef4e3214b74787489bd23280d05d1"
Dec 06 09:44:02 crc kubenswrapper[4895]: I1206 09:44:02.405205 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06a748028e8fe1549b9dee68ac9286f4955ef4e3214b74787489bd23280d05d1"} err="failed to get container status \"06a748028e8fe1549b9dee68ac9286f4955ef4e3214b74787489bd23280d05d1\": rpc error: code = NotFound desc = could not find container \"06a748028e8fe1549b9dee68ac9286f4955ef4e3214b74787489bd23280d05d1\": container with ID starting with 06a748028e8fe1549b9dee68ac9286f4955ef4e3214b74787489bd23280d05d1 not found: ID does not exist"
Dec 06 09:44:04 crc kubenswrapper[4895]: I1206 09:44:04.061537 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="544859ab-a61b-4b86-ae45-462af1861b7c" path="/var/lib/kubelet/pods/544859ab-a61b-4b86-ae45-462af1861b7c/volumes"
Dec 06 09:44:21 crc kubenswrapper[4895]: I1206 09:44:21.131405 4895 scope.go:117] "RemoveContainer" containerID="31f01a3dd63e7dca5719dece3adab4aa9d1f21c1632652f7476cdc5f00ee1f88"
Dec 06 09:44:21 crc kubenswrapper[4895]: I1206 09:44:21.170975 4895 scope.go:117] "RemoveContainer" containerID="0e0fcc3c15a3a74bc143c2927b1089dd5318fd649f329dd0c8a81076d19fffad"
Dec 06 09:44:21 crc kubenswrapper[4895]: I1206 09:44:21.248182 4895 scope.go:117] "RemoveContainer" containerID="c5b00b6592d2e03d3b287a01bb4bbed3e43b0cde7b52b3be07378eb08c97c686"
Dec 06 09:45:00 crc kubenswrapper[4895]: I1206 09:45:00.177915 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn"]
Dec 06 09:45:00 crc kubenswrapper[4895]: E1206 09:45:00.179150 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544859ab-a61b-4b86-ae45-462af1861b7c" containerName="extract-content"
Dec 06 09:45:00 crc kubenswrapper[4895]: I1206 09:45:00.179167 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="544859ab-a61b-4b86-ae45-462af1861b7c" containerName="extract-content"
Dec 06 09:45:00 crc kubenswrapper[4895]: E1206 09:45:00.179187 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544859ab-a61b-4b86-ae45-462af1861b7c" containerName="registry-server"
Dec 06 09:45:00 crc kubenswrapper[4895]: I1206 09:45:00.179194 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="544859ab-a61b-4b86-ae45-462af1861b7c" containerName="registry-server"
Dec 06 09:45:00 crc kubenswrapper[4895]: E1206 09:45:00.179212 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544859ab-a61b-4b86-ae45-462af1861b7c" containerName="extract-utilities"
Dec 06 09:45:00 crc kubenswrapper[4895]: I1206 09:45:00.179222 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="544859ab-a61b-4b86-ae45-462af1861b7c" containerName="extract-utilities"
Dec 06 09:45:00 crc kubenswrapper[4895]: I1206 09:45:00.179448 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="544859ab-a61b-4b86-ae45-462af1861b7c" containerName="registry-server"
Dec 06 09:45:00 crc kubenswrapper[4895]: I1206 09:45:00.180431 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn"
Dec 06 09:45:00 crc kubenswrapper[4895]: I1206 09:45:00.183353 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 06 09:45:00 crc kubenswrapper[4895]: I1206 09:45:00.183677 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 06 09:45:00 crc kubenswrapper[4895]: I1206 09:45:00.201776 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn"]
Dec 06 09:45:00 crc kubenswrapper[4895]: I1206 09:45:00.238589 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x7jl\" (UniqueName: \"kubernetes.io/projected/699a968e-8c28-41c0-a326-31306d4cbab6-kube-api-access-8x7jl\") pod \"collect-profiles-29416905-mfgnn\" (UID: \"699a968e-8c28-41c0-a326-31306d4cbab6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn"
Dec 06 09:45:00 crc kubenswrapper[4895]: I1206 09:45:00.238832 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/699a968e-8c28-41c0-a326-31306d4cbab6-secret-volume\") pod \"collect-profiles-29416905-mfgnn\" (UID: \"699a968e-8c28-41c0-a326-31306d4cbab6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn"
Dec 06 09:45:00 crc kubenswrapper[4895]: I1206 09:45:00.238910 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/699a968e-8c28-41c0-a326-31306d4cbab6-config-volume\") pod \"collect-profiles-29416905-mfgnn\" (UID: \"699a968e-8c28-41c0-a326-31306d4cbab6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn"
Dec 06 09:45:00 crc kubenswrapper[4895]: I1206 09:45:00.341906 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/699a968e-8c28-41c0-a326-31306d4cbab6-secret-volume\") pod \"collect-profiles-29416905-mfgnn\" (UID: \"699a968e-8c28-41c0-a326-31306d4cbab6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn"
Dec 06 09:45:00 crc kubenswrapper[4895]: I1206 09:45:00.341999 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/699a968e-8c28-41c0-a326-31306d4cbab6-config-volume\") pod \"collect-profiles-29416905-mfgnn\" (UID: \"699a968e-8c28-41c0-a326-31306d4cbab6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn"
Dec 06 09:45:00 crc kubenswrapper[4895]: I1206 09:45:00.342138 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x7jl\" (UniqueName: \"kubernetes.io/projected/699a968e-8c28-41c0-a326-31306d4cbab6-kube-api-access-8x7jl\") pod \"collect-profiles-29416905-mfgnn\" (UID: \"699a968e-8c28-41c0-a326-31306d4cbab6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn"
Dec 06 09:45:00 crc kubenswrapper[4895]: I1206 09:45:00.343019 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/699a968e-8c28-41c0-a326-31306d4cbab6-config-volume\") pod \"collect-profiles-29416905-mfgnn\" (UID: \"699a968e-8c28-41c0-a326-31306d4cbab6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn"
Dec 06 09:45:00 crc kubenswrapper[4895]: I1206 09:45:00.352310 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/699a968e-8c28-41c0-a326-31306d4cbab6-secret-volume\") pod \"collect-profiles-29416905-mfgnn\" (UID: \"699a968e-8c28-41c0-a326-31306d4cbab6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn"
Dec 06 09:45:00 crc kubenswrapper[4895]: I1206 09:45:00.358998 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x7jl\" (UniqueName: \"kubernetes.io/projected/699a968e-8c28-41c0-a326-31306d4cbab6-kube-api-access-8x7jl\") pod \"collect-profiles-29416905-mfgnn\" (UID: \"699a968e-8c28-41c0-a326-31306d4cbab6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn"
Dec 06 09:45:00 crc kubenswrapper[4895]: I1206 09:45:00.556030 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn"
Dec 06 09:45:01 crc kubenswrapper[4895]: I1206 09:45:01.017878 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn"]
Dec 06 09:45:02 crc kubenswrapper[4895]: I1206 09:45:02.087081 4895 generic.go:334] "Generic (PLEG): container finished" podID="699a968e-8c28-41c0-a326-31306d4cbab6" containerID="d90d81014beba77f9e6cb53066bddbfbb60057498effe571874292fd5399de3d" exitCode=0
Dec 06 09:45:02 crc kubenswrapper[4895]: I1206 09:45:02.087187 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn" event={"ID":"699a968e-8c28-41c0-a326-31306d4cbab6","Type":"ContainerDied","Data":"d90d81014beba77f9e6cb53066bddbfbb60057498effe571874292fd5399de3d"}
Dec 06 09:45:02 crc kubenswrapper[4895]: I1206 09:45:02.087621 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn" event={"ID":"699a968e-8c28-41c0-a326-31306d4cbab6","Type":"ContainerStarted","Data":"2dd2fb61245b9e51a8094601947bff4d73403b30e5533578163a85a10dddc125"}
Dec 06 09:45:03 crc kubenswrapper[4895]: I1206 09:45:03.919953 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn"
Dec 06 09:45:04 crc kubenswrapper[4895]: I1206 09:45:04.024373 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x7jl\" (UniqueName: \"kubernetes.io/projected/699a968e-8c28-41c0-a326-31306d4cbab6-kube-api-access-8x7jl\") pod \"699a968e-8c28-41c0-a326-31306d4cbab6\" (UID: \"699a968e-8c28-41c0-a326-31306d4cbab6\") "
Dec 06 09:45:04 crc kubenswrapper[4895]: I1206 09:45:04.024446 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/699a968e-8c28-41c0-a326-31306d4cbab6-secret-volume\") pod \"699a968e-8c28-41c0-a326-31306d4cbab6\" (UID: \"699a968e-8c28-41c0-a326-31306d4cbab6\") "
Dec 06 09:45:04 crc kubenswrapper[4895]: I1206 09:45:04.025114 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/699a968e-8c28-41c0-a326-31306d4cbab6-config-volume\") pod \"699a968e-8c28-41c0-a326-31306d4cbab6\" (UID: \"699a968e-8c28-41c0-a326-31306d4cbab6\") "
Dec 06 09:45:04 crc kubenswrapper[4895]: I1206 09:45:04.025288 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/699a968e-8c28-41c0-a326-31306d4cbab6-config-volume" (OuterVolumeSpecName: "config-volume") pod "699a968e-8c28-41c0-a326-31306d4cbab6" (UID: "699a968e-8c28-41c0-a326-31306d4cbab6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:45:04 crc kubenswrapper[4895]: I1206 09:45:04.025925 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/699a968e-8c28-41c0-a326-31306d4cbab6-config-volume\") on node \"crc\" DevicePath \"\""
Dec 06 09:45:04 crc kubenswrapper[4895]: I1206 09:45:04.036892 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/699a968e-8c28-41c0-a326-31306d4cbab6-kube-api-access-8x7jl" (OuterVolumeSpecName: "kube-api-access-8x7jl") pod "699a968e-8c28-41c0-a326-31306d4cbab6" (UID: "699a968e-8c28-41c0-a326-31306d4cbab6"). InnerVolumeSpecName "kube-api-access-8x7jl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:45:04 crc kubenswrapper[4895]: I1206 09:45:04.036999 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/699a968e-8c28-41c0-a326-31306d4cbab6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "699a968e-8c28-41c0-a326-31306d4cbab6" (UID: "699a968e-8c28-41c0-a326-31306d4cbab6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:45:04 crc kubenswrapper[4895]: I1206 09:45:04.111520 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn" event={"ID":"699a968e-8c28-41c0-a326-31306d4cbab6","Type":"ContainerDied","Data":"2dd2fb61245b9e51a8094601947bff4d73403b30e5533578163a85a10dddc125"}
Dec 06 09:45:04 crc kubenswrapper[4895]: I1206 09:45:04.111813 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dd2fb61245b9e51a8094601947bff4d73403b30e5533578163a85a10dddc125"
Dec 06 09:45:04 crc kubenswrapper[4895]: I1206 09:45:04.111944 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn"
Dec 06 09:45:04 crc kubenswrapper[4895]: I1206 09:45:04.128936 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x7jl\" (UniqueName: \"kubernetes.io/projected/699a968e-8c28-41c0-a326-31306d4cbab6-kube-api-access-8x7jl\") on node \"crc\" DevicePath \"\""
Dec 06 09:45:04 crc kubenswrapper[4895]: I1206 09:45:04.131819 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/699a968e-8c28-41c0-a326-31306d4cbab6-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 06 09:45:05 crc kubenswrapper[4895]: I1206 09:45:05.019776 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch"]
Dec 06 09:45:05 crc kubenswrapper[4895]: I1206 09:45:05.028204 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416860-x5fch"]
Dec 06 09:45:06 crc kubenswrapper[4895]: I1206 09:45:06.069866 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d68b1c1-732c-476b-a7a2-44199c7d62b5" path="/var/lib/kubelet/pods/5d68b1c1-732c-476b-a7a2-44199c7d62b5/volumes"
Dec 06 09:45:21 crc kubenswrapper[4895]: I1206 09:45:21.379438 4895 scope.go:117] "RemoveContainer" containerID="80c6ba5685fa31bfcfcad1270b79c54d569f907991bac6998b23e681034e871b"
Dec 06 09:45:29 crc kubenswrapper[4895]: I1206 09:45:29.696288 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 09:45:29 crc kubenswrapper[4895]: I1206 09:45:29.697193 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 09:45:57 crc kubenswrapper[4895]: I1206 09:45:57.884631 4895 generic.go:334] "Generic (PLEG): container finished" podID="17bdd4b3-f731-4190-9958-689486a88f30" containerID="a6db31438557a5e3e5c48bd4ac3fe6219bed998c461011978cbe80f1360db0d3" exitCode=0
Dec 06 09:45:57 crc kubenswrapper[4895]: I1206 09:45:57.885392 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" event={"ID":"17bdd4b3-f731-4190-9958-689486a88f30","Type":"ContainerDied","Data":"a6db31438557a5e3e5c48bd4ac3fe6219bed998c461011978cbe80f1360db0d3"}
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.491553 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn"
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.588442 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/17bdd4b3-f731-4190-9958-689486a88f30-nova-cells-global-config-1\") pod \"17bdd4b3-f731-4190-9958-689486a88f30\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") "
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.589988 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxn9c\" (UniqueName: \"kubernetes.io/projected/17bdd4b3-f731-4190-9958-689486a88f30-kube-api-access-kxn9c\") pod \"17bdd4b3-f731-4190-9958-689486a88f30\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") "
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.590228 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-migration-ssh-key-1\") pod \"17bdd4b3-f731-4190-9958-689486a88f30\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") "
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.590321 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-ceph\") pod \"17bdd4b3-f731-4190-9958-689486a88f30\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") "
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.590370 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/17bdd4b3-f731-4190-9958-689486a88f30-nova-cells-global-config-0\") pod \"17bdd4b3-f731-4190-9958-689486a88f30\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") "
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.590434 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-cell1-compute-config-1\") pod \"17bdd4b3-f731-4190-9958-689486a88f30\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") "
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.590558 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-cell1-compute-config-0\") pod \"17bdd4b3-f731-4190-9958-689486a88f30\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") "
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.590596 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-cell1-combined-ca-bundle\") pod \"17bdd4b3-f731-4190-9958-689486a88f30\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") "
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.591154 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-ssh-key\") pod \"17bdd4b3-f731-4190-9958-689486a88f30\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") "
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.591213 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-inventory\") pod \"17bdd4b3-f731-4190-9958-689486a88f30\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") "
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.591251 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-migration-ssh-key-0\") pod \"17bdd4b3-f731-4190-9958-689486a88f30\" (UID: \"17bdd4b3-f731-4190-9958-689486a88f30\") "
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.596001 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-ceph" (OuterVolumeSpecName: "ceph") pod "17bdd4b3-f731-4190-9958-689486a88f30" (UID: "17bdd4b3-f731-4190-9958-689486a88f30"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.596092 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17bdd4b3-f731-4190-9958-689486a88f30-kube-api-access-kxn9c" (OuterVolumeSpecName: "kube-api-access-kxn9c") pod "17bdd4b3-f731-4190-9958-689486a88f30" (UID: "17bdd4b3-f731-4190-9958-689486a88f30"). InnerVolumeSpecName "kube-api-access-kxn9c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.620525 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "17bdd4b3-f731-4190-9958-689486a88f30" (UID: "17bdd4b3-f731-4190-9958-689486a88f30"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.622066 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "17bdd4b3-f731-4190-9958-689486a88f30" (UID: "17bdd4b3-f731-4190-9958-689486a88f30"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.625132 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "17bdd4b3-f731-4190-9958-689486a88f30" (UID: "17bdd4b3-f731-4190-9958-689486a88f30"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.636140 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-inventory" (OuterVolumeSpecName: "inventory") pod "17bdd4b3-f731-4190-9958-689486a88f30" (UID: "17bdd4b3-f731-4190-9958-689486a88f30"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.644746 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17bdd4b3-f731-4190-9958-689486a88f30-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "17bdd4b3-f731-4190-9958-689486a88f30" (UID: "17bdd4b3-f731-4190-9958-689486a88f30"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.646693 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "17bdd4b3-f731-4190-9958-689486a88f30" (UID: "17bdd4b3-f731-4190-9958-689486a88f30"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.647861 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "17bdd4b3-f731-4190-9958-689486a88f30" (UID: "17bdd4b3-f731-4190-9958-689486a88f30"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.648296 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "17bdd4b3-f731-4190-9958-689486a88f30" (UID: "17bdd4b3-f731-4190-9958-689486a88f30"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.654316 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17bdd4b3-f731-4190-9958-689486a88f30-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "17bdd4b3-f731-4190-9958-689486a88f30" (UID: "17bdd4b3-f731-4190-9958-689486a88f30"). InnerVolumeSpecName "nova-cells-global-config-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.693660 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.693703 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-inventory\") on node \"crc\" DevicePath \"\""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.693717 4895 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.693734 4895 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/17bdd4b3-f731-4190-9958-689486a88f30-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.693744 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxn9c\" (UniqueName: \"kubernetes.io/projected/17bdd4b3-f731-4190-9958-689486a88f30-kube-api-access-kxn9c\") on node \"crc\" DevicePath \"\""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.693754 4895 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.693765 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-ceph\") on node \"crc\" DevicePath \"\""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.693888 4895 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/17bdd4b3-f731-4190-9958-689486a88f30-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.693907 4895 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.693920 4895 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.693933 4895 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bdd4b3-f731-4190-9958-689486a88f30-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.695932 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.696231 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.914299 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn" event={"ID":"17bdd4b3-f731-4190-9958-689486a88f30","Type":"ContainerDied","Data":"e325068311f24e6c61eb2deb9a1804287381255ce3c9faf7b0894b526e560acd"}
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.914368 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e325068311f24e6c61eb2deb9a1804287381255ce3c9faf7b0894b526e560acd"
Dec 06 09:45:59 crc kubenswrapper[4895]: I1206 09:45:59.914374 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-gq6jn"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.072421 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-k85hm"]
Dec 06 09:46:00 crc kubenswrapper[4895]: E1206 09:46:00.072857 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="699a968e-8c28-41c0-a326-31306d4cbab6" containerName="collect-profiles"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.072875 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="699a968e-8c28-41c0-a326-31306d4cbab6" containerName="collect-profiles"
Dec 06 09:46:00 crc kubenswrapper[4895]: E1206 09:46:00.072898 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17bdd4b3-f731-4190-9958-689486a88f30" containerName="nova-cell1-openstack-openstack-cell1"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.072908 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="17bdd4b3-f731-4190-9958-689486a88f30" containerName="nova-cell1-openstack-openstack-cell1"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.073176 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="17bdd4b3-f731-4190-9958-689486a88f30" containerName="nova-cell1-openstack-openstack-cell1"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.073227 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="699a968e-8c28-41c0-a326-31306d4cbab6" containerName="collect-profiles"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.074358 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.076461 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-wfk68"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.076494 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.076820 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.076764 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.077111 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.098739 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-k85hm"]
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.203929 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceph\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.204066 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ssh-key\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.204086 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.204124 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.204168 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzk8c\" (UniqueName: \"kubernetes.io/projected/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-kube-api-access-vzk8c\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.204190 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.204224 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.204259 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-inventory\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.306255 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceph\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.306342 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ssh-key\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.306361 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.306392 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.306425 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzk8c\" (UniqueName: \"kubernetes.io/projected/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-kube-api-access-vzk8c\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.306458 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.306523 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.306566 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-inventory\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.310366 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.311100 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceph\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.311284 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.311651 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.311948 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.313050 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-inventory\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.313668 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ssh-key\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.326279 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzk8c\" (UniqueName: \"kubernetes.io/projected/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-kube-api-access-vzk8c\") pod \"telemetry-openstack-openstack-cell1-k85hm\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.401057 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-k85hm"
Dec 06 09:46:00 crc kubenswrapper[4895]: I1206 09:46:00.941687 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-k85hm"]
Dec 06 09:46:00 crc kubenswrapper[4895]: W1206 09:46:00.946700 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7d07278_8ff8_402b_8a7b_b2d05efc68fd.slice/crio-c307e4d5fa8fb0fd50d73063e34a30ce5a41bd1a4d5c5bf9e184abf8dec7757a WatchSource:0}: Error finding container c307e4d5fa8fb0fd50d73063e34a30ce5a41bd1a4d5c5bf9e184abf8dec7757a: Status 404 returned error can't find the container with id c307e4d5fa8fb0fd50d73063e34a30ce5a41bd1a4d5c5bf9e184abf8dec7757a
Dec 06 09:46:01 crc kubenswrapper[4895]: I1206 09:46:01.950658 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-k85hm" event={"ID":"b7d07278-8ff8-402b-8a7b-b2d05efc68fd","Type":"ContainerStarted","Data":"8864637c357164b9f8cdef79d09cf342bedd651830add867a1e5f612bc398db6"}
Dec 06 09:46:01 crc kubenswrapper[4895]: I1206 09:46:01.951102 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-k85hm" event={"ID":"b7d07278-8ff8-402b-8a7b-b2d05efc68fd","Type":"ContainerStarted","Data":"c307e4d5fa8fb0fd50d73063e34a30ce5a41bd1a4d5c5bf9e184abf8dec7757a"}
Dec 06 09:46:01 crc kubenswrapper[4895]: I1206 09:46:01.996614 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-k85hm" podStartSLOduration=1.533177107 podStartE2EDuration="1.996592544s" podCreationTimestamp="2025-12-06 09:46:00 +0000 UTC" firstStartedPulling="2025-12-06 09:46:00.948967215 +0000 UTC m=+10123.350356095" lastFinishedPulling="2025-12-06 09:46:01.412382652 +0000 UTC m=+10123.813771532" observedRunningTime="2025-12-06 09:46:01.985778413 +0000 UTC m=+10124.387167283" watchObservedRunningTime="2025-12-06 09:46:01.996592544 +0000 UTC m=+10124.397981414"
Dec 06 09:46:29 crc kubenswrapper[4895]: I1206 09:46:29.696250 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 09:46:29 crc kubenswrapper[4895]: I1206 09:46:29.696858 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 09:46:29 crc kubenswrapper[4895]: I1206 09:46:29.696919 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2"
Dec 06 09:46:29 crc kubenswrapper[4895]: I1206 09:46:29.697880 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 09:46:29 crc kubenswrapper[4895]: I1206 09:46:29.697951 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" gracePeriod=600
Dec 06 09:46:29 crc kubenswrapper[4895]: E1206 09:46:29.828901 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:46:30 crc kubenswrapper[4895]: I1206 09:46:30.441263 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" exitCode=0
Dec 06 09:46:30 crc kubenswrapper[4895]: I1206 09:46:30.441310 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e"}
Dec 06 09:46:30 crc kubenswrapper[4895]: I1206 09:46:30.441371 4895 scope.go:117] "RemoveContainer" containerID="f2cdac50a6b93807e360cc9e6793f3ec3862d9912b78e2d09e34acdc1f9bc06d"
Dec 06 09:46:30 crc kubenswrapper[4895]: I1206 09:46:30.442148 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e"
Dec 06 09:46:30 crc kubenswrapper[4895]: E1206 09:46:30.442436 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:46:42 crc kubenswrapper[4895]: I1206 09:46:42.051377 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e"
Dec 06 09:46:42 crc kubenswrapper[4895]: E1206 09:46:42.051996 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:46:53 crc kubenswrapper[4895]: I1206 09:46:53.051538 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e"
Dec 06 09:46:53 crc kubenswrapper[4895]: E1206 09:46:53.052314 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:47:05 crc kubenswrapper[4895]: I1206 09:47:05.051651 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e"
Dec 06 09:47:05 crc kubenswrapper[4895]: E1206 09:47:05.052347 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:47:16 crc kubenswrapper[4895]: I1206 09:47:16.052465 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e"
Dec 06 09:47:16 crc kubenswrapper[4895]: E1206 09:47:16.053762 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:47:30 crc kubenswrapper[4895]: I1206 09:47:30.051318 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e"
Dec 06 09:47:30 crc kubenswrapper[4895]: E1206 09:47:30.052607 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 09:47:44 crc kubenswrapper[4895]: I1206 09:47:44.051394 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e"
Dec 06 09:47:44 crc kubenswrapper[4895]: E1206 09:47:44.052559 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\""
pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:47:56 crc kubenswrapper[4895]: I1206 09:47:56.053295 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" Dec 06 09:47:56 crc kubenswrapper[4895]: E1206 09:47:56.054574 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:48:07 crc kubenswrapper[4895]: I1206 09:48:07.051012 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" Dec 06 09:48:07 crc kubenswrapper[4895]: E1206 09:48:07.051896 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:48:15 crc kubenswrapper[4895]: I1206 09:48:15.876631 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vcjtr"] Dec 06 09:48:15 crc kubenswrapper[4895]: I1206 09:48:15.881755 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcjtr" Dec 06 09:48:15 crc kubenswrapper[4895]: I1206 09:48:15.896332 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vcjtr"] Dec 06 09:48:15 crc kubenswrapper[4895]: I1206 09:48:15.987754 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26svb\" (UniqueName: \"kubernetes.io/projected/90d6b13c-c68d-4544-8156-058f6c0dd66d-kube-api-access-26svb\") pod \"certified-operators-vcjtr\" (UID: \"90d6b13c-c68d-4544-8156-058f6c0dd66d\") " pod="openshift-marketplace/certified-operators-vcjtr" Dec 06 09:48:15 crc kubenswrapper[4895]: I1206 09:48:15.988236 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d6b13c-c68d-4544-8156-058f6c0dd66d-catalog-content\") pod \"certified-operators-vcjtr\" (UID: \"90d6b13c-c68d-4544-8156-058f6c0dd66d\") " pod="openshift-marketplace/certified-operators-vcjtr" Dec 06 09:48:15 crc kubenswrapper[4895]: I1206 09:48:15.988766 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d6b13c-c68d-4544-8156-058f6c0dd66d-utilities\") pod \"certified-operators-vcjtr\" (UID: \"90d6b13c-c68d-4544-8156-058f6c0dd66d\") " pod="openshift-marketplace/certified-operators-vcjtr" Dec 06 09:48:16 crc kubenswrapper[4895]: I1206 09:48:16.092434 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26svb\" (UniqueName: \"kubernetes.io/projected/90d6b13c-c68d-4544-8156-058f6c0dd66d-kube-api-access-26svb\") pod \"certified-operators-vcjtr\" (UID: 
\"90d6b13c-c68d-4544-8156-058f6c0dd66d\") " pod="openshift-marketplace/certified-operators-vcjtr" Dec 06 09:48:16 crc kubenswrapper[4895]: I1206 09:48:16.093001 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d6b13c-c68d-4544-8156-058f6c0dd66d-catalog-content\") pod \"certified-operators-vcjtr\" (UID: \"90d6b13c-c68d-4544-8156-058f6c0dd66d\") " pod="openshift-marketplace/certified-operators-vcjtr" Dec 06 09:48:16 crc kubenswrapper[4895]: I1206 09:48:16.093309 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d6b13c-c68d-4544-8156-058f6c0dd66d-utilities\") pod \"certified-operators-vcjtr\" (UID: \"90d6b13c-c68d-4544-8156-058f6c0dd66d\") " pod="openshift-marketplace/certified-operators-vcjtr" Dec 06 09:48:16 crc kubenswrapper[4895]: I1206 09:48:16.093718 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d6b13c-c68d-4544-8156-058f6c0dd66d-catalog-content\") pod \"certified-operators-vcjtr\" (UID: \"90d6b13c-c68d-4544-8156-058f6c0dd66d\") " pod="openshift-marketplace/certified-operators-vcjtr" Dec 06 09:48:16 crc kubenswrapper[4895]: I1206 09:48:16.094095 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d6b13c-c68d-4544-8156-058f6c0dd66d-utilities\") pod \"certified-operators-vcjtr\" (UID: \"90d6b13c-c68d-4544-8156-058f6c0dd66d\") " pod="openshift-marketplace/certified-operators-vcjtr" Dec 06 09:48:16 crc kubenswrapper[4895]: I1206 09:48:16.130895 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26svb\" (UniqueName: \"kubernetes.io/projected/90d6b13c-c68d-4544-8156-058f6c0dd66d-kube-api-access-26svb\") pod \"certified-operators-vcjtr\" (UID: \"90d6b13c-c68d-4544-8156-058f6c0dd66d\") " pod="openshift-marketplace/certified-operators-vcjtr" Dec 06 09:48:16 crc kubenswrapper[4895]: I1206 09:48:16.215679 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vcjtr" Dec 06 09:48:16 crc kubenswrapper[4895]: I1206 09:48:16.753628 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vcjtr"] Dec 06 09:48:16 crc kubenswrapper[4895]: I1206 09:48:16.797457 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcjtr" event={"ID":"90d6b13c-c68d-4544-8156-058f6c0dd66d","Type":"ContainerStarted","Data":"a47a2c7854c9a4543a08c3f6e36ec71356be37baa67a0fee626687065be2a35f"} Dec 06 09:48:17 crc kubenswrapper[4895]: I1206 09:48:17.811029 4895 generic.go:334] "Generic (PLEG): container finished" podID="90d6b13c-c68d-4544-8156-058f6c0dd66d" containerID="ea7d4353caf0c224dc605b76e05caa46c809369948bb3bbb795a115f96ec0060" exitCode=0 Dec 06 09:48:17 crc kubenswrapper[4895]: I1206 09:48:17.811078 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcjtr" event={"ID":"90d6b13c-c68d-4544-8156-058f6c0dd66d","Type":"ContainerDied","Data":"ea7d4353caf0c224dc605b76e05caa46c809369948bb3bbb795a115f96ec0060"} Dec 06 09:48:17 crc kubenswrapper[4895]: I1206 09:48:17.815883 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 09:48:18 crc kubenswrapper[4895]: I1206 09:48:18.821306 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcjtr" event={"ID":"90d6b13c-c68d-4544-8156-058f6c0dd66d","Type":"ContainerStarted","Data":"d10893a05613c2aa95030f40257cfb942aebceecd5da82b739fe47eb2fe2da57"} Dec 06 09:48:19 crc kubenswrapper[4895]: I1206 09:48:19.836359 4895 generic.go:334] "Generic (PLEG): container finished" podID="90d6b13c-c68d-4544-8156-058f6c0dd66d" containerID="d10893a05613c2aa95030f40257cfb942aebceecd5da82b739fe47eb2fe2da57" exitCode=0 Dec 06 09:48:19 crc kubenswrapper[4895]: I1206 09:48:19.836438 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcjtr" event={"ID":"90d6b13c-c68d-4544-8156-058f6c0dd66d","Type":"ContainerDied","Data":"d10893a05613c2aa95030f40257cfb942aebceecd5da82b739fe47eb2fe2da57"} Dec 06 09:48:20 crc kubenswrapper[4895]: I1206 09:48:20.854273 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcjtr" event={"ID":"90d6b13c-c68d-4544-8156-058f6c0dd66d","Type":"ContainerStarted","Data":"0ba3ffc320cb24da8e2ab222bccba284bf331a2a1827a39edddfd11a6325c61a"} Dec 06 09:48:20 crc kubenswrapper[4895]: I1206 09:48:20.890890 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vcjtr" podStartSLOduration=3.4422347589999998 podStartE2EDuration="5.890836128s" podCreationTimestamp="2025-12-06 09:48:15 +0000 UTC" firstStartedPulling="2025-12-06 09:48:17.815563883 +0000 UTC m=+10260.216952763" lastFinishedPulling="2025-12-06 09:48:20.264165232 +0000 UTC m=+10262.665554132" observedRunningTime="2025-12-06 09:48:20.881787354 +0000 UTC m=+10263.283176234" watchObservedRunningTime="2025-12-06 09:48:20.890836128 +0000 UTC m=+10263.292225028" Dec 06 09:48:22 crc kubenswrapper[4895]: I1206 09:48:22.052128 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" Dec 06 09:48:22 crc kubenswrapper[4895]: E1206 09:48:22.052705 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:48:26 crc kubenswrapper[4895]: I1206 09:48:26.216110 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vcjtr" Dec 06 09:48:26 crc kubenswrapper[4895]: I1206 09:48:26.216721 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vcjtr" Dec 06 09:48:26 crc kubenswrapper[4895]: I1206 09:48:26.297612 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vcjtr" Dec 06 09:48:27 crc kubenswrapper[4895]: I1206 09:48:27.795419 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vcjtr" Dec 06 09:48:27 crc kubenswrapper[4895]: I1206 09:48:27.860150 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vcjtr"] Dec 06 09:48:28 crc kubenswrapper[4895]: I1206 09:48:28.968424 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vcjtr" podUID="90d6b13c-c68d-4544-8156-058f6c0dd66d" containerName="registry-server" containerID="cri-o://0ba3ffc320cb24da8e2ab222bccba284bf331a2a1827a39edddfd11a6325c61a" gracePeriod=2 Dec 06 09:48:29 crc kubenswrapper[4895]: E1206 09:48:29.069063 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90d6b13c_c68d_4544_8156_058f6c0dd66d.slice/crio-conmon-0ba3ffc320cb24da8e2ab222bccba284bf331a2a1827a39edddfd11a6325c61a.scope\": RecentStats: unable to find data in memory cache]" Dec 06 09:48:29 crc kubenswrapper[4895]: I1206 09:48:29.428772 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vcjtr" Dec 06 09:48:29 crc kubenswrapper[4895]: I1206 09:48:29.545422 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26svb\" (UniqueName: \"kubernetes.io/projected/90d6b13c-c68d-4544-8156-058f6c0dd66d-kube-api-access-26svb\") pod \"90d6b13c-c68d-4544-8156-058f6c0dd66d\" (UID: \"90d6b13c-c68d-4544-8156-058f6c0dd66d\") " Dec 06 09:48:29 crc kubenswrapper[4895]: I1206 09:48:29.545497 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d6b13c-c68d-4544-8156-058f6c0dd66d-catalog-content\") pod \"90d6b13c-c68d-4544-8156-058f6c0dd66d\" (UID: \"90d6b13c-c68d-4544-8156-058f6c0dd66d\") " Dec 06 09:48:29 crc kubenswrapper[4895]: I1206 09:48:29.545790 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d6b13c-c68d-4544-8156-058f6c0dd66d-utilities\") pod \"90d6b13c-c68d-4544-8156-058f6c0dd66d\" (UID: \"90d6b13c-c68d-4544-8156-058f6c0dd66d\") " Dec 06 09:48:29 crc kubenswrapper[4895]: I1206 09:48:29.546527 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90d6b13c-c68d-4544-8156-058f6c0dd66d-utilities" (OuterVolumeSpecName: "utilities") pod "90d6b13c-c68d-4544-8156-058f6c0dd66d" (UID: "90d6b13c-c68d-4544-8156-058f6c0dd66d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:48:29 crc kubenswrapper[4895]: I1206 09:48:29.559818 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90d6b13c-c68d-4544-8156-058f6c0dd66d-kube-api-access-26svb" (OuterVolumeSpecName: "kube-api-access-26svb") pod "90d6b13c-c68d-4544-8156-058f6c0dd66d" (UID: "90d6b13c-c68d-4544-8156-058f6c0dd66d"). InnerVolumeSpecName "kube-api-access-26svb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:48:29 crc kubenswrapper[4895]: I1206 09:48:29.621410 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90d6b13c-c68d-4544-8156-058f6c0dd66d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90d6b13c-c68d-4544-8156-058f6c0dd66d" (UID: "90d6b13c-c68d-4544-8156-058f6c0dd66d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:48:29 crc kubenswrapper[4895]: I1206 09:48:29.648403 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d6b13c-c68d-4544-8156-058f6c0dd66d-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 09:48:29 crc kubenswrapper[4895]: I1206 09:48:29.648438 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26svb\" (UniqueName: \"kubernetes.io/projected/90d6b13c-c68d-4544-8156-058f6c0dd66d-kube-api-access-26svb\") on node \"crc\" DevicePath \"\"" Dec 06 09:48:29 crc kubenswrapper[4895]: I1206 09:48:29.648450 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d6b13c-c68d-4544-8156-058f6c0dd66d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 09:48:29 crc kubenswrapper[4895]: I1206 09:48:29.979564 4895 generic.go:334] "Generic (PLEG): container finished" podID="90d6b13c-c68d-4544-8156-058f6c0dd66d" containerID="0ba3ffc320cb24da8e2ab222bccba284bf331a2a1827a39edddfd11a6325c61a" exitCode=0 Dec 06 09:48:29 crc kubenswrapper[4895]: I1206 09:48:29.979629 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcjtr" event={"ID":"90d6b13c-c68d-4544-8156-058f6c0dd66d","Type":"ContainerDied","Data":"0ba3ffc320cb24da8e2ab222bccba284bf331a2a1827a39edddfd11a6325c61a"} Dec 06 09:48:29 crc kubenswrapper[4895]: I1206 09:48:29.980668 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcjtr" event={"ID":"90d6b13c-c68d-4544-8156-058f6c0dd66d","Type":"ContainerDied","Data":"a47a2c7854c9a4543a08c3f6e36ec71356be37baa67a0fee626687065be2a35f"} Dec 06 09:48:29 crc kubenswrapper[4895]: I1206 09:48:29.980703 4895 scope.go:117] "RemoveContainer" containerID="0ba3ffc320cb24da8e2ab222bccba284bf331a2a1827a39edddfd11a6325c61a" Dec 06 09:48:29 crc kubenswrapper[4895]: I1206 09:48:29.979682 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vcjtr" Dec 06 09:48:30 crc kubenswrapper[4895]: I1206 09:48:30.016576 4895 scope.go:117] "RemoveContainer" containerID="d10893a05613c2aa95030f40257cfb942aebceecd5da82b739fe47eb2fe2da57" Dec 06 09:48:30 crc kubenswrapper[4895]: I1206 09:48:30.035585 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vcjtr"] Dec 06 09:48:30 crc kubenswrapper[4895]: I1206 09:48:30.044804 4895 scope.go:117] "RemoveContainer" containerID="ea7d4353caf0c224dc605b76e05caa46c809369948bb3bbb795a115f96ec0060" Dec 06 09:48:30 crc kubenswrapper[4895]: I1206 09:48:30.077151 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vcjtr"] Dec 06 09:48:30 crc kubenswrapper[4895]: I1206 09:48:30.105274 4895 scope.go:117] "RemoveContainer" containerID="0ba3ffc320cb24da8e2ab222bccba284bf331a2a1827a39edddfd11a6325c61a" Dec 06 09:48:30 crc kubenswrapper[4895]: E1206 09:48:30.105952 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ba3ffc320cb24da8e2ab222bccba284bf331a2a1827a39edddfd11a6325c61a\": container with ID starting with 0ba3ffc320cb24da8e2ab222bccba284bf331a2a1827a39edddfd11a6325c61a not found: ID does not exist" containerID="0ba3ffc320cb24da8e2ab222bccba284bf331a2a1827a39edddfd11a6325c61a" Dec 06 09:48:30 crc kubenswrapper[4895]: I1206 09:48:30.106059 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ba3ffc320cb24da8e2ab222bccba284bf331a2a1827a39edddfd11a6325c61a"} err="failed to get container status \"0ba3ffc320cb24da8e2ab222bccba284bf331a2a1827a39edddfd11a6325c61a\": rpc error: code = NotFound desc = could not find container \"0ba3ffc320cb24da8e2ab222bccba284bf331a2a1827a39edddfd11a6325c61a\": container with ID starting with 0ba3ffc320cb24da8e2ab222bccba284bf331a2a1827a39edddfd11a6325c61a not found: ID does not exist" Dec 06 09:48:30 crc kubenswrapper[4895]: I1206 09:48:30.106100 4895 scope.go:117] "RemoveContainer" containerID="d10893a05613c2aa95030f40257cfb942aebceecd5da82b739fe47eb2fe2da57" Dec 06 09:48:30 crc kubenswrapper[4895]: E1206 09:48:30.106620 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d10893a05613c2aa95030f40257cfb942aebceecd5da82b739fe47eb2fe2da57\": container with ID starting with d10893a05613c2aa95030f40257cfb942aebceecd5da82b739fe47eb2fe2da57 not found: ID does not exist" containerID="d10893a05613c2aa95030f40257cfb942aebceecd5da82b739fe47eb2fe2da57" Dec 06 09:48:30 crc kubenswrapper[4895]: I1206 09:48:30.106657 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d10893a05613c2aa95030f40257cfb942aebceecd5da82b739fe47eb2fe2da57"} err="failed to get container status \"d10893a05613c2aa95030f40257cfb942aebceecd5da82b739fe47eb2fe2da57\": rpc error: code = NotFound desc = could not find container \"d10893a05613c2aa95030f40257cfb942aebceecd5da82b739fe47eb2fe2da57\": container with ID starting with d10893a05613c2aa95030f40257cfb942aebceecd5da82b739fe47eb2fe2da57 not found: ID does not exist" Dec 06 09:48:30 crc kubenswrapper[4895]: I1206 09:48:30.106677 4895 scope.go:117] "RemoveContainer" containerID="ea7d4353caf0c224dc605b76e05caa46c809369948bb3bbb795a115f96ec0060" Dec 06 09:48:30 crc kubenswrapper[4895]: E1206 09:48:30.106947 4895 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ea7d4353caf0c224dc605b76e05caa46c809369948bb3bbb795a115f96ec0060\": container with ID starting with ea7d4353caf0c224dc605b76e05caa46c809369948bb3bbb795a115f96ec0060 not found: ID does not exist" containerID="ea7d4353caf0c224dc605b76e05caa46c809369948bb3bbb795a115f96ec0060" Dec 06 09:48:30 crc kubenswrapper[4895]: I1206 09:48:30.106985 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea7d4353caf0c224dc605b76e05caa46c809369948bb3bbb795a115f96ec0060"} err="failed to get container status \"ea7d4353caf0c224dc605b76e05caa46c809369948bb3bbb795a115f96ec0060\": rpc error: code = NotFound desc = could not find container \"ea7d4353caf0c224dc605b76e05caa46c809369948bb3bbb795a115f96ec0060\": container with ID starting with ea7d4353caf0c224dc605b76e05caa46c809369948bb3bbb795a115f96ec0060 not found: ID does not exist" Dec 06 09:48:32 crc kubenswrapper[4895]: I1206 09:48:32.078847 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90d6b13c-c68d-4544-8156-058f6c0dd66d" path="/var/lib/kubelet/pods/90d6b13c-c68d-4544-8156-058f6c0dd66d/volumes" Dec 06 09:48:35 crc kubenswrapper[4895]: I1206 09:48:35.051233 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" Dec 06 09:48:35 crc kubenswrapper[4895]: E1206 09:48:35.052546 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:48:50 crc kubenswrapper[4895]: I1206 09:48:50.051405 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" Dec 06 09:48:50 crc kubenswrapper[4895]: E1206 09:48:50.052192 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:49:02 crc kubenswrapper[4895]: I1206 09:49:02.051025 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" Dec 06 09:49:02 crc kubenswrapper[4895]: E1206 09:49:02.052215 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:49:13 crc kubenswrapper[4895]: I1206 09:49:13.051628 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" Dec 06 09:49:13 crc kubenswrapper[4895]: E1206 09:49:13.054371 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:49:24 crc kubenswrapper[4895]: I1206 09:49:24.050147 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" Dec 06 09:49:24 crc kubenswrapper[4895]: E1206 09:49:24.050905 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:49:37 crc kubenswrapper[4895]: I1206 09:49:37.050983 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" Dec 06 09:49:37 crc kubenswrapper[4895]: E1206 09:49:37.051937 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:49:48 crc kubenswrapper[4895]: I1206 09:49:48.058939 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" Dec 06 09:49:48 crc kubenswrapper[4895]: E1206 09:49:48.061022 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:49:56 crc kubenswrapper[4895]: I1206 09:49:56.347012 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wxpjx"] Dec 06 09:49:56 crc kubenswrapper[4895]: E1206 09:49:56.348159 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d6b13c-c68d-4544-8156-058f6c0dd66d" containerName="extract-content" Dec 06 09:49:56 crc kubenswrapper[4895]: I1206 09:49:56.348181 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d6b13c-c68d-4544-8156-058f6c0dd66d" containerName="extract-content" Dec 06 09:49:56 crc kubenswrapper[4895]: E1206 09:49:56.348198 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d6b13c-c68d-4544-8156-058f6c0dd66d" containerName="extract-utilities" Dec 06 09:49:56 crc kubenswrapper[4895]: I1206 09:49:56.348206 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d6b13c-c68d-4544-8156-058f6c0dd66d" containerName="extract-utilities" Dec 06 09:49:56 crc kubenswrapper[4895]: E1206 09:49:56.348245 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d6b13c-c68d-4544-8156-058f6c0dd66d" containerName="registry-server" Dec 06 09:49:56 crc 
kubenswrapper[4895]: I1206 09:49:56.348254 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d6b13c-c68d-4544-8156-058f6c0dd66d" containerName="registry-server" Dec 06 09:49:56 crc kubenswrapper[4895]: I1206 09:49:56.348602 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="90d6b13c-c68d-4544-8156-058f6c0dd66d" containerName="registry-server" Dec 06 09:49:56 crc kubenswrapper[4895]: I1206 09:49:56.350585 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxpjx" Dec 06 09:49:56 crc kubenswrapper[4895]: I1206 09:49:56.373448 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wxpjx"] Dec 06 09:49:56 crc kubenswrapper[4895]: I1206 09:49:56.527603 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c458d8ad-b448-40e2-ada7-b46e69922c8f-utilities\") pod \"community-operators-wxpjx\" (UID: \"c458d8ad-b448-40e2-ada7-b46e69922c8f\") " pod="openshift-marketplace/community-operators-wxpjx" Dec 06 09:49:56 crc kubenswrapper[4895]: I1206 09:49:56.527648 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7986\" (UniqueName: \"kubernetes.io/projected/c458d8ad-b448-40e2-ada7-b46e69922c8f-kube-api-access-h7986\") pod \"community-operators-wxpjx\" (UID: \"c458d8ad-b448-40e2-ada7-b46e69922c8f\") " pod="openshift-marketplace/community-operators-wxpjx" Dec 06 09:49:56 crc kubenswrapper[4895]: I1206 09:49:56.527675 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c458d8ad-b448-40e2-ada7-b46e69922c8f-catalog-content\") pod \"community-operators-wxpjx\" (UID: \"c458d8ad-b448-40e2-ada7-b46e69922c8f\") " pod="openshift-marketplace/community-operators-wxpjx" Dec 06 09:49:56 crc kubenswrapper[4895]: I1206 09:49:56.633044 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c458d8ad-b448-40e2-ada7-b46e69922c8f-utilities\") pod \"community-operators-wxpjx\" (UID: \"c458d8ad-b448-40e2-ada7-b46e69922c8f\") " pod="openshift-marketplace/community-operators-wxpjx" Dec 06 09:49:56 crc kubenswrapper[4895]: I1206 09:49:56.633111 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7986\" (UniqueName: \"kubernetes.io/projected/c458d8ad-b448-40e2-ada7-b46e69922c8f-kube-api-access-h7986\") pod \"community-operators-wxpjx\" (UID: \"c458d8ad-b448-40e2-ada7-b46e69922c8f\") " pod="openshift-marketplace/community-operators-wxpjx" Dec 06 09:49:56 crc kubenswrapper[4895]: I1206 09:49:56.633145 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c458d8ad-b448-40e2-ada7-b46e69922c8f-catalog-content\") pod \"community-operators-wxpjx\" (UID: \"c458d8ad-b448-40e2-ada7-b46e69922c8f\") " pod="openshift-marketplace/community-operators-wxpjx" Dec 06 09:49:56 crc kubenswrapper[4895]: I1206 09:49:56.633634 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c458d8ad-b448-40e2-ada7-b46e69922c8f-utilities\") pod \"community-operators-wxpjx\" (UID: \"c458d8ad-b448-40e2-ada7-b46e69922c8f\") " 
pod="openshift-marketplace/community-operators-wxpjx" Dec 06 09:49:56 crc kubenswrapper[4895]: I1206 09:49:56.633818 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c458d8ad-b448-40e2-ada7-b46e69922c8f-catalog-content\") pod \"community-operators-wxpjx\" (UID: \"c458d8ad-b448-40e2-ada7-b46e69922c8f\") " pod="openshift-marketplace/community-operators-wxpjx" Dec 06 09:49:56 crc kubenswrapper[4895]: I1206 09:49:56.662148 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7986\" (UniqueName: \"kubernetes.io/projected/c458d8ad-b448-40e2-ada7-b46e69922c8f-kube-api-access-h7986\") pod \"community-operators-wxpjx\" (UID: \"c458d8ad-b448-40e2-ada7-b46e69922c8f\") " pod="openshift-marketplace/community-operators-wxpjx" Dec 06 09:49:56 crc kubenswrapper[4895]: I1206 09:49:56.696027 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxpjx" Dec 06 09:49:57 crc kubenswrapper[4895]: W1206 09:49:57.269419 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc458d8ad_b448_40e2_ada7_b46e69922c8f.slice/crio-73b154b2b581b8244ff5489b1dac5a222582b585ae0ced123b4ea2cfe19786ee WatchSource:0}: Error finding container 73b154b2b581b8244ff5489b1dac5a222582b585ae0ced123b4ea2cfe19786ee: Status 404 returned error can't find the container with id 73b154b2b581b8244ff5489b1dac5a222582b585ae0ced123b4ea2cfe19786ee Dec 06 09:49:57 crc kubenswrapper[4895]: I1206 09:49:57.274994 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wxpjx"] Dec 06 09:49:58 crc kubenswrapper[4895]: I1206 09:49:58.268665 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxpjx" event={"ID":"c458d8ad-b448-40e2-ada7-b46e69922c8f","Type":"ContainerStarted","Data":"73b154b2b581b8244ff5489b1dac5a222582b585ae0ced123b4ea2cfe19786ee"} Dec 06 09:49:59 crc kubenswrapper[4895]: I1206 09:49:59.281523 4895 generic.go:334] "Generic (PLEG): container finished" podID="c458d8ad-b448-40e2-ada7-b46e69922c8f" containerID="e1785596d4b5a0b349027e3e435ee9181888677a8ae53956696ff8495e595b96" exitCode=0 Dec 06 09:49:59 crc kubenswrapper[4895]: I1206 09:49:59.281609 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxpjx" event={"ID":"c458d8ad-b448-40e2-ada7-b46e69922c8f","Type":"ContainerDied","Data":"e1785596d4b5a0b349027e3e435ee9181888677a8ae53956696ff8495e595b96"} Dec 06 09:50:01 crc kubenswrapper[4895]: I1206 09:50:01.313827 4895 generic.go:334] "Generic (PLEG): container finished" podID="c458d8ad-b448-40e2-ada7-b46e69922c8f" containerID="c17cd7ef6843a7fadb1218e3665693dcdad04ae3c1d21e9b97e70ba460d02038" exitCode=0 Dec 06 09:50:01 crc kubenswrapper[4895]: I1206 09:50:01.315863 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxpjx" event={"ID":"c458d8ad-b448-40e2-ada7-b46e69922c8f","Type":"ContainerDied","Data":"c17cd7ef6843a7fadb1218e3665693dcdad04ae3c1d21e9b97e70ba460d02038"} Dec 06 09:50:02 crc kubenswrapper[4895]: I1206 09:50:02.050810 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" Dec 06 09:50:02 crc kubenswrapper[4895]: E1206 09:50:02.051772 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:50:02 crc kubenswrapper[4895]: I1206 09:50:02.328705 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxpjx" event={"ID":"c458d8ad-b448-40e2-ada7-b46e69922c8f","Type":"ContainerStarted","Data":"4487aa2b2473e56e896497daadaa36e900a024c80a4ac7168d1663450796ee32"} Dec 06 09:50:02 crc kubenswrapper[4895]: I1206 09:50:02.369740 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wxpjx" podStartSLOduration=3.914274354 podStartE2EDuration="6.369718928s" podCreationTimestamp="2025-12-06 09:49:56 +0000 UTC" firstStartedPulling="2025-12-06 09:49:59.28396004 +0000 UTC m=+10361.685348910" lastFinishedPulling="2025-12-06 09:50:01.739404574 +0000 UTC m=+10364.140793484" observedRunningTime="2025-12-06 09:50:02.350929142 +0000 UTC m=+10364.752318052" watchObservedRunningTime="2025-12-06 09:50:02.369718928 +0000 UTC m=+10364.771107808" Dec 06 09:50:06 crc kubenswrapper[4895]: I1206 09:50:06.697238 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wxpjx" Dec 06 09:50:06 crc kubenswrapper[4895]: I1206 09:50:06.697862 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wxpjx" Dec 06 09:50:06 crc kubenswrapper[4895]: I1206 09:50:06.747912 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wxpjx" Dec 06 09:50:07 crc kubenswrapper[4895]: I1206 09:50:07.484995 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wxpjx" Dec 06 09:50:08 crc kubenswrapper[4895]: I1206 09:50:08.199175 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wxpjx"] Dec 06 09:50:09 crc kubenswrapper[4895]: I1206 09:50:09.421021 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wxpjx" podUID="c458d8ad-b448-40e2-ada7-b46e69922c8f" containerName="registry-server" containerID="cri-o://4487aa2b2473e56e896497daadaa36e900a024c80a4ac7168d1663450796ee32" gracePeriod=2 Dec 06 09:50:09 crc kubenswrapper[4895]: I1206 09:50:09.932253 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wxpjx" Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.051842 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7986\" (UniqueName: \"kubernetes.io/projected/c458d8ad-b448-40e2-ada7-b46e69922c8f-kube-api-access-h7986\") pod \"c458d8ad-b448-40e2-ada7-b46e69922c8f\" (UID: \"c458d8ad-b448-40e2-ada7-b46e69922c8f\") " Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.051942 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c458d8ad-b448-40e2-ada7-b46e69922c8f-catalog-content\") pod \"c458d8ad-b448-40e2-ada7-b46e69922c8f\" (UID: \"c458d8ad-b448-40e2-ada7-b46e69922c8f\") " Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.052313 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c458d8ad-b448-40e2-ada7-b46e69922c8f-utilities\") pod \"c458d8ad-b448-40e2-ada7-b46e69922c8f\" (UID: \"c458d8ad-b448-40e2-ada7-b46e69922c8f\") " Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.053524 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c458d8ad-b448-40e2-ada7-b46e69922c8f-utilities" (OuterVolumeSpecName: "utilities") pod "c458d8ad-b448-40e2-ada7-b46e69922c8f" (UID: "c458d8ad-b448-40e2-ada7-b46e69922c8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.059853 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c458d8ad-b448-40e2-ada7-b46e69922c8f-kube-api-access-h7986" (OuterVolumeSpecName: "kube-api-access-h7986") pod "c458d8ad-b448-40e2-ada7-b46e69922c8f" (UID: "c458d8ad-b448-40e2-ada7-b46e69922c8f"). InnerVolumeSpecName "kube-api-access-h7986". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.114313 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c458d8ad-b448-40e2-ada7-b46e69922c8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c458d8ad-b448-40e2-ada7-b46e69922c8f" (UID: "c458d8ad-b448-40e2-ada7-b46e69922c8f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.156279 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c458d8ad-b448-40e2-ada7-b46e69922c8f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.156316 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c458d8ad-b448-40e2-ada7-b46e69922c8f-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.156329 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7986\" (UniqueName: \"kubernetes.io/projected/c458d8ad-b448-40e2-ada7-b46e69922c8f-kube-api-access-h7986\") on node \"crc\" DevicePath \"\"" Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.438926 4895 generic.go:334] "Generic (PLEG): container finished" podID="c458d8ad-b448-40e2-ada7-b46e69922c8f" containerID="4487aa2b2473e56e896497daadaa36e900a024c80a4ac7168d1663450796ee32" exitCode=0 Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.438999 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxpjx" event={"ID":"c458d8ad-b448-40e2-ada7-b46e69922c8f","Type":"ContainerDied","Data":"4487aa2b2473e56e896497daadaa36e900a024c80a4ac7168d1663450796ee32"} Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.439053 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxpjx" event={"ID":"c458d8ad-b448-40e2-ada7-b46e69922c8f","Type":"ContainerDied","Data":"73b154b2b581b8244ff5489b1dac5a222582b585ae0ced123b4ea2cfe19786ee"} Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.439084 4895 scope.go:117] "RemoveContainer" containerID="4487aa2b2473e56e896497daadaa36e900a024c80a4ac7168d1663450796ee32" Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.439075 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wxpjx" Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.480818 4895 scope.go:117] "RemoveContainer" containerID="c17cd7ef6843a7fadb1218e3665693dcdad04ae3c1d21e9b97e70ba460d02038" Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.493791 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wxpjx"] Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.503985 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wxpjx"] Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.526024 4895 scope.go:117] "RemoveContainer" containerID="e1785596d4b5a0b349027e3e435ee9181888677a8ae53956696ff8495e595b96" Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.561637 4895 scope.go:117] "RemoveContainer" containerID="4487aa2b2473e56e896497daadaa36e900a024c80a4ac7168d1663450796ee32" Dec 06 09:50:10 crc kubenswrapper[4895]: E1206 09:50:10.562102 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4487aa2b2473e56e896497daadaa36e900a024c80a4ac7168d1663450796ee32\": container with ID starting with 4487aa2b2473e56e896497daadaa36e900a024c80a4ac7168d1663450796ee32 not found: ID does not exist" containerID="4487aa2b2473e56e896497daadaa36e900a024c80a4ac7168d1663450796ee32" Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.562141 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4487aa2b2473e56e896497daadaa36e900a024c80a4ac7168d1663450796ee32"} err="failed to get container status \"4487aa2b2473e56e896497daadaa36e900a024c80a4ac7168d1663450796ee32\": rpc error: code = NotFound desc = could not find container \"4487aa2b2473e56e896497daadaa36e900a024c80a4ac7168d1663450796ee32\": container with ID starting with 4487aa2b2473e56e896497daadaa36e900a024c80a4ac7168d1663450796ee32 not found: ID does not exist" Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.562164 4895 scope.go:117] "RemoveContainer" containerID="c17cd7ef6843a7fadb1218e3665693dcdad04ae3c1d21e9b97e70ba460d02038" Dec 06 09:50:10 crc kubenswrapper[4895]: E1206 09:50:10.562680 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c17cd7ef6843a7fadb1218e3665693dcdad04ae3c1d21e9b97e70ba460d02038\": container with ID starting with c17cd7ef6843a7fadb1218e3665693dcdad04ae3c1d21e9b97e70ba460d02038 not found: ID does not exist" containerID="c17cd7ef6843a7fadb1218e3665693dcdad04ae3c1d21e9b97e70ba460d02038" Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.562738 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c17cd7ef6843a7fadb1218e3665693dcdad04ae3c1d21e9b97e70ba460d02038"} err="failed to get container status \"c17cd7ef6843a7fadb1218e3665693dcdad04ae3c1d21e9b97e70ba460d02038\": rpc error: code = NotFound desc = could not find container \"c17cd7ef6843a7fadb1218e3665693dcdad04ae3c1d21e9b97e70ba460d02038\": container with ID starting with c17cd7ef6843a7fadb1218e3665693dcdad04ae3c1d21e9b97e70ba460d02038 not found: ID does not exist" Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.562774 4895 scope.go:117] "RemoveContainer" containerID="e1785596d4b5a0b349027e3e435ee9181888677a8ae53956696ff8495e595b96" Dec 06 09:50:10 crc kubenswrapper[4895]: E1206 09:50:10.563154 4895 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e1785596d4b5a0b349027e3e435ee9181888677a8ae53956696ff8495e595b96\": container with ID starting with e1785596d4b5a0b349027e3e435ee9181888677a8ae53956696ff8495e595b96 not found: ID does not exist" containerID="e1785596d4b5a0b349027e3e435ee9181888677a8ae53956696ff8495e595b96" Dec 06 09:50:10 crc kubenswrapper[4895]: I1206 09:50:10.563188 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1785596d4b5a0b349027e3e435ee9181888677a8ae53956696ff8495e595b96"} err="failed to get container status \"e1785596d4b5a0b349027e3e435ee9181888677a8ae53956696ff8495e595b96\": rpc error: code = NotFound desc = could not find container \"e1785596d4b5a0b349027e3e435ee9181888677a8ae53956696ff8495e595b96\": container with ID starting with e1785596d4b5a0b349027e3e435ee9181888677a8ae53956696ff8495e595b96 not found: ID does not exist" Dec 06 09:50:12 crc kubenswrapper[4895]: I1206 09:50:12.063216 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c458d8ad-b448-40e2-ada7-b46e69922c8f" path="/var/lib/kubelet/pods/c458d8ad-b448-40e2-ada7-b46e69922c8f/volumes" Dec 06 09:50:16 crc kubenswrapper[4895]: I1206 09:50:16.051049 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" Dec 06 09:50:16 crc kubenswrapper[4895]: E1206 09:50:16.051931 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:50:27 crc kubenswrapper[4895]: I1206 09:50:27.050780 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" Dec 06 09:50:27 crc kubenswrapper[4895]: E1206 09:50:27.052088 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:50:41 crc kubenswrapper[4895]: I1206 09:50:41.051275 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" Dec 06 09:50:41 crc kubenswrapper[4895]: E1206 09:50:41.052762 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:50:53 crc kubenswrapper[4895]: I1206 09:50:53.051312 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" Dec 06 09:50:53 crc kubenswrapper[4895]: E1206 09:50:53.052784 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:51:06 crc kubenswrapper[4895]: I1206 09:51:06.196287 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" Dec 06 09:51:06 crc kubenswrapper[4895]: E1206 09:51:06.197519 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:51:20 crc kubenswrapper[4895]: I1206 09:51:20.052164 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" Dec 06 09:51:20 crc kubenswrapper[4895]: E1206 09:51:20.052962 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:51:34 crc kubenswrapper[4895]: I1206 09:51:34.051402 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" Dec 06 09:51:34 crc kubenswrapper[4895]: I1206 09:51:34.647596 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"ef567f04ef08396d9009f8c8476b607e5d5ec039fc92dc919c830d8d79369fdc"} Dec 06 09:53:10 crc kubenswrapper[4895]: I1206 09:53:10.917433 4895 generic.go:334] "Generic (PLEG): container finished" podID="b7d07278-8ff8-402b-8a7b-b2d05efc68fd" containerID="8864637c357164b9f8cdef79d09cf342bedd651830add867a1e5f612bc398db6" exitCode=0 Dec 06 09:53:10 crc kubenswrapper[4895]: I1206 09:53:10.917542 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-k85hm" event={"ID":"b7d07278-8ff8-402b-8a7b-b2d05efc68fd","Type":"ContainerDied","Data":"8864637c357164b9f8cdef79d09cf342bedd651830add867a1e5f612bc398db6"} Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.574521 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-k85hm" Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.586465 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceilometer-compute-config-data-2\") pod \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.645757 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "b7d07278-8ff8-402b-8a7b-b2d05efc68fd" (UID: "b7d07278-8ff8-402b-8a7b-b2d05efc68fd"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.693089 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ssh-key\") pod \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.693231 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceilometer-compute-config-data-1\") pod \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.693282 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceph\") pod \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.693303 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceilometer-compute-config-data-0\") pod \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.693330 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-inventory\") pod \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.693366 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-telemetry-combined-ca-bundle\") pod \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.693453 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzk8c\" (UniqueName: \"kubernetes.io/projected/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-kube-api-access-vzk8c\") pod \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\" (UID: \"b7d07278-8ff8-402b-8a7b-b2d05efc68fd\") " Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.693969 
4895 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.698754 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceph" (OuterVolumeSpecName: "ceph") pod "b7d07278-8ff8-402b-8a7b-b2d05efc68fd" (UID: "b7d07278-8ff8-402b-8a7b-b2d05efc68fd"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.699582 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-kube-api-access-vzk8c" (OuterVolumeSpecName: "kube-api-access-vzk8c") pod "b7d07278-8ff8-402b-8a7b-b2d05efc68fd" (UID: "b7d07278-8ff8-402b-8a7b-b2d05efc68fd"). InnerVolumeSpecName "kube-api-access-vzk8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.699847 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b7d07278-8ff8-402b-8a7b-b2d05efc68fd" (UID: "b7d07278-8ff8-402b-8a7b-b2d05efc68fd"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.730372 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-inventory" (OuterVolumeSpecName: "inventory") pod "b7d07278-8ff8-402b-8a7b-b2d05efc68fd" (UID: "b7d07278-8ff8-402b-8a7b-b2d05efc68fd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.733736 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b7d07278-8ff8-402b-8a7b-b2d05efc68fd" (UID: "b7d07278-8ff8-402b-8a7b-b2d05efc68fd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.736909 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "b7d07278-8ff8-402b-8a7b-b2d05efc68fd" (UID: "b7d07278-8ff8-402b-8a7b-b2d05efc68fd"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.747838 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "b7d07278-8ff8-402b-8a7b-b2d05efc68fd" (UID: "b7d07278-8ff8-402b-8a7b-b2d05efc68fd"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.805465 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.805534 4895 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.805552 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.805565 4895 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.805580 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.805599 4895 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.805619 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzk8c\" (UniqueName: \"kubernetes.io/projected/b7d07278-8ff8-402b-8a7b-b2d05efc68fd-kube-api-access-vzk8c\") on node \"crc\" DevicePath \"\"" Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.945803 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-k85hm" event={"ID":"b7d07278-8ff8-402b-8a7b-b2d05efc68fd","Type":"ContainerDied","Data":"c307e4d5fa8fb0fd50d73063e34a30ce5a41bd1a4d5c5bf9e184abf8dec7757a"} Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.945884 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c307e4d5fa8fb0fd50d73063e34a30ce5a41bd1a4d5c5bf9e184abf8dec7757a" Dec 06 09:53:12 crc kubenswrapper[4895]: I1206 09:53:12.945961 4895 util.go:48] "No ready sandbox for pod can be found. 
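
The unmount sequence above is the kubelet's standard three-phase volume teardown: the reconciler logs "UnmountVolume started", operation_generator confirms "UnmountVolume.TearDown succeeded", and the reconciler finally records "Volume detached ... DevicePath \"\"". A best-effort way to pair the phases per volume and flag any volume that starts unmounting but never detaches; the regexes below approximate the message text in this journal and are not a stable kubelet interface:

    import re
    import sys
    from collections import defaultdict

    # Best-effort patterns for the three teardown phases; journald escaping
    # of quotes varies here, so '.?"' tolerates both " and \".
    STARTED  = re.compile(r'UnmountVolume started for volume .?"([\w-]+)')
    DETACHED = re.compile(r'Volume detached for volume .?"([\w-]+)')
    TORNDOWN = re.compile(r'TearDown succeeded.*OuterVolumeSpecName: "([\w-]+)"')

    def unfinished(lines):
        """Volumes that logged 'UnmountVolume started' but never 'detached'."""
        seen = defaultdict(set)
        for line in lines:
            for phase, rx in (("started", STARTED), ("torndown", TORNDOWN),
                              ("detached", DETACHED)):
                m = rx.search(line)
                if m:
                    seen[m.group(1)].add(phase)
        return sorted(v for v, p in seen.items() if "detached" not in p)

    if __name__ == "__main__":
        # e.g.  journalctl -u kubelet --no-pager | python3 teardown_check.py
        print(unfinished(sys.stdin))

Every volume of the telemetry pod above reaches "Volume detached", so over this excerpt the script would print an empty list.
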
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-k85hm" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.307536 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-7vxxz"] Dec 06 09:53:13 crc kubenswrapper[4895]: E1206 09:53:13.308327 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d07278-8ff8-402b-8a7b-b2d05efc68fd" containerName="telemetry-openstack-openstack-cell1" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.308349 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d07278-8ff8-402b-8a7b-b2d05efc68fd" containerName="telemetry-openstack-openstack-cell1" Dec 06 09:53:13 crc kubenswrapper[4895]: E1206 09:53:13.308368 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c458d8ad-b448-40e2-ada7-b46e69922c8f" containerName="extract-utilities" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.308374 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c458d8ad-b448-40e2-ada7-b46e69922c8f" containerName="extract-utilities" Dec 06 09:53:13 crc kubenswrapper[4895]: E1206 09:53:13.308406 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c458d8ad-b448-40e2-ada7-b46e69922c8f" containerName="registry-server" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.308413 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c458d8ad-b448-40e2-ada7-b46e69922c8f" containerName="registry-server" Dec 06 09:53:13 crc kubenswrapper[4895]: E1206 09:53:13.308431 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c458d8ad-b448-40e2-ada7-b46e69922c8f" containerName="extract-content" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.308437 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c458d8ad-b448-40e2-ada7-b46e69922c8f" containerName="extract-content" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.308688 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d07278-8ff8-402b-8a7b-b2d05efc68fd" containerName="telemetry-openstack-openstack-cell1" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.308714 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c458d8ad-b448-40e2-ada7-b46e69922c8f" containerName="registry-server" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.309446 4895 util.go:30] "No sandbox for pod can be found. 
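
The paired cpu_manager / state_mem and memory_manager entries above fire while the new neutron-sriov pod is being admitted: RemoveStaleState drops CPU-set and memory-manager assignments still recorded for containers of pods that no longer exist (the finished telemetry pod and the removed catalog pod). They are logged at E level but are routine cleanup on pod churn; the same pattern repeats further down when the neutron-dhcp pod is admitted. A toy model of the idea, assuming a plain dict in place of the kubelet's checkpointed state store:

    # Toy RemoveStaleState: keep only assignments whose pod is still active.
    # The kubelet checkpoints this state to disk; a dict stands in here.
    def remove_stale_state(assignments: dict, active_pod_uids: set) -> dict:
        return {key: cpus for key, cpus in assignments.items()
                if key[0] in active_pod_uids}

    state = {
        ("b7d07278", "telemetry-openstack-openstack-cell1"): "0-3",
        ("c458d8ad", "registry-server"): "0-3",
        ("69859eaf", "neutron-sriov-openstack-openstack-cell1"): "0-3",
    }
    # Only the newly admitted neutron-sriov pod is active; the rest is stale.
    print(remove_stale_state(state, {"69859eaf"}))
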
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.312305 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.312736 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.312970 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.313434 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.316885 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-wfk68" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.323848 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-7vxxz"] Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.419801 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmbgl\" (UniqueName: \"kubernetes.io/projected/69859eaf-ab5c-4894-b875-962b9642c277-kube-api-access-kmbgl\") pod \"neutron-sriov-openstack-openstack-cell1-7vxxz\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.420180 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-7vxxz\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.420242 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-7vxxz\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.420278 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-7vxxz\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.420356 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-7vxxz\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.420527 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-7vxxz\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.522325 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-7vxxz\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.522424 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmbgl\" (UniqueName: \"kubernetes.io/projected/69859eaf-ab5c-4894-b875-962b9642c277-kube-api-access-kmbgl\") pod \"neutron-sriov-openstack-openstack-cell1-7vxxz\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.522498 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-7vxxz\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.522662 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-7vxxz\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.522681 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-7vxxz\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.523212 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-7vxxz\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.527083 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-7vxxz\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.527135 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-inventory\") pod 
\"neutron-sriov-openstack-openstack-cell1-7vxxz\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.527149 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-7vxxz\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.527317 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-7vxxz\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.527598 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-7vxxz\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.537402 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmbgl\" (UniqueName: \"kubernetes.io/projected/69859eaf-ab5c-4894-b875-962b9642c277-kube-api-access-kmbgl\") pod \"neutron-sriov-openstack-openstack-cell1-7vxxz\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:53:13 crc kubenswrapper[4895]: I1206 09:53:13.633517 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:53:14 crc kubenswrapper[4895]: W1206 09:53:14.246618 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69859eaf_ab5c_4894_b875_962b9642c277.slice/crio-c752e7fda5324f3d6b0252f316ecbc31e491510ae3e3721bc2669f5680571a26 WatchSource:0}: Error finding container c752e7fda5324f3d6b0252f316ecbc31e491510ae3e3721bc2669f5680571a26: Status 404 returned error can't find the container with id c752e7fda5324f3d6b0252f316ecbc31e491510ae3e3721bc2669f5680571a26 Dec 06 09:53:14 crc kubenswrapper[4895]: I1206 09:53:14.256203 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-7vxxz"] Dec 06 09:53:14 crc kubenswrapper[4895]: I1206 09:53:14.974997 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" event={"ID":"69859eaf-ab5c-4894-b875-962b9642c277","Type":"ContainerStarted","Data":"c752e7fda5324f3d6b0252f316ecbc31e491510ae3e3721bc2669f5680571a26"} Dec 06 09:53:15 crc kubenswrapper[4895]: I1206 09:53:15.990780 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" event={"ID":"69859eaf-ab5c-4894-b875-962b9642c277","Type":"ContainerStarted","Data":"a9a268a6616d595b32fa8f242567ee90c5a35337af53e867ab0fec8c0db69644"} Dec 06 09:53:16 crc kubenswrapper[4895]: I1206 09:53:16.025282 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" podStartSLOduration=2.219883215 podStartE2EDuration="3.025223815s" podCreationTimestamp="2025-12-06 09:53:13 +0000 UTC" firstStartedPulling="2025-12-06 09:53:14.249844967 +0000 UTC m=+10556.651233847" lastFinishedPulling="2025-12-06 09:53:15.055185537 +0000 UTC m=+10557.456574447" observedRunningTime="2025-12-06 09:53:16.016275324 +0000 UTC m=+10558.417664244" watchObservedRunningTime="2025-12-06 09:53:16.025223815 +0000 UTC m=+10558.426612725" Dec 06 09:53:32 crc kubenswrapper[4895]: I1206 09:53:32.641527 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xdspr"] Dec 06 09:53:32 crc kubenswrapper[4895]: I1206 09:53:32.645050 4895 util.go:30] "No sandbox for pod can be found. 
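
Two things are worth noting in the block above. The W-level manager.go "Failed to process watch event ... Status 404" is cadvisor racing container creation: it sees the new crio-... cgroup before CRI-O has registered the container, and since the event is retried the warning is generally harmless. The "Observed pod startup duration" entry then packs the whole startup into one line: podStartE2EDuration is creation-to-running, podStartSLOduration excludes the image pull, and firstStartedPulling/lastFinishedPulling bracket the pull itself (here 3.025s minus 0.805s of pulling gives the 2.220s SLO figure). A best-effort parser for those fields, assuming plain-seconds durations as in the line above:

    import re

    LINE = ('podStartSLOduration=2.219883215 podStartE2EDuration="3.025223815s" '
            'podCreationTimestamp="2025-12-06 09:53:13 +0000 UTC"')

    def parse_startup(line: str) -> dict:
        """Extract the duration fields of a pod_startup_latency_tracker line."""
        slo = float(re.search(r'podStartSLOduration=([\d.]+)', line).group(1))
        e2e = float(re.search(r'podStartE2EDuration="([\d.]+)s"', line).group(1))
        return {"slo_s": slo, "e2e_s": e2e, "pull_s": round(e2e - slo, 6)}

    print(parse_startup(LINE))
    # {'slo_s': 2.219883215, 'e2e_s': 3.025223815, 'pull_s': 0.805341}
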
Need to start a new one" pod="openshift-marketplace/redhat-operators-xdspr" Dec 06 09:53:32 crc kubenswrapper[4895]: I1206 09:53:32.665925 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xdspr"] Dec 06 09:53:32 crc kubenswrapper[4895]: I1206 09:53:32.734225 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e4f71ab-0188-45f3-a2a0-006b4b4fd625-catalog-content\") pod \"redhat-operators-xdspr\" (UID: \"8e4f71ab-0188-45f3-a2a0-006b4b4fd625\") " pod="openshift-marketplace/redhat-operators-xdspr" Dec 06 09:53:32 crc kubenswrapper[4895]: I1206 09:53:32.734296 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e4f71ab-0188-45f3-a2a0-006b4b4fd625-utilities\") pod \"redhat-operators-xdspr\" (UID: \"8e4f71ab-0188-45f3-a2a0-006b4b4fd625\") " pod="openshift-marketplace/redhat-operators-xdspr" Dec 06 09:53:32 crc kubenswrapper[4895]: I1206 09:53:32.734327 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7dhq\" (UniqueName: \"kubernetes.io/projected/8e4f71ab-0188-45f3-a2a0-006b4b4fd625-kube-api-access-b7dhq\") pod \"redhat-operators-xdspr\" (UID: \"8e4f71ab-0188-45f3-a2a0-006b4b4fd625\") " pod="openshift-marketplace/redhat-operators-xdspr" Dec 06 09:53:32 crc kubenswrapper[4895]: I1206 09:53:32.836266 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e4f71ab-0188-45f3-a2a0-006b4b4fd625-catalog-content\") pod \"redhat-operators-xdspr\" (UID: \"8e4f71ab-0188-45f3-a2a0-006b4b4fd625\") " pod="openshift-marketplace/redhat-operators-xdspr" Dec 06 09:53:32 crc kubenswrapper[4895]: I1206 09:53:32.836314 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e4f71ab-0188-45f3-a2a0-006b4b4fd625-utilities\") pod \"redhat-operators-xdspr\" (UID: \"8e4f71ab-0188-45f3-a2a0-006b4b4fd625\") " pod="openshift-marketplace/redhat-operators-xdspr" Dec 06 09:53:32 crc kubenswrapper[4895]: I1206 09:53:32.836333 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7dhq\" (UniqueName: \"kubernetes.io/projected/8e4f71ab-0188-45f3-a2a0-006b4b4fd625-kube-api-access-b7dhq\") pod \"redhat-operators-xdspr\" (UID: \"8e4f71ab-0188-45f3-a2a0-006b4b4fd625\") " pod="openshift-marketplace/redhat-operators-xdspr" Dec 06 09:53:32 crc kubenswrapper[4895]: I1206 09:53:32.836961 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e4f71ab-0188-45f3-a2a0-006b4b4fd625-catalog-content\") pod \"redhat-operators-xdspr\" (UID: \"8e4f71ab-0188-45f3-a2a0-006b4b4fd625\") " pod="openshift-marketplace/redhat-operators-xdspr" Dec 06 09:53:32 crc kubenswrapper[4895]: I1206 09:53:32.837132 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e4f71ab-0188-45f3-a2a0-006b4b4fd625-utilities\") pod \"redhat-operators-xdspr\" (UID: \"8e4f71ab-0188-45f3-a2a0-006b4b4fd625\") " pod="openshift-marketplace/redhat-operators-xdspr" Dec 06 09:53:32 crc kubenswrapper[4895]: I1206 09:53:32.861315 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-b7dhq\" (UniqueName: \"kubernetes.io/projected/8e4f71ab-0188-45f3-a2a0-006b4b4fd625-kube-api-access-b7dhq\") pod \"redhat-operators-xdspr\" (UID: \"8e4f71ab-0188-45f3-a2a0-006b4b4fd625\") " pod="openshift-marketplace/redhat-operators-xdspr" Dec 06 09:53:32 crc kubenswrapper[4895]: I1206 09:53:32.970741 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xdspr" Dec 06 09:53:33 crc kubenswrapper[4895]: I1206 09:53:33.475163 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xdspr"] Dec 06 09:53:34 crc kubenswrapper[4895]: I1206 09:53:34.252939 4895 generic.go:334] "Generic (PLEG): container finished" podID="8e4f71ab-0188-45f3-a2a0-006b4b4fd625" containerID="e2a70a6012d4ab6611c04deea43597cb69cc46bfba08741046a59078dfad7bd0" exitCode=0 Dec 06 09:53:34 crc kubenswrapper[4895]: I1206 09:53:34.253044 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdspr" event={"ID":"8e4f71ab-0188-45f3-a2a0-006b4b4fd625","Type":"ContainerDied","Data":"e2a70a6012d4ab6611c04deea43597cb69cc46bfba08741046a59078dfad7bd0"} Dec 06 09:53:34 crc kubenswrapper[4895]: I1206 09:53:34.253209 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdspr" event={"ID":"8e4f71ab-0188-45f3-a2a0-006b4b4fd625","Type":"ContainerStarted","Data":"f782289e9ba87b4c44846b26d7bde1d7d64f858ff7063da193cd11f2566974a5"} Dec 06 09:53:34 crc kubenswrapper[4895]: I1206 09:53:34.256856 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 09:53:35 crc kubenswrapper[4895]: I1206 09:53:35.266503 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdspr" event={"ID":"8e4f71ab-0188-45f3-a2a0-006b4b4fd625","Type":"ContainerStarted","Data":"26c68d249d39547209b208d10b309f74da900a913546cf517a7e48afa0f4e116"} Dec 06 09:53:38 crc kubenswrapper[4895]: I1206 09:53:38.325892 4895 generic.go:334] "Generic (PLEG): container finished" podID="8e4f71ab-0188-45f3-a2a0-006b4b4fd625" containerID="26c68d249d39547209b208d10b309f74da900a913546cf517a7e48afa0f4e116" exitCode=0 Dec 06 09:53:38 crc kubenswrapper[4895]: I1206 09:53:38.325943 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdspr" event={"ID":"8e4f71ab-0188-45f3-a2a0-006b4b4fd625","Type":"ContainerDied","Data":"26c68d249d39547209b208d10b309f74da900a913546cf517a7e48afa0f4e116"} Dec 06 09:53:39 crc kubenswrapper[4895]: I1206 09:53:39.337655 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdspr" event={"ID":"8e4f71ab-0188-45f3-a2a0-006b4b4fd625","Type":"ContainerStarted","Data":"e1a6546dc209065ad1b127d5256a181a193d922d4baa72c54a77ddb75c807d33"} Dec 06 09:53:39 crc kubenswrapper[4895]: I1206 09:53:39.373356 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xdspr" podStartSLOduration=2.860806278 podStartE2EDuration="7.37333816s" podCreationTimestamp="2025-12-06 09:53:32 +0000 UTC" firstStartedPulling="2025-12-06 09:53:34.256388291 +0000 UTC m=+10576.657777171" lastFinishedPulling="2025-12-06 09:53:38.768920143 +0000 UTC m=+10581.170309053" observedRunningTime="2025-12-06 09:53:39.356496866 +0000 UTC m=+10581.757885766" watchObservedRunningTime="2025-12-06 09:53:39.37333816 +0000 UTC m=+10581.774727030" Dec 06 09:53:42 crc 
kubenswrapper[4895]: I1206 09:53:42.971544 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xdspr" Dec 06 09:53:42 crc kubenswrapper[4895]: I1206 09:53:42.972069 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xdspr" Dec 06 09:53:44 crc kubenswrapper[4895]: I1206 09:53:44.047871 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xdspr" podUID="8e4f71ab-0188-45f3-a2a0-006b4b4fd625" containerName="registry-server" probeResult="failure" output=< Dec 06 09:53:44 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Dec 06 09:53:44 crc kubenswrapper[4895]: > Dec 06 09:53:53 crc kubenswrapper[4895]: I1206 09:53:53.060100 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xdspr" Dec 06 09:53:53 crc kubenswrapper[4895]: I1206 09:53:53.143377 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xdspr" Dec 06 09:53:53 crc kubenswrapper[4895]: I1206 09:53:53.321095 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xdspr"] Dec 06 09:53:54 crc kubenswrapper[4895]: I1206 09:53:54.540987 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xdspr" podUID="8e4f71ab-0188-45f3-a2a0-006b4b4fd625" containerName="registry-server" containerID="cri-o://e1a6546dc209065ad1b127d5256a181a193d922d4baa72c54a77ddb75c807d33" gracePeriod=2 Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.101377 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xdspr" Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.203169 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e4f71ab-0188-45f3-a2a0-006b4b4fd625-catalog-content\") pod \"8e4f71ab-0188-45f3-a2a0-006b4b4fd625\" (UID: \"8e4f71ab-0188-45f3-a2a0-006b4b4fd625\") " Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.203387 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e4f71ab-0188-45f3-a2a0-006b4b4fd625-utilities\") pod \"8e4f71ab-0188-45f3-a2a0-006b4b4fd625\" (UID: \"8e4f71ab-0188-45f3-a2a0-006b4b4fd625\") " Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.203465 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7dhq\" (UniqueName: \"kubernetes.io/projected/8e4f71ab-0188-45f3-a2a0-006b4b4fd625-kube-api-access-b7dhq\") pod \"8e4f71ab-0188-45f3-a2a0-006b4b4fd625\" (UID: \"8e4f71ab-0188-45f3-a2a0-006b4b4fd625\") " Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.204196 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e4f71ab-0188-45f3-a2a0-006b4b4fd625-utilities" (OuterVolumeSpecName: "utilities") pod "8e4f71ab-0188-45f3-a2a0-006b4b4fd625" (UID: "8e4f71ab-0188-45f3-a2a0-006b4b4fd625"). InnerVolumeSpecName "utilities". 
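
Above, the startup probe for the registry-server container fails with "timeout: failed to connect service \":50051\" within 1s" (the message format of grpc-health-probe), while the readiness probe reports an empty status: readiness is not evaluated until the startup probe succeeds, which it does at 09:53:53, and only then does the pod turn ready. A rough stand-in for such a probe, assuming a plain TCP reachability check with the same 1s budget (the real probe speaks the gRPC health-checking protocol on :50051):

    import socket

    def probe(host: str = "127.0.0.1", port: int = 50051,
              timeout_s: float = 1.0) -> bool:
        """TCP-level stand-in for the registry-server startup probe."""
        try:
            with socket.create_connection((host, port), timeout=timeout_s):
                return True
        except OSError:
            return False

    print("probe:", "ok" if probe() else "failure: cannot reach :50051 within 1s")
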
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.204642 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e4f71ab-0188-45f3-a2a0-006b4b4fd625-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.208898 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e4f71ab-0188-45f3-a2a0-006b4b4fd625-kube-api-access-b7dhq" (OuterVolumeSpecName: "kube-api-access-b7dhq") pod "8e4f71ab-0188-45f3-a2a0-006b4b4fd625" (UID: "8e4f71ab-0188-45f3-a2a0-006b4b4fd625"). InnerVolumeSpecName "kube-api-access-b7dhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.303580 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e4f71ab-0188-45f3-a2a0-006b4b4fd625-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e4f71ab-0188-45f3-a2a0-006b4b4fd625" (UID: "8e4f71ab-0188-45f3-a2a0-006b4b4fd625"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.306938 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7dhq\" (UniqueName: \"kubernetes.io/projected/8e4f71ab-0188-45f3-a2a0-006b4b4fd625-kube-api-access-b7dhq\") on node \"crc\" DevicePath \"\"" Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.307014 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e4f71ab-0188-45f3-a2a0-006b4b4fd625-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.556641 4895 generic.go:334] "Generic (PLEG): container finished" podID="8e4f71ab-0188-45f3-a2a0-006b4b4fd625" containerID="e1a6546dc209065ad1b127d5256a181a193d922d4baa72c54a77ddb75c807d33" exitCode=0 Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.556690 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdspr" event={"ID":"8e4f71ab-0188-45f3-a2a0-006b4b4fd625","Type":"ContainerDied","Data":"e1a6546dc209065ad1b127d5256a181a193d922d4baa72c54a77ddb75c807d33"} Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.556697 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xdspr" Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.556732 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdspr" event={"ID":"8e4f71ab-0188-45f3-a2a0-006b4b4fd625","Type":"ContainerDied","Data":"f782289e9ba87b4c44846b26d7bde1d7d64f858ff7063da193cd11f2566974a5"} Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.556779 4895 scope.go:117] "RemoveContainer" containerID="e1a6546dc209065ad1b127d5256a181a193d922d4baa72c54a77ddb75c807d33" Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.609036 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xdspr"] Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.610407 4895 scope.go:117] "RemoveContainer" containerID="26c68d249d39547209b208d10b309f74da900a913546cf517a7e48afa0f4e116" Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.622380 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xdspr"] Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.641463 4895 scope.go:117] "RemoveContainer" containerID="e2a70a6012d4ab6611c04deea43597cb69cc46bfba08741046a59078dfad7bd0" Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.692028 4895 scope.go:117] "RemoveContainer" containerID="e1a6546dc209065ad1b127d5256a181a193d922d4baa72c54a77ddb75c807d33" Dec 06 09:53:55 crc kubenswrapper[4895]: E1206 09:53:55.692411 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1a6546dc209065ad1b127d5256a181a193d922d4baa72c54a77ddb75c807d33\": container with ID starting with e1a6546dc209065ad1b127d5256a181a193d922d4baa72c54a77ddb75c807d33 not found: ID does not exist" containerID="e1a6546dc209065ad1b127d5256a181a193d922d4baa72c54a77ddb75c807d33" Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.692446 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1a6546dc209065ad1b127d5256a181a193d922d4baa72c54a77ddb75c807d33"} err="failed to get container status \"e1a6546dc209065ad1b127d5256a181a193d922d4baa72c54a77ddb75c807d33\": rpc error: code = NotFound desc = could not find container \"e1a6546dc209065ad1b127d5256a181a193d922d4baa72c54a77ddb75c807d33\": container with ID starting with e1a6546dc209065ad1b127d5256a181a193d922d4baa72c54a77ddb75c807d33 not found: ID does not exist" Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.692464 4895 scope.go:117] "RemoveContainer" containerID="26c68d249d39547209b208d10b309f74da900a913546cf517a7e48afa0f4e116" Dec 06 09:53:55 crc kubenswrapper[4895]: E1206 09:53:55.693072 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26c68d249d39547209b208d10b309f74da900a913546cf517a7e48afa0f4e116\": container with ID starting with 26c68d249d39547209b208d10b309f74da900a913546cf517a7e48afa0f4e116 not found: ID does not exist" containerID="26c68d249d39547209b208d10b309f74da900a913546cf517a7e48afa0f4e116" Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.693152 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26c68d249d39547209b208d10b309f74da900a913546cf517a7e48afa0f4e116"} err="failed to get container status \"26c68d249d39547209b208d10b309f74da900a913546cf517a7e48afa0f4e116\": rpc error: code = NotFound desc = could not find container 
\"26c68d249d39547209b208d10b309f74da900a913546cf517a7e48afa0f4e116\": container with ID starting with 26c68d249d39547209b208d10b309f74da900a913546cf517a7e48afa0f4e116 not found: ID does not exist" Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.693204 4895 scope.go:117] "RemoveContainer" containerID="e2a70a6012d4ab6611c04deea43597cb69cc46bfba08741046a59078dfad7bd0" Dec 06 09:53:55 crc kubenswrapper[4895]: E1206 09:53:55.693681 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2a70a6012d4ab6611c04deea43597cb69cc46bfba08741046a59078dfad7bd0\": container with ID starting with e2a70a6012d4ab6611c04deea43597cb69cc46bfba08741046a59078dfad7bd0 not found: ID does not exist" containerID="e2a70a6012d4ab6611c04deea43597cb69cc46bfba08741046a59078dfad7bd0" Dec 06 09:53:55 crc kubenswrapper[4895]: I1206 09:53:55.693726 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2a70a6012d4ab6611c04deea43597cb69cc46bfba08741046a59078dfad7bd0"} err="failed to get container status \"e2a70a6012d4ab6611c04deea43597cb69cc46bfba08741046a59078dfad7bd0\": rpc error: code = NotFound desc = could not find container \"e2a70a6012d4ab6611c04deea43597cb69cc46bfba08741046a59078dfad7bd0\": container with ID starting with e2a70a6012d4ab6611c04deea43597cb69cc46bfba08741046a59078dfad7bd0 not found: ID does not exist" Dec 06 09:53:56 crc kubenswrapper[4895]: I1206 09:53:56.086433 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e4f71ab-0188-45f3-a2a0-006b4b4fd625" path="/var/lib/kubelet/pods/8e4f71ab-0188-45f3-a2a0-006b4b4fd625/volumes" Dec 06 09:53:59 crc kubenswrapper[4895]: I1206 09:53:59.695620 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:53:59 crc kubenswrapper[4895]: I1206 09:53:59.696318 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:54:02 crc kubenswrapper[4895]: I1206 09:54:02.665666 4895 generic.go:334] "Generic (PLEG): container finished" podID="69859eaf-ab5c-4894-b875-962b9642c277" containerID="a9a268a6616d595b32fa8f242567ee90c5a35337af53e867ab0fec8c0db69644" exitCode=0 Dec 06 09:54:02 crc kubenswrapper[4895]: I1206 09:54:02.666037 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" event={"ID":"69859eaf-ab5c-4894-b875-962b9642c277","Type":"ContainerDied","Data":"a9a268a6616d595b32fa8f242567ee90c5a35337af53e867ab0fec8c0db69644"} Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.216941 4895 util.go:48] "No ready sandbox for pod can be found. 
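
The scope.go "RemoveContainer" entries followed by NotFound errors above show the kubelet removing containers that CRI-O has already garbage-collected: DeleteContainer comes back with "code = NotFound", which the kubelet logs and then ignores, so after a pod deletion these E-level lines are expected noise rather than a failure. A small filter that counts such benign NotFound deletions per container ID, with the regex approximated from the messages above:

    import re
    import sys
    from collections import Counter

    NOTFOUND = re.compile(
        r'DeleteContainer returned error.*"ID":"([0-9a-f]{64})".*code = NotFound')

    def benign_deletions(lines) -> Counter:
        """Count NotFound DeleteContainer errors per container ID."""
        hits = Counter()
        for line in lines:
            m = NOTFOUND.search(line)
            if m:
                hits[m.group(1)] += 1
        return hits

    if __name__ == "__main__":
        # e.g.  journalctl -u kubelet --no-pager | python3 notfound_count.py
        for cid, n in benign_deletions(sys.stdin).most_common():
            print(f"{cid[:13]}...: {n}")
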
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.301266 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-neutron-sriov-combined-ca-bundle\") pod \"69859eaf-ab5c-4894-b875-962b9642c277\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.301402 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-inventory\") pod \"69859eaf-ab5c-4894-b875-962b9642c277\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.301510 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-ssh-key\") pod \"69859eaf-ab5c-4894-b875-962b9642c277\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.301656 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmbgl\" (UniqueName: \"kubernetes.io/projected/69859eaf-ab5c-4894-b875-962b9642c277-kube-api-access-kmbgl\") pod \"69859eaf-ab5c-4894-b875-962b9642c277\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.301777 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-neutron-sriov-agent-neutron-config-0\") pod \"69859eaf-ab5c-4894-b875-962b9642c277\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.301861 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-ceph\") pod \"69859eaf-ab5c-4894-b875-962b9642c277\" (UID: \"69859eaf-ab5c-4894-b875-962b9642c277\") " Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.307387 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "69859eaf-ab5c-4894-b875-962b9642c277" (UID: "69859eaf-ab5c-4894-b875-962b9642c277"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.308420 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69859eaf-ab5c-4894-b875-962b9642c277-kube-api-access-kmbgl" (OuterVolumeSpecName: "kube-api-access-kmbgl") pod "69859eaf-ab5c-4894-b875-962b9642c277" (UID: "69859eaf-ab5c-4894-b875-962b9642c277"). InnerVolumeSpecName "kube-api-access-kmbgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.315698 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-ceph" (OuterVolumeSpecName: "ceph") pod "69859eaf-ab5c-4894-b875-962b9642c277" (UID: "69859eaf-ab5c-4894-b875-962b9642c277"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.329719 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-inventory" (OuterVolumeSpecName: "inventory") pod "69859eaf-ab5c-4894-b875-962b9642c277" (UID: "69859eaf-ab5c-4894-b875-962b9642c277"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.336553 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "69859eaf-ab5c-4894-b875-962b9642c277" (UID: "69859eaf-ab5c-4894-b875-962b9642c277"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.337003 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "69859eaf-ab5c-4894-b875-962b9642c277" (UID: "69859eaf-ab5c-4894-b875-962b9642c277"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.406654 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.406689 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.406702 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmbgl\" (UniqueName: \"kubernetes.io/projected/69859eaf-ab5c-4894-b875-962b9642c277-kube-api-access-kmbgl\") on node \"crc\" DevicePath \"\"" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.406717 4895 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.406731 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.406743 4895 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69859eaf-ab5c-4894-b875-962b9642c277-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.689877 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" event={"ID":"69859eaf-ab5c-4894-b875-962b9642c277","Type":"ContainerDied","Data":"c752e7fda5324f3d6b0252f316ecbc31e491510ae3e3721bc2669f5680571a26"} Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.690700 4895 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c752e7fda5324f3d6b0252f316ecbc31e491510ae3e3721bc2669f5680571a26" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.689917 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-7vxxz" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.840072 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4"] Dec 06 09:54:04 crc kubenswrapper[4895]: E1206 09:54:04.840512 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e4f71ab-0188-45f3-a2a0-006b4b4fd625" containerName="registry-server" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.840529 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e4f71ab-0188-45f3-a2a0-006b4b4fd625" containerName="registry-server" Dec 06 09:54:04 crc kubenswrapper[4895]: E1206 09:54:04.840546 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e4f71ab-0188-45f3-a2a0-006b4b4fd625" containerName="extract-utilities" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.840553 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e4f71ab-0188-45f3-a2a0-006b4b4fd625" containerName="extract-utilities" Dec 06 09:54:04 crc kubenswrapper[4895]: E1206 09:54:04.840563 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69859eaf-ab5c-4894-b875-962b9642c277" containerName="neutron-sriov-openstack-openstack-cell1" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.840570 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="69859eaf-ab5c-4894-b875-962b9642c277" containerName="neutron-sriov-openstack-openstack-cell1" Dec 06 09:54:04 crc kubenswrapper[4895]: E1206 09:54:04.840589 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e4f71ab-0188-45f3-a2a0-006b4b4fd625" containerName="extract-content" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.840595 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e4f71ab-0188-45f3-a2a0-006b4b4fd625" containerName="extract-content" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.840789 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e4f71ab-0188-45f3-a2a0-006b4b4fd625" containerName="registry-server" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.840808 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="69859eaf-ab5c-4894-b875-962b9642c277" containerName="neutron-sriov-openstack-openstack-cell1" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.841491 4895 util.go:30] "No sandbox for pod can be found. 
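
Stepping back, the telemetry, neutron-sriov and (below) neutron-dhcp pods all follow the same one-shot pattern: SyncLoop ADD, ContainerStarted, a single ContainerDied with exitCode=0, then DELETE and sandbox teardown, consistent with short-lived dataplane jobs executed in sequence. A sketch that rebuilds that per-pod timeline from the PLEG event lines (the event format is copied from the entries above; it is kubelet log output, not a stable interface):

    import re
    import sys
    from collections import defaultdict

    EVENT = re.compile(r'event for pod" pod="([^"]+)" '
                       r'event=\{"ID":"[^"]+","Type":"(\w+)","Data":"([0-9a-f]{64})"\}')

    def timelines(lines) -> dict:
        """Group PLEG ContainerStarted/ContainerDied events per pod."""
        out = defaultdict(list)
        for line in lines:
            m = EVENT.search(line)
            if m:
                pod, etype, cid = m.groups()
                out[pod].append((etype, cid[:12]))
        return dict(out)

    if __name__ == "__main__":
        # e.g.  journalctl -u kubelet --no-pager | python3 pleg_timeline.py
        for pod, events in timelines(sys.stdin).items():
            print(pod, events)
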
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.846974 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-wfk68" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.847209 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.847304 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.851839 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.852031 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.875919 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4"] Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.920011 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8sj6\" (UniqueName: \"kubernetes.io/projected/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-kube-api-access-l8sj6\") pod \"neutron-dhcp-openstack-openstack-cell1-9cvq4\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.920085 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-9cvq4\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.920129 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-9cvq4\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.920192 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-9cvq4\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.920226 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-9cvq4\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:54:04 crc kubenswrapper[4895]: I1206 09:54:04.920277 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-9cvq4\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:54:05 crc kubenswrapper[4895]: I1206 09:54:05.021992 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-9cvq4\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:54:05 crc kubenswrapper[4895]: I1206 09:54:05.022057 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-9cvq4\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:54:05 crc kubenswrapper[4895]: I1206 09:54:05.022116 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-9cvq4\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:54:05 crc kubenswrapper[4895]: I1206 09:54:05.022143 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-9cvq4\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:54:05 crc kubenswrapper[4895]: I1206 09:54:05.022187 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-9cvq4\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:54:05 crc kubenswrapper[4895]: I1206 09:54:05.022264 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8sj6\" (UniqueName: \"kubernetes.io/projected/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-kube-api-access-l8sj6\") pod \"neutron-dhcp-openstack-openstack-cell1-9cvq4\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:54:05 crc kubenswrapper[4895]: I1206 09:54:05.027741 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-9cvq4\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:54:05 crc kubenswrapper[4895]: I1206 09:54:05.027923 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-9cvq4\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " 
pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:54:05 crc kubenswrapper[4895]: I1206 09:54:05.029177 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-9cvq4\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:54:05 crc kubenswrapper[4895]: I1206 09:54:05.031526 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-9cvq4\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:54:05 crc kubenswrapper[4895]: I1206 09:54:05.036537 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-9cvq4\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:54:05 crc kubenswrapper[4895]: I1206 09:54:05.040561 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8sj6\" (UniqueName: \"kubernetes.io/projected/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-kube-api-access-l8sj6\") pod \"neutron-dhcp-openstack-openstack-cell1-9cvq4\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:54:05 crc kubenswrapper[4895]: I1206 09:54:05.172011 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:54:05 crc kubenswrapper[4895]: I1206 09:54:05.785154 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4"] Dec 06 09:54:05 crc kubenswrapper[4895]: W1206 09:54:05.792860 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cf85c53_e5bd_4a7e_ab23_468ca08d317b.slice/crio-64c5168da2ff1532d2bddc4b479f014c23bea58e18b05facb3a2820c66d548ea WatchSource:0}: Error finding container 64c5168da2ff1532d2bddc4b479f014c23bea58e18b05facb3a2820c66d548ea: Status 404 returned error can't find the container with id 64c5168da2ff1532d2bddc4b479f014c23bea58e18b05facb3a2820c66d548ea Dec 06 09:54:06 crc kubenswrapper[4895]: I1206 09:54:06.709391 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" event={"ID":"4cf85c53-e5bd-4a7e-ab23-468ca08d317b","Type":"ContainerStarted","Data":"0e5eb5d0c8dff8a3f4ba0ae3e33f332b75233aa79d298c9c8c93a7cd698d0ee8"} Dec 06 09:54:06 crc kubenswrapper[4895]: I1206 09:54:06.709964 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" event={"ID":"4cf85c53-e5bd-4a7e-ab23-468ca08d317b","Type":"ContainerStarted","Data":"64c5168da2ff1532d2bddc4b479f014c23bea58e18b05facb3a2820c66d548ea"} Dec 06 09:54:06 crc kubenswrapper[4895]: I1206 09:54:06.731903 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" podStartSLOduration=2.291621454 podStartE2EDuration="2.731885329s" podCreationTimestamp="2025-12-06 09:54:04 +0000 UTC" firstStartedPulling="2025-12-06 09:54:05.797525225 +0000 UTC m=+10608.198914095" lastFinishedPulling="2025-12-06 09:54:06.2377891 +0000 UTC m=+10608.639177970" observedRunningTime="2025-12-06 09:54:06.724437237 +0000 UTC m=+10609.125826107" watchObservedRunningTime="2025-12-06 09:54:06.731885329 +0000 UTC m=+10609.133274199" Dec 06 09:54:29 crc kubenswrapper[4895]: I1206 09:54:29.695249 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:54:29 crc kubenswrapper[4895]: I1206 09:54:29.695723 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:54:59 crc kubenswrapper[4895]: I1206 09:54:59.696515 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:54:59 crc kubenswrapper[4895]: I1206 09:54:59.697192 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:54:59 crc kubenswrapper[4895]: I1206 09:54:59.697267 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 09:54:59 crc kubenswrapper[4895]: I1206 09:54:59.698375 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef567f04ef08396d9009f8c8476b607e5d5ec039fc92dc919c830d8d79369fdc"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 09:54:59 crc kubenswrapper[4895]: I1206 09:54:59.698514 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://ef567f04ef08396d9009f8c8476b607e5d5ec039fc92dc919c830d8d79369fdc" gracePeriod=600 Dec 06 09:55:00 crc kubenswrapper[4895]: I1206 09:55:00.399508 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="ef567f04ef08396d9009f8c8476b607e5d5ec039fc92dc919c830d8d79369fdc" exitCode=0 Dec 06 09:55:00 crc kubenswrapper[4895]: I1206 09:55:00.399852 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"ef567f04ef08396d9009f8c8476b607e5d5ec039fc92dc919c830d8d79369fdc"} Dec 06 09:55:00 crc kubenswrapper[4895]: I1206 09:55:00.399883 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1"} Dec 06 09:55:00 crc kubenswrapper[4895]: I1206 09:55:00.399900 4895 scope.go:117] "RemoveContainer" containerID="844461a4f0542b40c6a22774e2f720ece52afe7c6993cd08bb22787532e38d9e" Dec 06 09:55:11 crc kubenswrapper[4895]: I1206 09:55:11.552501 4895 generic.go:334] "Generic (PLEG): container finished" podID="4cf85c53-e5bd-4a7e-ab23-468ca08d317b" containerID="0e5eb5d0c8dff8a3f4ba0ae3e33f332b75233aa79d298c9c8c93a7cd698d0ee8" exitCode=0 Dec 06 09:55:11 crc kubenswrapper[4895]: I1206 09:55:11.552593 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" event={"ID":"4cf85c53-e5bd-4a7e-ab23-468ca08d317b","Type":"ContainerDied","Data":"0e5eb5d0c8dff8a3f4ba0ae3e33f332b75233aa79d298c9c8c93a7cd698d0ee8"} Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.143239 4895 util.go:48] "No ready sandbox for pod can be found. 
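The machine-config-daemon records trace a full liveness-driven restart: two failed HTTP probes thirty seconds apart, the "will be restarted" decision, a graceful kill (gracePeriod=600, taken from the pod's termination grace), a clean exit (exitCode=0), and the replacement container starting. A Go sketch of a probe consistent with those records, assuming k8s.io/api v0.23+ (where Handler became ProbeHandler); host, port, and path come from the probe output, while the timing and threshold values are illustrative assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Endpoint copied from the log; PeriodSeconds mirrors the ~30s spacing
	// of the failure records, FailureThreshold is assumed.
	liveness := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1",
				Path: "/health",
				Port: intstr.FromInt(8798),
			},
		},
		PeriodSeconds:    30,
		FailureThreshold: 3,
	}
	fmt.Printf("liveness probe: %+v\n", liveness.HTTPGet)
}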
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.235719 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-neutron-dhcp-agent-neutron-config-0\") pod \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.235823 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-neutron-dhcp-combined-ca-bundle\") pod \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.235873 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8sj6\" (UniqueName: \"kubernetes.io/projected/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-kube-api-access-l8sj6\") pod \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.235897 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-ssh-key\") pod \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.235931 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-inventory\") pod \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.235969 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-ceph\") pod \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\" (UID: \"4cf85c53-e5bd-4a7e-ab23-468ca08d317b\") " Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.241765 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "4cf85c53-e5bd-4a7e-ab23-468ca08d317b" (UID: "4cf85c53-e5bd-4a7e-ab23-468ca08d317b"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.242827 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-kube-api-access-l8sj6" (OuterVolumeSpecName: "kube-api-access-l8sj6") pod "4cf85c53-e5bd-4a7e-ab23-468ca08d317b" (UID: "4cf85c53-e5bd-4a7e-ab23-468ca08d317b"). InnerVolumeSpecName "kube-api-access-l8sj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.254985 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-ceph" (OuterVolumeSpecName: "ceph") pod "4cf85c53-e5bd-4a7e-ab23-468ca08d317b" (UID: "4cf85c53-e5bd-4a7e-ab23-468ca08d317b"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.268992 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "4cf85c53-e5bd-4a7e-ab23-468ca08d317b" (UID: "4cf85c53-e5bd-4a7e-ab23-468ca08d317b"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.272567 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4cf85c53-e5bd-4a7e-ab23-468ca08d317b" (UID: "4cf85c53-e5bd-4a7e-ab23-468ca08d317b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.283185 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-inventory" (OuterVolumeSpecName: "inventory") pod "4cf85c53-e5bd-4a7e-ab23-468ca08d317b" (UID: "4cf85c53-e5bd-4a7e-ab23-468ca08d317b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.338554 4895 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.338604 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8sj6\" (UniqueName: \"kubernetes.io/projected/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-kube-api-access-l8sj6\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.338625 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.338643 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.338660 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.338678 4895 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4cf85c53-e5bd-4a7e-ab23-468ca08d317b-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.588813 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" event={"ID":"4cf85c53-e5bd-4a7e-ab23-468ca08d317b","Type":"ContainerDied","Data":"64c5168da2ff1532d2bddc4b479f014c23bea58e18b05facb3a2820c66d548ea"} Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.588876 4895 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="64c5168da2ff1532d2bddc4b479f014c23bea58e18b05facb3a2820c66d548ea" Dec 06 09:55:13 crc kubenswrapper[4895]: I1206 09:55:13.588935 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-9cvq4" Dec 06 09:55:19 crc kubenswrapper[4895]: I1206 09:55:19.461229 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 09:55:19 crc kubenswrapper[4895]: I1206 09:55:19.462685 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="ea971646-f4b7-4a2f-bea6-baa488438ed2" containerName="nova-cell0-conductor-conductor" containerID="cri-o://ba60628ed020cae25edfaaf7241bf064c0126384dd8dbb19ea52fe7814d9c195" gracePeriod=30 Dec 06 09:55:19 crc kubenswrapper[4895]: I1206 09:55:19.964205 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 09:55:19 crc kubenswrapper[4895]: I1206 09:55:19.964428 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="f7585ce4-8758-48b8-b730-86ae49031ba4" containerName="nova-cell1-conductor-conductor" containerID="cri-o://eab9d09da741a8941c8ea7f417a168fe4281a0288a30f1fd3602b09aac105d2c" gracePeriod=30 Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.144836 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj"] Dec 06 09:55:20 crc kubenswrapper[4895]: E1206 09:55:20.145309 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf85c53-e5bd-4a7e-ab23-468ca08d317b" containerName="neutron-dhcp-openstack-openstack-cell1" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.145328 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf85c53-e5bd-4a7e-ab23-468ca08d317b" containerName="neutron-dhcp-openstack-openstack-cell1" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.145554 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf85c53-e5bd-4a7e-ab23-468ca08d317b" containerName="neutron-dhcp-openstack-openstack-cell1" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.146306 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.150262 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.150548 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.150617 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.152768 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.153052 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-wfk68" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.155654 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.155812 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.162727 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj"] Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.225219 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.225290 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.225323 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.225343 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.225371 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.225435 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.225496 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.225537 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8xvx\" (UniqueName: \"kubernetes.io/projected/aa8468c6-6c07-40e9-aa1b-996f099dffa8-kube-api-access-s8xvx\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.225577 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.225600 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.225622 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.287919 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.288165 4895 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1a1405e0-ea5d-4617-a1f0-7cff7e9abee6" containerName="nova-api-log" containerID="cri-o://69c79637304ed4434bfded663f373e17e19d4805fe16eabfa0addf2068329e44" gracePeriod=30 Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.288254 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1a1405e0-ea5d-4617-a1f0-7cff7e9abee6" containerName="nova-api-api" containerID="cri-o://c6452e616295cad0bf524278d0603d42e3176fa357b2252bc899778e436c854f" gracePeriod=30 Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.298021 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.298250 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="bac7b8cb-0d41-42a2-9044-f0b986ed3503" containerName="nova-scheduler-scheduler" containerID="cri-o://d520443f2c08ee1165b6cadddd32557a9a131054b57ce3f085b5ee6b79e3e65e" gracePeriod=30 Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.328776 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.328844 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.328880 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.328904 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.328937 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.328993 4895 
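Each "SyncLoop DELETE" is followed by "Killing container with a grace period ... gracePeriod=30": the grace period on the API delete (or, when unset, the pod's terminationGracePeriodSeconds) is what the kubelet hands to the runtime. A minimal client-go sketch of the delete side; the namespace and pod name come from the log, and 30 matches the gracePeriod the kubelet reports:

package main

import (
	"context"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Explicit grace period; omitting it lets the pod spec's
	// terminationGracePeriodSeconds apply instead.
	grace := int64(30)
	if err := cs.CoreV1().Pods("openstack").Delete(context.Background(), "nova-api-0",
		metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		panic(err)
	}
}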
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.329031 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.329069 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8xvx\" (UniqueName: \"kubernetes.io/projected/aa8468c6-6c07-40e9-aa1b-996f099dffa8-kube-api-access-s8xvx\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.329109 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.329130 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.329150 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.330558 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.330630 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cells-global-config-1\") pod 
\"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.346376 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.347567 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.348928 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.349330 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.349556 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.352517 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.352823 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8xvx\" (UniqueName: \"kubernetes.io/projected/aa8468c6-6c07-40e9-aa1b-996f099dffa8-kube-api-access-s8xvx\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.353434 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.359792 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.407825 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.408082 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e7bb05f3-943f-467a-98d8-75b754bd0de6" containerName="nova-metadata-log" containerID="cri-o://bf99679ca790d924780c6b4fbf4c0fb9de2ebb3222a9d2dffa525283bb02f73a" gracePeriod=30 Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.408183 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e7bb05f3-943f-467a-98d8-75b754bd0de6" containerName="nova-metadata-metadata" containerID="cri-o://95d1c98e9f4ab799948c8bc317eb7fc0385257cea45600e4476a66fd8f653b40" gracePeriod=30 Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.522091 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.680258 4895 generic.go:334] "Generic (PLEG): container finished" podID="e7bb05f3-943f-467a-98d8-75b754bd0de6" containerID="bf99679ca790d924780c6b4fbf4c0fb9de2ebb3222a9d2dffa525283bb02f73a" exitCode=143 Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.680314 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7bb05f3-943f-467a-98d8-75b754bd0de6","Type":"ContainerDied","Data":"bf99679ca790d924780c6b4fbf4c0fb9de2ebb3222a9d2dffa525283bb02f73a"} Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.682689 4895 generic.go:334] "Generic (PLEG): container finished" podID="1a1405e0-ea5d-4617-a1f0-7cff7e9abee6" containerID="69c79637304ed4434bfded663f373e17e19d4805fe16eabfa0addf2068329e44" exitCode=143 Dec 06 09:55:20 crc kubenswrapper[4895]: I1206 09:55:20.682718 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6","Type":"ContainerDied","Data":"69c79637304ed4434bfded663f373e17e19d4805fe16eabfa0addf2068329e44"} Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.298914 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.406446 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj"] Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.486350 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjz56\" (UniqueName: \"kubernetes.io/projected/ea971646-f4b7-4a2f-bea6-baa488438ed2-kube-api-access-zjz56\") pod \"ea971646-f4b7-4a2f-bea6-baa488438ed2\" (UID: \"ea971646-f4b7-4a2f-bea6-baa488438ed2\") " Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.486547 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea971646-f4b7-4a2f-bea6-baa488438ed2-combined-ca-bundle\") pod \"ea971646-f4b7-4a2f-bea6-baa488438ed2\" (UID: \"ea971646-f4b7-4a2f-bea6-baa488438ed2\") " Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.486642 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea971646-f4b7-4a2f-bea6-baa488438ed2-config-data\") pod \"ea971646-f4b7-4a2f-bea6-baa488438ed2\" (UID: \"ea971646-f4b7-4a2f-bea6-baa488438ed2\") " Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.501762 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea971646-f4b7-4a2f-bea6-baa488438ed2-kube-api-access-zjz56" (OuterVolumeSpecName: "kube-api-access-zjz56") pod "ea971646-f4b7-4a2f-bea6-baa488438ed2" (UID: "ea971646-f4b7-4a2f-bea6-baa488438ed2"). InnerVolumeSpecName "kube-api-access-zjz56". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.524890 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea971646-f4b7-4a2f-bea6-baa488438ed2-config-data" (OuterVolumeSpecName: "config-data") pod "ea971646-f4b7-4a2f-bea6-baa488438ed2" (UID: "ea971646-f4b7-4a2f-bea6-baa488438ed2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.527031 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea971646-f4b7-4a2f-bea6-baa488438ed2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea971646-f4b7-4a2f-bea6-baa488438ed2" (UID: "ea971646-f4b7-4a2f-bea6-baa488438ed2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.589008 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjz56\" (UniqueName: \"kubernetes.io/projected/ea971646-f4b7-4a2f-bea6-baa488438ed2-kube-api-access-zjz56\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.589058 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea971646-f4b7-4a2f-bea6-baa488438ed2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.589071 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea971646-f4b7-4a2f-bea6-baa488438ed2-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.693751 4895 generic.go:334] "Generic (PLEG): container finished" podID="bac7b8cb-0d41-42a2-9044-f0b986ed3503" containerID="d520443f2c08ee1165b6cadddd32557a9a131054b57ce3f085b5ee6b79e3e65e" exitCode=0 Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.693857 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bac7b8cb-0d41-42a2-9044-f0b986ed3503","Type":"ContainerDied","Data":"d520443f2c08ee1165b6cadddd32557a9a131054b57ce3f085b5ee6b79e3e65e"} Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.695021 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" event={"ID":"aa8468c6-6c07-40e9-aa1b-996f099dffa8","Type":"ContainerStarted","Data":"f8ce1d5551215b597ce423516ec11907ce0468feb666fe32350a771edb547403"} Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.697514 4895 generic.go:334] "Generic (PLEG): container finished" podID="ea971646-f4b7-4a2f-bea6-baa488438ed2" containerID="ba60628ed020cae25edfaaf7241bf064c0126384dd8dbb19ea52fe7814d9c195" exitCode=0 Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.697548 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ea971646-f4b7-4a2f-bea6-baa488438ed2","Type":"ContainerDied","Data":"ba60628ed020cae25edfaaf7241bf064c0126384dd8dbb19ea52fe7814d9c195"} Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.697566 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ea971646-f4b7-4a2f-bea6-baa488438ed2","Type":"ContainerDied","Data":"a0a07be87328ae162be88674956171f78106c85ac594cee8d6432502fdaa31fc"} Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.697610 4895 scope.go:117] "RemoveContainer" containerID="ba60628ed020cae25edfaaf7241bf064c0126384dd8dbb19ea52fe7814d9c195" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.697751 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.724310 4895 scope.go:117] "RemoveContainer" containerID="ba60628ed020cae25edfaaf7241bf064c0126384dd8dbb19ea52fe7814d9c195" Dec 06 09:55:21 crc kubenswrapper[4895]: E1206 09:55:21.724788 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba60628ed020cae25edfaaf7241bf064c0126384dd8dbb19ea52fe7814d9c195\": container with ID starting with ba60628ed020cae25edfaaf7241bf064c0126384dd8dbb19ea52fe7814d9c195 not found: ID does not exist" containerID="ba60628ed020cae25edfaaf7241bf064c0126384dd8dbb19ea52fe7814d9c195" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.724933 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba60628ed020cae25edfaaf7241bf064c0126384dd8dbb19ea52fe7814d9c195"} err="failed to get container status \"ba60628ed020cae25edfaaf7241bf064c0126384dd8dbb19ea52fe7814d9c195\": rpc error: code = NotFound desc = could not find container \"ba60628ed020cae25edfaaf7241bf064c0126384dd8dbb19ea52fe7814d9c195\": container with ID starting with ba60628ed020cae25edfaaf7241bf064c0126384dd8dbb19ea52fe7814d9c195 not found: ID does not exist" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.743608 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.758383 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.769844 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 09:55:21 crc kubenswrapper[4895]: E1206 09:55:21.770321 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea971646-f4b7-4a2f-bea6-baa488438ed2" containerName="nova-cell0-conductor-conductor" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.770338 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea971646-f4b7-4a2f-bea6-baa488438ed2" containerName="nova-cell0-conductor-conductor" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.770576 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea971646-f4b7-4a2f-bea6-baa488438ed2" containerName="nova-cell0-conductor-conductor" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.771346 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.775669 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.786075 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.895382 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13ad9c9b-8cea-4d11-accc-ac05fd63b14f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"13ad9c9b-8cea-4d11-accc-ac05fd63b14f\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.896217 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13ad9c9b-8cea-4d11-accc-ac05fd63b14f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"13ad9c9b-8cea-4d11-accc-ac05fd63b14f\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.896543 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjm7c\" (UniqueName: \"kubernetes.io/projected/13ad9c9b-8cea-4d11-accc-ac05fd63b14f-kube-api-access-kjm7c\") pod \"nova-cell0-conductor-0\" (UID: \"13ad9c9b-8cea-4d11-accc-ac05fd63b14f\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.998025 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjm7c\" (UniqueName: \"kubernetes.io/projected/13ad9c9b-8cea-4d11-accc-ac05fd63b14f-kube-api-access-kjm7c\") pod \"nova-cell0-conductor-0\" (UID: \"13ad9c9b-8cea-4d11-accc-ac05fd63b14f\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.998331 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13ad9c9b-8cea-4d11-accc-ac05fd63b14f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"13ad9c9b-8cea-4d11-accc-ac05fd63b14f\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:55:21 crc kubenswrapper[4895]: I1206 09:55:21.998386 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13ad9c9b-8cea-4d11-accc-ac05fd63b14f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"13ad9c9b-8cea-4d11-accc-ac05fd63b14f\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.001529 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13ad9c9b-8cea-4d11-accc-ac05fd63b14f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"13ad9c9b-8cea-4d11-accc-ac05fd63b14f\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.001914 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13ad9c9b-8cea-4d11-accc-ac05fd63b14f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"13ad9c9b-8cea-4d11-accc-ac05fd63b14f\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.019400 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjm7c\" (UniqueName: \"kubernetes.io/projected/13ad9c9b-8cea-4d11-accc-ac05fd63b14f-kube-api-access-kjm7c\") pod \"nova-cell0-conductor-0\" (UID: \"13ad9c9b-8cea-4d11-accc-ac05fd63b14f\") " pod="openstack/nova-cell0-conductor-0" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.068002 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea971646-f4b7-4a2f-bea6-baa488438ed2" path="/var/lib/kubelet/pods/ea971646-f4b7-4a2f-bea6-baa488438ed2/volumes" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.093183 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.455082 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.539220 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl574\" (UniqueName: \"kubernetes.io/projected/f7585ce4-8758-48b8-b730-86ae49031ba4-kube-api-access-rl574\") pod \"f7585ce4-8758-48b8-b730-86ae49031ba4\" (UID: \"f7585ce4-8758-48b8-b730-86ae49031ba4\") " Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.539308 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7585ce4-8758-48b8-b730-86ae49031ba4-config-data\") pod \"f7585ce4-8758-48b8-b730-86ae49031ba4\" (UID: \"f7585ce4-8758-48b8-b730-86ae49031ba4\") " Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.539381 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7585ce4-8758-48b8-b730-86ae49031ba4-combined-ca-bundle\") pod \"f7585ce4-8758-48b8-b730-86ae49031ba4\" (UID: \"f7585ce4-8758-48b8-b730-86ae49031ba4\") " Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.545365 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7585ce4-8758-48b8-b730-86ae49031ba4-kube-api-access-rl574" (OuterVolumeSpecName: "kube-api-access-rl574") pod "f7585ce4-8758-48b8-b730-86ae49031ba4" (UID: "f7585ce4-8758-48b8-b730-86ae49031ba4"). InnerVolumeSpecName "kube-api-access-rl574". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:55:22 crc kubenswrapper[4895]: E1206 09:55:22.553879 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d520443f2c08ee1165b6cadddd32557a9a131054b57ce3f085b5ee6b79e3e65e is running failed: container process not found" containerID="d520443f2c08ee1165b6cadddd32557a9a131054b57ce3f085b5ee6b79e3e65e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 09:55:22 crc kubenswrapper[4895]: E1206 09:55:22.554346 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d520443f2c08ee1165b6cadddd32557a9a131054b57ce3f085b5ee6b79e3e65e is running failed: container process not found" containerID="d520443f2c08ee1165b6cadddd32557a9a131054b57ce3f085b5ee6b79e3e65e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 09:55:22 crc kubenswrapper[4895]: E1206 09:55:22.554754 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d520443f2c08ee1165b6cadddd32557a9a131054b57ce3f085b5ee6b79e3e65e is running failed: container process not found" containerID="d520443f2c08ee1165b6cadddd32557a9a131054b57ce3f085b5ee6b79e3e65e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 09:55:22 crc kubenswrapper[4895]: E1206 09:55:22.554795 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d520443f2c08ee1165b6cadddd32557a9a131054b57ce3f085b5ee6b79e3e65e is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="bac7b8cb-0d41-42a2-9044-f0b986ed3503" containerName="nova-scheduler-scheduler" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.619309 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7585ce4-8758-48b8-b730-86ae49031ba4-config-data" (OuterVolumeSpecName: "config-data") pod "f7585ce4-8758-48b8-b730-86ae49031ba4" (UID: "f7585ce4-8758-48b8-b730-86ae49031ba4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.622301 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7585ce4-8758-48b8-b730-86ae49031ba4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7585ce4-8758-48b8-b730-86ae49031ba4" (UID: "f7585ce4-8758-48b8-b730-86ae49031ba4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.641978 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl574\" (UniqueName: \"kubernetes.io/projected/f7585ce4-8758-48b8-b730-86ae49031ba4-kube-api-access-rl574\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.642024 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7585ce4-8758-48b8-b730-86ae49031ba4-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.642035 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7585ce4-8758-48b8-b730-86ae49031ba4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.700410 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.707977 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bac7b8cb-0d41-42a2-9044-f0b986ed3503","Type":"ContainerDied","Data":"5163281b862b691e377d6f4882f859acd28f3df9d4ce7ea812682c64c5c6132e"} Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.708047 4895 scope.go:117] "RemoveContainer" containerID="d520443f2c08ee1165b6cadddd32557a9a131054b57ce3f085b5ee6b79e3e65e" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.707992 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.709975 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" event={"ID":"aa8468c6-6c07-40e9-aa1b-996f099dffa8","Type":"ContainerStarted","Data":"00ae0bc7e735afc29ab37331f207ca7d6d014c7c55b7be6f8a5057c02035f05e"} Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.730283 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.738205 4895 generic.go:334] "Generic (PLEG): container finished" podID="f7585ce4-8758-48b8-b730-86ae49031ba4" containerID="eab9d09da741a8941c8ea7f417a168fe4281a0288a30f1fd3602b09aac105d2c" exitCode=0 Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.738353 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f7585ce4-8758-48b8-b730-86ae49031ba4","Type":"ContainerDied","Data":"eab9d09da741a8941c8ea7f417a168fe4281a0288a30f1fd3602b09aac105d2c"} Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.738382 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f7585ce4-8758-48b8-b730-86ae49031ba4","Type":"ContainerDied","Data":"69c8f65235206afa0246cf80a2ea4b5f7af92038f330962cbe2debb34a742df9"} Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.738400 4895 scope.go:117] "RemoveContainer" containerID="eab9d09da741a8941c8ea7f417a168fe4281a0288a30f1fd3602b09aac105d2c" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.742983 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7w78\" (UniqueName: \"kubernetes.io/projected/bac7b8cb-0d41-42a2-9044-f0b986ed3503-kube-api-access-h7w78\") pod \"bac7b8cb-0d41-42a2-9044-f0b986ed3503\" (UID: \"bac7b8cb-0d41-42a2-9044-f0b986ed3503\") " Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.747720 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac7b8cb-0d41-42a2-9044-f0b986ed3503-kube-api-access-h7w78" (OuterVolumeSpecName: "kube-api-access-h7w78") pod "bac7b8cb-0d41-42a2-9044-f0b986ed3503" (UID: "bac7b8cb-0d41-42a2-9044-f0b986ed3503"). InnerVolumeSpecName "kube-api-access-h7w78". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.752793 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac7b8cb-0d41-42a2-9044-f0b986ed3503-combined-ca-bundle\") pod \"bac7b8cb-0d41-42a2-9044-f0b986ed3503\" (UID: \"bac7b8cb-0d41-42a2-9044-f0b986ed3503\") " Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.752873 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bac7b8cb-0d41-42a2-9044-f0b986ed3503-config-data\") pod \"bac7b8cb-0d41-42a2-9044-f0b986ed3503\" (UID: \"bac7b8cb-0d41-42a2-9044-f0b986ed3503\") " Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.753781 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7w78\" (UniqueName: \"kubernetes.io/projected/bac7b8cb-0d41-42a2-9044-f0b986ed3503-kube-api-access-h7w78\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.756855 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" podStartSLOduration=2.192360065 podStartE2EDuration="2.756819655s" podCreationTimestamp="2025-12-06 09:55:20 +0000 UTC" firstStartedPulling="2025-12-06 09:55:21.375671689 +0000 UTC m=+10683.777060559" lastFinishedPulling="2025-12-06 09:55:21.940131279 +0000 UTC m=+10684.341520149" observedRunningTime="2025-12-06 09:55:22.743595536 +0000 UTC m=+10685.144984406" watchObservedRunningTime="2025-12-06 09:55:22.756819655 +0000 UTC m=+10685.158208525" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.779846 4895 scope.go:117] "RemoveContainer" containerID="eab9d09da741a8941c8ea7f417a168fe4281a0288a30f1fd3602b09aac105d2c" Dec 06 09:55:22 crc kubenswrapper[4895]: E1206 09:55:22.780261 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eab9d09da741a8941c8ea7f417a168fe4281a0288a30f1fd3602b09aac105d2c\": container with ID starting with eab9d09da741a8941c8ea7f417a168fe4281a0288a30f1fd3602b09aac105d2c not found: ID does not exist" containerID="eab9d09da741a8941c8ea7f417a168fe4281a0288a30f1fd3602b09aac105d2c" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.780301 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eab9d09da741a8941c8ea7f417a168fe4281a0288a30f1fd3602b09aac105d2c"} err="failed to get container status \"eab9d09da741a8941c8ea7f417a168fe4281a0288a30f1fd3602b09aac105d2c\": rpc error: code = NotFound desc = could not find container \"eab9d09da741a8941c8ea7f417a168fe4281a0288a30f1fd3602b09aac105d2c\": container with ID starting with eab9d09da741a8941c8ea7f417a168fe4281a0288a30f1fd3602b09aac105d2c not found: ID does not exist" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.780346 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.812442 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bac7b8cb-0d41-42a2-9044-f0b986ed3503-config-data" (OuterVolumeSpecName: "config-data") pod "bac7b8cb-0d41-42a2-9044-f0b986ed3503" (UID: "bac7b8cb-0d41-42a2-9044-f0b986ed3503"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.815744 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.815891 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bac7b8cb-0d41-42a2-9044-f0b986ed3503-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bac7b8cb-0d41-42a2-9044-f0b986ed3503" (UID: "bac7b8cb-0d41-42a2-9044-f0b986ed3503"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:55:22 crc kubenswrapper[4895]: W1206 09:55:22.816547 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13ad9c9b_8cea_4d11_accc_ac05fd63b14f.slice/crio-5820be8ab7aba638cfd2094592457f72027a8517f6d85aaf0f93af1f6e2c2d1c WatchSource:0}: Error finding container 5820be8ab7aba638cfd2094592457f72027a8517f6d85aaf0f93af1f6e2c2d1c: Status 404 returned error can't find the container with id 5820be8ab7aba638cfd2094592457f72027a8517f6d85aaf0f93af1f6e2c2d1c Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.832722 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 09:55:22 crc kubenswrapper[4895]: E1206 09:55:22.833484 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7585ce4-8758-48b8-b730-86ae49031ba4" containerName="nova-cell1-conductor-conductor" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.833503 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7585ce4-8758-48b8-b730-86ae49031ba4" containerName="nova-cell1-conductor-conductor" Dec 06 09:55:22 crc kubenswrapper[4895]: E1206 09:55:22.833535 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac7b8cb-0d41-42a2-9044-f0b986ed3503" containerName="nova-scheduler-scheduler" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.833542 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac7b8cb-0d41-42a2-9044-f0b986ed3503" containerName="nova-scheduler-scheduler" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.833733 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac7b8cb-0d41-42a2-9044-f0b986ed3503" containerName="nova-scheduler-scheduler" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.833763 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7585ce4-8758-48b8-b730-86ae49031ba4" containerName="nova-cell1-conductor-conductor" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.834514 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.836760 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.853414 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.855577 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9nb4\" (UniqueName: \"kubernetes.io/projected/2aa4cb5a-ff24-48db-8689-e1e6703b690e-kube-api-access-h9nb4\") pod \"nova-cell1-conductor-0\" (UID: \"2aa4cb5a-ff24-48db-8689-e1e6703b690e\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.855623 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa4cb5a-ff24-48db-8689-e1e6703b690e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2aa4cb5a-ff24-48db-8689-e1e6703b690e\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.855730 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa4cb5a-ff24-48db-8689-e1e6703b690e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2aa4cb5a-ff24-48db-8689-e1e6703b690e\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.855784 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac7b8cb-0d41-42a2-9044-f0b986ed3503-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.855795 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bac7b8cb-0d41-42a2-9044-f0b986ed3503-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.868430 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.957708 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9nb4\" (UniqueName: \"kubernetes.io/projected/2aa4cb5a-ff24-48db-8689-e1e6703b690e-kube-api-access-h9nb4\") pod \"nova-cell1-conductor-0\" (UID: \"2aa4cb5a-ff24-48db-8689-e1e6703b690e\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.957754 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa4cb5a-ff24-48db-8689-e1e6703b690e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2aa4cb5a-ff24-48db-8689-e1e6703b690e\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.957859 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa4cb5a-ff24-48db-8689-e1e6703b690e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2aa4cb5a-ff24-48db-8689-e1e6703b690e\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.961423 4895 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa4cb5a-ff24-48db-8689-e1e6703b690e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2aa4cb5a-ff24-48db-8689-e1e6703b690e\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.966226 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa4cb5a-ff24-48db-8689-e1e6703b690e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2aa4cb5a-ff24-48db-8689-e1e6703b690e\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:55:22 crc kubenswrapper[4895]: I1206 09:55:22.975081 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9nb4\" (UniqueName: \"kubernetes.io/projected/2aa4cb5a-ff24-48db-8689-e1e6703b690e-kube-api-access-h9nb4\") pod \"nova-cell1-conductor-0\" (UID: \"2aa4cb5a-ff24-48db-8689-e1e6703b690e\") " pod="openstack/nova-cell1-conductor-0" Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.042209 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.055343 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.064679 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.066632 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.068486 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.074512 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.152685 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.162130 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55923ef-c925-48ba-b959-a7a1d5212e60-config-data\") pod \"nova-scheduler-0\" (UID: \"e55923ef-c925-48ba-b959-a7a1d5212e60\") " pod="openstack/nova-scheduler-0" Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.162186 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7n2j\" (UniqueName: \"kubernetes.io/projected/e55923ef-c925-48ba-b959-a7a1d5212e60-kube-api-access-q7n2j\") pod \"nova-scheduler-0\" (UID: \"e55923ef-c925-48ba-b959-a7a1d5212e60\") " pod="openstack/nova-scheduler-0" Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.162343 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55923ef-c925-48ba-b959-a7a1d5212e60-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e55923ef-c925-48ba-b959-a7a1d5212e60\") " pod="openstack/nova-scheduler-0" Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.264980 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55923ef-c925-48ba-b959-a7a1d5212e60-config-data\") pod \"nova-scheduler-0\" (UID: \"e55923ef-c925-48ba-b959-a7a1d5212e60\") " pod="openstack/nova-scheduler-0" Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.265315 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7n2j\" (UniqueName: \"kubernetes.io/projected/e55923ef-c925-48ba-b959-a7a1d5212e60-kube-api-access-q7n2j\") pod \"nova-scheduler-0\" (UID: \"e55923ef-c925-48ba-b959-a7a1d5212e60\") " pod="openstack/nova-scheduler-0" Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.265354 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55923ef-c925-48ba-b959-a7a1d5212e60-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e55923ef-c925-48ba-b959-a7a1d5212e60\") " pod="openstack/nova-scheduler-0" Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.270422 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55923ef-c925-48ba-b959-a7a1d5212e60-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e55923ef-c925-48ba-b959-a7a1d5212e60\") " pod="openstack/nova-scheduler-0" Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.272172 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55923ef-c925-48ba-b959-a7a1d5212e60-config-data\") pod \"nova-scheduler-0\" (UID: \"e55923ef-c925-48ba-b959-a7a1d5212e60\") " pod="openstack/nova-scheduler-0" Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.291337 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7n2j\" (UniqueName: \"kubernetes.io/projected/e55923ef-c925-48ba-b959-a7a1d5212e60-kube-api-access-q7n2j\") pod \"nova-scheduler-0\" (UID: \"e55923ef-c925-48ba-b959-a7a1d5212e60\") " pod="openstack/nova-scheduler-0" Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.388302 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.531117 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="1a1405e0-ea5d-4617-a1f0-7cff7e9abee6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.98:8774/\": read tcp 10.217.0.2:36440->10.217.1.98:8774: read: connection reset by peer" Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.531181 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="1a1405e0-ea5d-4617-a1f0-7cff7e9abee6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.98:8774/\": read tcp 10.217.0.2:36426->10.217.1.98:8774: read: connection reset by peer" Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.592520 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.595186 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e7bb05f3-943f-467a-98d8-75b754bd0de6" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.99:8775/\": read tcp 10.217.0.2:37650->10.217.1.99:8775: read: connection reset by peer" Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.595558 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e7bb05f3-943f-467a-98d8-75b754bd0de6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.99:8775/\": read tcp 10.217.0.2:37660->10.217.1.99:8775: read: connection reset by peer" Dec 06 09:55:23 crc kubenswrapper[4895]: W1206 09:55:23.626251 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2aa4cb5a_ff24_48db_8689_e1e6703b690e.slice/crio-fe788773f55f8bdde42796421813c3f50264ed9e564f58e44d07290e6dd3d12a WatchSource:0}: Error finding container fe788773f55f8bdde42796421813c3f50264ed9e564f58e44d07290e6dd3d12a: Status 404 returned error can't find the container with id fe788773f55f8bdde42796421813c3f50264ed9e564f58e44d07290e6dd3d12a Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.761785 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"13ad9c9b-8cea-4d11-accc-ac05fd63b14f","Type":"ContainerStarted","Data":"a3c778fc05c6944a5941d13854f0b9241500013170b8131d782c61d524301c97"} Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.761829 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"13ad9c9b-8cea-4d11-accc-ac05fd63b14f","Type":"ContainerStarted","Data":"5820be8ab7aba638cfd2094592457f72027a8517f6d85aaf0f93af1f6e2c2d1c"} Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.761870 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.764390 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2aa4cb5a-ff24-48db-8689-e1e6703b690e","Type":"ContainerStarted","Data":"fe788773f55f8bdde42796421813c3f50264ed9e564f58e44d07290e6dd3d12a"} Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.766420 4895 generic.go:334] "Generic (PLEG): container finished" podID="e7bb05f3-943f-467a-98d8-75b754bd0de6" 
containerID="95d1c98e9f4ab799948c8bc317eb7fc0385257cea45600e4476a66fd8f653b40" exitCode=0 Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.766513 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7bb05f3-943f-467a-98d8-75b754bd0de6","Type":"ContainerDied","Data":"95d1c98e9f4ab799948c8bc317eb7fc0385257cea45600e4476a66fd8f653b40"} Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.768272 4895 generic.go:334] "Generic (PLEG): container finished" podID="1a1405e0-ea5d-4617-a1f0-7cff7e9abee6" containerID="c6452e616295cad0bf524278d0603d42e3176fa357b2252bc899778e436c854f" exitCode=0 Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.768337 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6","Type":"ContainerDied","Data":"c6452e616295cad0bf524278d0603d42e3176fa357b2252bc899778e436c854f"} Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.790234 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.790215151 podStartE2EDuration="2.790215151s" podCreationTimestamp="2025-12-06 09:55:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:55:23.779964742 +0000 UTC m=+10686.181353612" watchObservedRunningTime="2025-12-06 09:55:23.790215151 +0000 UTC m=+10686.191604021" Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.892344 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 09:55:23 crc kubenswrapper[4895]: I1206 09:55:23.976495 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.084267 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bac7b8cb-0d41-42a2-9044-f0b986ed3503" path="/var/lib/kubelet/pods/bac7b8cb-0d41-42a2-9044-f0b986ed3503/volumes" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.084950 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7585ce4-8758-48b8-b730-86ae49031ba4" path="/var/lib/kubelet/pods/f7585ce4-8758-48b8-b730-86ae49031ba4/volumes" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.090333 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czcfg\" (UniqueName: \"kubernetes.io/projected/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-kube-api-access-czcfg\") pod \"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6\" (UID: \"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6\") " Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.090382 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-logs\") pod \"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6\" (UID: \"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6\") " Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.090491 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-config-data\") pod \"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6\" (UID: \"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6\") " Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.090594 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-combined-ca-bundle\") pod \"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6\" (UID: \"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6\") " Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.091766 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-logs" (OuterVolumeSpecName: "logs") pod "1a1405e0-ea5d-4617-a1f0-7cff7e9abee6" (UID: "1a1405e0-ea5d-4617-a1f0-7cff7e9abee6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.099836 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-kube-api-access-czcfg" (OuterVolumeSpecName: "kube-api-access-czcfg") pod "1a1405e0-ea5d-4617-a1f0-7cff7e9abee6" (UID: "1a1405e0-ea5d-4617-a1f0-7cff7e9abee6"). InnerVolumeSpecName "kube-api-access-czcfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.158219 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-config-data" (OuterVolumeSpecName: "config-data") pod "1a1405e0-ea5d-4617-a1f0-7cff7e9abee6" (UID: "1a1405e0-ea5d-4617-a1f0-7cff7e9abee6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.178661 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.206222 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a1405e0-ea5d-4617-a1f0-7cff7e9abee6" (UID: "1a1405e0-ea5d-4617-a1f0-7cff7e9abee6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.206948 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mktnm\" (UniqueName: \"kubernetes.io/projected/e7bb05f3-943f-467a-98d8-75b754bd0de6-kube-api-access-mktnm\") pod \"e7bb05f3-943f-467a-98d8-75b754bd0de6\" (UID: \"e7bb05f3-943f-467a-98d8-75b754bd0de6\") " Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.207150 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7bb05f3-943f-467a-98d8-75b754bd0de6-logs\") pod \"e7bb05f3-943f-467a-98d8-75b754bd0de6\" (UID: \"e7bb05f3-943f-467a-98d8-75b754bd0de6\") " Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.207232 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7bb05f3-943f-467a-98d8-75b754bd0de6-config-data\") pod \"e7bb05f3-943f-467a-98d8-75b754bd0de6\" (UID: \"e7bb05f3-943f-467a-98d8-75b754bd0de6\") " Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.207277 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7bb05f3-943f-467a-98d8-75b754bd0de6-combined-ca-bundle\") pod \"e7bb05f3-943f-467a-98d8-75b754bd0de6\" (UID: \"e7bb05f3-943f-467a-98d8-75b754bd0de6\") " Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.208284 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.208306 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.208319 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czcfg\" (UniqueName: \"kubernetes.io/projected/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-kube-api-access-czcfg\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.208330 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6-logs\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.208880 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7bb05f3-943f-467a-98d8-75b754bd0de6-logs" (OuterVolumeSpecName: "logs") pod "e7bb05f3-943f-467a-98d8-75b754bd0de6" (UID: "e7bb05f3-943f-467a-98d8-75b754bd0de6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.236773 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7bb05f3-943f-467a-98d8-75b754bd0de6-kube-api-access-mktnm" (OuterVolumeSpecName: "kube-api-access-mktnm") pod "e7bb05f3-943f-467a-98d8-75b754bd0de6" (UID: "e7bb05f3-943f-467a-98d8-75b754bd0de6"). InnerVolumeSpecName "kube-api-access-mktnm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.238754 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7bb05f3-943f-467a-98d8-75b754bd0de6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7bb05f3-943f-467a-98d8-75b754bd0de6" (UID: "e7bb05f3-943f-467a-98d8-75b754bd0de6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.257195 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7bb05f3-943f-467a-98d8-75b754bd0de6-config-data" (OuterVolumeSpecName: "config-data") pod "e7bb05f3-943f-467a-98d8-75b754bd0de6" (UID: "e7bb05f3-943f-467a-98d8-75b754bd0de6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.317177 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mktnm\" (UniqueName: \"kubernetes.io/projected/e7bb05f3-943f-467a-98d8-75b754bd0de6-kube-api-access-mktnm\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.317207 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7bb05f3-943f-467a-98d8-75b754bd0de6-logs\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.317218 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7bb05f3-943f-467a-98d8-75b754bd0de6-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.317228 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7bb05f3-943f-467a-98d8-75b754bd0de6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.783739 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2aa4cb5a-ff24-48db-8689-e1e6703b690e","Type":"ContainerStarted","Data":"85a5d95317b76955f70e148ef171e3754d3d361d77bcb5c2d8dcfc3c87140590"} Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.783829 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.795164 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7bb05f3-943f-467a-98d8-75b754bd0de6","Type":"ContainerDied","Data":"e3e6074499fec1857a273824b49c2fb486589a0f2e26cd4150d7ce45e1f0def0"} Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.795644 4895 scope.go:117] "RemoveContainer" containerID="95d1c98e9f4ab799948c8bc317eb7fc0385257cea45600e4476a66fd8f653b40" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.795213 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.816797 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e55923ef-c925-48ba-b959-a7a1d5212e60","Type":"ContainerStarted","Data":"373626550a7d81325e0b3fe3b7bd5db9641ca17fc0b5e2afb3eb224c1d5098c1"} Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.816841 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e55923ef-c925-48ba-b959-a7a1d5212e60","Type":"ContainerStarted","Data":"9408349f6d11808080288031d3bb9703b3f2fc31121649f93fafbe3afd45cd80"} Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.828432 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.828415477 podStartE2EDuration="2.828415477s" podCreationTimestamp="2025-12-06 09:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:55:24.808880876 +0000 UTC m=+10687.210269766" watchObservedRunningTime="2025-12-06 09:55:24.828415477 +0000 UTC m=+10687.229804347" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.831194 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.831554 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1a1405e0-ea5d-4617-a1f0-7cff7e9abee6","Type":"ContainerDied","Data":"bfc2bbafe8e1b49e544414e13de59dc2b0f17bf34d9bc528161751b19cab5f6e"} Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.841088 4895 scope.go:117] "RemoveContainer" containerID="bf99679ca790d924780c6b4fbf4c0fb9de2ebb3222a9d2dffa525283bb02f73a" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.852535 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.880505 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.893930 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:55:24 crc kubenswrapper[4895]: E1206 09:55:24.894332 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a1405e0-ea5d-4617-a1f0-7cff7e9abee6" containerName="nova-api-api" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.894342 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a1405e0-ea5d-4617-a1f0-7cff7e9abee6" containerName="nova-api-api" Dec 06 09:55:24 crc kubenswrapper[4895]: E1206 09:55:24.894371 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7bb05f3-943f-467a-98d8-75b754bd0de6" containerName="nova-metadata-log" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.894377 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7bb05f3-943f-467a-98d8-75b754bd0de6" containerName="nova-metadata-log" Dec 06 09:55:24 crc kubenswrapper[4895]: E1206 09:55:24.894391 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7bb05f3-943f-467a-98d8-75b754bd0de6" containerName="nova-metadata-metadata" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.894397 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7bb05f3-943f-467a-98d8-75b754bd0de6" containerName="nova-metadata-metadata" Dec 06 09:55:24 crc 
kubenswrapper[4895]: E1206 09:55:24.894422 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a1405e0-ea5d-4617-a1f0-7cff7e9abee6" containerName="nova-api-log" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.894428 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a1405e0-ea5d-4617-a1f0-7cff7e9abee6" containerName="nova-api-log" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.894608 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7bb05f3-943f-467a-98d8-75b754bd0de6" containerName="nova-metadata-metadata" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.894628 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a1405e0-ea5d-4617-a1f0-7cff7e9abee6" containerName="nova-api-api" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.894647 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7bb05f3-943f-467a-98d8-75b754bd0de6" containerName="nova-metadata-log" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.894656 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a1405e0-ea5d-4617-a1f0-7cff7e9abee6" containerName="nova-api-log" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.895747 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.896087 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.896069506 podStartE2EDuration="1.896069506s" podCreationTimestamp="2025-12-06 09:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:55:24.86235473 +0000 UTC m=+10687.263743600" watchObservedRunningTime="2025-12-06 09:55:24.896069506 +0000 UTC m=+10687.297458376" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.912653 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.913531 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.946000 4895 scope.go:117] "RemoveContainer" containerID="c6452e616295cad0bf524278d0603d42e3176fa357b2252bc899778e436c854f" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.957079 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa851486-91c2-4c06-9abc-4021ba5a4fd9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aa851486-91c2-4c06-9abc-4021ba5a4fd9\") " pod="openstack/nova-metadata-0" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.957247 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa851486-91c2-4c06-9abc-4021ba5a4fd9-config-data\") pod \"nova-metadata-0\" (UID: \"aa851486-91c2-4c06-9abc-4021ba5a4fd9\") " pod="openstack/nova-metadata-0" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.957422 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t876\" (UniqueName: \"kubernetes.io/projected/aa851486-91c2-4c06-9abc-4021ba5a4fd9-kube-api-access-7t876\") pod \"nova-metadata-0\" (UID: \"aa851486-91c2-4c06-9abc-4021ba5a4fd9\") " 
pod="openstack/nova-metadata-0" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.957674 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa851486-91c2-4c06-9abc-4021ba5a4fd9-logs\") pod \"nova-metadata-0\" (UID: \"aa851486-91c2-4c06-9abc-4021ba5a4fd9\") " pod="openstack/nova-metadata-0" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.966143 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.980534 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.988526 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.994233 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.995957 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 06 09:55:24 crc kubenswrapper[4895]: I1206 09:55:24.998664 4895 scope.go:117] "RemoveContainer" containerID="69c79637304ed4434bfded663f373e17e19d4805fe16eabfa0addf2068329e44" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.005983 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.059270 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa851486-91c2-4c06-9abc-4021ba5a4fd9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aa851486-91c2-4c06-9abc-4021ba5a4fd9\") " pod="openstack/nova-metadata-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.060002 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa851486-91c2-4c06-9abc-4021ba5a4fd9-config-data\") pod \"nova-metadata-0\" (UID: \"aa851486-91c2-4c06-9abc-4021ba5a4fd9\") " pod="openstack/nova-metadata-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.060066 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t876\" (UniqueName: \"kubernetes.io/projected/aa851486-91c2-4c06-9abc-4021ba5a4fd9-kube-api-access-7t876\") pod \"nova-metadata-0\" (UID: \"aa851486-91c2-4c06-9abc-4021ba5a4fd9\") " pod="openstack/nova-metadata-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.060088 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa851486-91c2-4c06-9abc-4021ba5a4fd9-logs\") pod \"nova-metadata-0\" (UID: \"aa851486-91c2-4c06-9abc-4021ba5a4fd9\") " pod="openstack/nova-metadata-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.060431 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa851486-91c2-4c06-9abc-4021ba5a4fd9-logs\") pod \"nova-metadata-0\" (UID: \"aa851486-91c2-4c06-9abc-4021ba5a4fd9\") " pod="openstack/nova-metadata-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.064237 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa851486-91c2-4c06-9abc-4021ba5a4fd9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"aa851486-91c2-4c06-9abc-4021ba5a4fd9\") " pod="openstack/nova-metadata-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.075095 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa851486-91c2-4c06-9abc-4021ba5a4fd9-config-data\") pod \"nova-metadata-0\" (UID: \"aa851486-91c2-4c06-9abc-4021ba5a4fd9\") " pod="openstack/nova-metadata-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.082824 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t876\" (UniqueName: \"kubernetes.io/projected/aa851486-91c2-4c06-9abc-4021ba5a4fd9-kube-api-access-7t876\") pod \"nova-metadata-0\" (UID: \"aa851486-91c2-4c06-9abc-4021ba5a4fd9\") " pod="openstack/nova-metadata-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.186700 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/358d73a2-1190-44fe-8154-3713df01a941-config-data\") pod \"nova-api-0\" (UID: \"358d73a2-1190-44fe-8154-3713df01a941\") " pod="openstack/nova-api-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.186800 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/358d73a2-1190-44fe-8154-3713df01a941-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"358d73a2-1190-44fe-8154-3713df01a941\") " pod="openstack/nova-api-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.186849 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hj2p\" (UniqueName: \"kubernetes.io/projected/358d73a2-1190-44fe-8154-3713df01a941-kube-api-access-4hj2p\") pod \"nova-api-0\" (UID: \"358d73a2-1190-44fe-8154-3713df01a941\") " pod="openstack/nova-api-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.187003 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/358d73a2-1190-44fe-8154-3713df01a941-logs\") pod \"nova-api-0\" (UID: \"358d73a2-1190-44fe-8154-3713df01a941\") " pod="openstack/nova-api-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.232978 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.298705 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/358d73a2-1190-44fe-8154-3713df01a941-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"358d73a2-1190-44fe-8154-3713df01a941\") " pod="openstack/nova-api-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.298786 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hj2p\" (UniqueName: \"kubernetes.io/projected/358d73a2-1190-44fe-8154-3713df01a941-kube-api-access-4hj2p\") pod \"nova-api-0\" (UID: \"358d73a2-1190-44fe-8154-3713df01a941\") " pod="openstack/nova-api-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.298887 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/358d73a2-1190-44fe-8154-3713df01a941-logs\") pod \"nova-api-0\" (UID: \"358d73a2-1190-44fe-8154-3713df01a941\") " pod="openstack/nova-api-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.298929 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/358d73a2-1190-44fe-8154-3713df01a941-config-data\") pod \"nova-api-0\" (UID: \"358d73a2-1190-44fe-8154-3713df01a941\") " pod="openstack/nova-api-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.301021 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/358d73a2-1190-44fe-8154-3713df01a941-logs\") pod \"nova-api-0\" (UID: \"358d73a2-1190-44fe-8154-3713df01a941\") " pod="openstack/nova-api-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.307134 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/358d73a2-1190-44fe-8154-3713df01a941-config-data\") pod \"nova-api-0\" (UID: \"358d73a2-1190-44fe-8154-3713df01a941\") " pod="openstack/nova-api-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.318402 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/358d73a2-1190-44fe-8154-3713df01a941-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"358d73a2-1190-44fe-8154-3713df01a941\") " pod="openstack/nova-api-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.338353 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hj2p\" (UniqueName: \"kubernetes.io/projected/358d73a2-1190-44fe-8154-3713df01a941-kube-api-access-4hj2p\") pod \"nova-api-0\" (UID: \"358d73a2-1190-44fe-8154-3713df01a941\") " pod="openstack/nova-api-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.614344 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 09:55:25 crc kubenswrapper[4895]: I1206 09:55:25.886513 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 09:55:26 crc kubenswrapper[4895]: I1206 09:55:26.061943 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a1405e0-ea5d-4617-a1f0-7cff7e9abee6" path="/var/lib/kubelet/pods/1a1405e0-ea5d-4617-a1f0-7cff7e9abee6/volumes" Dec 06 09:55:26 crc kubenswrapper[4895]: I1206 09:55:26.062919 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7bb05f3-943f-467a-98d8-75b754bd0de6" path="/var/lib/kubelet/pods/e7bb05f3-943f-467a-98d8-75b754bd0de6/volumes" Dec 06 09:55:26 crc kubenswrapper[4895]: I1206 09:55:26.151967 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 09:55:26 crc kubenswrapper[4895]: I1206 09:55:26.865940 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"358d73a2-1190-44fe-8154-3713df01a941","Type":"ContainerStarted","Data":"775b871c842959d0c6a32e3613eab94a81ba9a1a298331c89bd7d5fd609df95c"} Dec 06 09:55:26 crc kubenswrapper[4895]: I1206 09:55:26.866315 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"358d73a2-1190-44fe-8154-3713df01a941","Type":"ContainerStarted","Data":"32eae3e3414495fb337aad537781d32232d6ed76c8a93cbb32194bd1cba9e2b6"} Dec 06 09:55:26 crc kubenswrapper[4895]: I1206 09:55:26.866330 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"358d73a2-1190-44fe-8154-3713df01a941","Type":"ContainerStarted","Data":"c329783d12511a985a6ccc59486e04dbaf6b94e6706e4328f8282c2f6fbf74fb"} Dec 06 09:55:26 crc kubenswrapper[4895]: I1206 09:55:26.876147 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa851486-91c2-4c06-9abc-4021ba5a4fd9","Type":"ContainerStarted","Data":"3a0456c0d89f4b1d30df2b57c327fa7e9a4e1237684300c173cdf094e5f3c095"} Dec 06 09:55:26 crc kubenswrapper[4895]: I1206 09:55:26.876203 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa851486-91c2-4c06-9abc-4021ba5a4fd9","Type":"ContainerStarted","Data":"156dd7813b7e1f5a91e853637ac92473f7d3ec6ba04b63d83de66c7460802320"} Dec 06 09:55:26 crc kubenswrapper[4895]: I1206 09:55:26.876220 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa851486-91c2-4c06-9abc-4021ba5a4fd9","Type":"ContainerStarted","Data":"b2b2661fcdfe6753b7f288869bba51a918fb863ca57e6a14378d8a3ca271b713"} Dec 06 09:55:26 crc kubenswrapper[4895]: I1206 09:55:26.891923 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.891904589 podStartE2EDuration="2.891904589s" podCreationTimestamp="2025-12-06 09:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:55:26.88570912 +0000 UTC m=+10689.287098010" watchObservedRunningTime="2025-12-06 09:55:26.891904589 +0000 UTC m=+10689.293293459" Dec 06 09:55:26 crc kubenswrapper[4895]: I1206 09:55:26.916363 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.916340403 podStartE2EDuration="2.916340403s" podCreationTimestamp="2025-12-06 09:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 09:55:26.906362082 +0000 UTC m=+10689.307750952" watchObservedRunningTime="2025-12-06 09:55:26.916340403 +0000 UTC m=+10689.317729273" Dec 06 09:55:27 crc kubenswrapper[4895]: I1206 09:55:27.128865 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 06 09:55:28 crc kubenswrapper[4895]: I1206 09:55:28.194465 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 06 09:55:28 crc kubenswrapper[4895]: I1206 09:55:28.389316 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 06 09:55:30 crc kubenswrapper[4895]: I1206 09:55:30.233139 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 09:55:30 crc kubenswrapper[4895]: I1206 09:55:30.233563 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 09:55:33 crc kubenswrapper[4895]: I1206 09:55:33.389006 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 06 09:55:33 crc kubenswrapper[4895]: I1206 09:55:33.420460 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 06 09:55:34 crc kubenswrapper[4895]: I1206 09:55:34.030860 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 06 09:55:35 crc kubenswrapper[4895]: I1206 09:55:35.233451 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 09:55:35 crc kubenswrapper[4895]: I1206 09:55:35.233966 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 09:55:35 crc kubenswrapper[4895]: I1206 09:55:35.616065 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 09:55:35 crc kubenswrapper[4895]: I1206 09:55:35.616136 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 09:55:36 crc kubenswrapper[4895]: I1206 09:55:36.317691 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="aa851486-91c2-4c06-9abc-4021ba5a4fd9" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.205:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 09:55:36 crc kubenswrapper[4895]: I1206 09:55:36.317692 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="aa851486-91c2-4c06-9abc-4021ba5a4fd9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.205:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 09:55:36 crc kubenswrapper[4895]: I1206 09:55:36.656713 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="358d73a2-1190-44fe-8154-3713df01a941" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 09:55:36 crc kubenswrapper[4895]: I1206 09:55:36.697636 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="358d73a2-1190-44fe-8154-3713df01a941" 
containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 09:55:45 crc kubenswrapper[4895]: I1206 09:55:45.235174 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 09:55:45 crc kubenswrapper[4895]: I1206 09:55:45.235814 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 09:55:45 crc kubenswrapper[4895]: I1206 09:55:45.238344 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 09:55:45 crc kubenswrapper[4895]: I1206 09:55:45.241783 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 09:55:45 crc kubenswrapper[4895]: I1206 09:55:45.625396 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 06 09:55:45 crc kubenswrapper[4895]: I1206 09:55:45.626627 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 06 09:55:45 crc kubenswrapper[4895]: I1206 09:55:45.632617 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 06 09:55:45 crc kubenswrapper[4895]: I1206 09:55:45.632713 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 06 09:55:46 crc kubenswrapper[4895]: I1206 09:55:46.094280 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 06 09:55:46 crc kubenswrapper[4895]: I1206 09:55:46.098229 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 06 09:56:15 crc kubenswrapper[4895]: I1206 09:56:15.840344 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9x89x"] Dec 06 09:56:15 crc kubenswrapper[4895]: I1206 09:56:15.845197 4895 util.go:30] "No sandbox for pod can be found. 
Dec 06 09:56:15 crc kubenswrapper[4895]: I1206 09:56:15.845197 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9x89x" Dec 06 09:56:15 crc kubenswrapper[4895]: I1206 09:56:15.868384 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9x89x"] Dec 06 09:56:16 crc kubenswrapper[4895]: I1206 09:56:16.010079 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df-catalog-content\") pod \"redhat-marketplace-9x89x\" (UID: \"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df\") " pod="openshift-marketplace/redhat-marketplace-9x89x" Dec 06 09:56:16 crc kubenswrapper[4895]: I1206 09:56:16.010184 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df-utilities\") pod \"redhat-marketplace-9x89x\" (UID: \"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df\") " pod="openshift-marketplace/redhat-marketplace-9x89x" Dec 06 09:56:16 crc kubenswrapper[4895]: I1206 09:56:16.010347 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb8lb\" (UniqueName: \"kubernetes.io/projected/d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df-kube-api-access-jb8lb\") pod \"redhat-marketplace-9x89x\" (UID: \"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df\") " pod="openshift-marketplace/redhat-marketplace-9x89x" Dec 06 09:56:16 crc kubenswrapper[4895]: I1206 09:56:16.111835 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df-utilities\") pod \"redhat-marketplace-9x89x\" (UID: \"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df\") " pod="openshift-marketplace/redhat-marketplace-9x89x" Dec 06 09:56:16 crc kubenswrapper[4895]: I1206 09:56:16.111889 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb8lb\" (UniqueName: \"kubernetes.io/projected/d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df-kube-api-access-jb8lb\") pod \"redhat-marketplace-9x89x\" (UID: \"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df\") " pod="openshift-marketplace/redhat-marketplace-9x89x" Dec 06 09:56:16 crc kubenswrapper[4895]: I1206 09:56:16.112019 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df-catalog-content\") pod \"redhat-marketplace-9x89x\" (UID: \"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df\") " pod="openshift-marketplace/redhat-marketplace-9x89x" Dec 06 09:56:16 crc kubenswrapper[4895]: I1206 09:56:16.112347 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df-utilities\") pod \"redhat-marketplace-9x89x\" (UID: \"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df\") " pod="openshift-marketplace/redhat-marketplace-9x89x" Dec 06 09:56:16 crc kubenswrapper[4895]: I1206 09:56:16.112403 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df-catalog-content\") pod \"redhat-marketplace-9x89x\" (UID: \"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df\") " pod="openshift-marketplace/redhat-marketplace-9x89x" Dec 06 09:56:16 crc kubenswrapper[4895]: I1206 09:56:16.144268 4895 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"kube-api-access-jb8lb\" (UniqueName: \"kubernetes.io/projected/d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df-kube-api-access-jb8lb\") pod \"redhat-marketplace-9x89x\" (UID: \"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df\") " pod="openshift-marketplace/redhat-marketplace-9x89x" Dec 06 09:56:16 crc kubenswrapper[4895]: I1206 09:56:16.172754 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9x89x" Dec 06 09:56:16 crc kubenswrapper[4895]: I1206 09:56:16.750106 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9x89x"] Dec 06 09:56:16 crc kubenswrapper[4895]: W1206 09:56:16.775678 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd26cd5d7_9aab_4e66_9d26_f5eac7c8d2df.slice/crio-303fb2922e2a01bfe46654b90a695dd5bcb3b93f26728a0487d59b00f1a0135a WatchSource:0}: Error finding container 303fb2922e2a01bfe46654b90a695dd5bcb3b93f26728a0487d59b00f1a0135a: Status 404 returned error can't find the container with id 303fb2922e2a01bfe46654b90a695dd5bcb3b93f26728a0487d59b00f1a0135a Dec 06 09:56:17 crc kubenswrapper[4895]: I1206 09:56:17.515426 4895 generic.go:334] "Generic (PLEG): container finished" podID="d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df" containerID="81761b8f6ecdfd649cdf0758b73006206d9f508f459bd824891c72099c7ec8b8" exitCode=0 Dec 06 09:56:17 crc kubenswrapper[4895]: I1206 09:56:17.515543 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9x89x" event={"ID":"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df","Type":"ContainerDied","Data":"81761b8f6ecdfd649cdf0758b73006206d9f508f459bd824891c72099c7ec8b8"} Dec 06 09:56:17 crc kubenswrapper[4895]: I1206 09:56:17.515874 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9x89x" event={"ID":"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df","Type":"ContainerStarted","Data":"303fb2922e2a01bfe46654b90a695dd5bcb3b93f26728a0487d59b00f1a0135a"} Dec 06 09:56:18 crc kubenswrapper[4895]: I1206 09:56:18.530638 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9x89x" event={"ID":"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df","Type":"ContainerStarted","Data":"4d3c4a1544281eec05e326d6149269fbb1034039d63edc55636a46208b947256"} Dec 06 09:56:19 crc kubenswrapper[4895]: I1206 09:56:19.547184 4895 generic.go:334] "Generic (PLEG): container finished" podID="d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df" containerID="4d3c4a1544281eec05e326d6149269fbb1034039d63edc55636a46208b947256" exitCode=0 Dec 06 09:56:19 crc kubenswrapper[4895]: I1206 09:56:19.547559 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9x89x" event={"ID":"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df","Type":"ContainerDied","Data":"4d3c4a1544281eec05e326d6149269fbb1034039d63edc55636a46208b947256"} Dec 06 09:56:20 crc kubenswrapper[4895]: I1206 09:56:20.574171 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9x89x" event={"ID":"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df","Type":"ContainerStarted","Data":"4cb2768544c71eb28f5a56915f9253ba8e9e7a460c1b8a3ed7fcc3e440c6ee83"} Dec 06 09:56:20 crc kubenswrapper[4895]: I1206 09:56:20.599114 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9x89x" podStartSLOduration=3.151285685 
podStartE2EDuration="5.599091443s" podCreationTimestamp="2025-12-06 09:56:15 +0000 UTC" firstStartedPulling="2025-12-06 09:56:17.519541796 +0000 UTC m=+10739.920930666" lastFinishedPulling="2025-12-06 09:56:19.967347544 +0000 UTC m=+10742.368736424" observedRunningTime="2025-12-06 09:56:20.59711933 +0000 UTC m=+10742.998508210" watchObservedRunningTime="2025-12-06 09:56:20.599091443 +0000 UTC m=+10743.000480323" Dec 06 09:56:26 crc kubenswrapper[4895]: I1206 09:56:26.173945 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9x89x" Dec 06 09:56:26 crc kubenswrapper[4895]: I1206 09:56:26.174491 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9x89x" Dec 06 09:56:26 crc kubenswrapper[4895]: I1206 09:56:26.241466 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9x89x" Dec 06 09:56:26 crc kubenswrapper[4895]: I1206 09:56:26.728484 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9x89x" Dec 06 09:56:29 crc kubenswrapper[4895]: I1206 09:56:29.808033 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9x89x"] Dec 06 09:56:29 crc kubenswrapper[4895]: I1206 09:56:29.808726 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9x89x" podUID="d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df" containerName="registry-server" containerID="cri-o://4cb2768544c71eb28f5a56915f9253ba8e9e7a460c1b8a3ed7fcc3e440c6ee83" gracePeriod=2 Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.397451 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9x89x" Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.475805 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df-utilities\") pod \"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df\" (UID: \"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df\") " Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.475932 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb8lb\" (UniqueName: \"kubernetes.io/projected/d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df-kube-api-access-jb8lb\") pod \"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df\" (UID: \"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df\") " Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.475992 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df-catalog-content\") pod \"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df\" (UID: \"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df\") " Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.477407 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df-utilities" (OuterVolumeSpecName: "utilities") pod "d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df" (UID: "d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.489180 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df-kube-api-access-jb8lb" (OuterVolumeSpecName: "kube-api-access-jb8lb") pod "d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df" (UID: "d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df"). InnerVolumeSpecName "kube-api-access-jb8lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.500659 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df" (UID: "d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.579545 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.579589 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb8lb\" (UniqueName: \"kubernetes.io/projected/d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df-kube-api-access-jb8lb\") on node \"crc\" DevicePath \"\"" Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.579600 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.709334 4895 generic.go:334] "Generic (PLEG): container finished" podID="d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df" containerID="4cb2768544c71eb28f5a56915f9253ba8e9e7a460c1b8a3ed7fcc3e440c6ee83" exitCode=0 Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.709413 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9x89x" Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.709412 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9x89x" event={"ID":"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df","Type":"ContainerDied","Data":"4cb2768544c71eb28f5a56915f9253ba8e9e7a460c1b8a3ed7fcc3e440c6ee83"} Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.709609 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9x89x" event={"ID":"d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df","Type":"ContainerDied","Data":"303fb2922e2a01bfe46654b90a695dd5bcb3b93f26728a0487d59b00f1a0135a"} Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.709651 4895 scope.go:117] "RemoveContainer" containerID="4cb2768544c71eb28f5a56915f9253ba8e9e7a460c1b8a3ed7fcc3e440c6ee83" Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.746182 4895 scope.go:117] "RemoveContainer" containerID="4d3c4a1544281eec05e326d6149269fbb1034039d63edc55636a46208b947256" Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.758615 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9x89x"] Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.773497 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9x89x"] Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.807778 4895 scope.go:117] "RemoveContainer" containerID="81761b8f6ecdfd649cdf0758b73006206d9f508f459bd824891c72099c7ec8b8" Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.835719 4895 scope.go:117] "RemoveContainer" containerID="4cb2768544c71eb28f5a56915f9253ba8e9e7a460c1b8a3ed7fcc3e440c6ee83" Dec 06 09:56:30 crc kubenswrapper[4895]: E1206 09:56:30.836711 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cb2768544c71eb28f5a56915f9253ba8e9e7a460c1b8a3ed7fcc3e440c6ee83\": container with ID starting with 4cb2768544c71eb28f5a56915f9253ba8e9e7a460c1b8a3ed7fcc3e440c6ee83 not found: ID does not exist" containerID="4cb2768544c71eb28f5a56915f9253ba8e9e7a460c1b8a3ed7fcc3e440c6ee83" Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.836779 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cb2768544c71eb28f5a56915f9253ba8e9e7a460c1b8a3ed7fcc3e440c6ee83"} err="failed to get container status \"4cb2768544c71eb28f5a56915f9253ba8e9e7a460c1b8a3ed7fcc3e440c6ee83\": rpc error: code = NotFound desc = could not find container \"4cb2768544c71eb28f5a56915f9253ba8e9e7a460c1b8a3ed7fcc3e440c6ee83\": container with ID starting with 4cb2768544c71eb28f5a56915f9253ba8e9e7a460c1b8a3ed7fcc3e440c6ee83 not found: ID does not exist" Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.836840 4895 scope.go:117] "RemoveContainer" containerID="4d3c4a1544281eec05e326d6149269fbb1034039d63edc55636a46208b947256" Dec 06 09:56:30 crc kubenswrapper[4895]: E1206 09:56:30.837298 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d3c4a1544281eec05e326d6149269fbb1034039d63edc55636a46208b947256\": container with ID starting with 4d3c4a1544281eec05e326d6149269fbb1034039d63edc55636a46208b947256 not found: ID does not exist" containerID="4d3c4a1544281eec05e326d6149269fbb1034039d63edc55636a46208b947256" Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.837329 4895 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d3c4a1544281eec05e326d6149269fbb1034039d63edc55636a46208b947256"} err="failed to get container status \"4d3c4a1544281eec05e326d6149269fbb1034039d63edc55636a46208b947256\": rpc error: code = NotFound desc = could not find container \"4d3c4a1544281eec05e326d6149269fbb1034039d63edc55636a46208b947256\": container with ID starting with 4d3c4a1544281eec05e326d6149269fbb1034039d63edc55636a46208b947256 not found: ID does not exist" Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.837380 4895 scope.go:117] "RemoveContainer" containerID="81761b8f6ecdfd649cdf0758b73006206d9f508f459bd824891c72099c7ec8b8" Dec 06 09:56:30 crc kubenswrapper[4895]: E1206 09:56:30.837935 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81761b8f6ecdfd649cdf0758b73006206d9f508f459bd824891c72099c7ec8b8\": container with ID starting with 81761b8f6ecdfd649cdf0758b73006206d9f508f459bd824891c72099c7ec8b8 not found: ID does not exist" containerID="81761b8f6ecdfd649cdf0758b73006206d9f508f459bd824891c72099c7ec8b8" Dec 06 09:56:30 crc kubenswrapper[4895]: I1206 09:56:30.837985 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81761b8f6ecdfd649cdf0758b73006206d9f508f459bd824891c72099c7ec8b8"} err="failed to get container status \"81761b8f6ecdfd649cdf0758b73006206d9f508f459bd824891c72099c7ec8b8\": rpc error: code = NotFound desc = could not find container \"81761b8f6ecdfd649cdf0758b73006206d9f508f459bd824891c72099c7ec8b8\": container with ID starting with 81761b8f6ecdfd649cdf0758b73006206d9f508f459bd824891c72099c7ec8b8 not found: ID does not exist" Dec 06 09:56:32 crc kubenswrapper[4895]: I1206 09:56:32.064666 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df" path="/var/lib/kubelet/pods/d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df/volumes" Dec 06 09:57:29 crc kubenswrapper[4895]: I1206 09:57:29.696280 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:57:29 crc kubenswrapper[4895]: I1206 09:57:29.696946 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:57:59 crc kubenswrapper[4895]: I1206 09:57:59.696130 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:57:59 crc kubenswrapper[4895]: I1206 09:57:59.696783 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
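The RemoveContainer/NotFound exchange at 09:56:30 above (log.go:32 and pod_container_deletor.go:53) is harmless: the kubelet asks the runtime about containers it is about to delete, the runtime has already forgotten them, and a NotFound answer means the desired end state is already true, so cleanup must be idempotent. A schematic Go sketch of that pattern, with a stand-in for the runtime call rather than kubelet's actual code:

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("NotFound")

    // removeFromRuntime stands in for the CRI RemoveContainer RPC; here it
    // always reports the container as already gone.
    func removeFromRuntime(id string) error {
        return fmt.Errorf("rpc error: code = %w desc = could not find container %q", errNotFound, id)
    }

    // removeContainer treats "already gone" as success.
    func removeContainer(id string) error {
        if err := removeFromRuntime(id); err != nil {
            if errors.Is(err, errNotFound) {
                return nil // the container no longer exists, which is the goal
            }
            return err
        }
        return nil
    }

    func main() {
        fmt.Println(removeContainer("4cb2768544c7")) // <nil>
    }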
Dec 06 09:58:23 crc kubenswrapper[4895]: I1206 09:58:23.207691 4895 generic.go:334] "Generic (PLEG): container finished" podID="aa8468c6-6c07-40e9-aa1b-996f099dffa8" containerID="00ae0bc7e735afc29ab37331f207ca7d6d014c7c55b7be6f8a5057c02035f05e" exitCode=0 Dec 06 09:58:23 crc kubenswrapper[4895]: I1206 09:58:23.207844 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" event={"ID":"aa8468c6-6c07-40e9-aa1b-996f099dffa8","Type":"ContainerDied","Data":"00ae0bc7e735afc29ab37331f207ca7d6d014c7c55b7be6f8a5057c02035f05e"} Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.734247 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.881532 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cells-global-config-0\") pod \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.881594 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-ssh-key\") pod \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.881656 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-migration-ssh-key-1\") pod \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.881688 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-ceph\") pod \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.881707 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cells-global-config-1\") pod \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.881733 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8xvx\" (UniqueName: \"kubernetes.io/projected/aa8468c6-6c07-40e9-aa1b-996f099dffa8-kube-api-access-s8xvx\") pod \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.881755 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cell1-compute-config-1\") pod \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.881778 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName:
\"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-migration-ssh-key-0\") pod \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.881830 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cell1-compute-config-0\") pod \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.881908 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-inventory\") pod \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.881936 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cell1-combined-ca-bundle\") pod \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\" (UID: \"aa8468c6-6c07-40e9-aa1b-996f099dffa8\") " Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.887666 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "aa8468c6-6c07-40e9-aa1b-996f099dffa8" (UID: "aa8468c6-6c07-40e9-aa1b-996f099dffa8"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.889141 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-ceph" (OuterVolumeSpecName: "ceph") pod "aa8468c6-6c07-40e9-aa1b-996f099dffa8" (UID: "aa8468c6-6c07-40e9-aa1b-996f099dffa8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.890502 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa8468c6-6c07-40e9-aa1b-996f099dffa8-kube-api-access-s8xvx" (OuterVolumeSpecName: "kube-api-access-s8xvx") pod "aa8468c6-6c07-40e9-aa1b-996f099dffa8" (UID: "aa8468c6-6c07-40e9-aa1b-996f099dffa8"). InnerVolumeSpecName "kube-api-access-s8xvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.913989 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "aa8468c6-6c07-40e9-aa1b-996f099dffa8" (UID: "aa8468c6-6c07-40e9-aa1b-996f099dffa8"). InnerVolumeSpecName "nova-cells-global-config-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.915975 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "aa8468c6-6c07-40e9-aa1b-996f099dffa8" (UID: "aa8468c6-6c07-40e9-aa1b-996f099dffa8"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.922887 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "aa8468c6-6c07-40e9-aa1b-996f099dffa8" (UID: "aa8468c6-6c07-40e9-aa1b-996f099dffa8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.922910 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "aa8468c6-6c07-40e9-aa1b-996f099dffa8" (UID: "aa8468c6-6c07-40e9-aa1b-996f099dffa8"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.924020 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "aa8468c6-6c07-40e9-aa1b-996f099dffa8" (UID: "aa8468c6-6c07-40e9-aa1b-996f099dffa8"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.924079 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "aa8468c6-6c07-40e9-aa1b-996f099dffa8" (UID: "aa8468c6-6c07-40e9-aa1b-996f099dffa8"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.934906 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-inventory" (OuterVolumeSpecName: "inventory") pod "aa8468c6-6c07-40e9-aa1b-996f099dffa8" (UID: "aa8468c6-6c07-40e9-aa1b-996f099dffa8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.951581 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "aa8468c6-6c07-40e9-aa1b-996f099dffa8" (UID: "aa8468c6-6c07-40e9-aa1b-996f099dffa8"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.984880 4895 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.984939 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.984959 4895 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.984979 4895 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.984997 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.985017 4895 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.985036 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.985052 4895 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\"" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.985069 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8xvx\" (UniqueName: \"kubernetes.io/projected/aa8468c6-6c07-40e9-aa1b-996f099dffa8-kube-api-access-s8xvx\") on node \"crc\" DevicePath \"\"" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.985086 4895 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 06 09:58:24 crc kubenswrapper[4895]: I1206 09:58:24.985103 4895 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/aa8468c6-6c07-40e9-aa1b-996f099dffa8-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 06 09:58:25 crc kubenswrapper[4895]: I1206 09:58:25.235131 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" event={"ID":"aa8468c6-6c07-40e9-aa1b-996f099dffa8","Type":"ContainerDied","Data":"f8ce1d5551215b597ce423516ec11907ce0468feb666fe32350a771edb547403"} Dec 06 09:58:25 crc kubenswrapper[4895]: I1206 09:58:25.235534 4895 
Dec 06 09:58:25 crc kubenswrapper[4895]: I1206 09:58:25.235534 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8ce1d5551215b597ce423516ec11907ce0468feb666fe32350a771edb547403" Dec 06 09:58:25 crc kubenswrapper[4895]: I1206 09:58:25.235219 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj" Dec 06 09:58:29 crc kubenswrapper[4895]: I1206 09:58:29.695788 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 09:58:29 crc kubenswrapper[4895]: I1206 09:58:29.696516 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 09:58:29 crc kubenswrapper[4895]: I1206 09:58:29.696583 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 09:58:29 crc kubenswrapper[4895]: I1206 09:58:29.697559 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 09:58:29 crc kubenswrapper[4895]: I1206 09:58:29.697632 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1" gracePeriod=600 Dec 06 09:58:29 crc kubenswrapper[4895]: E1206 09:58:29.829116 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:58:30 crc kubenswrapper[4895]: I1206 09:58:30.296222 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1" exitCode=0 Dec 06 09:58:30 crc kubenswrapper[4895]: I1206 09:58:30.296280 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1"} Dec 06 09:58:30 crc kubenswrapper[4895]: I1206 09:58:30.296594 4895 scope.go:117] "RemoveContainer" containerID="ef567f04ef08396d9009f8c8476b607e5d5ec039fc92dc919c830d8d79369fdc" Dec 06 09:58:30 crc kubenswrapper[4895]: I1206 09:58:30.297430 4895 scope.go:117] "RemoveContainer"
containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1" Dec 06 09:58:30 crc kubenswrapper[4895]: E1206 09:58:30.297806 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:58:42 crc kubenswrapper[4895]: I1206 09:58:42.057815 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1" Dec 06 09:58:42 crc kubenswrapper[4895]: E1206 09:58:42.059168 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:58:54 crc kubenswrapper[4895]: I1206 09:58:54.051938 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1" Dec 06 09:58:54 crc kubenswrapper[4895]: E1206 09:58:54.052897 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:59:06 crc kubenswrapper[4895]: I1206 09:59:06.051225 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1" Dec 06 09:59:06 crc kubenswrapper[4895]: E1206 09:59:06.052276 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:59:18 crc kubenswrapper[4895]: I1206 09:59:18.876655 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-blfkw"] Dec 06 09:59:18 crc kubenswrapper[4895]: E1206 09:59:18.877730 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa8468c6-6c07-40e9-aa1b-996f099dffa8" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Dec 06 09:59:18 crc kubenswrapper[4895]: I1206 09:59:18.877750 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa8468c6-6c07-40e9-aa1b-996f099dffa8" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Dec 06 09:59:18 crc kubenswrapper[4895]: E1206 09:59:18.877765 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df" containerName="registry-server" Dec 06 09:59:18 crc kubenswrapper[4895]: I1206 09:59:18.877776 4895 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df" containerName="registry-server" Dec 06 09:59:18 crc kubenswrapper[4895]: E1206 09:59:18.877817 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df" containerName="extract-content" Dec 06 09:59:18 crc kubenswrapper[4895]: I1206 09:59:18.877826 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df" containerName="extract-content" Dec 06 09:59:18 crc kubenswrapper[4895]: E1206 09:59:18.877861 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df" containerName="extract-utilities" Dec 06 09:59:18 crc kubenswrapper[4895]: I1206 09:59:18.877870 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df" containerName="extract-utilities" Dec 06 09:59:18 crc kubenswrapper[4895]: I1206 09:59:18.879783 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa8468c6-6c07-40e9-aa1b-996f099dffa8" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Dec 06 09:59:18 crc kubenswrapper[4895]: I1206 09:59:18.879866 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d26cd5d7-9aab-4e66-9d26-f5eac7c8d2df" containerName="registry-server" Dec 06 09:59:18 crc kubenswrapper[4895]: I1206 09:59:18.882034 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-blfkw" Dec 06 09:59:18 crc kubenswrapper[4895]: I1206 09:59:18.890024 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-blfkw"] Dec 06 09:59:19 crc kubenswrapper[4895]: I1206 09:59:19.072773 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1cf7df5-a70e-484d-aae1-b04881e8f1fc-catalog-content\") pod \"certified-operators-blfkw\" (UID: \"f1cf7df5-a70e-484d-aae1-b04881e8f1fc\") " pod="openshift-marketplace/certified-operators-blfkw" Dec 06 09:59:19 crc kubenswrapper[4895]: I1206 09:59:19.072865 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1cf7df5-a70e-484d-aae1-b04881e8f1fc-utilities\") pod \"certified-operators-blfkw\" (UID: \"f1cf7df5-a70e-484d-aae1-b04881e8f1fc\") " pod="openshift-marketplace/certified-operators-blfkw" Dec 06 09:59:19 crc kubenswrapper[4895]: I1206 09:59:19.072922 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7fxn\" (UniqueName: \"kubernetes.io/projected/f1cf7df5-a70e-484d-aae1-b04881e8f1fc-kube-api-access-x7fxn\") pod \"certified-operators-blfkw\" (UID: \"f1cf7df5-a70e-484d-aae1-b04881e8f1fc\") " pod="openshift-marketplace/certified-operators-blfkw" Dec 06 09:59:19 crc kubenswrapper[4895]: I1206 09:59:19.176872 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1cf7df5-a70e-484d-aae1-b04881e8f1fc-catalog-content\") pod \"certified-operators-blfkw\" (UID: \"f1cf7df5-a70e-484d-aae1-b04881e8f1fc\") " pod="openshift-marketplace/certified-operators-blfkw" Dec 06 09:59:19 crc kubenswrapper[4895]: I1206 09:59:19.177048 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f1cf7df5-a70e-484d-aae1-b04881e8f1fc-utilities\") pod \"certified-operators-blfkw\" (UID: \"f1cf7df5-a70e-484d-aae1-b04881e8f1fc\") " pod="openshift-marketplace/certified-operators-blfkw" Dec 06 09:59:19 crc kubenswrapper[4895]: I1206 09:59:19.177160 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7fxn\" (UniqueName: \"kubernetes.io/projected/f1cf7df5-a70e-484d-aae1-b04881e8f1fc-kube-api-access-x7fxn\") pod \"certified-operators-blfkw\" (UID: \"f1cf7df5-a70e-484d-aae1-b04881e8f1fc\") " pod="openshift-marketplace/certified-operators-blfkw" Dec 06 09:59:19 crc kubenswrapper[4895]: I1206 09:59:19.180168 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1cf7df5-a70e-484d-aae1-b04881e8f1fc-utilities\") pod \"certified-operators-blfkw\" (UID: \"f1cf7df5-a70e-484d-aae1-b04881e8f1fc\") " pod="openshift-marketplace/certified-operators-blfkw" Dec 06 09:59:19 crc kubenswrapper[4895]: I1206 09:59:19.180428 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1cf7df5-a70e-484d-aae1-b04881e8f1fc-catalog-content\") pod \"certified-operators-blfkw\" (UID: \"f1cf7df5-a70e-484d-aae1-b04881e8f1fc\") " pod="openshift-marketplace/certified-operators-blfkw" Dec 06 09:59:19 crc kubenswrapper[4895]: I1206 09:59:19.229705 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7fxn\" (UniqueName: \"kubernetes.io/projected/f1cf7df5-a70e-484d-aae1-b04881e8f1fc-kube-api-access-x7fxn\") pod \"certified-operators-blfkw\" (UID: \"f1cf7df5-a70e-484d-aae1-b04881e8f1fc\") " pod="openshift-marketplace/certified-operators-blfkw" Dec 06 09:59:19 crc kubenswrapper[4895]: I1206 09:59:19.514069 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-blfkw" Dec 06 09:59:19 crc kubenswrapper[4895]: I1206 09:59:19.975748 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-blfkw"] Dec 06 09:59:20 crc kubenswrapper[4895]: I1206 09:59:20.097849 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blfkw" event={"ID":"f1cf7df5-a70e-484d-aae1-b04881e8f1fc","Type":"ContainerStarted","Data":"ebf3293b16b95cfdc5d84c64790560b2f21bcb406a6f534f2f061bc01f284589"} Dec 06 09:59:21 crc kubenswrapper[4895]: I1206 09:59:21.051445 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1" Dec 06 09:59:21 crc kubenswrapper[4895]: E1206 09:59:21.051924 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:59:21 crc kubenswrapper[4895]: I1206 09:59:21.110977 4895 generic.go:334] "Generic (PLEG): container finished" podID="f1cf7df5-a70e-484d-aae1-b04881e8f1fc" containerID="30e0a1523d7d771d28fe6b303a7f20f92030cbe898abfe5936b6df627061943f" exitCode=0 Dec 06 09:59:21 crc kubenswrapper[4895]: I1206 09:59:21.111024 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blfkw" event={"ID":"f1cf7df5-a70e-484d-aae1-b04881e8f1fc","Type":"ContainerDied","Data":"30e0a1523d7d771d28fe6b303a7f20f92030cbe898abfe5936b6df627061943f"} Dec 06 09:59:21 crc kubenswrapper[4895]: I1206 09:59:21.112827 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 09:59:22 crc kubenswrapper[4895]: I1206 09:59:22.122636 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blfkw" event={"ID":"f1cf7df5-a70e-484d-aae1-b04881e8f1fc","Type":"ContainerStarted","Data":"5e400a9e6f3c14d0abdb1fb3a70cd4181ef28e115785cf7d2d47f04639ea80c4"} Dec 06 09:59:23 crc kubenswrapper[4895]: I1206 09:59:23.145223 4895 generic.go:334] "Generic (PLEG): container finished" podID="f1cf7df5-a70e-484d-aae1-b04881e8f1fc" containerID="5e400a9e6f3c14d0abdb1fb3a70cd4181ef28e115785cf7d2d47f04639ea80c4" exitCode=0 Dec 06 09:59:23 crc kubenswrapper[4895]: I1206 09:59:23.145357 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blfkw" event={"ID":"f1cf7df5-a70e-484d-aae1-b04881e8f1fc","Type":"ContainerDied","Data":"5e400a9e6f3c14d0abdb1fb3a70cd4181ef28e115785cf7d2d47f04639ea80c4"} Dec 06 09:59:24 crc kubenswrapper[4895]: I1206 09:59:24.166388 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blfkw" event={"ID":"f1cf7df5-a70e-484d-aae1-b04881e8f1fc","Type":"ContainerStarted","Data":"3409c563ffe1d7c91cff52fb29767b370a211604c55da0bb8255bf81f4d00d57"} Dec 06 09:59:24 crc kubenswrapper[4895]: I1206 09:59:24.202021 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-blfkw" podStartSLOduration=3.653334685 podStartE2EDuration="6.201997133s" podCreationTimestamp="2025-12-06 09:59:18 +0000 UTC" 
firstStartedPulling="2025-12-06 09:59:21.112539018 +0000 UTC m=+10923.513927888" lastFinishedPulling="2025-12-06 09:59:23.661201456 +0000 UTC m=+10926.062590336" observedRunningTime="2025-12-06 09:59:24.190060409 +0000 UTC m=+10926.591449289" watchObservedRunningTime="2025-12-06 09:59:24.201997133 +0000 UTC m=+10926.603386003" Dec 06 09:59:29 crc kubenswrapper[4895]: I1206 09:59:29.514289 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-blfkw" Dec 06 09:59:29 crc kubenswrapper[4895]: I1206 09:59:29.515024 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-blfkw" Dec 06 09:59:29 crc kubenswrapper[4895]: I1206 09:59:29.585898 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-blfkw" Dec 06 09:59:30 crc kubenswrapper[4895]: I1206 09:59:30.299775 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-blfkw" Dec 06 09:59:30 crc kubenswrapper[4895]: I1206 09:59:30.363560 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-blfkw"] Dec 06 09:59:32 crc kubenswrapper[4895]: I1206 09:59:32.050851 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1" Dec 06 09:59:32 crc kubenswrapper[4895]: E1206 09:59:32.051670 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:59:32 crc kubenswrapper[4895]: I1206 09:59:32.258997 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-blfkw" podUID="f1cf7df5-a70e-484d-aae1-b04881e8f1fc" containerName="registry-server" containerID="cri-o://3409c563ffe1d7c91cff52fb29767b370a211604c55da0bb8255bf81f4d00d57" gracePeriod=2 Dec 06 09:59:33 crc kubenswrapper[4895]: I1206 09:59:33.272434 4895 generic.go:334] "Generic (PLEG): container finished" podID="f1cf7df5-a70e-484d-aae1-b04881e8f1fc" containerID="3409c563ffe1d7c91cff52fb29767b370a211604c55da0bb8255bf81f4d00d57" exitCode=0 Dec 06 09:59:33 crc kubenswrapper[4895]: I1206 09:59:33.272605 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blfkw" event={"ID":"f1cf7df5-a70e-484d-aae1-b04881e8f1fc","Type":"ContainerDied","Data":"3409c563ffe1d7c91cff52fb29767b370a211604c55da0bb8255bf81f4d00d57"} Dec 06 09:59:33 crc kubenswrapper[4895]: I1206 09:59:33.845264 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-blfkw" Dec 06 09:59:33 crc kubenswrapper[4895]: I1206 09:59:33.948994 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7fxn\" (UniqueName: \"kubernetes.io/projected/f1cf7df5-a70e-484d-aae1-b04881e8f1fc-kube-api-access-x7fxn\") pod \"f1cf7df5-a70e-484d-aae1-b04881e8f1fc\" (UID: \"f1cf7df5-a70e-484d-aae1-b04881e8f1fc\") " Dec 06 09:59:33 crc kubenswrapper[4895]: I1206 09:59:33.949079 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1cf7df5-a70e-484d-aae1-b04881e8f1fc-catalog-content\") pod \"f1cf7df5-a70e-484d-aae1-b04881e8f1fc\" (UID: \"f1cf7df5-a70e-484d-aae1-b04881e8f1fc\") " Dec 06 09:59:33 crc kubenswrapper[4895]: I1206 09:59:33.949115 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1cf7df5-a70e-484d-aae1-b04881e8f1fc-utilities\") pod \"f1cf7df5-a70e-484d-aae1-b04881e8f1fc\" (UID: \"f1cf7df5-a70e-484d-aae1-b04881e8f1fc\") " Dec 06 09:59:33 crc kubenswrapper[4895]: I1206 09:59:33.950819 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1cf7df5-a70e-484d-aae1-b04881e8f1fc-utilities" (OuterVolumeSpecName: "utilities") pod "f1cf7df5-a70e-484d-aae1-b04881e8f1fc" (UID: "f1cf7df5-a70e-484d-aae1-b04881e8f1fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:59:34 crc kubenswrapper[4895]: I1206 09:59:34.040752 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1cf7df5-a70e-484d-aae1-b04881e8f1fc-kube-api-access-x7fxn" (OuterVolumeSpecName: "kube-api-access-x7fxn") pod "f1cf7df5-a70e-484d-aae1-b04881e8f1fc" (UID: "f1cf7df5-a70e-484d-aae1-b04881e8f1fc"). InnerVolumeSpecName "kube-api-access-x7fxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 09:59:34 crc kubenswrapper[4895]: I1206 09:59:34.055922 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1cf7df5-a70e-484d-aae1-b04881e8f1fc-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 09:59:34 crc kubenswrapper[4895]: I1206 09:59:34.055946 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7fxn\" (UniqueName: \"kubernetes.io/projected/f1cf7df5-a70e-484d-aae1-b04881e8f1fc-kube-api-access-x7fxn\") on node \"crc\" DevicePath \"\"" Dec 06 09:59:34 crc kubenswrapper[4895]: I1206 09:59:34.200012 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1cf7df5-a70e-484d-aae1-b04881e8f1fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1cf7df5-a70e-484d-aae1-b04881e8f1fc" (UID: "f1cf7df5-a70e-484d-aae1-b04881e8f1fc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 09:59:34 crc kubenswrapper[4895]: I1206 09:59:34.262170 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1cf7df5-a70e-484d-aae1-b04881e8f1fc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 09:59:34 crc kubenswrapper[4895]: I1206 09:59:34.284653 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blfkw" event={"ID":"f1cf7df5-a70e-484d-aae1-b04881e8f1fc","Type":"ContainerDied","Data":"ebf3293b16b95cfdc5d84c64790560b2f21bcb406a6f534f2f061bc01f284589"} Dec 06 09:59:34 crc kubenswrapper[4895]: I1206 09:59:34.284703 4895 scope.go:117] "RemoveContainer" containerID="3409c563ffe1d7c91cff52fb29767b370a211604c55da0bb8255bf81f4d00d57" Dec 06 09:59:34 crc kubenswrapper[4895]: I1206 09:59:34.284724 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-blfkw" Dec 06 09:59:34 crc kubenswrapper[4895]: I1206 09:59:34.335650 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-blfkw"] Dec 06 09:59:34 crc kubenswrapper[4895]: I1206 09:59:34.342903 4895 scope.go:117] "RemoveContainer" containerID="5e400a9e6f3c14d0abdb1fb3a70cd4181ef28e115785cf7d2d47f04639ea80c4" Dec 06 09:59:34 crc kubenswrapper[4895]: I1206 09:59:34.356660 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-blfkw"] Dec 06 09:59:34 crc kubenswrapper[4895]: I1206 09:59:34.365516 4895 scope.go:117] "RemoveContainer" containerID="30e0a1523d7d771d28fe6b303a7f20f92030cbe898abfe5936b6df627061943f" Dec 06 09:59:36 crc kubenswrapper[4895]: I1206 09:59:36.072617 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1cf7df5-a70e-484d-aae1-b04881e8f1fc" path="/var/lib/kubelet/pods/f1cf7df5-a70e-484d-aae1-b04881e8f1fc/volumes" Dec 06 09:59:43 crc kubenswrapper[4895]: I1206 09:59:43.050837 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1" Dec 06 09:59:43 crc kubenswrapper[4895]: E1206 09:59:43.053254 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 09:59:56 crc kubenswrapper[4895]: I1206 09:59:56.869033 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Dec 06 09:59:56 crc kubenswrapper[4895]: I1206 09:59:56.869988 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" podUID="b60168b3-093c-4593-aa01-4840a4a50963" containerName="adoption" containerID="cri-o://67e5925c439ebd7fa889168ad2c63edb5f1d86923e2f33362a0f615fdba0cc16" gracePeriod=30 Dec 06 09:59:57 crc kubenswrapper[4895]: I1206 09:59:57.051441 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1" Dec 06 09:59:57 crc kubenswrapper[4895]: E1206 09:59:57.052201 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:00:00 crc kubenswrapper[4895]: I1206 10:00:00.176980 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k"] Dec 06 10:00:00 crc kubenswrapper[4895]: E1206 10:00:00.178164 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1cf7df5-a70e-484d-aae1-b04881e8f1fc" containerName="extract-utilities" Dec 06 10:00:00 crc kubenswrapper[4895]: I1206 10:00:00.178199 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1cf7df5-a70e-484d-aae1-b04881e8f1fc" containerName="extract-utilities" Dec 06 10:00:00 crc kubenswrapper[4895]: E1206 10:00:00.178240 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1cf7df5-a70e-484d-aae1-b04881e8f1fc" containerName="extract-content" Dec 06 10:00:00 crc kubenswrapper[4895]: I1206 10:00:00.178254 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1cf7df5-a70e-484d-aae1-b04881e8f1fc" containerName="extract-content" Dec 06 10:00:00 crc kubenswrapper[4895]: E1206 10:00:00.178279 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1cf7df5-a70e-484d-aae1-b04881e8f1fc" containerName="registry-server" Dec 06 10:00:00 crc kubenswrapper[4895]: I1206 10:00:00.178289 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1cf7df5-a70e-484d-aae1-b04881e8f1fc" containerName="registry-server" Dec 06 10:00:00 crc kubenswrapper[4895]: I1206 10:00:00.178694 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1cf7df5-a70e-484d-aae1-b04881e8f1fc" containerName="registry-server" Dec 06 10:00:00 crc kubenswrapper[4895]: I1206 10:00:00.179891 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k" Dec 06 10:00:00 crc kubenswrapper[4895]: I1206 10:00:00.182302 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 10:00:00 crc kubenswrapper[4895]: I1206 10:00:00.182400 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 10:00:00 crc kubenswrapper[4895]: I1206 10:00:00.199116 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k"] Dec 06 10:00:00 crc kubenswrapper[4895]: I1206 10:00:00.272158 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqb7q\" (UniqueName: \"kubernetes.io/projected/85af027c-5f8a-4878-b931-f62903168109-kube-api-access-xqb7q\") pod \"collect-profiles-29416920-67p5k\" (UID: \"85af027c-5f8a-4878-b931-f62903168109\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k" Dec 06 10:00:00 crc kubenswrapper[4895]: I1206 10:00:00.272518 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85af027c-5f8a-4878-b931-f62903168109-secret-volume\") pod \"collect-profiles-29416920-67p5k\" (UID: \"85af027c-5f8a-4878-b931-f62903168109\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k" Dec 06 10:00:00 crc kubenswrapper[4895]: I1206 10:00:00.272610 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85af027c-5f8a-4878-b931-f62903168109-config-volume\") pod \"collect-profiles-29416920-67p5k\" (UID: \"85af027c-5f8a-4878-b931-f62903168109\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k" Dec 06 10:00:00 crc kubenswrapper[4895]: I1206 10:00:00.374591 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqb7q\" (UniqueName: \"kubernetes.io/projected/85af027c-5f8a-4878-b931-f62903168109-kube-api-access-xqb7q\") pod \"collect-profiles-29416920-67p5k\" (UID: \"85af027c-5f8a-4878-b931-f62903168109\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k" Dec 06 10:00:00 crc kubenswrapper[4895]: I1206 10:00:00.374641 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85af027c-5f8a-4878-b931-f62903168109-secret-volume\") pod \"collect-profiles-29416920-67p5k\" (UID: \"85af027c-5f8a-4878-b931-f62903168109\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k" Dec 06 10:00:00 crc kubenswrapper[4895]: I1206 10:00:00.374694 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85af027c-5f8a-4878-b931-f62903168109-config-volume\") pod \"collect-profiles-29416920-67p5k\" (UID: \"85af027c-5f8a-4878-b931-f62903168109\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k" Dec 06 10:00:00 crc kubenswrapper[4895]: I1206 10:00:00.375686 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85af027c-5f8a-4878-b931-f62903168109-config-volume\") pod 
\"collect-profiles-29416920-67p5k\" (UID: \"85af027c-5f8a-4878-b931-f62903168109\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k" Dec 06 10:00:00 crc kubenswrapper[4895]: I1206 10:00:00.381602 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85af027c-5f8a-4878-b931-f62903168109-secret-volume\") pod \"collect-profiles-29416920-67p5k\" (UID: \"85af027c-5f8a-4878-b931-f62903168109\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k" Dec 06 10:00:00 crc kubenswrapper[4895]: I1206 10:00:00.392404 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqb7q\" (UniqueName: \"kubernetes.io/projected/85af027c-5f8a-4878-b931-f62903168109-kube-api-access-xqb7q\") pod \"collect-profiles-29416920-67p5k\" (UID: \"85af027c-5f8a-4878-b931-f62903168109\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k" Dec 06 10:00:00 crc kubenswrapper[4895]: I1206 10:00:00.518947 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k" Dec 06 10:00:01 crc kubenswrapper[4895]: I1206 10:00:01.001153 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k"] Dec 06 10:00:01 crc kubenswrapper[4895]: I1206 10:00:01.639724 4895 generic.go:334] "Generic (PLEG): container finished" podID="85af027c-5f8a-4878-b931-f62903168109" containerID="fc029cd7b992c4ad0600ee532f6a41e4c02d52391c3254c46e013dab981cec3a" exitCode=0 Dec 06 10:00:01 crc kubenswrapper[4895]: I1206 10:00:01.639785 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k" event={"ID":"85af027c-5f8a-4878-b931-f62903168109","Type":"ContainerDied","Data":"fc029cd7b992c4ad0600ee532f6a41e4c02d52391c3254c46e013dab981cec3a"} Dec 06 10:00:01 crc kubenswrapper[4895]: I1206 10:00:01.640046 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k" event={"ID":"85af027c-5f8a-4878-b931-f62903168109","Type":"ContainerStarted","Data":"dbc9215a9f12cfffafd4dd7531af6ffd6e1ea9006559ba2ba7eac032dcd50b0b"} Dec 06 10:00:03 crc kubenswrapper[4895]: I1206 10:00:03.078399 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k" Dec 06 10:00:03 crc kubenswrapper[4895]: I1206 10:00:03.126638 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqb7q\" (UniqueName: \"kubernetes.io/projected/85af027c-5f8a-4878-b931-f62903168109-kube-api-access-xqb7q\") pod \"85af027c-5f8a-4878-b931-f62903168109\" (UID: \"85af027c-5f8a-4878-b931-f62903168109\") " Dec 06 10:00:03 crc kubenswrapper[4895]: I1206 10:00:03.128152 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85af027c-5f8a-4878-b931-f62903168109-config-volume\") pod \"85af027c-5f8a-4878-b931-f62903168109\" (UID: \"85af027c-5f8a-4878-b931-f62903168109\") " Dec 06 10:00:03 crc kubenswrapper[4895]: I1206 10:00:03.128330 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85af027c-5f8a-4878-b931-f62903168109-secret-volume\") pod \"85af027c-5f8a-4878-b931-f62903168109\" (UID: \"85af027c-5f8a-4878-b931-f62903168109\") " Dec 06 10:00:03 crc kubenswrapper[4895]: I1206 10:00:03.128739 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85af027c-5f8a-4878-b931-f62903168109-config-volume" (OuterVolumeSpecName: "config-volume") pod "85af027c-5f8a-4878-b931-f62903168109" (UID: "85af027c-5f8a-4878-b931-f62903168109"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 10:00:03 crc kubenswrapper[4895]: I1206 10:00:03.129394 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85af027c-5f8a-4878-b931-f62903168109-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 10:00:03 crc kubenswrapper[4895]: I1206 10:00:03.132084 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85af027c-5f8a-4878-b931-f62903168109-kube-api-access-xqb7q" (OuterVolumeSpecName: "kube-api-access-xqb7q") pod "85af027c-5f8a-4878-b931-f62903168109" (UID: "85af027c-5f8a-4878-b931-f62903168109"). InnerVolumeSpecName "kube-api-access-xqb7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:00:03 crc kubenswrapper[4895]: I1206 10:00:03.133547 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85af027c-5f8a-4878-b931-f62903168109-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "85af027c-5f8a-4878-b931-f62903168109" (UID: "85af027c-5f8a-4878-b931-f62903168109"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 10:00:03 crc kubenswrapper[4895]: I1206 10:00:03.231025 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85af027c-5f8a-4878-b931-f62903168109-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 10:00:03 crc kubenswrapper[4895]: I1206 10:00:03.231064 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqb7q\" (UniqueName: \"kubernetes.io/projected/85af027c-5f8a-4878-b931-f62903168109-kube-api-access-xqb7q\") on node \"crc\" DevicePath \"\"" Dec 06 10:00:03 crc kubenswrapper[4895]: I1206 10:00:03.666106 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k" event={"ID":"85af027c-5f8a-4878-b931-f62903168109","Type":"ContainerDied","Data":"dbc9215a9f12cfffafd4dd7531af6ffd6e1ea9006559ba2ba7eac032dcd50b0b"} Dec 06 10:00:03 crc kubenswrapper[4895]: I1206 10:00:03.666189 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbc9215a9f12cfffafd4dd7531af6ffd6e1ea9006559ba2ba7eac032dcd50b0b" Dec 06 10:00:03 crc kubenswrapper[4895]: I1206 10:00:03.666302 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k" Dec 06 10:00:04 crc kubenswrapper[4895]: I1206 10:00:04.182947 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g"] Dec 06 10:00:04 crc kubenswrapper[4895]: I1206 10:00:04.201215 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416875-2j56g"] Dec 06 10:00:06 crc kubenswrapper[4895]: I1206 10:00:06.062635 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c85da892-c08c-4b71-82fe-86dc94d0e837" path="/var/lib/kubelet/pods/c85da892-c08c-4b71-82fe-86dc94d0e837/volumes" Dec 06 10:00:08 crc kubenswrapper[4895]: I1206 10:00:08.065136 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1" Dec 06 10:00:08 crc kubenswrapper[4895]: E1206 10:00:08.066274 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:00:21 crc kubenswrapper[4895]: I1206 10:00:21.051857 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1" Dec 06 10:00:21 crc kubenswrapper[4895]: E1206 10:00:21.052740 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:00:21 crc kubenswrapper[4895]: I1206 10:00:21.984613 4895 scope.go:117] "RemoveContainer" containerID="67481ac33953021457144827293631ae4bfa12324f3209e9c0aa52ad5949bbdb" Dec 06 10:00:26 
crc kubenswrapper[4895]: I1206 10:00:26.959743 4895 generic.go:334] "Generic (PLEG): container finished" podID="b60168b3-093c-4593-aa01-4840a4a50963" containerID="67e5925c439ebd7fa889168ad2c63edb5f1d86923e2f33362a0f615fdba0cc16" exitCode=137 Dec 06 10:00:26 crc kubenswrapper[4895]: I1206 10:00:26.959893 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"b60168b3-093c-4593-aa01-4840a4a50963","Type":"ContainerDied","Data":"67e5925c439ebd7fa889168ad2c63edb5f1d86923e2f33362a0f615fdba0cc16"} Dec 06 10:00:27 crc kubenswrapper[4895]: I1206 10:00:27.470991 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Dec 06 10:00:27 crc kubenswrapper[4895]: I1206 10:00:27.598236 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8vw2\" (UniqueName: \"kubernetes.io/projected/b60168b3-093c-4593-aa01-4840a4a50963-kube-api-access-l8vw2\") pod \"b60168b3-093c-4593-aa01-4840a4a50963\" (UID: \"b60168b3-093c-4593-aa01-4840a4a50963\") " Dec 06 10:00:27 crc kubenswrapper[4895]: I1206 10:00:27.599109 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b333292-9471-4959-b879-0cd798ecffba\") pod \"b60168b3-093c-4593-aa01-4840a4a50963\" (UID: \"b60168b3-093c-4593-aa01-4840a4a50963\") " Dec 06 10:00:27 crc kubenswrapper[4895]: I1206 10:00:27.615429 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b60168b3-093c-4593-aa01-4840a4a50963-kube-api-access-l8vw2" (OuterVolumeSpecName: "kube-api-access-l8vw2") pod "b60168b3-093c-4593-aa01-4840a4a50963" (UID: "b60168b3-093c-4593-aa01-4840a4a50963"). InnerVolumeSpecName "kube-api-access-l8vw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:00:27 crc kubenswrapper[4895]: I1206 10:00:27.621006 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b333292-9471-4959-b879-0cd798ecffba" (OuterVolumeSpecName: "mariadb-data") pod "b60168b3-093c-4593-aa01-4840a4a50963" (UID: "b60168b3-093c-4593-aa01-4840a4a50963"). InnerVolumeSpecName "pvc-0b333292-9471-4959-b879-0cd798ecffba". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 06 10:00:27 crc kubenswrapper[4895]: I1206 10:00:27.701785 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0b333292-9471-4959-b879-0cd798ecffba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b333292-9471-4959-b879-0cd798ecffba\") on node \"crc\" " Dec 06 10:00:27 crc kubenswrapper[4895]: I1206 10:00:27.701824 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8vw2\" (UniqueName: \"kubernetes.io/projected/b60168b3-093c-4593-aa01-4840a4a50963-kube-api-access-l8vw2\") on node \"crc\" DevicePath \"\"" Dec 06 10:00:27 crc kubenswrapper[4895]: I1206 10:00:27.734705 4895 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 06 10:00:27 crc kubenswrapper[4895]: I1206 10:00:27.735065 4895 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0b333292-9471-4959-b879-0cd798ecffba" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b333292-9471-4959-b879-0cd798ecffba") on node "crc" Dec 06 10:00:27 crc kubenswrapper[4895]: I1206 10:00:27.803998 4895 reconciler_common.go:293] "Volume detached for volume \"pvc-0b333292-9471-4959-b879-0cd798ecffba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b333292-9471-4959-b879-0cd798ecffba\") on node \"crc\" DevicePath \"\"" Dec 06 10:00:27 crc kubenswrapper[4895]: I1206 10:00:27.971860 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"b60168b3-093c-4593-aa01-4840a4a50963","Type":"ContainerDied","Data":"6c8022644b5ebd3f6ef7720dabcf35447d806a8cb0ddb508387c34e02eaeb622"} Dec 06 10:00:27 crc kubenswrapper[4895]: I1206 10:00:27.971929 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Dec 06 10:00:27 crc kubenswrapper[4895]: I1206 10:00:27.972296 4895 scope.go:117] "RemoveContainer" containerID="67e5925c439ebd7fa889168ad2c63edb5f1d86923e2f33362a0f615fdba0cc16" Dec 06 10:00:28 crc kubenswrapper[4895]: I1206 10:00:28.021239 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Dec 06 10:00:28 crc kubenswrapper[4895]: I1206 10:00:28.031443 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"] Dec 06 10:00:28 crc kubenswrapper[4895]: I1206 10:00:28.063164 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b60168b3-093c-4593-aa01-4840a4a50963" path="/var/lib/kubelet/pods/b60168b3-093c-4593-aa01-4840a4a50963/volumes" Dec 06 10:00:28 crc kubenswrapper[4895]: I1206 10:00:28.676332 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Dec 06 10:00:28 crc kubenswrapper[4895]: I1206 10:00:28.676752 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="9f09b933-ea93-487e-ad6b-1bba2855d42c" containerName="adoption" containerID="cri-o://dafeae8454b76f45df7091a5dc50b1c51461f1de6001c6ca5452b3b416864495" gracePeriod=30 Dec 06 10:00:36 crc kubenswrapper[4895]: I1206 10:00:36.051205 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1" Dec 06 10:00:36 crc kubenswrapper[4895]: E1206 10:00:36.052415 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:00:47 crc kubenswrapper[4895]: I1206 10:00:47.051701 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1" Dec 06 10:00:47 crc kubenswrapper[4895]: E1206 10:00:47.052880 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.273570 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.385624 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/9f09b933-ea93-487e-ad6b-1bba2855d42c-ovn-data-cert\") pod \"9f09b933-ea93-487e-ad6b-1bba2855d42c\" (UID: \"9f09b933-ea93-487e-ad6b-1bba2855d42c\") " Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.386211 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e601da3b-66d3-4777-b13b-f706562ac1df\") pod \"9f09b933-ea93-487e-ad6b-1bba2855d42c\" (UID: \"9f09b933-ea93-487e-ad6b-1bba2855d42c\") " Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.386395 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pdz4\" (UniqueName: \"kubernetes.io/projected/9f09b933-ea93-487e-ad6b-1bba2855d42c-kube-api-access-9pdz4\") pod \"9f09b933-ea93-487e-ad6b-1bba2855d42c\" (UID: \"9f09b933-ea93-487e-ad6b-1bba2855d42c\") " Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.393520 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f09b933-ea93-487e-ad6b-1bba2855d42c-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "9f09b933-ea93-487e-ad6b-1bba2855d42c" (UID: "9f09b933-ea93-487e-ad6b-1bba2855d42c"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.395876 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f09b933-ea93-487e-ad6b-1bba2855d42c-kube-api-access-9pdz4" (OuterVolumeSpecName: "kube-api-access-9pdz4") pod "9f09b933-ea93-487e-ad6b-1bba2855d42c" (UID: "9f09b933-ea93-487e-ad6b-1bba2855d42c"). InnerVolumeSpecName "kube-api-access-9pdz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.398771 4895 generic.go:334] "Generic (PLEG): container finished" podID="9f09b933-ea93-487e-ad6b-1bba2855d42c" containerID="dafeae8454b76f45df7091a5dc50b1c51461f1de6001c6ca5452b3b416864495" exitCode=137 Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.398840 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"9f09b933-ea93-487e-ad6b-1bba2855d42c","Type":"ContainerDied","Data":"dafeae8454b76f45df7091a5dc50b1c51461f1de6001c6ca5452b3b416864495"} Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.398876 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"9f09b933-ea93-487e-ad6b-1bba2855d42c","Type":"ContainerDied","Data":"0d9de8e3f8292a9eb560475484bfffea4363068a129731dc96c472e26495d9db"} Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.398899 4895 scope.go:117] "RemoveContainer" containerID="dafeae8454b76f45df7091a5dc50b1c51461f1de6001c6ca5452b3b416864495" Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.399163 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.440112 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e601da3b-66d3-4777-b13b-f706562ac1df" (OuterVolumeSpecName: "ovn-data") pod "9f09b933-ea93-487e-ad6b-1bba2855d42c" (UID: "9f09b933-ea93-487e-ad6b-1bba2855d42c"). InnerVolumeSpecName "pvc-e601da3b-66d3-4777-b13b-f706562ac1df". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.498056 4895 scope.go:117] "RemoveContainer" containerID="dafeae8454b76f45df7091a5dc50b1c51461f1de6001c6ca5452b3b416864495" Dec 06 10:00:59 crc kubenswrapper[4895]: E1206 10:00:59.498494 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dafeae8454b76f45df7091a5dc50b1c51461f1de6001c6ca5452b3b416864495\": container with ID starting with dafeae8454b76f45df7091a5dc50b1c51461f1de6001c6ca5452b3b416864495 not found: ID does not exist" containerID="dafeae8454b76f45df7091a5dc50b1c51461f1de6001c6ca5452b3b416864495" Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.498536 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dafeae8454b76f45df7091a5dc50b1c51461f1de6001c6ca5452b3b416864495"} err="failed to get container status \"dafeae8454b76f45df7091a5dc50b1c51461f1de6001c6ca5452b3b416864495\": rpc error: code = NotFound desc = could not find container \"dafeae8454b76f45df7091a5dc50b1c51461f1de6001c6ca5452b3b416864495\": container with ID starting with dafeae8454b76f45df7091a5dc50b1c51461f1de6001c6ca5452b3b416864495 not found: ID does not exist" Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.511851 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pdz4\" (UniqueName: \"kubernetes.io/projected/9f09b933-ea93-487e-ad6b-1bba2855d42c-kube-api-access-9pdz4\") on node \"crc\" DevicePath \"\"" Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.511880 4895 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/9f09b933-ea93-487e-ad6b-1bba2855d42c-ovn-data-cert\") on node \"crc\" DevicePath \"\"" Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.511912 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e601da3b-66d3-4777-b13b-f706562ac1df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e601da3b-66d3-4777-b13b-f706562ac1df\") on node \"crc\" " Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.538092 4895 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.538277 4895 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e601da3b-66d3-4777-b13b-f706562ac1df" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e601da3b-66d3-4777-b13b-f706562ac1df") on node "crc" Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.614287 4895 reconciler_common.go:293] "Volume detached for volume \"pvc-e601da3b-66d3-4777-b13b-f706562ac1df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e601da3b-66d3-4777-b13b-f706562ac1df\") on node \"crc\" DevicePath \"\"" Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.736208 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Dec 06 10:00:59 crc kubenswrapper[4895]: I1206 10:00:59.745492 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"] Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.050744 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1" Dec 06 10:01:00 crc kubenswrapper[4895]: E1206 10:01:00.051065 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.063184 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f09b933-ea93-487e-ad6b-1bba2855d42c" path="/var/lib/kubelet/pods/9f09b933-ea93-487e-ad6b-1bba2855d42c/volumes" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.176114 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29416921-v5lvt"] Dec 06 10:01:00 crc kubenswrapper[4895]: E1206 10:01:00.176558 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b60168b3-093c-4593-aa01-4840a4a50963" containerName="adoption" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.176574 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b60168b3-093c-4593-aa01-4840a4a50963" containerName="adoption" Dec 06 10:01:00 crc kubenswrapper[4895]: E1206 10:01:00.176582 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f09b933-ea93-487e-ad6b-1bba2855d42c" containerName="adoption" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.176588 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f09b933-ea93-487e-ad6b-1bba2855d42c" containerName="adoption" Dec 06 10:01:00 crc kubenswrapper[4895]: E1206 10:01:00.176620 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85af027c-5f8a-4878-b931-f62903168109" containerName="collect-profiles" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.176626 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="85af027c-5f8a-4878-b931-f62903168109" containerName="collect-profiles" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.176818 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b60168b3-093c-4593-aa01-4840a4a50963" containerName="adoption" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.176832 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="85af027c-5f8a-4878-b931-f62903168109" containerName="collect-profiles" 
Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.176850 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f09b933-ea93-487e-ad6b-1bba2855d42c" containerName="adoption" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.177739 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29416921-v5lvt" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.192697 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29416921-v5lvt"] Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.328974 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbqw9\" (UniqueName: \"kubernetes.io/projected/61170183-aae8-4399-957f-1f5f07320807-kube-api-access-qbqw9\") pod \"keystone-cron-29416921-v5lvt\" (UID: \"61170183-aae8-4399-957f-1f5f07320807\") " pod="openstack/keystone-cron-29416921-v5lvt" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.329243 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/61170183-aae8-4399-957f-1f5f07320807-fernet-keys\") pod \"keystone-cron-29416921-v5lvt\" (UID: \"61170183-aae8-4399-957f-1f5f07320807\") " pod="openstack/keystone-cron-29416921-v5lvt" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.329325 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61170183-aae8-4399-957f-1f5f07320807-config-data\") pod \"keystone-cron-29416921-v5lvt\" (UID: \"61170183-aae8-4399-957f-1f5f07320807\") " pod="openstack/keystone-cron-29416921-v5lvt" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.329363 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61170183-aae8-4399-957f-1f5f07320807-combined-ca-bundle\") pod \"keystone-cron-29416921-v5lvt\" (UID: \"61170183-aae8-4399-957f-1f5f07320807\") " pod="openstack/keystone-cron-29416921-v5lvt" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.431214 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbqw9\" (UniqueName: \"kubernetes.io/projected/61170183-aae8-4399-957f-1f5f07320807-kube-api-access-qbqw9\") pod \"keystone-cron-29416921-v5lvt\" (UID: \"61170183-aae8-4399-957f-1f5f07320807\") " pod="openstack/keystone-cron-29416921-v5lvt" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.431650 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/61170183-aae8-4399-957f-1f5f07320807-fernet-keys\") pod \"keystone-cron-29416921-v5lvt\" (UID: \"61170183-aae8-4399-957f-1f5f07320807\") " pod="openstack/keystone-cron-29416921-v5lvt" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.431890 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61170183-aae8-4399-957f-1f5f07320807-config-data\") pod \"keystone-cron-29416921-v5lvt\" (UID: \"61170183-aae8-4399-957f-1f5f07320807\") " pod="openstack/keystone-cron-29416921-v5lvt" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.431936 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/61170183-aae8-4399-957f-1f5f07320807-combined-ca-bundle\") pod \"keystone-cron-29416921-v5lvt\" (UID: \"61170183-aae8-4399-957f-1f5f07320807\") " pod="openstack/keystone-cron-29416921-v5lvt" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.437204 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/61170183-aae8-4399-957f-1f5f07320807-fernet-keys\") pod \"keystone-cron-29416921-v5lvt\" (UID: \"61170183-aae8-4399-957f-1f5f07320807\") " pod="openstack/keystone-cron-29416921-v5lvt" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.450016 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61170183-aae8-4399-957f-1f5f07320807-config-data\") pod \"keystone-cron-29416921-v5lvt\" (UID: \"61170183-aae8-4399-957f-1f5f07320807\") " pod="openstack/keystone-cron-29416921-v5lvt" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.452586 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61170183-aae8-4399-957f-1f5f07320807-combined-ca-bundle\") pod \"keystone-cron-29416921-v5lvt\" (UID: \"61170183-aae8-4399-957f-1f5f07320807\") " pod="openstack/keystone-cron-29416921-v5lvt" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.473723 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbqw9\" (UniqueName: \"kubernetes.io/projected/61170183-aae8-4399-957f-1f5f07320807-kube-api-access-qbqw9\") pod \"keystone-cron-29416921-v5lvt\" (UID: \"61170183-aae8-4399-957f-1f5f07320807\") " pod="openstack/keystone-cron-29416921-v5lvt" Dec 06 10:01:00 crc kubenswrapper[4895]: I1206 10:01:00.513291 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29416921-v5lvt" Dec 06 10:01:01 crc kubenswrapper[4895]: I1206 10:01:01.036031 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29416921-v5lvt"] Dec 06 10:01:01 crc kubenswrapper[4895]: I1206 10:01:01.440704 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416921-v5lvt" event={"ID":"61170183-aae8-4399-957f-1f5f07320807","Type":"ContainerStarted","Data":"833731ee30d79709cfa175b2a51dd133573732b9cb22fc50bf4f95a7971dcb43"} Dec 06 10:01:01 crc kubenswrapper[4895]: I1206 10:01:01.441077 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416921-v5lvt" event={"ID":"61170183-aae8-4399-957f-1f5f07320807","Type":"ContainerStarted","Data":"452df7273bcf05e04f5fbbf94e3d8eea0864b48431dd2da08e6a99cb60d82be0"} Dec 06 10:01:01 crc kubenswrapper[4895]: I1206 10:01:01.462860 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29416921-v5lvt" podStartSLOduration=1.462831989 podStartE2EDuration="1.462831989s" podCreationTimestamp="2025-12-06 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 10:01:01.453611199 +0000 UTC m=+11023.855000069" watchObservedRunningTime="2025-12-06 10:01:01.462831989 +0000 UTC m=+11023.864220889" Dec 06 10:01:03 crc kubenswrapper[4895]: I1206 10:01:03.466679 4895 generic.go:334] "Generic (PLEG): container finished" podID="61170183-aae8-4399-957f-1f5f07320807" containerID="833731ee30d79709cfa175b2a51dd133573732b9cb22fc50bf4f95a7971dcb43" exitCode=0 Dec 06 10:01:03 crc kubenswrapper[4895]: I1206 10:01:03.466808 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416921-v5lvt" event={"ID":"61170183-aae8-4399-957f-1f5f07320807","Type":"ContainerDied","Data":"833731ee30d79709cfa175b2a51dd133573732b9cb22fc50bf4f95a7971dcb43"} Dec 06 10:01:04 crc kubenswrapper[4895]: I1206 10:01:04.900313 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29416921-v5lvt" Dec 06 10:01:05 crc kubenswrapper[4895]: I1206 10:01:05.032068 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/61170183-aae8-4399-957f-1f5f07320807-fernet-keys\") pod \"61170183-aae8-4399-957f-1f5f07320807\" (UID: \"61170183-aae8-4399-957f-1f5f07320807\") " Dec 06 10:01:05 crc kubenswrapper[4895]: I1206 10:01:05.032575 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbqw9\" (UniqueName: \"kubernetes.io/projected/61170183-aae8-4399-957f-1f5f07320807-kube-api-access-qbqw9\") pod \"61170183-aae8-4399-957f-1f5f07320807\" (UID: \"61170183-aae8-4399-957f-1f5f07320807\") " Dec 06 10:01:05 crc kubenswrapper[4895]: I1206 10:01:05.032961 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61170183-aae8-4399-957f-1f5f07320807-combined-ca-bundle\") pod \"61170183-aae8-4399-957f-1f5f07320807\" (UID: \"61170183-aae8-4399-957f-1f5f07320807\") " Dec 06 10:01:05 crc kubenswrapper[4895]: I1206 10:01:05.033186 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61170183-aae8-4399-957f-1f5f07320807-config-data\") pod \"61170183-aae8-4399-957f-1f5f07320807\" (UID: \"61170183-aae8-4399-957f-1f5f07320807\") " Dec 06 10:01:05 crc kubenswrapper[4895]: I1206 10:01:05.039614 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61170183-aae8-4399-957f-1f5f07320807-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "61170183-aae8-4399-957f-1f5f07320807" (UID: "61170183-aae8-4399-957f-1f5f07320807"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 10:01:05 crc kubenswrapper[4895]: I1206 10:01:05.045276 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61170183-aae8-4399-957f-1f5f07320807-kube-api-access-qbqw9" (OuterVolumeSpecName: "kube-api-access-qbqw9") pod "61170183-aae8-4399-957f-1f5f07320807" (UID: "61170183-aae8-4399-957f-1f5f07320807"). InnerVolumeSpecName "kube-api-access-qbqw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:01:05 crc kubenswrapper[4895]: I1206 10:01:05.068889 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61170183-aae8-4399-957f-1f5f07320807-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61170183-aae8-4399-957f-1f5f07320807" (UID: "61170183-aae8-4399-957f-1f5f07320807"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 10:01:05 crc kubenswrapper[4895]: I1206 10:01:05.123687 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61170183-aae8-4399-957f-1f5f07320807-config-data" (OuterVolumeSpecName: "config-data") pod "61170183-aae8-4399-957f-1f5f07320807" (UID: "61170183-aae8-4399-957f-1f5f07320807"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 10:01:05 crc kubenswrapper[4895]: I1206 10:01:05.135884 4895 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/61170183-aae8-4399-957f-1f5f07320807-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 06 10:01:05 crc kubenswrapper[4895]: I1206 10:01:05.135920 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbqw9\" (UniqueName: \"kubernetes.io/projected/61170183-aae8-4399-957f-1f5f07320807-kube-api-access-qbqw9\") on node \"crc\" DevicePath \"\"" Dec 06 10:01:05 crc kubenswrapper[4895]: I1206 10:01:05.135933 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61170183-aae8-4399-957f-1f5f07320807-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 10:01:05 crc kubenswrapper[4895]: I1206 10:01:05.135945 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61170183-aae8-4399-957f-1f5f07320807-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 10:01:05 crc kubenswrapper[4895]: I1206 10:01:05.495140 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416921-v5lvt" event={"ID":"61170183-aae8-4399-957f-1f5f07320807","Type":"ContainerDied","Data":"452df7273bcf05e04f5fbbf94e3d8eea0864b48431dd2da08e6a99cb60d82be0"} Dec 06 10:01:05 crc kubenswrapper[4895]: I1206 10:01:05.495184 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="452df7273bcf05e04f5fbbf94e3d8eea0864b48431dd2da08e6a99cb60d82be0" Dec 06 10:01:05 crc kubenswrapper[4895]: I1206 10:01:05.495229 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29416921-v5lvt" Dec 06 10:01:14 crc kubenswrapper[4895]: I1206 10:01:14.359718 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9hj6k"] Dec 06 10:01:14 crc kubenswrapper[4895]: E1206 10:01:14.360769 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61170183-aae8-4399-957f-1f5f07320807" containerName="keystone-cron" Dec 06 10:01:14 crc kubenswrapper[4895]: I1206 10:01:14.360785 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="61170183-aae8-4399-957f-1f5f07320807" containerName="keystone-cron" Dec 06 10:01:14 crc kubenswrapper[4895]: I1206 10:01:14.361103 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="61170183-aae8-4399-957f-1f5f07320807" containerName="keystone-cron" Dec 06 10:01:14 crc kubenswrapper[4895]: I1206 10:01:14.363019 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9hj6k" Dec 06 10:01:14 crc kubenswrapper[4895]: I1206 10:01:14.385800 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9hj6k"] Dec 06 10:01:14 crc kubenswrapper[4895]: I1206 10:01:14.451097 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vslnt\" (UniqueName: \"kubernetes.io/projected/b52bbeb3-7f02-45f9-a4d2-8697dedfc80c-kube-api-access-vslnt\") pod \"community-operators-9hj6k\" (UID: \"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c\") " pod="openshift-marketplace/community-operators-9hj6k" Dec 06 10:01:14 crc kubenswrapper[4895]: I1206 10:01:14.451277 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b52bbeb3-7f02-45f9-a4d2-8697dedfc80c-catalog-content\") pod \"community-operators-9hj6k\" (UID: \"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c\") " pod="openshift-marketplace/community-operators-9hj6k" Dec 06 10:01:14 crc kubenswrapper[4895]: I1206 10:01:14.451340 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b52bbeb3-7f02-45f9-a4d2-8697dedfc80c-utilities\") pod \"community-operators-9hj6k\" (UID: \"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c\") " pod="openshift-marketplace/community-operators-9hj6k" Dec 06 10:01:14 crc kubenswrapper[4895]: I1206 10:01:14.553707 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b52bbeb3-7f02-45f9-a4d2-8697dedfc80c-catalog-content\") pod \"community-operators-9hj6k\" (UID: \"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c\") " pod="openshift-marketplace/community-operators-9hj6k" Dec 06 10:01:14 crc kubenswrapper[4895]: I1206 10:01:14.555061 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b52bbeb3-7f02-45f9-a4d2-8697dedfc80c-utilities\") pod \"community-operators-9hj6k\" (UID: \"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c\") " pod="openshift-marketplace/community-operators-9hj6k" Dec 06 10:01:14 crc kubenswrapper[4895]: I1206 10:01:14.555182 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vslnt\" (UniqueName: \"kubernetes.io/projected/b52bbeb3-7f02-45f9-a4d2-8697dedfc80c-kube-api-access-vslnt\") pod \"community-operators-9hj6k\" (UID: \"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c\") " pod="openshift-marketplace/community-operators-9hj6k" Dec 06 10:01:14 crc kubenswrapper[4895]: I1206 10:01:14.554937 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b52bbeb3-7f02-45f9-a4d2-8697dedfc80c-catalog-content\") pod \"community-operators-9hj6k\" (UID: \"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c\") " pod="openshift-marketplace/community-operators-9hj6k" Dec 06 10:01:14 crc kubenswrapper[4895]: I1206 10:01:14.555854 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b52bbeb3-7f02-45f9-a4d2-8697dedfc80c-utilities\") pod \"community-operators-9hj6k\" (UID: \"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c\") " pod="openshift-marketplace/community-operators-9hj6k" Dec 06 10:01:14 crc kubenswrapper[4895]: I1206 10:01:14.579718 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vslnt\" (UniqueName: \"kubernetes.io/projected/b52bbeb3-7f02-45f9-a4d2-8697dedfc80c-kube-api-access-vslnt\") pod \"community-operators-9hj6k\" (UID: \"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c\") " pod="openshift-marketplace/community-operators-9hj6k" Dec 06 10:01:14 crc kubenswrapper[4895]: I1206 10:01:14.728885 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hj6k" Dec 06 10:01:15 crc kubenswrapper[4895]: I1206 10:01:15.051424 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1" Dec 06 10:01:15 crc kubenswrapper[4895]: E1206 10:01:15.051639 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:01:15 crc kubenswrapper[4895]: I1206 10:01:15.233430 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9hj6k"] Dec 06 10:01:15 crc kubenswrapper[4895]: I1206 10:01:15.621728 4895 generic.go:334] "Generic (PLEG): container finished" podID="b52bbeb3-7f02-45f9-a4d2-8697dedfc80c" containerID="140977bfa86d05f08232704905d00d033391455eb299dde29290b85bc795842f" exitCode=0 Dec 06 10:01:15 crc kubenswrapper[4895]: I1206 10:01:15.622218 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hj6k" event={"ID":"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c","Type":"ContainerDied","Data":"140977bfa86d05f08232704905d00d033391455eb299dde29290b85bc795842f"} Dec 06 10:01:15 crc kubenswrapper[4895]: I1206 10:01:15.622268 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hj6k" event={"ID":"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c","Type":"ContainerStarted","Data":"d9e3541f91bb654bcb5c3d78ff2d4289052e2ffc48ca0d58c7763fc174a9d7dc"} Dec 06 10:01:16 crc kubenswrapper[4895]: I1206 10:01:16.646837 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hj6k" event={"ID":"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c","Type":"ContainerStarted","Data":"8e7554c93478b3ea9ab505b6885bb62a638a43d799467b9c1427a8184413606e"} Dec 06 10:01:17 crc kubenswrapper[4895]: I1206 10:01:17.664558 4895 generic.go:334] "Generic (PLEG): container finished" podID="b52bbeb3-7f02-45f9-a4d2-8697dedfc80c" containerID="8e7554c93478b3ea9ab505b6885bb62a638a43d799467b9c1427a8184413606e" exitCode=0 Dec 06 10:01:17 crc kubenswrapper[4895]: I1206 10:01:17.664636 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hj6k" event={"ID":"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c","Type":"ContainerDied","Data":"8e7554c93478b3ea9ab505b6885bb62a638a43d799467b9c1427a8184413606e"} Dec 06 10:01:18 crc kubenswrapper[4895]: I1206 10:01:18.686379 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hj6k" event={"ID":"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c","Type":"ContainerStarted","Data":"d14c0bc56b1ce213ec26b36178569c6a84bdc7f6619108b7fd9329df1654be1c"} Dec 06 10:01:18 crc kubenswrapper[4895]: I1206 10:01:18.720726 4895 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9hj6k" podStartSLOduration=2.265536658 podStartE2EDuration="4.720707044s" podCreationTimestamp="2025-12-06 10:01:14 +0000 UTC" firstStartedPulling="2025-12-06 10:01:15.624629199 +0000 UTC m=+11038.026018109" lastFinishedPulling="2025-12-06 10:01:18.079799625 +0000 UTC m=+11040.481188495" observedRunningTime="2025-12-06 10:01:18.711084112 +0000 UTC m=+11041.112472992" watchObservedRunningTime="2025-12-06 10:01:18.720707044 +0000 UTC m=+11041.122095914"
Dec 06 10:01:24 crc kubenswrapper[4895]: I1206 10:01:24.730125 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9hj6k"
Dec 06 10:01:24 crc kubenswrapper[4895]: I1206 10:01:24.730789 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9hj6k"
Dec 06 10:01:24 crc kubenswrapper[4895]: I1206 10:01:24.823102 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9hj6k"
Dec 06 10:01:24 crc kubenswrapper[4895]: I1206 10:01:24.893545 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9hj6k"
Dec 06 10:01:25 crc kubenswrapper[4895]: I1206 10:01:25.062148 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9hj6k"]
Dec 06 10:01:26 crc kubenswrapper[4895]: I1206 10:01:26.783887 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9hj6k" podUID="b52bbeb3-7f02-45f9-a4d2-8697dedfc80c" containerName="registry-server" containerID="cri-o://d14c0bc56b1ce213ec26b36178569c6a84bdc7f6619108b7fd9329df1654be1c" gracePeriod=2
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.387252 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hj6k"
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.464006 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b52bbeb3-7f02-45f9-a4d2-8697dedfc80c-catalog-content\") pod \"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c\" (UID: \"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c\") "
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.464251 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b52bbeb3-7f02-45f9-a4d2-8697dedfc80c-utilities\") pod \"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c\" (UID: \"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c\") "
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.464279 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vslnt\" (UniqueName: \"kubernetes.io/projected/b52bbeb3-7f02-45f9-a4d2-8697dedfc80c-kube-api-access-vslnt\") pod \"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c\" (UID: \"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c\") "
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.465747 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b52bbeb3-7f02-45f9-a4d2-8697dedfc80c-utilities" (OuterVolumeSpecName: "utilities") pod "b52bbeb3-7f02-45f9-a4d2-8697dedfc80c" (UID: "b52bbeb3-7f02-45f9-a4d2-8697dedfc80c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.474840 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b52bbeb3-7f02-45f9-a4d2-8697dedfc80c-kube-api-access-vslnt" (OuterVolumeSpecName: "kube-api-access-vslnt") pod "b52bbeb3-7f02-45f9-a4d2-8697dedfc80c" (UID: "b52bbeb3-7f02-45f9-a4d2-8697dedfc80c"). InnerVolumeSpecName "kube-api-access-vslnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.512003 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b52bbeb3-7f02-45f9-a4d2-8697dedfc80c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b52bbeb3-7f02-45f9-a4d2-8697dedfc80c" (UID: "b52bbeb3-7f02-45f9-a4d2-8697dedfc80c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.569520 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b52bbeb3-7f02-45f9-a4d2-8697dedfc80c-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.569580 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b52bbeb3-7f02-45f9-a4d2-8697dedfc80c-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.569602 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vslnt\" (UniqueName: \"kubernetes.io/projected/b52bbeb3-7f02-45f9-a4d2-8697dedfc80c-kube-api-access-vslnt\") on node \"crc\" DevicePath \"\""
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.809785 4895 generic.go:334] "Generic (PLEG): container finished" podID="b52bbeb3-7f02-45f9-a4d2-8697dedfc80c" containerID="d14c0bc56b1ce213ec26b36178569c6a84bdc7f6619108b7fd9329df1654be1c" exitCode=0
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.809853 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hj6k" event={"ID":"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c","Type":"ContainerDied","Data":"d14c0bc56b1ce213ec26b36178569c6a84bdc7f6619108b7fd9329df1654be1c"}
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.809902 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hj6k" event={"ID":"b52bbeb3-7f02-45f9-a4d2-8697dedfc80c","Type":"ContainerDied","Data":"d9e3541f91bb654bcb5c3d78ff2d4289052e2ffc48ca0d58c7763fc174a9d7dc"}
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.809931 4895 scope.go:117] "RemoveContainer" containerID="d14c0bc56b1ce213ec26b36178569c6a84bdc7f6619108b7fd9329df1654be1c"
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.809933 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hj6k"
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.855997 4895 scope.go:117] "RemoveContainer" containerID="8e7554c93478b3ea9ab505b6885bb62a638a43d799467b9c1427a8184413606e"
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.868919 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9hj6k"]
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.881089 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9hj6k"]
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.890236 4895 scope.go:117] "RemoveContainer" containerID="140977bfa86d05f08232704905d00d033391455eb299dde29290b85bc795842f"
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.934906 4895 scope.go:117] "RemoveContainer" containerID="d14c0bc56b1ce213ec26b36178569c6a84bdc7f6619108b7fd9329df1654be1c"
Dec 06 10:01:27 crc kubenswrapper[4895]: E1206 10:01:27.935405 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d14c0bc56b1ce213ec26b36178569c6a84bdc7f6619108b7fd9329df1654be1c\": container with ID starting with d14c0bc56b1ce213ec26b36178569c6a84bdc7f6619108b7fd9329df1654be1c not found: ID does not exist" containerID="d14c0bc56b1ce213ec26b36178569c6a84bdc7f6619108b7fd9329df1654be1c"
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.935433 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d14c0bc56b1ce213ec26b36178569c6a84bdc7f6619108b7fd9329df1654be1c"} err="failed to get container status \"d14c0bc56b1ce213ec26b36178569c6a84bdc7f6619108b7fd9329df1654be1c\": rpc error: code = NotFound desc = could not find container \"d14c0bc56b1ce213ec26b36178569c6a84bdc7f6619108b7fd9329df1654be1c\": container with ID starting with d14c0bc56b1ce213ec26b36178569c6a84bdc7f6619108b7fd9329df1654be1c not found: ID does not exist"
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.935451 4895 scope.go:117] "RemoveContainer" containerID="8e7554c93478b3ea9ab505b6885bb62a638a43d799467b9c1427a8184413606e"
Dec 06 10:01:27 crc kubenswrapper[4895]: E1206 10:01:27.935935 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e7554c93478b3ea9ab505b6885bb62a638a43d799467b9c1427a8184413606e\": container with ID starting with 8e7554c93478b3ea9ab505b6885bb62a638a43d799467b9c1427a8184413606e not found: ID does not exist" containerID="8e7554c93478b3ea9ab505b6885bb62a638a43d799467b9c1427a8184413606e"
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.935955 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e7554c93478b3ea9ab505b6885bb62a638a43d799467b9c1427a8184413606e"} err="failed to get container status \"8e7554c93478b3ea9ab505b6885bb62a638a43d799467b9c1427a8184413606e\": rpc error: code = NotFound desc = could not find container \"8e7554c93478b3ea9ab505b6885bb62a638a43d799467b9c1427a8184413606e\": container with ID starting with 8e7554c93478b3ea9ab505b6885bb62a638a43d799467b9c1427a8184413606e not found: ID does not exist"
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.935966 4895 scope.go:117] "RemoveContainer" containerID="140977bfa86d05f08232704905d00d033391455eb299dde29290b85bc795842f"
Dec 06 10:01:27 crc kubenswrapper[4895]: E1206 10:01:27.936219 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140977bfa86d05f08232704905d00d033391455eb299dde29290b85bc795842f\": container with ID starting with 140977bfa86d05f08232704905d00d033391455eb299dde29290b85bc795842f not found: ID does not exist" containerID="140977bfa86d05f08232704905d00d033391455eb299dde29290b85bc795842f"
Dec 06 10:01:27 crc kubenswrapper[4895]: I1206 10:01:27.936243 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140977bfa86d05f08232704905d00d033391455eb299dde29290b85bc795842f"} err="failed to get container status \"140977bfa86d05f08232704905d00d033391455eb299dde29290b85bc795842f\": rpc error: code = NotFound desc = could not find container \"140977bfa86d05f08232704905d00d033391455eb299dde29290b85bc795842f\": container with ID starting with 140977bfa86d05f08232704905d00d033391455eb299dde29290b85bc795842f not found: ID does not exist"
Dec 06 10:01:28 crc kubenswrapper[4895]: I1206 10:01:28.064277 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b52bbeb3-7f02-45f9-a4d2-8697dedfc80c" path="/var/lib/kubelet/pods/b52bbeb3-7f02-45f9-a4d2-8697dedfc80c/volumes"
Dec 06 10:01:30 crc kubenswrapper[4895]: I1206 10:01:30.051805 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1"
Dec 06 10:01:30 crc kubenswrapper[4895]: E1206 10:01:30.053206 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 10:01:45 crc kubenswrapper[4895]: I1206 10:01:45.051765 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1"
Dec 06 10:01:45 crc kubenswrapper[4895]: E1206 10:01:45.052627 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 10:01:59 crc kubenswrapper[4895]: I1206 10:01:59.050608 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1"
Dec 06 10:01:59 crc kubenswrapper[4895]: E1206 10:01:59.051317 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 10:01:59 crc kubenswrapper[4895]: I1206 10:01:59.732746 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-zfwsz" podUID="69aac7da-152a-4314-92fd-1f4aea0140be" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 06 10:02:12 crc kubenswrapper[4895]: I1206 10:02:12.054621 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1"
Dec 06 10:02:12 crc kubenswrapper[4895]: E1206 10:02:12.055312 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 10:02:23 crc kubenswrapper[4895]: I1206 10:02:23.053874 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1"
Dec 06 10:02:23 crc kubenswrapper[4895]: E1206 10:02:23.054728 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 10:02:34 crc kubenswrapper[4895]: I1206 10:02:34.051060 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1"
Dec 06 10:02:34 crc kubenswrapper[4895]: E1206 10:02:34.052257 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 10:02:49 crc kubenswrapper[4895]: I1206 10:02:49.051788 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1"
Dec 06 10:02:49 crc kubenswrapper[4895]: E1206 10:02:49.052929 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 10:03:02 crc kubenswrapper[4895]: I1206 10:03:02.051060 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1"
Dec 06 10:03:02 crc kubenswrapper[4895]: E1206 10:03:02.051872 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 10:03:16 crc kubenswrapper[4895]: I1206 10:03:16.051955 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1"
Dec 06 10:03:16 crc kubenswrapper[4895]: E1206 10:03:16.053272 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 10:03:29 crc kubenswrapper[4895]: I1206 10:03:29.051872 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1"
Dec 06 10:03:29 crc kubenswrapper[4895]: E1206 10:03:29.052942 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 10:03:43 crc kubenswrapper[4895]: I1206 10:03:43.520906 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-txb45"]
Dec 06 10:03:43 crc kubenswrapper[4895]: E1206 10:03:43.523974 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52bbeb3-7f02-45f9-a4d2-8697dedfc80c" containerName="extract-content"
Dec 06 10:03:43 crc kubenswrapper[4895]: I1206 10:03:43.524117 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52bbeb3-7f02-45f9-a4d2-8697dedfc80c" containerName="extract-content"
Dec 06 10:03:43 crc kubenswrapper[4895]: E1206 10:03:43.524258 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52bbeb3-7f02-45f9-a4d2-8697dedfc80c" containerName="extract-utilities"
Dec 06 10:03:43 crc kubenswrapper[4895]: I1206 10:03:43.524342 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52bbeb3-7f02-45f9-a4d2-8697dedfc80c" containerName="extract-utilities"
Dec 06 10:03:43 crc kubenswrapper[4895]: E1206 10:03:43.524433 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52bbeb3-7f02-45f9-a4d2-8697dedfc80c" containerName="registry-server"
Dec 06 10:03:43 crc kubenswrapper[4895]: I1206 10:03:43.524543 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52bbeb3-7f02-45f9-a4d2-8697dedfc80c" containerName="registry-server"
Dec 06 10:03:43 crc kubenswrapper[4895]: I1206 10:03:43.524920 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b52bbeb3-7f02-45f9-a4d2-8697dedfc80c" containerName="registry-server"
Dec 06 10:03:43 crc kubenswrapper[4895]: I1206 10:03:43.527192 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-txb45"
Dec 06 10:03:43 crc kubenswrapper[4895]: I1206 10:03:43.542280 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-txb45"]
Dec 06 10:03:43 crc kubenswrapper[4895]: I1206 10:03:43.670676 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de74afa2-ccbc-4220-99c3-b3bfd0fba1c5-catalog-content\") pod \"redhat-operators-txb45\" (UID: \"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5\") " pod="openshift-marketplace/redhat-operators-txb45"
Dec 06 10:03:43 crc kubenswrapper[4895]: I1206 10:03:43.671705 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de74afa2-ccbc-4220-99c3-b3bfd0fba1c5-utilities\") pod \"redhat-operators-txb45\" (UID: \"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5\") " pod="openshift-marketplace/redhat-operators-txb45"
Dec 06 10:03:43 crc kubenswrapper[4895]: I1206 10:03:43.671895 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s7ht\" (UniqueName: \"kubernetes.io/projected/de74afa2-ccbc-4220-99c3-b3bfd0fba1c5-kube-api-access-7s7ht\") pod \"redhat-operators-txb45\" (UID: \"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5\") " pod="openshift-marketplace/redhat-operators-txb45"
Dec 06 10:03:43 crc kubenswrapper[4895]: I1206 10:03:43.774150 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de74afa2-ccbc-4220-99c3-b3bfd0fba1c5-utilities\") pod \"redhat-operators-txb45\" (UID: \"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5\") " pod="openshift-marketplace/redhat-operators-txb45"
Dec 06 10:03:43 crc kubenswrapper[4895]: I1206 10:03:43.774220 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s7ht\" (UniqueName: \"kubernetes.io/projected/de74afa2-ccbc-4220-99c3-b3bfd0fba1c5-kube-api-access-7s7ht\") pod \"redhat-operators-txb45\" (UID: \"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5\") " pod="openshift-marketplace/redhat-operators-txb45"
Dec 06 10:03:43 crc kubenswrapper[4895]: I1206 10:03:43.774302 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de74afa2-ccbc-4220-99c3-b3bfd0fba1c5-catalog-content\") pod \"redhat-operators-txb45\" (UID: \"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5\") " pod="openshift-marketplace/redhat-operators-txb45"
Dec 06 10:03:43 crc kubenswrapper[4895]: I1206 10:03:43.774804 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de74afa2-ccbc-4220-99c3-b3bfd0fba1c5-utilities\") pod \"redhat-operators-txb45\" (UID: \"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5\") " pod="openshift-marketplace/redhat-operators-txb45"
Dec 06 10:03:43 crc kubenswrapper[4895]: I1206 10:03:43.774852 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de74afa2-ccbc-4220-99c3-b3bfd0fba1c5-catalog-content\") pod \"redhat-operators-txb45\" (UID: \"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5\") " pod="openshift-marketplace/redhat-operators-txb45"
Dec 06 10:03:43 crc kubenswrapper[4895]: I1206 10:03:43.816535 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s7ht\" (UniqueName: \"kubernetes.io/projected/de74afa2-ccbc-4220-99c3-b3bfd0fba1c5-kube-api-access-7s7ht\") pod \"redhat-operators-txb45\" (UID: \"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5\") " pod="openshift-marketplace/redhat-operators-txb45"
Dec 06 10:03:43 crc kubenswrapper[4895]: I1206 10:03:43.851618 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-txb45"
Dec 06 10:03:44 crc kubenswrapper[4895]: I1206 10:03:44.051612 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1"
Dec 06 10:03:44 crc kubenswrapper[4895]: I1206 10:03:44.314400 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-txb45"]
Dec 06 10:03:44 crc kubenswrapper[4895]: W1206 10:03:44.322791 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde74afa2_ccbc_4220_99c3_b3bfd0fba1c5.slice/crio-3d76a32fa1aa8347b1f672109142042b4b20f05528dd350347a9220ca4668a48 WatchSource:0}: Error finding container 3d76a32fa1aa8347b1f672109142042b4b20f05528dd350347a9220ca4668a48: Status 404 returned error can't find the container with id 3d76a32fa1aa8347b1f672109142042b4b20f05528dd350347a9220ca4668a48
Dec 06 10:03:44 crc kubenswrapper[4895]: I1206 10:03:44.655182 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"ddd61a6f3cb744e0308de1a38b279e3ed49e944ef9f6e98444f7e803ced0b2ce"}
Dec 06 10:03:44 crc kubenswrapper[4895]: I1206 10:03:44.658131 4895 generic.go:334] "Generic (PLEG): container finished" podID="de74afa2-ccbc-4220-99c3-b3bfd0fba1c5" containerID="72b5ecf2f952b59e0280d31429da3fa24761a1383bebb81faba755bf4f243176" exitCode=0
Dec 06 10:03:44 crc kubenswrapper[4895]: I1206 10:03:44.658183 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txb45" event={"ID":"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5","Type":"ContainerDied","Data":"72b5ecf2f952b59e0280d31429da3fa24761a1383bebb81faba755bf4f243176"}
Dec 06 10:03:44 crc kubenswrapper[4895]: I1206 10:03:44.658203 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txb45" event={"ID":"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5","Type":"ContainerStarted","Data":"3d76a32fa1aa8347b1f672109142042b4b20f05528dd350347a9220ca4668a48"}
Dec 06 10:03:45 crc kubenswrapper[4895]: I1206 10:03:45.673649 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txb45" event={"ID":"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5","Type":"ContainerStarted","Data":"f47fdfe6ccbc21f37daf59b1677278ccf43bc62ce888ccaa48691636e574cd2b"}
Dec 06 10:03:46 crc kubenswrapper[4895]: I1206 10:03:46.684462 4895 generic.go:334] "Generic (PLEG): container finished" podID="de74afa2-ccbc-4220-99c3-b3bfd0fba1c5" containerID="f47fdfe6ccbc21f37daf59b1677278ccf43bc62ce888ccaa48691636e574cd2b" exitCode=0
Dec 06 10:03:46 crc kubenswrapper[4895]: I1206 10:03:46.684586 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txb45" event={"ID":"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5","Type":"ContainerDied","Data":"f47fdfe6ccbc21f37daf59b1677278ccf43bc62ce888ccaa48691636e574cd2b"}
Dec 06 10:03:47 crc kubenswrapper[4895]: I1206 10:03:47.710702 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txb45" event={"ID":"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5","Type":"ContainerStarted","Data":"483f4756e6c327b23051b1641dc2d7635e945fbc0d02815defdcfdd74ef5fe6c"}
Dec 06 10:03:47 crc kubenswrapper[4895]: I1206 10:03:47.739376 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-txb45" podStartSLOduration=2.306685648 podStartE2EDuration="4.739334623s" podCreationTimestamp="2025-12-06 10:03:43 +0000 UTC" firstStartedPulling="2025-12-06 10:03:44.65955022 +0000 UTC m=+11187.060939100" lastFinishedPulling="2025-12-06 10:03:47.092199215 +0000 UTC m=+11189.493588075" observedRunningTime="2025-12-06 10:03:47.729445564 +0000 UTC m=+11190.130834444" watchObservedRunningTime="2025-12-06 10:03:47.739334623 +0000 UTC m=+11190.140723493"
Dec 06 10:03:53 crc kubenswrapper[4895]: I1206 10:03:53.852320 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-txb45"
Dec 06 10:03:53 crc kubenswrapper[4895]: I1206 10:03:53.852802 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-txb45"
Dec 06 10:03:54 crc kubenswrapper[4895]: I1206 10:03:54.913248 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-txb45" podUID="de74afa2-ccbc-4220-99c3-b3bfd0fba1c5" containerName="registry-server" probeResult="failure" output=<
Dec 06 10:03:54 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s
Dec 06 10:03:54 crc kubenswrapper[4895]: >
Dec 06 10:04:03 crc kubenswrapper[4895]: I1206 10:04:03.921367 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-txb45"
Dec 06 10:04:03 crc kubenswrapper[4895]: I1206 10:04:03.998767 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-txb45"
Dec 06 10:04:04 crc kubenswrapper[4895]: I1206 10:04:04.158289 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-txb45"]
Dec 06 10:04:05 crc kubenswrapper[4895]: I1206 10:04:05.931956 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-txb45" podUID="de74afa2-ccbc-4220-99c3-b3bfd0fba1c5" containerName="registry-server" containerID="cri-o://483f4756e6c327b23051b1641dc2d7635e945fbc0d02815defdcfdd74ef5fe6c" gracePeriod=2
Dec 06 10:04:07 crc kubenswrapper[4895]: I1206 10:04:07.395020 4895 generic.go:334] "Generic (PLEG): container finished" podID="de74afa2-ccbc-4220-99c3-b3bfd0fba1c5" containerID="483f4756e6c327b23051b1641dc2d7635e945fbc0d02815defdcfdd74ef5fe6c" exitCode=0
Dec 06 10:04:07 crc kubenswrapper[4895]: I1206 10:04:07.395176 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txb45" event={"ID":"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5","Type":"ContainerDied","Data":"483f4756e6c327b23051b1641dc2d7635e945fbc0d02815defdcfdd74ef5fe6c"}
Dec 06 10:04:07 crc kubenswrapper[4895]: I1206 10:04:07.503292 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-txb45"
Dec 06 10:04:07 crc kubenswrapper[4895]: I1206 10:04:07.590408 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de74afa2-ccbc-4220-99c3-b3bfd0fba1c5-catalog-content\") pod \"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5\" (UID: \"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5\") "
Dec 06 10:04:07 crc kubenswrapper[4895]: I1206 10:04:07.590522 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s7ht\" (UniqueName: \"kubernetes.io/projected/de74afa2-ccbc-4220-99c3-b3bfd0fba1c5-kube-api-access-7s7ht\") pod \"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5\" (UID: \"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5\") "
Dec 06 10:04:07 crc kubenswrapper[4895]: I1206 10:04:07.590894 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de74afa2-ccbc-4220-99c3-b3bfd0fba1c5-utilities\") pod \"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5\" (UID: \"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5\") "
Dec 06 10:04:07 crc kubenswrapper[4895]: I1206 10:04:07.591839 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de74afa2-ccbc-4220-99c3-b3bfd0fba1c5-utilities" (OuterVolumeSpecName: "utilities") pod "de74afa2-ccbc-4220-99c3-b3bfd0fba1c5" (UID: "de74afa2-ccbc-4220-99c3-b3bfd0fba1c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 10:04:07 crc kubenswrapper[4895]: I1206 10:04:07.592496 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de74afa2-ccbc-4220-99c3-b3bfd0fba1c5-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 10:04:07 crc kubenswrapper[4895]: I1206 10:04:07.601522 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de74afa2-ccbc-4220-99c3-b3bfd0fba1c5-kube-api-access-7s7ht" (OuterVolumeSpecName: "kube-api-access-7s7ht") pod "de74afa2-ccbc-4220-99c3-b3bfd0fba1c5" (UID: "de74afa2-ccbc-4220-99c3-b3bfd0fba1c5"). InnerVolumeSpecName "kube-api-access-7s7ht". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 10:04:07 crc kubenswrapper[4895]: I1206 10:04:07.695361 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s7ht\" (UniqueName: \"kubernetes.io/projected/de74afa2-ccbc-4220-99c3-b3bfd0fba1c5-kube-api-access-7s7ht\") on node \"crc\" DevicePath \"\""
Dec 06 10:04:07 crc kubenswrapper[4895]: I1206 10:04:07.752367 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de74afa2-ccbc-4220-99c3-b3bfd0fba1c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de74afa2-ccbc-4220-99c3-b3bfd0fba1c5" (UID: "de74afa2-ccbc-4220-99c3-b3bfd0fba1c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 10:04:07 crc kubenswrapper[4895]: I1206 10:04:07.797725 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de74afa2-ccbc-4220-99c3-b3bfd0fba1c5-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 10:04:08 crc kubenswrapper[4895]: I1206 10:04:08.412846 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txb45" event={"ID":"de74afa2-ccbc-4220-99c3-b3bfd0fba1c5","Type":"ContainerDied","Data":"3d76a32fa1aa8347b1f672109142042b4b20f05528dd350347a9220ca4668a48"}
Dec 06 10:04:08 crc kubenswrapper[4895]: I1206 10:04:08.412956 4895 scope.go:117] "RemoveContainer" containerID="483f4756e6c327b23051b1641dc2d7635e945fbc0d02815defdcfdd74ef5fe6c"
Dec 06 10:04:08 crc kubenswrapper[4895]: I1206 10:04:08.413131 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-txb45"
Dec 06 10:04:08 crc kubenswrapper[4895]: I1206 10:04:08.448105 4895 scope.go:117] "RemoveContainer" containerID="f47fdfe6ccbc21f37daf59b1677278ccf43bc62ce888ccaa48691636e574cd2b"
Dec 06 10:04:08 crc kubenswrapper[4895]: I1206 10:04:08.452347 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-txb45"]
Dec 06 10:04:08 crc kubenswrapper[4895]: I1206 10:04:08.468752 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-txb45"]
Dec 06 10:04:08 crc kubenswrapper[4895]: I1206 10:04:08.481710 4895 scope.go:117] "RemoveContainer" containerID="72b5ecf2f952b59e0280d31429da3fa24761a1383bebb81faba755bf4f243176"
Dec 06 10:04:10 crc kubenswrapper[4895]: I1206 10:04:10.064136 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de74afa2-ccbc-4220-99c3-b3bfd0fba1c5" path="/var/lib/kubelet/pods/de74afa2-ccbc-4220-99c3-b3bfd0fba1c5/volumes"
Dec 06 10:05:59 crc kubenswrapper[4895]: I1206 10:05:59.695948 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 10:05:59 crc kubenswrapper[4895]: I1206 10:05:59.696456 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 10:06:29 crc kubenswrapper[4895]: I1206 10:06:29.695728 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 10:06:29 crc kubenswrapper[4895]: I1206 10:06:29.696352 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 10:06:33 crc kubenswrapper[4895]: I1206 10:06:33.889211 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-972s7"]
Dec 06 10:06:33 crc kubenswrapper[4895]: E1206 10:06:33.890440 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de74afa2-ccbc-4220-99c3-b3bfd0fba1c5" containerName="extract-content"
Dec 06 10:06:33 crc kubenswrapper[4895]: I1206 10:06:33.890459 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="de74afa2-ccbc-4220-99c3-b3bfd0fba1c5" containerName="extract-content"
Dec 06 10:06:33 crc kubenswrapper[4895]: E1206 10:06:33.890506 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de74afa2-ccbc-4220-99c3-b3bfd0fba1c5" containerName="extract-utilities"
Dec 06 10:06:33 crc kubenswrapper[4895]: I1206 10:06:33.890516 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="de74afa2-ccbc-4220-99c3-b3bfd0fba1c5" containerName="extract-utilities"
Dec 06 10:06:33 crc kubenswrapper[4895]: E1206 10:06:33.890555 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de74afa2-ccbc-4220-99c3-b3bfd0fba1c5" containerName="registry-server"
Dec 06 10:06:33 crc kubenswrapper[4895]: I1206 10:06:33.890565 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="de74afa2-ccbc-4220-99c3-b3bfd0fba1c5" containerName="registry-server"
Dec 06 10:06:33 crc kubenswrapper[4895]: I1206 10:06:33.890842 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="de74afa2-ccbc-4220-99c3-b3bfd0fba1c5" containerName="registry-server"
Dec 06 10:06:33 crc kubenswrapper[4895]: I1206 10:06:33.893020 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-972s7"
Dec 06 10:06:33 crc kubenswrapper[4895]: I1206 10:06:33.901454 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-972s7"]
Dec 06 10:06:33 crc kubenswrapper[4895]: I1206 10:06:33.971096 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d83a5fa-7609-4f2d-b08b-548dbe572648-catalog-content\") pod \"redhat-marketplace-972s7\" (UID: \"2d83a5fa-7609-4f2d-b08b-548dbe572648\") " pod="openshift-marketplace/redhat-marketplace-972s7"
Dec 06 10:06:33 crc kubenswrapper[4895]: I1206 10:06:33.971176 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d83a5fa-7609-4f2d-b08b-548dbe572648-utilities\") pod \"redhat-marketplace-972s7\" (UID: \"2d83a5fa-7609-4f2d-b08b-548dbe572648\") " pod="openshift-marketplace/redhat-marketplace-972s7"
Dec 06 10:06:33 crc kubenswrapper[4895]: I1206 10:06:33.971292 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22rb2\" (UniqueName: \"kubernetes.io/projected/2d83a5fa-7609-4f2d-b08b-548dbe572648-kube-api-access-22rb2\") pod \"redhat-marketplace-972s7\" (UID: \"2d83a5fa-7609-4f2d-b08b-548dbe572648\") " pod="openshift-marketplace/redhat-marketplace-972s7"
Dec 06 10:06:34 crc kubenswrapper[4895]: I1206 10:06:34.074058 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22rb2\" (UniqueName: \"kubernetes.io/projected/2d83a5fa-7609-4f2d-b08b-548dbe572648-kube-api-access-22rb2\") pod \"redhat-marketplace-972s7\" (UID: \"2d83a5fa-7609-4f2d-b08b-548dbe572648\") " pod="openshift-marketplace/redhat-marketplace-972s7"
Dec 06 10:06:34 crc kubenswrapper[4895]: I1206 10:06:34.074578 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d83a5fa-7609-4f2d-b08b-548dbe572648-catalog-content\") pod \"redhat-marketplace-972s7\" (UID: \"2d83a5fa-7609-4f2d-b08b-548dbe572648\") " pod="openshift-marketplace/redhat-marketplace-972s7"
Dec 06 10:06:34 crc kubenswrapper[4895]: I1206 10:06:34.074636 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d83a5fa-7609-4f2d-b08b-548dbe572648-utilities\") pod \"redhat-marketplace-972s7\" (UID: \"2d83a5fa-7609-4f2d-b08b-548dbe572648\") " pod="openshift-marketplace/redhat-marketplace-972s7"
Dec 06 10:06:34 crc kubenswrapper[4895]: I1206 10:06:34.075128 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d83a5fa-7609-4f2d-b08b-548dbe572648-catalog-content\") pod \"redhat-marketplace-972s7\" (UID: \"2d83a5fa-7609-4f2d-b08b-548dbe572648\") " pod="openshift-marketplace/redhat-marketplace-972s7"
Dec 06 10:06:34 crc kubenswrapper[4895]: I1206 10:06:34.075202 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d83a5fa-7609-4f2d-b08b-548dbe572648-utilities\") pod \"redhat-marketplace-972s7\" (UID: \"2d83a5fa-7609-4f2d-b08b-548dbe572648\") " pod="openshift-marketplace/redhat-marketplace-972s7"
Dec 06 10:06:34 crc kubenswrapper[4895]: I1206 10:06:34.098565 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22rb2\" (UniqueName: \"kubernetes.io/projected/2d83a5fa-7609-4f2d-b08b-548dbe572648-kube-api-access-22rb2\") pod \"redhat-marketplace-972s7\" (UID: \"2d83a5fa-7609-4f2d-b08b-548dbe572648\") " pod="openshift-marketplace/redhat-marketplace-972s7"
Dec 06 10:06:34 crc kubenswrapper[4895]: I1206 10:06:34.226499 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-972s7"
Dec 06 10:06:34 crc kubenswrapper[4895]: I1206 10:06:34.700013 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-972s7"]
Dec 06 10:06:35 crc kubenswrapper[4895]: I1206 10:06:35.229927 4895 generic.go:334] "Generic (PLEG): container finished" podID="2d83a5fa-7609-4f2d-b08b-548dbe572648" containerID="a961bc77228ad4a4b2b8fa7f70c7547c26e89a4df8b9fc500854b9b7dadfab7a" exitCode=0
Dec 06 10:06:35 crc kubenswrapper[4895]: I1206 10:06:35.229985 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-972s7" event={"ID":"2d83a5fa-7609-4f2d-b08b-548dbe572648","Type":"ContainerDied","Data":"a961bc77228ad4a4b2b8fa7f70c7547c26e89a4df8b9fc500854b9b7dadfab7a"}
Dec 06 10:06:35 crc kubenswrapper[4895]: I1206 10:06:35.230279 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-972s7" event={"ID":"2d83a5fa-7609-4f2d-b08b-548dbe572648","Type":"ContainerStarted","Data":"741c45429ea4a0af01326a66829109260f95d66fe79a7ed0121c3554303736f6"}
Dec 06 10:06:35 crc kubenswrapper[4895]: I1206 10:06:35.233088 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 06 10:06:37 crc kubenswrapper[4895]: I1206 10:06:37.261089 4895 generic.go:334] "Generic (PLEG): container finished" podID="2d83a5fa-7609-4f2d-b08b-548dbe572648" containerID="e724a2bfec2a0757d878c6163df1c1c7315b79d7e5e54571fba758f15ab149a3" exitCode=0
Dec 06 10:06:37 crc kubenswrapper[4895]: I1206 10:06:37.261216 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-972s7" event={"ID":"2d83a5fa-7609-4f2d-b08b-548dbe572648","Type":"ContainerDied","Data":"e724a2bfec2a0757d878c6163df1c1c7315b79d7e5e54571fba758f15ab149a3"}
Dec 06 10:06:38 crc kubenswrapper[4895]: I1206 10:06:38.280673 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-972s7" event={"ID":"2d83a5fa-7609-4f2d-b08b-548dbe572648","Type":"ContainerStarted","Data":"bc71ffe8e232a115ae3a32e505132eff9590568670a53661359234ed0cc9559b"}
Dec 06 10:06:38 crc kubenswrapper[4895]: I1206 10:06:38.305233 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-972s7" podStartSLOduration=2.8189351499999997 podStartE2EDuration="5.305201062s" podCreationTimestamp="2025-12-06 10:06:33 +0000 UTC" firstStartedPulling="2025-12-06 10:06:35.23241405 +0000 UTC m=+11357.633802960" lastFinishedPulling="2025-12-06 10:06:37.718679962 +0000 UTC m=+11360.120068872" observedRunningTime="2025-12-06 10:06:38.29886455 +0000 UTC m=+11360.700253460" watchObservedRunningTime="2025-12-06 10:06:38.305201062 +0000 UTC m=+11360.706589972"
Dec 06 10:06:44 crc kubenswrapper[4895]: I1206 10:06:44.226910 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-972s7"
Dec 06 10:06:44 crc kubenswrapper[4895]: I1206 10:06:44.227428 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-972s7"
Dec 06 10:06:44 crc kubenswrapper[4895]: I1206 10:06:44.301062 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-972s7"
Dec 06 10:06:44 crc kubenswrapper[4895]: I1206 10:06:44.430132 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-972s7"
Dec 06 10:06:44 crc kubenswrapper[4895]: I1206 10:06:44.545204 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-972s7"]
Dec 06 10:06:46 crc kubenswrapper[4895]: I1206 10:06:46.380245 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-972s7" podUID="2d83a5fa-7609-4f2d-b08b-548dbe572648" containerName="registry-server" containerID="cri-o://bc71ffe8e232a115ae3a32e505132eff9590568670a53661359234ed0cc9559b" gracePeriod=2
Dec 06 10:06:46 crc kubenswrapper[4895]: I1206 10:06:46.917418 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-972s7"
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.028116 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22rb2\" (UniqueName: \"kubernetes.io/projected/2d83a5fa-7609-4f2d-b08b-548dbe572648-kube-api-access-22rb2\") pod \"2d83a5fa-7609-4f2d-b08b-548dbe572648\" (UID: \"2d83a5fa-7609-4f2d-b08b-548dbe572648\") "
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.028267 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d83a5fa-7609-4f2d-b08b-548dbe572648-catalog-content\") pod \"2d83a5fa-7609-4f2d-b08b-548dbe572648\" (UID: \"2d83a5fa-7609-4f2d-b08b-548dbe572648\") "
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.028377 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d83a5fa-7609-4f2d-b08b-548dbe572648-utilities\") pod \"2d83a5fa-7609-4f2d-b08b-548dbe572648\" (UID: \"2d83a5fa-7609-4f2d-b08b-548dbe572648\") "
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.029550 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d83a5fa-7609-4f2d-b08b-548dbe572648-utilities" (OuterVolumeSpecName: "utilities") pod "2d83a5fa-7609-4f2d-b08b-548dbe572648" (UID: "2d83a5fa-7609-4f2d-b08b-548dbe572648"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.034550 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d83a5fa-7609-4f2d-b08b-548dbe572648-kube-api-access-22rb2" (OuterVolumeSpecName: "kube-api-access-22rb2") pod "2d83a5fa-7609-4f2d-b08b-548dbe572648" (UID: "2d83a5fa-7609-4f2d-b08b-548dbe572648"). InnerVolumeSpecName "kube-api-access-22rb2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.050696 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d83a5fa-7609-4f2d-b08b-548dbe572648-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d83a5fa-7609-4f2d-b08b-548dbe572648" (UID: "2d83a5fa-7609-4f2d-b08b-548dbe572648"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.131597 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22rb2\" (UniqueName: \"kubernetes.io/projected/2d83a5fa-7609-4f2d-b08b-548dbe572648-kube-api-access-22rb2\") on node \"crc\" DevicePath \"\""
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.131629 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d83a5fa-7609-4f2d-b08b-548dbe572648-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.131643 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d83a5fa-7609-4f2d-b08b-548dbe572648-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.394822 4895 generic.go:334] "Generic (PLEG): container finished" podID="2d83a5fa-7609-4f2d-b08b-548dbe572648" containerID="bc71ffe8e232a115ae3a32e505132eff9590568670a53661359234ed0cc9559b" exitCode=0
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.394881 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-972s7" event={"ID":"2d83a5fa-7609-4f2d-b08b-548dbe572648","Type":"ContainerDied","Data":"bc71ffe8e232a115ae3a32e505132eff9590568670a53661359234ed0cc9559b"}
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.396170 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-972s7" event={"ID":"2d83a5fa-7609-4f2d-b08b-548dbe572648","Type":"ContainerDied","Data":"741c45429ea4a0af01326a66829109260f95d66fe79a7ed0121c3554303736f6"}
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.396264 4895 scope.go:117] "RemoveContainer" containerID="bc71ffe8e232a115ae3a32e505132eff9590568670a53661359234ed0cc9559b"
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.394900 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-972s7"
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.443829 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-972s7"]
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.445123 4895 scope.go:117] "RemoveContainer" containerID="e724a2bfec2a0757d878c6163df1c1c7315b79d7e5e54571fba758f15ab149a3"
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.455370 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-972s7"]
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.470142 4895 scope.go:117] "RemoveContainer" containerID="a961bc77228ad4a4b2b8fa7f70c7547c26e89a4df8b9fc500854b9b7dadfab7a"
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.516364 4895 scope.go:117] "RemoveContainer" containerID="bc71ffe8e232a115ae3a32e505132eff9590568670a53661359234ed0cc9559b"
Dec 06 10:06:47 crc kubenswrapper[4895]: E1206 10:06:47.517194 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc71ffe8e232a115ae3a32e505132eff9590568670a53661359234ed0cc9559b\": container with ID starting with bc71ffe8e232a115ae3a32e505132eff9590568670a53661359234ed0cc9559b not found: ID does not exist" containerID="bc71ffe8e232a115ae3a32e505132eff9590568670a53661359234ed0cc9559b"
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.517251 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc71ffe8e232a115ae3a32e505132eff9590568670a53661359234ed0cc9559b"} err="failed to get container status \"bc71ffe8e232a115ae3a32e505132eff9590568670a53661359234ed0cc9559b\": rpc error: code = NotFound desc = could not find container \"bc71ffe8e232a115ae3a32e505132eff9590568670a53661359234ed0cc9559b\": container with ID starting with bc71ffe8e232a115ae3a32e505132eff9590568670a53661359234ed0cc9559b not found: ID does not exist"
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.517276 4895 scope.go:117] "RemoveContainer" containerID="e724a2bfec2a0757d878c6163df1c1c7315b79d7e5e54571fba758f15ab149a3"
Dec 06 10:06:47 crc kubenswrapper[4895]: E1206 10:06:47.517755 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e724a2bfec2a0757d878c6163df1c1c7315b79d7e5e54571fba758f15ab149a3\": container with ID starting with e724a2bfec2a0757d878c6163df1c1c7315b79d7e5e54571fba758f15ab149a3 not found: ID does not exist" containerID="e724a2bfec2a0757d878c6163df1c1c7315b79d7e5e54571fba758f15ab149a3"
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.517787 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e724a2bfec2a0757d878c6163df1c1c7315b79d7e5e54571fba758f15ab149a3"} err="failed to get container status \"e724a2bfec2a0757d878c6163df1c1c7315b79d7e5e54571fba758f15ab149a3\": rpc error: code = NotFound desc = could not find container \"e724a2bfec2a0757d878c6163df1c1c7315b79d7e5e54571fba758f15ab149a3\": container with ID starting with e724a2bfec2a0757d878c6163df1c1c7315b79d7e5e54571fba758f15ab149a3 not found: ID does not exist"
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.517803 4895 scope.go:117] "RemoveContainer" containerID="a961bc77228ad4a4b2b8fa7f70c7547c26e89a4df8b9fc500854b9b7dadfab7a"
Dec 06 10:06:47 crc kubenswrapper[4895]: E1206 10:06:47.518126 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a961bc77228ad4a4b2b8fa7f70c7547c26e89a4df8b9fc500854b9b7dadfab7a\": container with ID starting with a961bc77228ad4a4b2b8fa7f70c7547c26e89a4df8b9fc500854b9b7dadfab7a not found: ID does not exist" containerID="a961bc77228ad4a4b2b8fa7f70c7547c26e89a4df8b9fc500854b9b7dadfab7a"
Dec 06 10:06:47 crc kubenswrapper[4895]: I1206 10:06:47.518167 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a961bc77228ad4a4b2b8fa7f70c7547c26e89a4df8b9fc500854b9b7dadfab7a"} err="failed to get container status \"a961bc77228ad4a4b2b8fa7f70c7547c26e89a4df8b9fc500854b9b7dadfab7a\": rpc error: code = NotFound desc = could not find container \"a961bc77228ad4a4b2b8fa7f70c7547c26e89a4df8b9fc500854b9b7dadfab7a\": container with ID starting with a961bc77228ad4a4b2b8fa7f70c7547c26e89a4df8b9fc500854b9b7dadfab7a not found: ID does not exist"
Dec 06 10:06:48 crc kubenswrapper[4895]: I1206 10:06:48.066916 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d83a5fa-7609-4f2d-b08b-548dbe572648" path="/var/lib/kubelet/pods/2d83a5fa-7609-4f2d-b08b-548dbe572648/volumes"
Dec 06 10:06:59 crc kubenswrapper[4895]: I1206 10:06:59.696138 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 10:06:59 crc kubenswrapper[4895]: I1206 10:06:59.697203 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 10:06:59 crc kubenswrapper[4895]: I1206 10:06:59.697270 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2"
Dec 06 10:06:59 crc kubenswrapper[4895]: I1206 10:06:59.698090 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ddd61a6f3cb744e0308de1a38b279e3ed49e944ef9f6e98444f7e803ced0b2ce"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 10:06:59 crc kubenswrapper[4895]: I1206 10:06:59.698166 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://ddd61a6f3cb744e0308de1a38b279e3ed49e944ef9f6e98444f7e803ced0b2ce" gracePeriod=600
Dec 06 10:07:00 crc kubenswrapper[4895]: I1206 10:07:00.562557 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="ddd61a6f3cb744e0308de1a38b279e3ed49e944ef9f6e98444f7e803ced0b2ce" exitCode=0
Dec 06 10:07:00 crc kubenswrapper[4895]: I1206 10:07:00.562636 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"ddd61a6f3cb744e0308de1a38b279e3ed49e944ef9f6e98444f7e803ced0b2ce"}
Dec 06 10:07:00 crc kubenswrapper[4895]: I1206 10:07:00.563245 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4"}
Dec 06 10:07:00 crc kubenswrapper[4895]: I1206 10:07:00.563279 4895 scope.go:117] "RemoveContainer" containerID="77ad5844f7ff124227fcc5d4be806ab87f5f9e78a4b3596ef0bd21f03ba08ab1"
Dec 06 10:09:29 crc kubenswrapper[4895]: I1206 10:09:29.696237 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 10:09:29 crc kubenswrapper[4895]: I1206 10:09:29.696812 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 10:09:59 crc kubenswrapper[4895]: I1206 10:09:59.696018 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 10:09:59 crc kubenswrapper[4895]: I1206 10:09:59.696803 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 10:10:12 crc kubenswrapper[4895]: I1206 10:10:12.510332 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b552x"]
Dec 06 10:10:12 crc kubenswrapper[4895]: E1206 10:10:12.512732 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d83a5fa-7609-4f2d-b08b-548dbe572648" containerName="registry-server"
Dec 06 10:10:12 crc kubenswrapper[4895]: I1206 10:10:12.512893 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d83a5fa-7609-4f2d-b08b-548dbe572648" containerName="registry-server"
Dec 06 10:10:12 crc kubenswrapper[4895]: E1206 10:10:12.513067 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d83a5fa-7609-4f2d-b08b-548dbe572648" containerName="extract-content"
Dec 06 10:10:12 crc kubenswrapper[4895]: I1206 10:10:12.513198 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d83a5fa-7609-4f2d-b08b-548dbe572648" containerName="extract-content"
Dec 06 10:10:12 crc kubenswrapper[4895]: E1206 10:10:12.513307 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d83a5fa-7609-4f2d-b08b-548dbe572648" containerName="extract-utilities"
Dec 06 10:10:12 crc kubenswrapper[4895]: I1206 10:10:12.513419 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d83a5fa-7609-4f2d-b08b-548dbe572648" containerName="extract-utilities"
Dec 06 10:10:12 crc kubenswrapper[4895]: I1206 10:10:12.513949 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d83a5fa-7609-4f2d-b08b-548dbe572648" containerName="registry-server"
Dec 06 10:10:12 crc kubenswrapper[4895]: I1206 10:10:12.516551 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b552x"
Dec 06 10:10:12 crc kubenswrapper[4895]: I1206 10:10:12.527086 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b552x"]
Dec 06 10:10:12 crc kubenswrapper[4895]: I1206 10:10:12.606446 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j46kz\" (UniqueName: \"kubernetes.io/projected/ae2dc6e4-181c-47fe-8520-f055ad8c4d87-kube-api-access-j46kz\") pod \"certified-operators-b552x\" (UID: \"ae2dc6e4-181c-47fe-8520-f055ad8c4d87\") " pod="openshift-marketplace/certified-operators-b552x"
Dec 06 10:10:12 crc kubenswrapper[4895]: I1206 10:10:12.606765 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae2dc6e4-181c-47fe-8520-f055ad8c4d87-catalog-content\") pod \"certified-operators-b552x\" (UID: \"ae2dc6e4-181c-47fe-8520-f055ad8c4d87\") " pod="openshift-marketplace/certified-operators-b552x"
Dec 06 10:10:12 crc kubenswrapper[4895]: I1206 10:10:12.607041 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae2dc6e4-181c-47fe-8520-f055ad8c4d87-utilities\") pod \"certified-operators-b552x\" (UID: \"ae2dc6e4-181c-47fe-8520-f055ad8c4d87\") " pod="openshift-marketplace/certified-operators-b552x"
Dec 06 10:10:12 crc kubenswrapper[4895]: I1206 10:10:12.708921 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae2dc6e4-181c-47fe-8520-f055ad8c4d87-utilities\") pod \"certified-operators-b552x\" (UID: \"ae2dc6e4-181c-47fe-8520-f055ad8c4d87\") " pod="openshift-marketplace/certified-operators-b552x"
Dec 06 10:10:12 crc kubenswrapper[4895]: I1206 10:10:12.709305 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j46kz\" (UniqueName: \"kubernetes.io/projected/ae2dc6e4-181c-47fe-8520-f055ad8c4d87-kube-api-access-j46kz\") pod \"certified-operators-b552x\" (UID: \"ae2dc6e4-181c-47fe-8520-f055ad8c4d87\") " pod="openshift-marketplace/certified-operators-b552x"
Dec 06 10:10:12 crc kubenswrapper[4895]: I1206 10:10:12.709531 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae2dc6e4-181c-47fe-8520-f055ad8c4d87-catalog-content\") pod \"certified-operators-b552x\" (UID: \"ae2dc6e4-181c-47fe-8520-f055ad8c4d87\") " pod="openshift-marketplace/certified-operators-b552x"
Dec 06 10:10:12 crc kubenswrapper[4895]: I1206 10:10:12.709710 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae2dc6e4-181c-47fe-8520-f055ad8c4d87-utilities\") pod \"certified-operators-b552x\" (UID: \"ae2dc6e4-181c-47fe-8520-f055ad8c4d87\") " pod="openshift-marketplace/certified-operators-b552x"
Dec 06 10:10:12 crc kubenswrapper[4895]: I1206 10:10:12.713123 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae2dc6e4-181c-47fe-8520-f055ad8c4d87-catalog-content\") pod \"certified-operators-b552x\" (UID: \"ae2dc6e4-181c-47fe-8520-f055ad8c4d87\") " pod="openshift-marketplace/certified-operators-b552x"
Dec 06 10:10:12 crc kubenswrapper[4895]: I1206 10:10:12.733229 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j46kz\" (UniqueName: \"kubernetes.io/projected/ae2dc6e4-181c-47fe-8520-f055ad8c4d87-kube-api-access-j46kz\") pod \"certified-operators-b552x\" (UID: \"ae2dc6e4-181c-47fe-8520-f055ad8c4d87\") " pod="openshift-marketplace/certified-operators-b552x"
Dec 06 10:10:12 crc kubenswrapper[4895]: I1206 10:10:12.856493 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b552x"
Dec 06 10:10:13 crc kubenswrapper[4895]: I1206 10:10:13.390309 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b552x"]
Dec 06 10:10:14 crc kubenswrapper[4895]: I1206 10:10:14.147105 4895 generic.go:334] "Generic (PLEG): container finished" podID="ae2dc6e4-181c-47fe-8520-f055ad8c4d87" containerID="efee522b9e8898824ecc5b24cb125f38526b210834a72732edd2b2f9f4dbccfb" exitCode=0
Dec 06 10:10:14 crc kubenswrapper[4895]: I1206 10:10:14.147226 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b552x" event={"ID":"ae2dc6e4-181c-47fe-8520-f055ad8c4d87","Type":"ContainerDied","Data":"efee522b9e8898824ecc5b24cb125f38526b210834a72732edd2b2f9f4dbccfb"}
Dec 06 10:10:14 crc kubenswrapper[4895]: I1206 10:10:14.147507 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b552x" event={"ID":"ae2dc6e4-181c-47fe-8520-f055ad8c4d87","Type":"ContainerStarted","Data":"e847aa53c6d2dc16fd50143738ecda9dfe1d434aaca3b06b88f51095f3d31890"}
Dec 06 10:10:15 crc kubenswrapper[4895]: I1206 10:10:15.186070 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b552x" event={"ID":"ae2dc6e4-181c-47fe-8520-f055ad8c4d87","Type":"ContainerStarted","Data":"539d16a80bc51a32f93adef26138be9256421d429dcbf91c73b17753ea7a2c1d"}
Dec 06 10:10:16 crc kubenswrapper[4895]: I1206 10:10:16.201694 4895 generic.go:334] "Generic (PLEG): container finished" podID="ae2dc6e4-181c-47fe-8520-f055ad8c4d87" containerID="539d16a80bc51a32f93adef26138be9256421d429dcbf91c73b17753ea7a2c1d" exitCode=0
Dec 06 10:10:16 crc kubenswrapper[4895]: I1206 10:10:16.201753 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b552x" event={"ID":"ae2dc6e4-181c-47fe-8520-f055ad8c4d87","Type":"ContainerDied","Data":"539d16a80bc51a32f93adef26138be9256421d429dcbf91c73b17753ea7a2c1d"}
Dec 06 10:10:17 crc kubenswrapper[4895]: I1206 10:10:17.214666 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b552x" event={"ID":"ae2dc6e4-181c-47fe-8520-f055ad8c4d87","Type":"ContainerStarted","Data":"d838a13a388888c63544e9d484db991198e59f2742db4dd61d00ad7693d9bc65"}
Dec 06 10:10:17 crc kubenswrapper[4895]: I1206 10:10:17.234051 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b552x" podStartSLOduration=2.802635101 podStartE2EDuration="5.234020511s" podCreationTimestamp="2025-12-06 10:10:12 +0000 UTC" firstStartedPulling="2025-12-06 10:10:14.15171091 +0000 UTC m=+11576.553099790" lastFinishedPulling="2025-12-06 10:10:16.58309633 +0000 UTC m=+11578.984485200" observedRunningTime="2025-12-06 10:10:17.231332738 +0000 UTC m=+11579.632721608" watchObservedRunningTime="2025-12-06 10:10:17.234020511 +0000 UTC m=+11579.635409371"
Dec 06 10:10:22 crc kubenswrapper[4895]: I1206 10:10:22.857269 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b552x"
Dec 06 10:10:22 crc kubenswrapper[4895]: I1206 10:10:22.858078 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b552x"
Dec 06 10:10:22 crc kubenswrapper[4895]: I1206 10:10:22.922291 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b552x"
Dec 06 10:10:23 crc kubenswrapper[4895]: I1206 10:10:23.335720 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b552x"
Dec 06 10:10:23 crc kubenswrapper[4895]: I1206 10:10:23.396752 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b552x"]
Dec 06 10:10:25 crc kubenswrapper[4895]: I1206 10:10:25.308678 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b552x" podUID="ae2dc6e4-181c-47fe-8520-f055ad8c4d87" containerName="registry-server" containerID="cri-o://d838a13a388888c63544e9d484db991198e59f2742db4dd61d00ad7693d9bc65" gracePeriod=2
Dec 06 10:10:25 crc kubenswrapper[4895]: I1206 10:10:25.820711 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b552x"
Dec 06 10:10:25 crc kubenswrapper[4895]: I1206 10:10:25.913253 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae2dc6e4-181c-47fe-8520-f055ad8c4d87-catalog-content\") pod \"ae2dc6e4-181c-47fe-8520-f055ad8c4d87\" (UID: \"ae2dc6e4-181c-47fe-8520-f055ad8c4d87\") "
Dec 06 10:10:25 crc kubenswrapper[4895]: I1206 10:10:25.913842 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j46kz\" (UniqueName: \"kubernetes.io/projected/ae2dc6e4-181c-47fe-8520-f055ad8c4d87-kube-api-access-j46kz\") pod \"ae2dc6e4-181c-47fe-8520-f055ad8c4d87\" (UID: \"ae2dc6e4-181c-47fe-8520-f055ad8c4d87\") "
Dec 06 10:10:25 crc kubenswrapper[4895]: I1206 10:10:25.914897 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae2dc6e4-181c-47fe-8520-f055ad8c4d87-utilities\") pod \"ae2dc6e4-181c-47fe-8520-f055ad8c4d87\" (UID: \"ae2dc6e4-181c-47fe-8520-f055ad8c4d87\") "
Dec 06 10:10:25 crc kubenswrapper[4895]: I1206 10:10:25.915464 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae2dc6e4-181c-47fe-8520-f055ad8c4d87-utilities" (OuterVolumeSpecName: "utilities") pod "ae2dc6e4-181c-47fe-8520-f055ad8c4d87" (UID: "ae2dc6e4-181c-47fe-8520-f055ad8c4d87"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:10:25 crc kubenswrapper[4895]: I1206 10:10:25.915836 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae2dc6e4-181c-47fe-8520-f055ad8c4d87-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 10:10:25 crc kubenswrapper[4895]: I1206 10:10:25.928058 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae2dc6e4-181c-47fe-8520-f055ad8c4d87-kube-api-access-j46kz" (OuterVolumeSpecName: "kube-api-access-j46kz") pod "ae2dc6e4-181c-47fe-8520-f055ad8c4d87" (UID: "ae2dc6e4-181c-47fe-8520-f055ad8c4d87"). InnerVolumeSpecName "kube-api-access-j46kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:10:26 crc kubenswrapper[4895]: I1206 10:10:26.017213 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j46kz\" (UniqueName: \"kubernetes.io/projected/ae2dc6e4-181c-47fe-8520-f055ad8c4d87-kube-api-access-j46kz\") on node \"crc\" DevicePath \"\"" Dec 06 10:10:26 crc kubenswrapper[4895]: I1206 10:10:26.136158 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae2dc6e4-181c-47fe-8520-f055ad8c4d87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae2dc6e4-181c-47fe-8520-f055ad8c4d87" (UID: "ae2dc6e4-181c-47fe-8520-f055ad8c4d87"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:10:26 crc kubenswrapper[4895]: I1206 10:10:26.221346 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae2dc6e4-181c-47fe-8520-f055ad8c4d87-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 10:10:26 crc kubenswrapper[4895]: I1206 10:10:26.324449 4895 generic.go:334] "Generic (PLEG): container finished" podID="ae2dc6e4-181c-47fe-8520-f055ad8c4d87" containerID="d838a13a388888c63544e9d484db991198e59f2742db4dd61d00ad7693d9bc65" exitCode=0 Dec 06 10:10:26 crc kubenswrapper[4895]: I1206 10:10:26.324551 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b552x" event={"ID":"ae2dc6e4-181c-47fe-8520-f055ad8c4d87","Type":"ContainerDied","Data":"d838a13a388888c63544e9d484db991198e59f2742db4dd61d00ad7693d9bc65"} Dec 06 10:10:26 crc kubenswrapper[4895]: I1206 10:10:26.324572 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b552x" Dec 06 10:10:26 crc kubenswrapper[4895]: I1206 10:10:26.324586 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b552x" event={"ID":"ae2dc6e4-181c-47fe-8520-f055ad8c4d87","Type":"ContainerDied","Data":"e847aa53c6d2dc16fd50143738ecda9dfe1d434aaca3b06b88f51095f3d31890"} Dec 06 10:10:26 crc kubenswrapper[4895]: I1206 10:10:26.324607 4895 scope.go:117] "RemoveContainer" containerID="d838a13a388888c63544e9d484db991198e59f2742db4dd61d00ad7693d9bc65" Dec 06 10:10:26 crc kubenswrapper[4895]: I1206 10:10:26.358571 4895 scope.go:117] "RemoveContainer" containerID="539d16a80bc51a32f93adef26138be9256421d429dcbf91c73b17753ea7a2c1d" Dec 06 10:10:26 crc kubenswrapper[4895]: I1206 10:10:26.384459 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b552x"] Dec 06 10:10:26 crc kubenswrapper[4895]: I1206 10:10:26.393844 4895 scope.go:117] "RemoveContainer" containerID="efee522b9e8898824ecc5b24cb125f38526b210834a72732edd2b2f9f4dbccfb" Dec 06 10:10:26 crc kubenswrapper[4895]: I1206 10:10:26.398237 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b552x"] Dec 06 10:10:26 crc kubenswrapper[4895]: I1206 10:10:26.449854 4895 scope.go:117] "RemoveContainer" containerID="d838a13a388888c63544e9d484db991198e59f2742db4dd61d00ad7693d9bc65" Dec 06 10:10:26 crc kubenswrapper[4895]: E1206 10:10:26.450454 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d838a13a388888c63544e9d484db991198e59f2742db4dd61d00ad7693d9bc65\": container with ID starting with d838a13a388888c63544e9d484db991198e59f2742db4dd61d00ad7693d9bc65 not found: ID does not exist" containerID="d838a13a388888c63544e9d484db991198e59f2742db4dd61d00ad7693d9bc65" Dec 06 10:10:26 crc kubenswrapper[4895]: I1206 10:10:26.450559 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d838a13a388888c63544e9d484db991198e59f2742db4dd61d00ad7693d9bc65"} err="failed to get container status \"d838a13a388888c63544e9d484db991198e59f2742db4dd61d00ad7693d9bc65\": rpc error: code = NotFound desc = could not find container \"d838a13a388888c63544e9d484db991198e59f2742db4dd61d00ad7693d9bc65\": container with ID starting with d838a13a388888c63544e9d484db991198e59f2742db4dd61d00ad7693d9bc65 not found: ID does not exist" Dec 06 10:10:26 crc kubenswrapper[4895]: I1206 10:10:26.450591 4895 scope.go:117] "RemoveContainer" containerID="539d16a80bc51a32f93adef26138be9256421d429dcbf91c73b17753ea7a2c1d" Dec 06 10:10:26 crc kubenswrapper[4895]: E1206 10:10:26.451089 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"539d16a80bc51a32f93adef26138be9256421d429dcbf91c73b17753ea7a2c1d\": container with ID starting with 539d16a80bc51a32f93adef26138be9256421d429dcbf91c73b17753ea7a2c1d not found: ID does not exist" containerID="539d16a80bc51a32f93adef26138be9256421d429dcbf91c73b17753ea7a2c1d" Dec 06 10:10:26 crc kubenswrapper[4895]: I1206 10:10:26.451123 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"539d16a80bc51a32f93adef26138be9256421d429dcbf91c73b17753ea7a2c1d"} err="failed to get container status \"539d16a80bc51a32f93adef26138be9256421d429dcbf91c73b17753ea7a2c1d\": rpc error: code = NotFound desc = could not find 
container \"539d16a80bc51a32f93adef26138be9256421d429dcbf91c73b17753ea7a2c1d\": container with ID starting with 539d16a80bc51a32f93adef26138be9256421d429dcbf91c73b17753ea7a2c1d not found: ID does not exist" Dec 06 10:10:26 crc kubenswrapper[4895]: I1206 10:10:26.451142 4895 scope.go:117] "RemoveContainer" containerID="efee522b9e8898824ecc5b24cb125f38526b210834a72732edd2b2f9f4dbccfb" Dec 06 10:10:26 crc kubenswrapper[4895]: E1206 10:10:26.451704 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efee522b9e8898824ecc5b24cb125f38526b210834a72732edd2b2f9f4dbccfb\": container with ID starting with efee522b9e8898824ecc5b24cb125f38526b210834a72732edd2b2f9f4dbccfb not found: ID does not exist" containerID="efee522b9e8898824ecc5b24cb125f38526b210834a72732edd2b2f9f4dbccfb" Dec 06 10:10:26 crc kubenswrapper[4895]: I1206 10:10:26.451743 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efee522b9e8898824ecc5b24cb125f38526b210834a72732edd2b2f9f4dbccfb"} err="failed to get container status \"efee522b9e8898824ecc5b24cb125f38526b210834a72732edd2b2f9f4dbccfb\": rpc error: code = NotFound desc = could not find container \"efee522b9e8898824ecc5b24cb125f38526b210834a72732edd2b2f9f4dbccfb\": container with ID starting with efee522b9e8898824ecc5b24cb125f38526b210834a72732edd2b2f9f4dbccfb not found: ID does not exist" Dec 06 10:10:28 crc kubenswrapper[4895]: I1206 10:10:28.066117 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae2dc6e4-181c-47fe-8520-f055ad8c4d87" path="/var/lib/kubelet/pods/ae2dc6e4-181c-47fe-8520-f055ad8c4d87/volumes" Dec 06 10:10:29 crc kubenswrapper[4895]: I1206 10:10:29.695626 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 10:10:29 crc kubenswrapper[4895]: I1206 10:10:29.695940 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 10:10:29 crc kubenswrapper[4895]: I1206 10:10:29.695985 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 10:10:29 crc kubenswrapper[4895]: I1206 10:10:29.696855 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 10:10:29 crc kubenswrapper[4895]: I1206 10:10:29.696918 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" gracePeriod=600 Dec 06 10:10:29 crc kubenswrapper[4895]: E1206 10:10:29.853951 4895 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:10:30 crc kubenswrapper[4895]: I1206 10:10:30.378988 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" exitCode=0 Dec 06 10:10:30 crc kubenswrapper[4895]: I1206 10:10:30.379063 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4"} Dec 06 10:10:30 crc kubenswrapper[4895]: I1206 10:10:30.379130 4895 scope.go:117] "RemoveContainer" containerID="ddd61a6f3cb744e0308de1a38b279e3ed49e944ef9f6e98444f7e803ced0b2ce" Dec 06 10:10:30 crc kubenswrapper[4895]: I1206 10:10:30.380327 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:10:30 crc kubenswrapper[4895]: E1206 10:10:30.381031 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:10:46 crc kubenswrapper[4895]: I1206 10:10:46.052040 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:10:46 crc kubenswrapper[4895]: E1206 10:10:46.053365 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:11:01 crc kubenswrapper[4895]: I1206 10:11:01.052195 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:11:01 crc kubenswrapper[4895]: E1206 10:11:01.053190 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:11:14 crc kubenswrapper[4895]: I1206 10:11:14.051674 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:11:14 crc kubenswrapper[4895]: E1206 10:11:14.052788 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:11:27 crc kubenswrapper[4895]: I1206 10:11:27.051077 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:11:27 crc kubenswrapper[4895]: E1206 10:11:27.051852 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:11:38 crc kubenswrapper[4895]: I1206 10:11:38.067266 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:11:38 crc kubenswrapper[4895]: E1206 10:11:38.076075 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:11:53 crc kubenswrapper[4895]: I1206 10:11:53.051625 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:11:53 crc kubenswrapper[4895]: E1206 10:11:53.052828 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.320001 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 06 10:11:56 crc kubenswrapper[4895]: E1206 10:11:56.321466 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae2dc6e4-181c-47fe-8520-f055ad8c4d87" containerName="registry-server" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.321523 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae2dc6e4-181c-47fe-8520-f055ad8c4d87" containerName="registry-server" Dec 06 10:11:56 crc kubenswrapper[4895]: E1206 10:11:56.321544 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae2dc6e4-181c-47fe-8520-f055ad8c4d87" containerName="extract-content" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.321556 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae2dc6e4-181c-47fe-8520-f055ad8c4d87" containerName="extract-content" Dec 06 10:11:56 crc kubenswrapper[4895]: E1206 10:11:56.321592 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae2dc6e4-181c-47fe-8520-f055ad8c4d87" containerName="extract-utilities" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 
10:11:56.321605 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae2dc6e4-181c-47fe-8520-f055ad8c4d87" containerName="extract-utilities" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.321978 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae2dc6e4-181c-47fe-8520-f055ad8c4d87" containerName="registry-server" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.323229 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.326520 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.326700 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.326998 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.327051 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-z2v77" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.361031 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.454175 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpdqs\" (UniqueName: \"kubernetes.io/projected/6eaee9a4-ba6d-4285-823c-f90a59785cc6-kube-api-access-lpdqs\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.454283 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6eaee9a4-ba6d-4285-823c-f90a59785cc6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.454325 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6eaee9a4-ba6d-4285-823c-f90a59785cc6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.454510 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6eaee9a4-ba6d-4285-823c-f90a59785cc6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.454565 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6eaee9a4-ba6d-4285-823c-f90a59785cc6-config-data\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.454660 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/6eaee9a4-ba6d-4285-823c-f90a59785cc6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.454944 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.455018 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6eaee9a4-ba6d-4285-823c-f90a59785cc6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.455274 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6eaee9a4-ba6d-4285-823c-f90a59785cc6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.557719 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6eaee9a4-ba6d-4285-823c-f90a59785cc6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.557795 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6eaee9a4-ba6d-4285-823c-f90a59785cc6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.557879 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6eaee9a4-ba6d-4285-823c-f90a59785cc6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.557921 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6eaee9a4-ba6d-4285-823c-f90a59785cc6-config-data\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.557964 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6eaee9a4-ba6d-4285-823c-f90a59785cc6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.558083 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" 
(UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.558126 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6eaee9a4-ba6d-4285-823c-f90a59785cc6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.558216 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6eaee9a4-ba6d-4285-823c-f90a59785cc6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.558313 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpdqs\" (UniqueName: \"kubernetes.io/projected/6eaee9a4-ba6d-4285-823c-f90a59785cc6-kube-api-access-lpdqs\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.558788 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6eaee9a4-ba6d-4285-823c-f90a59785cc6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.558890 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.559669 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6eaee9a4-ba6d-4285-823c-f90a59785cc6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.561031 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6eaee9a4-ba6d-4285-823c-f90a59785cc6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.562173 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6eaee9a4-ba6d-4285-823c-f90a59785cc6-config-data\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.565191 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6eaee9a4-ba6d-4285-823c-f90a59785cc6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " 
pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.569223 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6eaee9a4-ba6d-4285-823c-f90a59785cc6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.570578 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6eaee9a4-ba6d-4285-823c-f90a59785cc6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.576027 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpdqs\" (UniqueName: \"kubernetes.io/projected/6eaee9a4-ba6d-4285-823c-f90a59785cc6-kube-api-access-lpdqs\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.593887 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " pod="openstack/tempest-tests-tempest" Dec 06 10:11:56 crc kubenswrapper[4895]: I1206 10:11:56.677761 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 06 10:11:57 crc kubenswrapper[4895]: I1206 10:11:57.165755 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 06 10:11:57 crc kubenswrapper[4895]: W1206 10:11:57.174234 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6eaee9a4_ba6d_4285_823c_f90a59785cc6.slice/crio-57b63843ba5bfd610441596889229d276f42c6f56bf3af0cf3ba765042ad4f40 WatchSource:0}: Error finding container 57b63843ba5bfd610441596889229d276f42c6f56bf3af0cf3ba765042ad4f40: Status 404 returned error can't find the container with id 57b63843ba5bfd610441596889229d276f42c6f56bf3af0cf3ba765042ad4f40 Dec 06 10:11:57 crc kubenswrapper[4895]: I1206 10:11:57.177210 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 10:11:57 crc kubenswrapper[4895]: I1206 10:11:57.526700 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6eaee9a4-ba6d-4285-823c-f90a59785cc6","Type":"ContainerStarted","Data":"57b63843ba5bfd610441596889229d276f42c6f56bf3af0cf3ba765042ad4f40"} Dec 06 10:12:05 crc kubenswrapper[4895]: I1206 10:12:05.051294 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:12:05 crc kubenswrapper[4895]: E1206 10:12:05.052207 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:12:20 crc 
kubenswrapper[4895]: I1206 10:12:20.051452 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:12:20 crc kubenswrapper[4895]: E1206 10:12:20.052341 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:12:33 crc kubenswrapper[4895]: I1206 10:12:33.052555 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:12:33 crc kubenswrapper[4895]: E1206 10:12:33.053243 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:12:44 crc kubenswrapper[4895]: I1206 10:12:44.050632 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:12:44 crc kubenswrapper[4895]: E1206 10:12:44.051588 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:12:45 crc kubenswrapper[4895]: E1206 10:12:45.928813 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:c3923531bcda0b0811b2d5053f189beb" Dec 06 10:12:45 crc kubenswrapper[4895]: E1206 10:12:45.930909 4895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:c3923531bcda0b0811b2d5053f189beb" Dec 06 10:12:45 crc kubenswrapper[4895]: E1206 10:12:45.931498 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:c3923531bcda0b0811b2d5053f189beb,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lpdqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(6eaee9a4-ba6d-4285-823c-f90a59785cc6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 10:12:45 crc kubenswrapper[4895]: E1206 10:12:45.933522 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="6eaee9a4-ba6d-4285-823c-f90a59785cc6" Dec 06 10:12:46 crc kubenswrapper[4895]: E1206 10:12:46.165950 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/tempest-tests-tempest" podUID="6eaee9a4-ba6d-4285-823c-f90a59785cc6" Dec 06 10:12:57 crc kubenswrapper[4895]: I1206 10:12:57.050875 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:12:57 crc kubenswrapper[4895]: E1206 10:12:57.051749 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:12:59 crc kubenswrapper[4895]: I1206 10:12:59.301245 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 06 10:13:01 crc kubenswrapper[4895]: I1206 10:13:01.339267 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6eaee9a4-ba6d-4285-823c-f90a59785cc6","Type":"ContainerStarted","Data":"75eec5b42e0a02465dfc49095bda469e004ee898e1e4e2068e61e95002ca0cfd"} Dec 06 10:13:01 crc kubenswrapper[4895]: I1206 10:13:01.361613 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.241814117 podStartE2EDuration="1m6.361585683s" podCreationTimestamp="2025-12-06 10:11:55 +0000 UTC" firstStartedPulling="2025-12-06 10:11:57.176906554 +0000 UTC m=+11679.578295424" lastFinishedPulling="2025-12-06 10:12:59.29667811 +0000 UTC m=+11741.698066990" observedRunningTime="2025-12-06 10:13:01.358441758 +0000 UTC m=+11743.759830628" watchObservedRunningTime="2025-12-06 10:13:01.361585683 +0000 UTC m=+11743.762974553" Dec 06 10:13:11 crc kubenswrapper[4895]: I1206 10:13:11.051462 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:13:11 crc kubenswrapper[4895]: E1206 10:13:11.052625 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:13:24 crc kubenswrapper[4895]: I1206 10:13:24.051225 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:13:24 crc kubenswrapper[4895]: E1206 10:13:24.051998 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:13:29 crc kubenswrapper[4895]: I1206 10:13:29.221567 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7qxds"] Dec 06 10:13:29 crc kubenswrapper[4895]: I1206 10:13:29.225270 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7qxds" Dec 06 10:13:29 crc kubenswrapper[4895]: I1206 10:13:29.245014 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7qxds"] Dec 06 10:13:29 crc kubenswrapper[4895]: I1206 10:13:29.320068 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc549cf-6d59-4535-8767-e7cd0808d9c1-utilities\") pod \"community-operators-7qxds\" (UID: \"bdc549cf-6d59-4535-8767-e7cd0808d9c1\") " pod="openshift-marketplace/community-operators-7qxds" Dec 06 10:13:29 crc kubenswrapper[4895]: I1206 10:13:29.320137 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdc2c\" (UniqueName: \"kubernetes.io/projected/bdc549cf-6d59-4535-8767-e7cd0808d9c1-kube-api-access-tdc2c\") pod \"community-operators-7qxds\" (UID: \"bdc549cf-6d59-4535-8767-e7cd0808d9c1\") " pod="openshift-marketplace/community-operators-7qxds" Dec 06 10:13:29 crc kubenswrapper[4895]: I1206 10:13:29.320227 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc549cf-6d59-4535-8767-e7cd0808d9c1-catalog-content\") pod \"community-operators-7qxds\" (UID: \"bdc549cf-6d59-4535-8767-e7cd0808d9c1\") " pod="openshift-marketplace/community-operators-7qxds" Dec 06 10:13:29 crc kubenswrapper[4895]: I1206 10:13:29.423587 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc549cf-6d59-4535-8767-e7cd0808d9c1-catalog-content\") pod \"community-operators-7qxds\" (UID: \"bdc549cf-6d59-4535-8767-e7cd0808d9c1\") " pod="openshift-marketplace/community-operators-7qxds" Dec 06 10:13:29 crc kubenswrapper[4895]: I1206 10:13:29.423773 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc549cf-6d59-4535-8767-e7cd0808d9c1-utilities\") pod \"community-operators-7qxds\" (UID: \"bdc549cf-6d59-4535-8767-e7cd0808d9c1\") " pod="openshift-marketplace/community-operators-7qxds" Dec 06 10:13:29 crc kubenswrapper[4895]: I1206 10:13:29.423831 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdc2c\" (UniqueName: \"kubernetes.io/projected/bdc549cf-6d59-4535-8767-e7cd0808d9c1-kube-api-access-tdc2c\") pod \"community-operators-7qxds\" (UID: \"bdc549cf-6d59-4535-8767-e7cd0808d9c1\") " pod="openshift-marketplace/community-operators-7qxds" Dec 06 10:13:29 crc kubenswrapper[4895]: I1206 10:13:29.424885 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc549cf-6d59-4535-8767-e7cd0808d9c1-catalog-content\") pod \"community-operators-7qxds\" (UID: \"bdc549cf-6d59-4535-8767-e7cd0808d9c1\") " pod="openshift-marketplace/community-operators-7qxds" Dec 06 10:13:29 crc kubenswrapper[4895]: I1206 10:13:29.425134 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc549cf-6d59-4535-8767-e7cd0808d9c1-utilities\") pod \"community-operators-7qxds\" (UID: \"bdc549cf-6d59-4535-8767-e7cd0808d9c1\") " pod="openshift-marketplace/community-operators-7qxds" Dec 06 10:13:29 crc kubenswrapper[4895]: I1206 10:13:29.460339 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdc2c\" (UniqueName: \"kubernetes.io/projected/bdc549cf-6d59-4535-8767-e7cd0808d9c1-kube-api-access-tdc2c\") pod \"community-operators-7qxds\" (UID: \"bdc549cf-6d59-4535-8767-e7cd0808d9c1\") " pod="openshift-marketplace/community-operators-7qxds" Dec 06 10:13:29 crc kubenswrapper[4895]: I1206 10:13:29.551172 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7qxds" Dec 06 10:13:30 crc kubenswrapper[4895]: I1206 10:13:30.344311 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7qxds"] Dec 06 10:13:30 crc kubenswrapper[4895]: I1206 10:13:30.677764 4895 generic.go:334] "Generic (PLEG): container finished" podID="bdc549cf-6d59-4535-8767-e7cd0808d9c1" containerID="9ea423815b6a559f91eb78c86c219a16718a3b97c684cf440d02f09a032a77d0" exitCode=0 Dec 06 10:13:30 crc kubenswrapper[4895]: I1206 10:13:30.677850 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7qxds" event={"ID":"bdc549cf-6d59-4535-8767-e7cd0808d9c1","Type":"ContainerDied","Data":"9ea423815b6a559f91eb78c86c219a16718a3b97c684cf440d02f09a032a77d0"} Dec 06 10:13:30 crc kubenswrapper[4895]: I1206 10:13:30.678198 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7qxds" event={"ID":"bdc549cf-6d59-4535-8767-e7cd0808d9c1","Type":"ContainerStarted","Data":"edd841eeaf9c5e230884dcbdc26105b821e92872bda31e775c61591c4cf0f08a"} Dec 06 10:13:31 crc kubenswrapper[4895]: I1206 10:13:31.691279 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7qxds" event={"ID":"bdc549cf-6d59-4535-8767-e7cd0808d9c1","Type":"ContainerStarted","Data":"bb753067d21978acedb1296d040150e857ae7a8e4f1054abcf4aeea0108e59ee"} Dec 06 10:13:33 crc kubenswrapper[4895]: I1206 10:13:33.710488 4895 generic.go:334] "Generic (PLEG): container finished" podID="bdc549cf-6d59-4535-8767-e7cd0808d9c1" containerID="bb753067d21978acedb1296d040150e857ae7a8e4f1054abcf4aeea0108e59ee" exitCode=0 Dec 06 10:13:33 crc kubenswrapper[4895]: I1206 10:13:33.711117 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7qxds" event={"ID":"bdc549cf-6d59-4535-8767-e7cd0808d9c1","Type":"ContainerDied","Data":"bb753067d21978acedb1296d040150e857ae7a8e4f1054abcf4aeea0108e59ee"} Dec 06 10:13:34 crc kubenswrapper[4895]: I1206 10:13:34.720692 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7qxds" event={"ID":"bdc549cf-6d59-4535-8767-e7cd0808d9c1","Type":"ContainerStarted","Data":"f2a0955a17545d97741ca794379f0be86db87da5781506e3fd6714051e033649"} Dec 06 10:13:38 crc kubenswrapper[4895]: I1206 10:13:38.063245 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:13:38 crc kubenswrapper[4895]: E1206 10:13:38.064069 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:13:39 crc kubenswrapper[4895]: I1206 10:13:39.552147 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7qxds" Dec 06 10:13:39 crc kubenswrapper[4895]: I1206 10:13:39.553589 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7qxds" Dec 06 10:13:39 crc kubenswrapper[4895]: I1206 10:13:39.643990 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7qxds" Dec 06 10:13:39 crc kubenswrapper[4895]: I1206 10:13:39.663171 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7qxds" podStartSLOduration=7.229341747 podStartE2EDuration="10.663154371s" podCreationTimestamp="2025-12-06 10:13:29 +0000 UTC" firstStartedPulling="2025-12-06 10:13:30.679543969 +0000 UTC m=+11773.080932839" lastFinishedPulling="2025-12-06 10:13:34.113356593 +0000 UTC m=+11776.514745463" observedRunningTime="2025-12-06 10:13:34.751260259 +0000 UTC m=+11777.152649149" watchObservedRunningTime="2025-12-06 10:13:39.663154371 +0000 UTC m=+11782.064543241" Dec 06 10:13:39 crc kubenswrapper[4895]: I1206 10:13:39.829346 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7qxds" Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.214316 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7qxds"] Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.215093 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7qxds" podUID="bdc549cf-6d59-4535-8767-e7cd0808d9c1" containerName="registry-server" containerID="cri-o://f2a0955a17545d97741ca794379f0be86db87da5781506e3fd6714051e033649" gracePeriod=2 Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.804749 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7qxds" Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.835379 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc549cf-6d59-4535-8767-e7cd0808d9c1-utilities\") pod \"bdc549cf-6d59-4535-8767-e7cd0808d9c1\" (UID: \"bdc549cf-6d59-4535-8767-e7cd0808d9c1\") " Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.835427 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdc2c\" (UniqueName: \"kubernetes.io/projected/bdc549cf-6d59-4535-8767-e7cd0808d9c1-kube-api-access-tdc2c\") pod \"bdc549cf-6d59-4535-8767-e7cd0808d9c1\" (UID: \"bdc549cf-6d59-4535-8767-e7cd0808d9c1\") " Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.835665 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc549cf-6d59-4535-8767-e7cd0808d9c1-catalog-content\") pod \"bdc549cf-6d59-4535-8767-e7cd0808d9c1\" (UID: \"bdc549cf-6d59-4535-8767-e7cd0808d9c1\") " Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.836163 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdc549cf-6d59-4535-8767-e7cd0808d9c1-utilities" (OuterVolumeSpecName: "utilities") pod "bdc549cf-6d59-4535-8767-e7cd0808d9c1" (UID: "bdc549cf-6d59-4535-8767-e7cd0808d9c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.836670 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc549cf-6d59-4535-8767-e7cd0808d9c1-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.842650 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdc549cf-6d59-4535-8767-e7cd0808d9c1-kube-api-access-tdc2c" (OuterVolumeSpecName: "kube-api-access-tdc2c") pod "bdc549cf-6d59-4535-8767-e7cd0808d9c1" (UID: "bdc549cf-6d59-4535-8767-e7cd0808d9c1"). InnerVolumeSpecName "kube-api-access-tdc2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.852766 4895 generic.go:334] "Generic (PLEG): container finished" podID="bdc549cf-6d59-4535-8767-e7cd0808d9c1" containerID="f2a0955a17545d97741ca794379f0be86db87da5781506e3fd6714051e033649" exitCode=0 Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.852817 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7qxds" event={"ID":"bdc549cf-6d59-4535-8767-e7cd0808d9c1","Type":"ContainerDied","Data":"f2a0955a17545d97741ca794379f0be86db87da5781506e3fd6714051e033649"} Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.852849 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7qxds" event={"ID":"bdc549cf-6d59-4535-8767-e7cd0808d9c1","Type":"ContainerDied","Data":"edd841eeaf9c5e230884dcbdc26105b821e92872bda31e775c61591c4cf0f08a"} Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.852871 4895 scope.go:117] "RemoveContainer" containerID="f2a0955a17545d97741ca794379f0be86db87da5781506e3fd6714051e033649" Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.853067 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7qxds" Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.896746 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdc549cf-6d59-4535-8767-e7cd0808d9c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdc549cf-6d59-4535-8767-e7cd0808d9c1" (UID: "bdc549cf-6d59-4535-8767-e7cd0808d9c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.913392 4895 scope.go:117] "RemoveContainer" containerID="bb753067d21978acedb1296d040150e857ae7a8e4f1054abcf4aeea0108e59ee" Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.940127 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdc2c\" (UniqueName: \"kubernetes.io/projected/bdc549cf-6d59-4535-8767-e7cd0808d9c1-kube-api-access-tdc2c\") on node \"crc\" DevicePath \"\"" Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.940154 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc549cf-6d59-4535-8767-e7cd0808d9c1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.953716 4895 scope.go:117] "RemoveContainer" containerID="9ea423815b6a559f91eb78c86c219a16718a3b97c684cf440d02f09a032a77d0" Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.993050 4895 scope.go:117] "RemoveContainer" containerID="f2a0955a17545d97741ca794379f0be86db87da5781506e3fd6714051e033649" Dec 06 10:13:43 crc kubenswrapper[4895]: E1206 10:13:43.996926 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2a0955a17545d97741ca794379f0be86db87da5781506e3fd6714051e033649\": container with ID starting with f2a0955a17545d97741ca794379f0be86db87da5781506e3fd6714051e033649 not found: ID does not exist" containerID="f2a0955a17545d97741ca794379f0be86db87da5781506e3fd6714051e033649" Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.997031 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2a0955a17545d97741ca794379f0be86db87da5781506e3fd6714051e033649"} err="failed to get container status \"f2a0955a17545d97741ca794379f0be86db87da5781506e3fd6714051e033649\": rpc error: code = NotFound desc = could not find container \"f2a0955a17545d97741ca794379f0be86db87da5781506e3fd6714051e033649\": container with ID starting with f2a0955a17545d97741ca794379f0be86db87da5781506e3fd6714051e033649 not found: ID does not exist" Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.997062 4895 scope.go:117] "RemoveContainer" containerID="bb753067d21978acedb1296d040150e857ae7a8e4f1054abcf4aeea0108e59ee" Dec 06 10:13:43 crc kubenswrapper[4895]: E1206 10:13:43.997525 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb753067d21978acedb1296d040150e857ae7a8e4f1054abcf4aeea0108e59ee\": container with ID starting with bb753067d21978acedb1296d040150e857ae7a8e4f1054abcf4aeea0108e59ee not found: ID does not exist" containerID="bb753067d21978acedb1296d040150e857ae7a8e4f1054abcf4aeea0108e59ee" Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.997601 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb753067d21978acedb1296d040150e857ae7a8e4f1054abcf4aeea0108e59ee"} 
err="failed to get container status \"bb753067d21978acedb1296d040150e857ae7a8e4f1054abcf4aeea0108e59ee\": rpc error: code = NotFound desc = could not find container \"bb753067d21978acedb1296d040150e857ae7a8e4f1054abcf4aeea0108e59ee\": container with ID starting with bb753067d21978acedb1296d040150e857ae7a8e4f1054abcf4aeea0108e59ee not found: ID does not exist" Dec 06 10:13:43 crc kubenswrapper[4895]: I1206 10:13:43.997637 4895 scope.go:117] "RemoveContainer" containerID="9ea423815b6a559f91eb78c86c219a16718a3b97c684cf440d02f09a032a77d0" Dec 06 10:13:44 crc kubenswrapper[4895]: E1206 10:13:43.998011 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ea423815b6a559f91eb78c86c219a16718a3b97c684cf440d02f09a032a77d0\": container with ID starting with 9ea423815b6a559f91eb78c86c219a16718a3b97c684cf440d02f09a032a77d0 not found: ID does not exist" containerID="9ea423815b6a559f91eb78c86c219a16718a3b97c684cf440d02f09a032a77d0" Dec 06 10:13:44 crc kubenswrapper[4895]: I1206 10:13:43.998065 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ea423815b6a559f91eb78c86c219a16718a3b97c684cf440d02f09a032a77d0"} err="failed to get container status \"9ea423815b6a559f91eb78c86c219a16718a3b97c684cf440d02f09a032a77d0\": rpc error: code = NotFound desc = could not find container \"9ea423815b6a559f91eb78c86c219a16718a3b97c684cf440d02f09a032a77d0\": container with ID starting with 9ea423815b6a559f91eb78c86c219a16718a3b97c684cf440d02f09a032a77d0 not found: ID does not exist" Dec 06 10:13:44 crc kubenswrapper[4895]: I1206 10:13:44.186902 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7qxds"] Dec 06 10:13:44 crc kubenswrapper[4895]: I1206 10:13:44.198865 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7qxds"] Dec 06 10:13:46 crc kubenswrapper[4895]: I1206 10:13:46.064111 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdc549cf-6d59-4535-8767-e7cd0808d9c1" path="/var/lib/kubelet/pods/bdc549cf-6d59-4535-8767-e7cd0808d9c1/volumes" Dec 06 10:13:53 crc kubenswrapper[4895]: I1206 10:13:53.051706 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:13:53 crc kubenswrapper[4895]: E1206 10:13:53.052583 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:14:03 crc kubenswrapper[4895]: I1206 10:14:03.765277 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4s52w"] Dec 06 10:14:03 crc kubenswrapper[4895]: E1206 10:14:03.766270 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc549cf-6d59-4535-8767-e7cd0808d9c1" containerName="extract-content" Dec 06 10:14:03 crc kubenswrapper[4895]: I1206 10:14:03.766294 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc549cf-6d59-4535-8767-e7cd0808d9c1" containerName="extract-content" Dec 06 10:14:03 crc kubenswrapper[4895]: E1206 10:14:03.766316 4895 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="bdc549cf-6d59-4535-8767-e7cd0808d9c1" containerName="extract-utilities" Dec 06 10:14:03 crc kubenswrapper[4895]: I1206 10:14:03.766325 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc549cf-6d59-4535-8767-e7cd0808d9c1" containerName="extract-utilities" Dec 06 10:14:03 crc kubenswrapper[4895]: E1206 10:14:03.766342 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc549cf-6d59-4535-8767-e7cd0808d9c1" containerName="registry-server" Dec 06 10:14:03 crc kubenswrapper[4895]: I1206 10:14:03.766348 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc549cf-6d59-4535-8767-e7cd0808d9c1" containerName="registry-server" Dec 06 10:14:03 crc kubenswrapper[4895]: I1206 10:14:03.766585 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdc549cf-6d59-4535-8767-e7cd0808d9c1" containerName="registry-server" Dec 06 10:14:03 crc kubenswrapper[4895]: I1206 10:14:03.780627 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4s52w" Dec 06 10:14:03 crc kubenswrapper[4895]: I1206 10:14:03.783633 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4s52w"] Dec 06 10:14:03 crc kubenswrapper[4895]: I1206 10:14:03.821270 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d09cfe91-e134-42b9-8ec0-d593e2c45ffa-utilities\") pod \"redhat-operators-4s52w\" (UID: \"d09cfe91-e134-42b9-8ec0-d593e2c45ffa\") " pod="openshift-marketplace/redhat-operators-4s52w" Dec 06 10:14:03 crc kubenswrapper[4895]: I1206 10:14:03.821327 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d09cfe91-e134-42b9-8ec0-d593e2c45ffa-catalog-content\") pod \"redhat-operators-4s52w\" (UID: \"d09cfe91-e134-42b9-8ec0-d593e2c45ffa\") " pod="openshift-marketplace/redhat-operators-4s52w" Dec 06 10:14:03 crc kubenswrapper[4895]: I1206 10:14:03.821629 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fshfx\" (UniqueName: \"kubernetes.io/projected/d09cfe91-e134-42b9-8ec0-d593e2c45ffa-kube-api-access-fshfx\") pod \"redhat-operators-4s52w\" (UID: \"d09cfe91-e134-42b9-8ec0-d593e2c45ffa\") " pod="openshift-marketplace/redhat-operators-4s52w" Dec 06 10:14:03 crc kubenswrapper[4895]: I1206 10:14:03.924431 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fshfx\" (UniqueName: \"kubernetes.io/projected/d09cfe91-e134-42b9-8ec0-d593e2c45ffa-kube-api-access-fshfx\") pod \"redhat-operators-4s52w\" (UID: \"d09cfe91-e134-42b9-8ec0-d593e2c45ffa\") " pod="openshift-marketplace/redhat-operators-4s52w" Dec 06 10:14:03 crc kubenswrapper[4895]: I1206 10:14:03.924632 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d09cfe91-e134-42b9-8ec0-d593e2c45ffa-utilities\") pod \"redhat-operators-4s52w\" (UID: \"d09cfe91-e134-42b9-8ec0-d593e2c45ffa\") " pod="openshift-marketplace/redhat-operators-4s52w" Dec 06 10:14:03 crc kubenswrapper[4895]: I1206 10:14:03.924665 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d09cfe91-e134-42b9-8ec0-d593e2c45ffa-catalog-content\") pod \"redhat-operators-4s52w\" (UID: 
\"d09cfe91-e134-42b9-8ec0-d593e2c45ffa\") " pod="openshift-marketplace/redhat-operators-4s52w" Dec 06 10:14:03 crc kubenswrapper[4895]: I1206 10:14:03.925275 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d09cfe91-e134-42b9-8ec0-d593e2c45ffa-utilities\") pod \"redhat-operators-4s52w\" (UID: \"d09cfe91-e134-42b9-8ec0-d593e2c45ffa\") " pod="openshift-marketplace/redhat-operators-4s52w" Dec 06 10:14:03 crc kubenswrapper[4895]: I1206 10:14:03.925303 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d09cfe91-e134-42b9-8ec0-d593e2c45ffa-catalog-content\") pod \"redhat-operators-4s52w\" (UID: \"d09cfe91-e134-42b9-8ec0-d593e2c45ffa\") " pod="openshift-marketplace/redhat-operators-4s52w" Dec 06 10:14:03 crc kubenswrapper[4895]: I1206 10:14:03.941812 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fshfx\" (UniqueName: \"kubernetes.io/projected/d09cfe91-e134-42b9-8ec0-d593e2c45ffa-kube-api-access-fshfx\") pod \"redhat-operators-4s52w\" (UID: \"d09cfe91-e134-42b9-8ec0-d593e2c45ffa\") " pod="openshift-marketplace/redhat-operators-4s52w" Dec 06 10:14:04 crc kubenswrapper[4895]: I1206 10:14:04.140926 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4s52w" Dec 06 10:14:04 crc kubenswrapper[4895]: I1206 10:14:04.641734 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4s52w"] Dec 06 10:14:05 crc kubenswrapper[4895]: I1206 10:14:05.075069 4895 generic.go:334] "Generic (PLEG): container finished" podID="d09cfe91-e134-42b9-8ec0-d593e2c45ffa" containerID="ba08f09217598ebf62a43ecb2a0f6919ac71e70901bc711d762d6ab1354eec56" exitCode=0 Dec 06 10:14:05 crc kubenswrapper[4895]: I1206 10:14:05.075129 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s52w" event={"ID":"d09cfe91-e134-42b9-8ec0-d593e2c45ffa","Type":"ContainerDied","Data":"ba08f09217598ebf62a43ecb2a0f6919ac71e70901bc711d762d6ab1354eec56"} Dec 06 10:14:05 crc kubenswrapper[4895]: I1206 10:14:05.075391 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s52w" event={"ID":"d09cfe91-e134-42b9-8ec0-d593e2c45ffa","Type":"ContainerStarted","Data":"a1f16e66c0e0063acfec748c6834e7bfa9a94c44a312be196e169f11877fc58e"} Dec 06 10:14:06 crc kubenswrapper[4895]: I1206 10:14:06.058826 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:14:06 crc kubenswrapper[4895]: E1206 10:14:06.059515 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:14:06 crc kubenswrapper[4895]: I1206 10:14:06.089298 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s52w" event={"ID":"d09cfe91-e134-42b9-8ec0-d593e2c45ffa","Type":"ContainerStarted","Data":"a77f60066ffdf913d89352a454bd4426bd11cc9377f84deea62618073ca25e03"} Dec 06 10:14:10 crc kubenswrapper[4895]: I1206 10:14:10.142219 
4895 generic.go:334] "Generic (PLEG): container finished" podID="d09cfe91-e134-42b9-8ec0-d593e2c45ffa" containerID="a77f60066ffdf913d89352a454bd4426bd11cc9377f84deea62618073ca25e03" exitCode=0 Dec 06 10:14:10 crc kubenswrapper[4895]: I1206 10:14:10.142331 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s52w" event={"ID":"d09cfe91-e134-42b9-8ec0-d593e2c45ffa","Type":"ContainerDied","Data":"a77f60066ffdf913d89352a454bd4426bd11cc9377f84deea62618073ca25e03"} Dec 06 10:14:11 crc kubenswrapper[4895]: I1206 10:14:11.155757 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s52w" event={"ID":"d09cfe91-e134-42b9-8ec0-d593e2c45ffa","Type":"ContainerStarted","Data":"331acf91e8d7c75cdfaf3f0e8002d6b369dadfc574389938f6c167d3d1a1e9b6"} Dec 06 10:14:11 crc kubenswrapper[4895]: I1206 10:14:11.189940 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4s52w" podStartSLOduration=2.6641276229999997 podStartE2EDuration="8.189917422s" podCreationTimestamp="2025-12-06 10:14:03 +0000 UTC" firstStartedPulling="2025-12-06 10:14:05.076633367 +0000 UTC m=+11807.478022237" lastFinishedPulling="2025-12-06 10:14:10.602423156 +0000 UTC m=+11813.003812036" observedRunningTime="2025-12-06 10:14:11.178757491 +0000 UTC m=+11813.580146361" watchObservedRunningTime="2025-12-06 10:14:11.189917422 +0000 UTC m=+11813.591306292" Dec 06 10:14:14 crc kubenswrapper[4895]: I1206 10:14:14.142426 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4s52w" Dec 06 10:14:14 crc kubenswrapper[4895]: I1206 10:14:14.142770 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4s52w" Dec 06 10:14:15 crc kubenswrapper[4895]: I1206 10:14:15.197448 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4s52w" podUID="d09cfe91-e134-42b9-8ec0-d593e2c45ffa" containerName="registry-server" probeResult="failure" output=< Dec 06 10:14:15 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Dec 06 10:14:15 crc kubenswrapper[4895]: > Dec 06 10:14:18 crc kubenswrapper[4895]: I1206 10:14:18.065726 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:14:18 crc kubenswrapper[4895]: E1206 10:14:18.066772 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:14:24 crc kubenswrapper[4895]: I1206 10:14:24.191633 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4s52w" Dec 06 10:14:24 crc kubenswrapper[4895]: I1206 10:14:24.240948 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4s52w" Dec 06 10:14:24 crc kubenswrapper[4895]: I1206 10:14:24.426668 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4s52w"] Dec 06 10:14:25 crc kubenswrapper[4895]: I1206 10:14:25.297246 4895 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4s52w" podUID="d09cfe91-e134-42b9-8ec0-d593e2c45ffa" containerName="registry-server" containerID="cri-o://331acf91e8d7c75cdfaf3f0e8002d6b369dadfc574389938f6c167d3d1a1e9b6" gracePeriod=2 Dec 06 10:14:25 crc kubenswrapper[4895]: I1206 10:14:25.953831 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4s52w" Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.041207 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d09cfe91-e134-42b9-8ec0-d593e2c45ffa-catalog-content\") pod \"d09cfe91-e134-42b9-8ec0-d593e2c45ffa\" (UID: \"d09cfe91-e134-42b9-8ec0-d593e2c45ffa\") " Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.041513 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fshfx\" (UniqueName: \"kubernetes.io/projected/d09cfe91-e134-42b9-8ec0-d593e2c45ffa-kube-api-access-fshfx\") pod \"d09cfe91-e134-42b9-8ec0-d593e2c45ffa\" (UID: \"d09cfe91-e134-42b9-8ec0-d593e2c45ffa\") " Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.041592 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d09cfe91-e134-42b9-8ec0-d593e2c45ffa-utilities\") pod \"d09cfe91-e134-42b9-8ec0-d593e2c45ffa\" (UID: \"d09cfe91-e134-42b9-8ec0-d593e2c45ffa\") " Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.042235 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d09cfe91-e134-42b9-8ec0-d593e2c45ffa-utilities" (OuterVolumeSpecName: "utilities") pod "d09cfe91-e134-42b9-8ec0-d593e2c45ffa" (UID: "d09cfe91-e134-42b9-8ec0-d593e2c45ffa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.050856 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d09cfe91-e134-42b9-8ec0-d593e2c45ffa-kube-api-access-fshfx" (OuterVolumeSpecName: "kube-api-access-fshfx") pod "d09cfe91-e134-42b9-8ec0-d593e2c45ffa" (UID: "d09cfe91-e134-42b9-8ec0-d593e2c45ffa"). InnerVolumeSpecName "kube-api-access-fshfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.141198 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d09cfe91-e134-42b9-8ec0-d593e2c45ffa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d09cfe91-e134-42b9-8ec0-d593e2c45ffa" (UID: "d09cfe91-e134-42b9-8ec0-d593e2c45ffa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.143860 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fshfx\" (UniqueName: \"kubernetes.io/projected/d09cfe91-e134-42b9-8ec0-d593e2c45ffa-kube-api-access-fshfx\") on node \"crc\" DevicePath \"\"" Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.143890 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d09cfe91-e134-42b9-8ec0-d593e2c45ffa-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.143899 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d09cfe91-e134-42b9-8ec0-d593e2c45ffa-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.308575 4895 generic.go:334] "Generic (PLEG): container finished" podID="d09cfe91-e134-42b9-8ec0-d593e2c45ffa" containerID="331acf91e8d7c75cdfaf3f0e8002d6b369dadfc574389938f6c167d3d1a1e9b6" exitCode=0 Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.308624 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s52w" event={"ID":"d09cfe91-e134-42b9-8ec0-d593e2c45ffa","Type":"ContainerDied","Data":"331acf91e8d7c75cdfaf3f0e8002d6b369dadfc574389938f6c167d3d1a1e9b6"} Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.308652 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s52w" event={"ID":"d09cfe91-e134-42b9-8ec0-d593e2c45ffa","Type":"ContainerDied","Data":"a1f16e66c0e0063acfec748c6834e7bfa9a94c44a312be196e169f11877fc58e"} Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.308669 4895 scope.go:117] "RemoveContainer" containerID="331acf91e8d7c75cdfaf3f0e8002d6b369dadfc574389938f6c167d3d1a1e9b6" Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.309799 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4s52w" Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.373420 4895 scope.go:117] "RemoveContainer" containerID="a77f60066ffdf913d89352a454bd4426bd11cc9377f84deea62618073ca25e03" Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.380310 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4s52w"] Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.398597 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4s52w"] Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.409678 4895 scope.go:117] "RemoveContainer" containerID="ba08f09217598ebf62a43ecb2a0f6919ac71e70901bc711d762d6ab1354eec56" Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.446749 4895 scope.go:117] "RemoveContainer" containerID="331acf91e8d7c75cdfaf3f0e8002d6b369dadfc574389938f6c167d3d1a1e9b6" Dec 06 10:14:26 crc kubenswrapper[4895]: E1206 10:14:26.449935 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"331acf91e8d7c75cdfaf3f0e8002d6b369dadfc574389938f6c167d3d1a1e9b6\": container with ID starting with 331acf91e8d7c75cdfaf3f0e8002d6b369dadfc574389938f6c167d3d1a1e9b6 not found: ID does not exist" containerID="331acf91e8d7c75cdfaf3f0e8002d6b369dadfc574389938f6c167d3d1a1e9b6" Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.449984 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"331acf91e8d7c75cdfaf3f0e8002d6b369dadfc574389938f6c167d3d1a1e9b6"} err="failed to get container status \"331acf91e8d7c75cdfaf3f0e8002d6b369dadfc574389938f6c167d3d1a1e9b6\": rpc error: code = NotFound desc = could not find container \"331acf91e8d7c75cdfaf3f0e8002d6b369dadfc574389938f6c167d3d1a1e9b6\": container with ID starting with 331acf91e8d7c75cdfaf3f0e8002d6b369dadfc574389938f6c167d3d1a1e9b6 not found: ID does not exist" Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.450012 4895 scope.go:117] "RemoveContainer" containerID="a77f60066ffdf913d89352a454bd4426bd11cc9377f84deea62618073ca25e03" Dec 06 10:14:26 crc kubenswrapper[4895]: E1206 10:14:26.450783 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a77f60066ffdf913d89352a454bd4426bd11cc9377f84deea62618073ca25e03\": container with ID starting with a77f60066ffdf913d89352a454bd4426bd11cc9377f84deea62618073ca25e03 not found: ID does not exist" containerID="a77f60066ffdf913d89352a454bd4426bd11cc9377f84deea62618073ca25e03" Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.450881 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a77f60066ffdf913d89352a454bd4426bd11cc9377f84deea62618073ca25e03"} err="failed to get container status \"a77f60066ffdf913d89352a454bd4426bd11cc9377f84deea62618073ca25e03\": rpc error: code = NotFound desc = could not find container \"a77f60066ffdf913d89352a454bd4426bd11cc9377f84deea62618073ca25e03\": container with ID starting with a77f60066ffdf913d89352a454bd4426bd11cc9377f84deea62618073ca25e03 not found: ID does not exist" Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.450975 4895 scope.go:117] "RemoveContainer" containerID="ba08f09217598ebf62a43ecb2a0f6919ac71e70901bc711d762d6ab1354eec56" Dec 06 10:14:26 crc kubenswrapper[4895]: E1206 10:14:26.451576 4895 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"ba08f09217598ebf62a43ecb2a0f6919ac71e70901bc711d762d6ab1354eec56\": container with ID starting with ba08f09217598ebf62a43ecb2a0f6919ac71e70901bc711d762d6ab1354eec56 not found: ID does not exist" containerID="ba08f09217598ebf62a43ecb2a0f6919ac71e70901bc711d762d6ab1354eec56" Dec 06 10:14:26 crc kubenswrapper[4895]: I1206 10:14:26.451604 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba08f09217598ebf62a43ecb2a0f6919ac71e70901bc711d762d6ab1354eec56"} err="failed to get container status \"ba08f09217598ebf62a43ecb2a0f6919ac71e70901bc711d762d6ab1354eec56\": rpc error: code = NotFound desc = could not find container \"ba08f09217598ebf62a43ecb2a0f6919ac71e70901bc711d762d6ab1354eec56\": container with ID starting with ba08f09217598ebf62a43ecb2a0f6919ac71e70901bc711d762d6ab1354eec56 not found: ID does not exist" Dec 06 10:14:28 crc kubenswrapper[4895]: I1206 10:14:28.066853 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d09cfe91-e134-42b9-8ec0-d593e2c45ffa" path="/var/lib/kubelet/pods/d09cfe91-e134-42b9-8ec0-d593e2c45ffa/volumes" Dec 06 10:14:29 crc kubenswrapper[4895]: I1206 10:14:29.051630 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:14:29 crc kubenswrapper[4895]: E1206 10:14:29.052379 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:14:40 crc kubenswrapper[4895]: I1206 10:14:40.051570 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:14:40 crc kubenswrapper[4895]: E1206 10:14:40.052266 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:14:52 crc kubenswrapper[4895]: I1206 10:14:52.051017 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:14:52 crc kubenswrapper[4895]: E1206 10:14:52.051860 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:15:00 crc kubenswrapper[4895]: I1206 10:15:00.171415 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416935-2l5sz"] Dec 06 10:15:00 crc kubenswrapper[4895]: E1206 10:15:00.172632 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d09cfe91-e134-42b9-8ec0-d593e2c45ffa" containerName="extract-content" Dec 06 10:15:00 crc kubenswrapper[4895]: I1206 10:15:00.172675 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d09cfe91-e134-42b9-8ec0-d593e2c45ffa" containerName="extract-content" Dec 06 10:15:00 crc kubenswrapper[4895]: E1206 10:15:00.172690 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d09cfe91-e134-42b9-8ec0-d593e2c45ffa" containerName="registry-server" Dec 06 10:15:00 crc kubenswrapper[4895]: I1206 10:15:00.172698 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d09cfe91-e134-42b9-8ec0-d593e2c45ffa" containerName="registry-server" Dec 06 10:15:00 crc kubenswrapper[4895]: E1206 10:15:00.172748 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d09cfe91-e134-42b9-8ec0-d593e2c45ffa" containerName="extract-utilities" Dec 06 10:15:00 crc kubenswrapper[4895]: I1206 10:15:00.172759 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d09cfe91-e134-42b9-8ec0-d593e2c45ffa" containerName="extract-utilities" Dec 06 10:15:00 crc kubenswrapper[4895]: I1206 10:15:00.173045 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d09cfe91-e134-42b9-8ec0-d593e2c45ffa" containerName="registry-server" Dec 06 10:15:00 crc kubenswrapper[4895]: I1206 10:15:00.174045 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416935-2l5sz" Dec 06 10:15:00 crc kubenswrapper[4895]: I1206 10:15:00.177776 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 10:15:00 crc kubenswrapper[4895]: I1206 10:15:00.178067 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 10:15:00 crc kubenswrapper[4895]: I1206 10:15:00.191054 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416935-2l5sz"] Dec 06 10:15:00 crc kubenswrapper[4895]: I1206 10:15:00.211876 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqf5j\" (UniqueName: \"kubernetes.io/projected/bca7976e-49dd-49d4-a795-fb6c1dd7edda-kube-api-access-fqf5j\") pod \"collect-profiles-29416935-2l5sz\" (UID: \"bca7976e-49dd-49d4-a795-fb6c1dd7edda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416935-2l5sz" Dec 06 10:15:00 crc kubenswrapper[4895]: I1206 10:15:00.212127 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bca7976e-49dd-49d4-a795-fb6c1dd7edda-secret-volume\") pod \"collect-profiles-29416935-2l5sz\" (UID: \"bca7976e-49dd-49d4-a795-fb6c1dd7edda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416935-2l5sz" Dec 06 10:15:00 crc kubenswrapper[4895]: I1206 10:15:00.212276 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bca7976e-49dd-49d4-a795-fb6c1dd7edda-config-volume\") pod \"collect-profiles-29416935-2l5sz\" (UID: \"bca7976e-49dd-49d4-a795-fb6c1dd7edda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416935-2l5sz" Dec 06 10:15:00 crc kubenswrapper[4895]: I1206 10:15:00.314059 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/bca7976e-49dd-49d4-a795-fb6c1dd7edda-config-volume\") pod \"collect-profiles-29416935-2l5sz\" (UID: \"bca7976e-49dd-49d4-a795-fb6c1dd7edda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416935-2l5sz" Dec 06 10:15:00 crc kubenswrapper[4895]: I1206 10:15:00.314122 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqf5j\" (UniqueName: \"kubernetes.io/projected/bca7976e-49dd-49d4-a795-fb6c1dd7edda-kube-api-access-fqf5j\") pod \"collect-profiles-29416935-2l5sz\" (UID: \"bca7976e-49dd-49d4-a795-fb6c1dd7edda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416935-2l5sz" Dec 06 10:15:00 crc kubenswrapper[4895]: I1206 10:15:00.314212 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bca7976e-49dd-49d4-a795-fb6c1dd7edda-secret-volume\") pod \"collect-profiles-29416935-2l5sz\" (UID: \"bca7976e-49dd-49d4-a795-fb6c1dd7edda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416935-2l5sz" Dec 06 10:15:00 crc kubenswrapper[4895]: I1206 10:15:00.315433 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bca7976e-49dd-49d4-a795-fb6c1dd7edda-config-volume\") pod \"collect-profiles-29416935-2l5sz\" (UID: \"bca7976e-49dd-49d4-a795-fb6c1dd7edda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416935-2l5sz" Dec 06 10:15:00 crc kubenswrapper[4895]: I1206 10:15:00.319142 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bca7976e-49dd-49d4-a795-fb6c1dd7edda-secret-volume\") pod \"collect-profiles-29416935-2l5sz\" (UID: \"bca7976e-49dd-49d4-a795-fb6c1dd7edda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416935-2l5sz" Dec 06 10:15:00 crc kubenswrapper[4895]: I1206 10:15:00.332745 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqf5j\" (UniqueName: \"kubernetes.io/projected/bca7976e-49dd-49d4-a795-fb6c1dd7edda-kube-api-access-fqf5j\") pod \"collect-profiles-29416935-2l5sz\" (UID: \"bca7976e-49dd-49d4-a795-fb6c1dd7edda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416935-2l5sz" Dec 06 10:15:00 crc kubenswrapper[4895]: I1206 10:15:00.500963 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416935-2l5sz" Dec 06 10:15:00 crc kubenswrapper[4895]: I1206 10:15:00.983640 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416935-2l5sz"] Dec 06 10:15:01 crc kubenswrapper[4895]: I1206 10:15:01.732781 4895 generic.go:334] "Generic (PLEG): container finished" podID="bca7976e-49dd-49d4-a795-fb6c1dd7edda" containerID="90bdadb61a6aeb7f25d70b06773eef8b325dc14f3b5a7e9834c224557a0c8c5c" exitCode=0 Dec 06 10:15:01 crc kubenswrapper[4895]: I1206 10:15:01.732850 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416935-2l5sz" event={"ID":"bca7976e-49dd-49d4-a795-fb6c1dd7edda","Type":"ContainerDied","Data":"90bdadb61a6aeb7f25d70b06773eef8b325dc14f3b5a7e9834c224557a0c8c5c"} Dec 06 10:15:01 crc kubenswrapper[4895]: I1206 10:15:01.733279 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416935-2l5sz" event={"ID":"bca7976e-49dd-49d4-a795-fb6c1dd7edda","Type":"ContainerStarted","Data":"36b364f7d63c332568c39ed675b8841197f3da35f64de4b5bb277a97c1a17383"} Dec 06 10:15:03 crc kubenswrapper[4895]: I1206 10:15:03.257114 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416935-2l5sz" Dec 06 10:15:03 crc kubenswrapper[4895]: I1206 10:15:03.430760 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqf5j\" (UniqueName: \"kubernetes.io/projected/bca7976e-49dd-49d4-a795-fb6c1dd7edda-kube-api-access-fqf5j\") pod \"bca7976e-49dd-49d4-a795-fb6c1dd7edda\" (UID: \"bca7976e-49dd-49d4-a795-fb6c1dd7edda\") " Dec 06 10:15:03 crc kubenswrapper[4895]: I1206 10:15:03.430926 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bca7976e-49dd-49d4-a795-fb6c1dd7edda-config-volume\") pod \"bca7976e-49dd-49d4-a795-fb6c1dd7edda\" (UID: \"bca7976e-49dd-49d4-a795-fb6c1dd7edda\") " Dec 06 10:15:03 crc kubenswrapper[4895]: I1206 10:15:03.432374 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bca7976e-49dd-49d4-a795-fb6c1dd7edda-config-volume" (OuterVolumeSpecName: "config-volume") pod "bca7976e-49dd-49d4-a795-fb6c1dd7edda" (UID: "bca7976e-49dd-49d4-a795-fb6c1dd7edda"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 10:15:03 crc kubenswrapper[4895]: I1206 10:15:03.438926 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bca7976e-49dd-49d4-a795-fb6c1dd7edda-kube-api-access-fqf5j" (OuterVolumeSpecName: "kube-api-access-fqf5j") pod "bca7976e-49dd-49d4-a795-fb6c1dd7edda" (UID: "bca7976e-49dd-49d4-a795-fb6c1dd7edda"). InnerVolumeSpecName "kube-api-access-fqf5j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:15:03 crc kubenswrapper[4895]: I1206 10:15:03.537416 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bca7976e-49dd-49d4-a795-fb6c1dd7edda-secret-volume\") pod \"bca7976e-49dd-49d4-a795-fb6c1dd7edda\" (UID: \"bca7976e-49dd-49d4-a795-fb6c1dd7edda\") " Dec 06 10:15:03 crc kubenswrapper[4895]: I1206 10:15:03.538041 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bca7976e-49dd-49d4-a795-fb6c1dd7edda-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 10:15:03 crc kubenswrapper[4895]: I1206 10:15:03.538063 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqf5j\" (UniqueName: \"kubernetes.io/projected/bca7976e-49dd-49d4-a795-fb6c1dd7edda-kube-api-access-fqf5j\") on node \"crc\" DevicePath \"\"" Dec 06 10:15:03 crc kubenswrapper[4895]: I1206 10:15:03.543444 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bca7976e-49dd-49d4-a795-fb6c1dd7edda-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bca7976e-49dd-49d4-a795-fb6c1dd7edda" (UID: "bca7976e-49dd-49d4-a795-fb6c1dd7edda"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 10:15:03 crc kubenswrapper[4895]: I1206 10:15:03.640504 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bca7976e-49dd-49d4-a795-fb6c1dd7edda-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 10:15:03 crc kubenswrapper[4895]: I1206 10:15:03.765502 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416935-2l5sz" event={"ID":"bca7976e-49dd-49d4-a795-fb6c1dd7edda","Type":"ContainerDied","Data":"36b364f7d63c332568c39ed675b8841197f3da35f64de4b5bb277a97c1a17383"} Dec 06 10:15:03 crc kubenswrapper[4895]: I1206 10:15:03.765546 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36b364f7d63c332568c39ed675b8841197f3da35f64de4b5bb277a97c1a17383" Dec 06 10:15:03 crc kubenswrapper[4895]: I1206 10:15:03.765558 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416935-2l5sz" Dec 06 10:15:04 crc kubenswrapper[4895]: I1206 10:15:04.051267 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:15:04 crc kubenswrapper[4895]: E1206 10:15:04.052011 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:15:04 crc kubenswrapper[4895]: I1206 10:15:04.412920 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w"] Dec 06 10:15:04 crc kubenswrapper[4895]: I1206 10:15:04.425630 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416890-qfr9w"] Dec 06 10:15:06 crc kubenswrapper[4895]: I1206 10:15:06.062924 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3de51898-9fcb-4640-8e5b-710a2d1588e5" path="/var/lib/kubelet/pods/3de51898-9fcb-4640-8e5b-710a2d1588e5/volumes" Dec 06 10:15:18 crc kubenswrapper[4895]: I1206 10:15:18.059103 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:15:18 crc kubenswrapper[4895]: E1206 10:15:18.059846 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:15:22 crc kubenswrapper[4895]: I1206 10:15:22.650992 4895 scope.go:117] "RemoveContainer" containerID="734d6663f4ecbba1eb1830cbd25747bad816f76a30561f763406f500d68165f9" Dec 06 10:15:33 crc kubenswrapper[4895]: I1206 10:15:33.051541 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:15:34 crc kubenswrapper[4895]: I1206 10:15:34.216681 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"31eff5f73ad3afdc32c8f687852778328cd0e3ddb549cd8a0e879e19c4d0ee4f"} Dec 06 10:17:37 crc kubenswrapper[4895]: I1206 10:17:37.476767 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xslfp"] Dec 06 10:17:37 crc kubenswrapper[4895]: E1206 10:17:37.477825 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca7976e-49dd-49d4-a795-fb6c1dd7edda" containerName="collect-profiles" Dec 06 10:17:37 crc kubenswrapper[4895]: I1206 10:17:37.477842 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca7976e-49dd-49d4-a795-fb6c1dd7edda" containerName="collect-profiles" Dec 06 10:17:37 crc kubenswrapper[4895]: I1206 10:17:37.478062 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="bca7976e-49dd-49d4-a795-fb6c1dd7edda" containerName="collect-profiles" Dec 
06 10:17:37 crc kubenswrapper[4895]: I1206 10:17:37.479549 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xslfp" Dec 06 10:17:37 crc kubenswrapper[4895]: I1206 10:17:37.507458 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xslfp"] Dec 06 10:17:37 crc kubenswrapper[4895]: I1206 10:17:37.541010 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79b7557a-e175-42fb-b90c-3cb938bcfd91-utilities\") pod \"redhat-marketplace-xslfp\" (UID: \"79b7557a-e175-42fb-b90c-3cb938bcfd91\") " pod="openshift-marketplace/redhat-marketplace-xslfp" Dec 06 10:17:37 crc kubenswrapper[4895]: I1206 10:17:37.541230 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkfbw\" (UniqueName: \"kubernetes.io/projected/79b7557a-e175-42fb-b90c-3cb938bcfd91-kube-api-access-fkfbw\") pod \"redhat-marketplace-xslfp\" (UID: \"79b7557a-e175-42fb-b90c-3cb938bcfd91\") " pod="openshift-marketplace/redhat-marketplace-xslfp" Dec 06 10:17:37 crc kubenswrapper[4895]: I1206 10:17:37.541402 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79b7557a-e175-42fb-b90c-3cb938bcfd91-catalog-content\") pod \"redhat-marketplace-xslfp\" (UID: \"79b7557a-e175-42fb-b90c-3cb938bcfd91\") " pod="openshift-marketplace/redhat-marketplace-xslfp" Dec 06 10:17:37 crc kubenswrapper[4895]: I1206 10:17:37.643146 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79b7557a-e175-42fb-b90c-3cb938bcfd91-catalog-content\") pod \"redhat-marketplace-xslfp\" (UID: \"79b7557a-e175-42fb-b90c-3cb938bcfd91\") " pod="openshift-marketplace/redhat-marketplace-xslfp" Dec 06 10:17:37 crc kubenswrapper[4895]: I1206 10:17:37.643505 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79b7557a-e175-42fb-b90c-3cb938bcfd91-utilities\") pod \"redhat-marketplace-xslfp\" (UID: \"79b7557a-e175-42fb-b90c-3cb938bcfd91\") " pod="openshift-marketplace/redhat-marketplace-xslfp" Dec 06 10:17:37 crc kubenswrapper[4895]: I1206 10:17:37.643732 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79b7557a-e175-42fb-b90c-3cb938bcfd91-catalog-content\") pod \"redhat-marketplace-xslfp\" (UID: \"79b7557a-e175-42fb-b90c-3cb938bcfd91\") " pod="openshift-marketplace/redhat-marketplace-xslfp" Dec 06 10:17:37 crc kubenswrapper[4895]: I1206 10:17:37.643884 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79b7557a-e175-42fb-b90c-3cb938bcfd91-utilities\") pod \"redhat-marketplace-xslfp\" (UID: \"79b7557a-e175-42fb-b90c-3cb938bcfd91\") " pod="openshift-marketplace/redhat-marketplace-xslfp" Dec 06 10:17:37 crc kubenswrapper[4895]: I1206 10:17:37.643896 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkfbw\" (UniqueName: \"kubernetes.io/projected/79b7557a-e175-42fb-b90c-3cb938bcfd91-kube-api-access-fkfbw\") pod \"redhat-marketplace-xslfp\" (UID: \"79b7557a-e175-42fb-b90c-3cb938bcfd91\") " pod="openshift-marketplace/redhat-marketplace-xslfp" Dec 
06 10:17:37 crc kubenswrapper[4895]: I1206 10:17:37.665811 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkfbw\" (UniqueName: \"kubernetes.io/projected/79b7557a-e175-42fb-b90c-3cb938bcfd91-kube-api-access-fkfbw\") pod \"redhat-marketplace-xslfp\" (UID: \"79b7557a-e175-42fb-b90c-3cb938bcfd91\") " pod="openshift-marketplace/redhat-marketplace-xslfp" Dec 06 10:17:37 crc kubenswrapper[4895]: I1206 10:17:37.853740 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xslfp" Dec 06 10:17:38 crc kubenswrapper[4895]: I1206 10:17:38.391993 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xslfp"] Dec 06 10:17:38 crc kubenswrapper[4895]: I1206 10:17:38.649534 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xslfp" event={"ID":"79b7557a-e175-42fb-b90c-3cb938bcfd91","Type":"ContainerStarted","Data":"a7fca623f4d4e96a477ae8f53f161957bfb0d0eb6ef0ef1c20342ab732d93731"} Dec 06 10:17:38 crc kubenswrapper[4895]: I1206 10:17:38.649865 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xslfp" event={"ID":"79b7557a-e175-42fb-b90c-3cb938bcfd91","Type":"ContainerStarted","Data":"70bea25748080de713c78fb461869414b4dcbf0f1ab7cced6acad9c388bc6a80"} Dec 06 10:17:38 crc kubenswrapper[4895]: I1206 10:17:38.653084 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 10:17:39 crc kubenswrapper[4895]: I1206 10:17:39.678028 4895 generic.go:334] "Generic (PLEG): container finished" podID="79b7557a-e175-42fb-b90c-3cb938bcfd91" containerID="a7fca623f4d4e96a477ae8f53f161957bfb0d0eb6ef0ef1c20342ab732d93731" exitCode=0 Dec 06 10:17:39 crc kubenswrapper[4895]: I1206 10:17:39.679139 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xslfp" event={"ID":"79b7557a-e175-42fb-b90c-3cb938bcfd91","Type":"ContainerDied","Data":"a7fca623f4d4e96a477ae8f53f161957bfb0d0eb6ef0ef1c20342ab732d93731"} Dec 06 10:17:39 crc kubenswrapper[4895]: I1206 10:17:39.679197 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xslfp" event={"ID":"79b7557a-e175-42fb-b90c-3cb938bcfd91","Type":"ContainerStarted","Data":"b84503f0b854929ca1358586253aba2e1a9af15b3f809dff8f1f6507834b232c"} Dec 06 10:17:40 crc kubenswrapper[4895]: I1206 10:17:40.690988 4895 generic.go:334] "Generic (PLEG): container finished" podID="79b7557a-e175-42fb-b90c-3cb938bcfd91" containerID="b84503f0b854929ca1358586253aba2e1a9af15b3f809dff8f1f6507834b232c" exitCode=0 Dec 06 10:17:40 crc kubenswrapper[4895]: I1206 10:17:40.691054 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xslfp" event={"ID":"79b7557a-e175-42fb-b90c-3cb938bcfd91","Type":"ContainerDied","Data":"b84503f0b854929ca1358586253aba2e1a9af15b3f809dff8f1f6507834b232c"} Dec 06 10:17:41 crc kubenswrapper[4895]: I1206 10:17:41.704659 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xslfp" event={"ID":"79b7557a-e175-42fb-b90c-3cb938bcfd91","Type":"ContainerStarted","Data":"e7dc5af6f173b1e3eb0c0eea937ba27bbe16d74bbce28c8ec238497e12b2c5d2"} Dec 06 10:17:41 crc kubenswrapper[4895]: I1206 10:17:41.722091 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-xslfp" podStartSLOduration=2.275389432 podStartE2EDuration="4.722047852s" podCreationTimestamp="2025-12-06 10:17:37 +0000 UTC" firstStartedPulling="2025-12-06 10:17:38.652630136 +0000 UTC m=+12021.054019026" lastFinishedPulling="2025-12-06 10:17:41.099288536 +0000 UTC m=+12023.500677446" observedRunningTime="2025-12-06 10:17:41.719558735 +0000 UTC m=+12024.120947605" watchObservedRunningTime="2025-12-06 10:17:41.722047852 +0000 UTC m=+12024.123436722" Dec 06 10:17:47 crc kubenswrapper[4895]: I1206 10:17:47.854455 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xslfp" Dec 06 10:17:47 crc kubenswrapper[4895]: I1206 10:17:47.854926 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xslfp" Dec 06 10:17:47 crc kubenswrapper[4895]: I1206 10:17:47.944380 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xslfp" Dec 06 10:17:48 crc kubenswrapper[4895]: I1206 10:17:48.889829 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xslfp" Dec 06 10:17:51 crc kubenswrapper[4895]: I1206 10:17:51.004043 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xslfp"] Dec 06 10:17:51 crc kubenswrapper[4895]: I1206 10:17:51.004773 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xslfp" podUID="79b7557a-e175-42fb-b90c-3cb938bcfd91" containerName="registry-server" containerID="cri-o://e7dc5af6f173b1e3eb0c0eea937ba27bbe16d74bbce28c8ec238497e12b2c5d2" gracePeriod=2 Dec 06 10:17:51 crc kubenswrapper[4895]: E1206 10:17:51.228968 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79b7557a_e175_42fb_b90c_3cb938bcfd91.slice/crio-conmon-e7dc5af6f173b1e3eb0c0eea937ba27bbe16d74bbce28c8ec238497e12b2c5d2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79b7557a_e175_42fb_b90c_3cb938bcfd91.slice/crio-e7dc5af6f173b1e3eb0c0eea937ba27bbe16d74bbce28c8ec238497e12b2c5d2.scope\": RecentStats: unable to find data in memory cache]" Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:51.823298 4895 util.go:48] "No ready sandbox for pod can be found. 
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:51.834717 4895 generic.go:334] "Generic (PLEG): container finished" podID="79b7557a-e175-42fb-b90c-3cb938bcfd91" containerID="e7dc5af6f173b1e3eb0c0eea937ba27bbe16d74bbce28c8ec238497e12b2c5d2" exitCode=0
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:51.835077 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xslfp" event={"ID":"79b7557a-e175-42fb-b90c-3cb938bcfd91","Type":"ContainerDied","Data":"e7dc5af6f173b1e3eb0c0eea937ba27bbe16d74bbce28c8ec238497e12b2c5d2"}
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:51.835105 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xslfp" event={"ID":"79b7557a-e175-42fb-b90c-3cb938bcfd91","Type":"ContainerDied","Data":"70bea25748080de713c78fb461869414b4dcbf0f1ab7cced6acad9c388bc6a80"}
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:51.835123 4895 scope.go:117] "RemoveContainer" containerID="e7dc5af6f173b1e3eb0c0eea937ba27bbe16d74bbce28c8ec238497e12b2c5d2"
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:51.835305 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xslfp"
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:51.865881 4895 scope.go:117] "RemoveContainer" containerID="b84503f0b854929ca1358586253aba2e1a9af15b3f809dff8f1f6507834b232c"
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:51.894304 4895 scope.go:117] "RemoveContainer" containerID="a7fca623f4d4e96a477ae8f53f161957bfb0d0eb6ef0ef1c20342ab732d93731"
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:51.940401 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79b7557a-e175-42fb-b90c-3cb938bcfd91-utilities\") pod \"79b7557a-e175-42fb-b90c-3cb938bcfd91\" (UID: \"79b7557a-e175-42fb-b90c-3cb938bcfd91\") "
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:51.940544 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79b7557a-e175-42fb-b90c-3cb938bcfd91-catalog-content\") pod \"79b7557a-e175-42fb-b90c-3cb938bcfd91\" (UID: \"79b7557a-e175-42fb-b90c-3cb938bcfd91\") "
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:51.940586 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkfbw\" (UniqueName: \"kubernetes.io/projected/79b7557a-e175-42fb-b90c-3cb938bcfd91-kube-api-access-fkfbw\") pod \"79b7557a-e175-42fb-b90c-3cb938bcfd91\" (UID: \"79b7557a-e175-42fb-b90c-3cb938bcfd91\") "
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:51.942110 4895 scope.go:117] "RemoveContainer" containerID="e7dc5af6f173b1e3eb0c0eea937ba27bbe16d74bbce28c8ec238497e12b2c5d2"
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:51.942747 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79b7557a-e175-42fb-b90c-3cb938bcfd91-utilities" (OuterVolumeSpecName: "utilities") pod "79b7557a-e175-42fb-b90c-3cb938bcfd91" (UID: "79b7557a-e175-42fb-b90c-3cb938bcfd91"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 10:17:52 crc kubenswrapper[4895]: E1206 10:17:51.948607 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7dc5af6f173b1e3eb0c0eea937ba27bbe16d74bbce28c8ec238497e12b2c5d2\": container with ID starting with e7dc5af6f173b1e3eb0c0eea937ba27bbe16d74bbce28c8ec238497e12b2c5d2 not found: ID does not exist" containerID="e7dc5af6f173b1e3eb0c0eea937ba27bbe16d74bbce28c8ec238497e12b2c5d2"
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:51.948678 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7dc5af6f173b1e3eb0c0eea937ba27bbe16d74bbce28c8ec238497e12b2c5d2"} err="failed to get container status \"e7dc5af6f173b1e3eb0c0eea937ba27bbe16d74bbce28c8ec238497e12b2c5d2\": rpc error: code = NotFound desc = could not find container \"e7dc5af6f173b1e3eb0c0eea937ba27bbe16d74bbce28c8ec238497e12b2c5d2\": container with ID starting with e7dc5af6f173b1e3eb0c0eea937ba27bbe16d74bbce28c8ec238497e12b2c5d2 not found: ID does not exist"
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:51.948707 4895 scope.go:117] "RemoveContainer" containerID="b84503f0b854929ca1358586253aba2e1a9af15b3f809dff8f1f6507834b232c"
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:51.948727 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79b7557a-e175-42fb-b90c-3cb938bcfd91-kube-api-access-fkfbw" (OuterVolumeSpecName: "kube-api-access-fkfbw") pod "79b7557a-e175-42fb-b90c-3cb938bcfd91" (UID: "79b7557a-e175-42fb-b90c-3cb938bcfd91"). InnerVolumeSpecName "kube-api-access-fkfbw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 10:17:52 crc kubenswrapper[4895]: E1206 10:17:51.949002 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b84503f0b854929ca1358586253aba2e1a9af15b3f809dff8f1f6507834b232c\": container with ID starting with b84503f0b854929ca1358586253aba2e1a9af15b3f809dff8f1f6507834b232c not found: ID does not exist" containerID="b84503f0b854929ca1358586253aba2e1a9af15b3f809dff8f1f6507834b232c"
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:51.949025 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b84503f0b854929ca1358586253aba2e1a9af15b3f809dff8f1f6507834b232c"} err="failed to get container status \"b84503f0b854929ca1358586253aba2e1a9af15b3f809dff8f1f6507834b232c\": rpc error: code = NotFound desc = could not find container \"b84503f0b854929ca1358586253aba2e1a9af15b3f809dff8f1f6507834b232c\": container with ID starting with b84503f0b854929ca1358586253aba2e1a9af15b3f809dff8f1f6507834b232c not found: ID does not exist"
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:51.949039 4895 scope.go:117] "RemoveContainer" containerID="a7fca623f4d4e96a477ae8f53f161957bfb0d0eb6ef0ef1c20342ab732d93731"
Dec 06 10:17:52 crc kubenswrapper[4895]: E1206 10:17:51.955618 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7fca623f4d4e96a477ae8f53f161957bfb0d0eb6ef0ef1c20342ab732d93731\": container with ID starting with a7fca623f4d4e96a477ae8f53f161957bfb0d0eb6ef0ef1c20342ab732d93731 not found: ID does not exist" containerID="a7fca623f4d4e96a477ae8f53f161957bfb0d0eb6ef0ef1c20342ab732d93731"
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:51.955661 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7fca623f4d4e96a477ae8f53f161957bfb0d0eb6ef0ef1c20342ab732d93731"} err="failed to get container status \"a7fca623f4d4e96a477ae8f53f161957bfb0d0eb6ef0ef1c20342ab732d93731\": rpc error: code = NotFound desc = could not find container \"a7fca623f4d4e96a477ae8f53f161957bfb0d0eb6ef0ef1c20342ab732d93731\": container with ID starting with a7fca623f4d4e96a477ae8f53f161957bfb0d0eb6ef0ef1c20342ab732d93731 not found: ID does not exist"
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:51.962405 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79b7557a-e175-42fb-b90c-3cb938bcfd91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79b7557a-e175-42fb-b90c-3cb938bcfd91" (UID: "79b7557a-e175-42fb-b90c-3cb938bcfd91"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:52.043453 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79b7557a-e175-42fb-b90c-3cb938bcfd91-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:52.043494 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkfbw\" (UniqueName: \"kubernetes.io/projected/79b7557a-e175-42fb-b90c-3cb938bcfd91-kube-api-access-fkfbw\") on node \"crc\" DevicePath \"\""
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:52.043564 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79b7557a-e175-42fb-b90c-3cb938bcfd91-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:52.180747 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xslfp"]
Dec 06 10:17:52 crc kubenswrapper[4895]: I1206 10:17:52.191518 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xslfp"]
Dec 06 10:17:54 crc kubenswrapper[4895]: I1206 10:17:54.067012 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79b7557a-e175-42fb-b90c-3cb938bcfd91" path="/var/lib/kubelet/pods/79b7557a-e175-42fb-b90c-3cb938bcfd91/volumes"
Dec 06 10:17:59 crc kubenswrapper[4895]: I1206 10:17:59.695907 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 10:17:59 crc kubenswrapper[4895]: I1206 10:17:59.696704 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 10:18:29 crc kubenswrapper[4895]: I1206 10:18:29.696134 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 10:18:29 crc kubenswrapper[4895]: I1206 10:18:29.696928 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 10:18:59 crc kubenswrapper[4895]: I1206 10:18:59.696203 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 10:18:59 crc kubenswrapper[4895]: I1206 10:18:59.696927 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 10:18:59 crc kubenswrapper[4895]: I1206 10:18:59.696988 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 10:18:59 crc kubenswrapper[4895]: I1206 10:18:59.697954 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"31eff5f73ad3afdc32c8f687852778328cd0e3ddb549cd8a0e879e19c4d0ee4f"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 10:18:59 crc kubenswrapper[4895]: I1206 10:18:59.698012 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://31eff5f73ad3afdc32c8f687852778328cd0e3ddb549cd8a0e879e19c4d0ee4f" gracePeriod=600 Dec 06 10:19:00 crc kubenswrapper[4895]: I1206 10:19:00.612821 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="31eff5f73ad3afdc32c8f687852778328cd0e3ddb549cd8a0e879e19c4d0ee4f" exitCode=0 Dec 06 10:19:00 crc kubenswrapper[4895]: I1206 10:19:00.612871 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"31eff5f73ad3afdc32c8f687852778328cd0e3ddb549cd8a0e879e19c4d0ee4f"} Dec 06 10:19:00 crc kubenswrapper[4895]: I1206 10:19:00.613338 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c"} Dec 06 10:19:00 crc kubenswrapper[4895]: I1206 10:19:00.613363 4895 scope.go:117] "RemoveContainer" containerID="09b03cf1891b076f13a0fa7b4a18c515381bcc64e92a5c1ca767ae60e7af38e4" Dec 06 10:20:24 crc kubenswrapper[4895]: I1206 10:20:24.148204 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c8bvz"] Dec 06 10:20:24 crc kubenswrapper[4895]: E1206 10:20:24.149264 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79b7557a-e175-42fb-b90c-3cb938bcfd91" 
containerName="extract-content" Dec 06 10:20:24 crc kubenswrapper[4895]: I1206 10:20:24.149279 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="79b7557a-e175-42fb-b90c-3cb938bcfd91" containerName="extract-content" Dec 06 10:20:24 crc kubenswrapper[4895]: E1206 10:20:24.149301 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79b7557a-e175-42fb-b90c-3cb938bcfd91" containerName="extract-utilities" Dec 06 10:20:24 crc kubenswrapper[4895]: I1206 10:20:24.149307 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="79b7557a-e175-42fb-b90c-3cb938bcfd91" containerName="extract-utilities" Dec 06 10:20:24 crc kubenswrapper[4895]: E1206 10:20:24.149331 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79b7557a-e175-42fb-b90c-3cb938bcfd91" containerName="registry-server" Dec 06 10:20:24 crc kubenswrapper[4895]: I1206 10:20:24.149338 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="79b7557a-e175-42fb-b90c-3cb938bcfd91" containerName="registry-server" Dec 06 10:20:24 crc kubenswrapper[4895]: I1206 10:20:24.149582 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="79b7557a-e175-42fb-b90c-3cb938bcfd91" containerName="registry-server" Dec 06 10:20:24 crc kubenswrapper[4895]: I1206 10:20:24.151251 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c8bvz" Dec 06 10:20:24 crc kubenswrapper[4895]: I1206 10:20:24.168835 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c8bvz"] Dec 06 10:20:24 crc kubenswrapper[4895]: I1206 10:20:24.299718 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78edeb7b-e89e-426f-a615-22c3d31ee0f6-catalog-content\") pod \"certified-operators-c8bvz\" (UID: \"78edeb7b-e89e-426f-a615-22c3d31ee0f6\") " pod="openshift-marketplace/certified-operators-c8bvz" Dec 06 10:20:24 crc kubenswrapper[4895]: I1206 10:20:24.300016 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p795\" (UniqueName: \"kubernetes.io/projected/78edeb7b-e89e-426f-a615-22c3d31ee0f6-kube-api-access-4p795\") pod \"certified-operators-c8bvz\" (UID: \"78edeb7b-e89e-426f-a615-22c3d31ee0f6\") " pod="openshift-marketplace/certified-operators-c8bvz" Dec 06 10:20:24 crc kubenswrapper[4895]: I1206 10:20:24.300275 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78edeb7b-e89e-426f-a615-22c3d31ee0f6-utilities\") pod \"certified-operators-c8bvz\" (UID: \"78edeb7b-e89e-426f-a615-22c3d31ee0f6\") " pod="openshift-marketplace/certified-operators-c8bvz" Dec 06 10:20:24 crc kubenswrapper[4895]: I1206 10:20:24.402379 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p795\" (UniqueName: \"kubernetes.io/projected/78edeb7b-e89e-426f-a615-22c3d31ee0f6-kube-api-access-4p795\") pod \"certified-operators-c8bvz\" (UID: \"78edeb7b-e89e-426f-a615-22c3d31ee0f6\") " pod="openshift-marketplace/certified-operators-c8bvz" Dec 06 10:20:24 crc kubenswrapper[4895]: I1206 10:20:24.402842 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78edeb7b-e89e-426f-a615-22c3d31ee0f6-utilities\") pod \"certified-operators-c8bvz\" (UID: 
\"78edeb7b-e89e-426f-a615-22c3d31ee0f6\") " pod="openshift-marketplace/certified-operators-c8bvz" Dec 06 10:20:24 crc kubenswrapper[4895]: I1206 10:20:24.402980 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78edeb7b-e89e-426f-a615-22c3d31ee0f6-catalog-content\") pod \"certified-operators-c8bvz\" (UID: \"78edeb7b-e89e-426f-a615-22c3d31ee0f6\") " pod="openshift-marketplace/certified-operators-c8bvz" Dec 06 10:20:24 crc kubenswrapper[4895]: I1206 10:20:24.403539 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78edeb7b-e89e-426f-a615-22c3d31ee0f6-utilities\") pod \"certified-operators-c8bvz\" (UID: \"78edeb7b-e89e-426f-a615-22c3d31ee0f6\") " pod="openshift-marketplace/certified-operators-c8bvz" Dec 06 10:20:24 crc kubenswrapper[4895]: I1206 10:20:24.403616 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78edeb7b-e89e-426f-a615-22c3d31ee0f6-catalog-content\") pod \"certified-operators-c8bvz\" (UID: \"78edeb7b-e89e-426f-a615-22c3d31ee0f6\") " pod="openshift-marketplace/certified-operators-c8bvz" Dec 06 10:20:24 crc kubenswrapper[4895]: I1206 10:20:24.438230 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p795\" (UniqueName: \"kubernetes.io/projected/78edeb7b-e89e-426f-a615-22c3d31ee0f6-kube-api-access-4p795\") pod \"certified-operators-c8bvz\" (UID: \"78edeb7b-e89e-426f-a615-22c3d31ee0f6\") " pod="openshift-marketplace/certified-operators-c8bvz" Dec 06 10:20:24 crc kubenswrapper[4895]: I1206 10:20:24.529574 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c8bvz" Dec 06 10:20:25 crc kubenswrapper[4895]: I1206 10:20:25.305462 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c8bvz"] Dec 06 10:20:25 crc kubenswrapper[4895]: I1206 10:20:25.914118 4895 generic.go:334] "Generic (PLEG): container finished" podID="78edeb7b-e89e-426f-a615-22c3d31ee0f6" containerID="f377dcb55326b6c1057fb36dfcf1c49eb136130f5ed393724a0325e2e9653546" exitCode=0 Dec 06 10:20:25 crc kubenswrapper[4895]: I1206 10:20:25.914227 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8bvz" event={"ID":"78edeb7b-e89e-426f-a615-22c3d31ee0f6","Type":"ContainerDied","Data":"f377dcb55326b6c1057fb36dfcf1c49eb136130f5ed393724a0325e2e9653546"} Dec 06 10:20:25 crc kubenswrapper[4895]: I1206 10:20:25.914446 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8bvz" event={"ID":"78edeb7b-e89e-426f-a615-22c3d31ee0f6","Type":"ContainerStarted","Data":"cdcba5dbe6fc6ea4570016771cb342e74dfca653f1721ff3dedd9ccbe6e98586"} Dec 06 10:20:26 crc kubenswrapper[4895]: I1206 10:20:26.929595 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8bvz" event={"ID":"78edeb7b-e89e-426f-a615-22c3d31ee0f6","Type":"ContainerStarted","Data":"124d9a0fe780d680938ead16d419490fc38243d9ccee4a9ce95b08ac94814a89"} Dec 06 10:20:28 crc kubenswrapper[4895]: I1206 10:20:28.953049 4895 generic.go:334] "Generic (PLEG): container finished" podID="78edeb7b-e89e-426f-a615-22c3d31ee0f6" containerID="124d9a0fe780d680938ead16d419490fc38243d9ccee4a9ce95b08ac94814a89" exitCode=0 Dec 06 10:20:28 crc kubenswrapper[4895]: 
I1206 10:20:28.953155 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8bvz" event={"ID":"78edeb7b-e89e-426f-a615-22c3d31ee0f6","Type":"ContainerDied","Data":"124d9a0fe780d680938ead16d419490fc38243d9ccee4a9ce95b08ac94814a89"} Dec 06 10:20:29 crc kubenswrapper[4895]: I1206 10:20:29.970081 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8bvz" event={"ID":"78edeb7b-e89e-426f-a615-22c3d31ee0f6","Type":"ContainerStarted","Data":"7e256725bf7bc976712762f28e01c003dd4558e098bdd35197b11290223523c7"} Dec 06 10:20:29 crc kubenswrapper[4895]: I1206 10:20:29.999630 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c8bvz" podStartSLOduration=2.5053959900000002 podStartE2EDuration="5.999591773s" podCreationTimestamp="2025-12-06 10:20:24 +0000 UTC" firstStartedPulling="2025-12-06 10:20:25.916362343 +0000 UTC m=+12188.317751233" lastFinishedPulling="2025-12-06 10:20:29.410558146 +0000 UTC m=+12191.811947016" observedRunningTime="2025-12-06 10:20:29.990689564 +0000 UTC m=+12192.392078484" watchObservedRunningTime="2025-12-06 10:20:29.999591773 +0000 UTC m=+12192.400980643" Dec 06 10:20:34 crc kubenswrapper[4895]: I1206 10:20:34.530461 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c8bvz" Dec 06 10:20:34 crc kubenswrapper[4895]: I1206 10:20:34.533786 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c8bvz" Dec 06 10:20:34 crc kubenswrapper[4895]: I1206 10:20:34.616512 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c8bvz" Dec 06 10:20:35 crc kubenswrapper[4895]: I1206 10:20:35.106487 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c8bvz" Dec 06 10:20:35 crc kubenswrapper[4895]: I1206 10:20:35.179531 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c8bvz"] Dec 06 10:20:37 crc kubenswrapper[4895]: I1206 10:20:37.049079 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c8bvz" podUID="78edeb7b-e89e-426f-a615-22c3d31ee0f6" containerName="registry-server" containerID="cri-o://7e256725bf7bc976712762f28e01c003dd4558e098bdd35197b11290223523c7" gracePeriod=2 Dec 06 10:20:37 crc kubenswrapper[4895]: I1206 10:20:37.675930 4895 util.go:48] "No ready sandbox for pod can be found. 
Dec 06 10:20:37 crc kubenswrapper[4895]: I1206 10:20:37.757298 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p795\" (UniqueName: \"kubernetes.io/projected/78edeb7b-e89e-426f-a615-22c3d31ee0f6-kube-api-access-4p795\") pod \"78edeb7b-e89e-426f-a615-22c3d31ee0f6\" (UID: \"78edeb7b-e89e-426f-a615-22c3d31ee0f6\") "
Dec 06 10:20:37 crc kubenswrapper[4895]: I1206 10:20:37.757574 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78edeb7b-e89e-426f-a615-22c3d31ee0f6-utilities\") pod \"78edeb7b-e89e-426f-a615-22c3d31ee0f6\" (UID: \"78edeb7b-e89e-426f-a615-22c3d31ee0f6\") "
Dec 06 10:20:37 crc kubenswrapper[4895]: I1206 10:20:37.757889 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78edeb7b-e89e-426f-a615-22c3d31ee0f6-catalog-content\") pod \"78edeb7b-e89e-426f-a615-22c3d31ee0f6\" (UID: \"78edeb7b-e89e-426f-a615-22c3d31ee0f6\") "
Dec 06 10:20:37 crc kubenswrapper[4895]: I1206 10:20:37.759150 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78edeb7b-e89e-426f-a615-22c3d31ee0f6-utilities" (OuterVolumeSpecName: "utilities") pod "78edeb7b-e89e-426f-a615-22c3d31ee0f6" (UID: "78edeb7b-e89e-426f-a615-22c3d31ee0f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 10:20:37 crc kubenswrapper[4895]: I1206 10:20:37.766384 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78edeb7b-e89e-426f-a615-22c3d31ee0f6-kube-api-access-4p795" (OuterVolumeSpecName: "kube-api-access-4p795") pod "78edeb7b-e89e-426f-a615-22c3d31ee0f6" (UID: "78edeb7b-e89e-426f-a615-22c3d31ee0f6"). InnerVolumeSpecName "kube-api-access-4p795". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 10:20:37 crc kubenswrapper[4895]: I1206 10:20:37.812096 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78edeb7b-e89e-426f-a615-22c3d31ee0f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78edeb7b-e89e-426f-a615-22c3d31ee0f6" (UID: "78edeb7b-e89e-426f-a615-22c3d31ee0f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 10:20:37 crc kubenswrapper[4895]: I1206 10:20:37.860563 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78edeb7b-e89e-426f-a615-22c3d31ee0f6-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 10:20:37 crc kubenswrapper[4895]: I1206 10:20:37.860613 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p795\" (UniqueName: \"kubernetes.io/projected/78edeb7b-e89e-426f-a615-22c3d31ee0f6-kube-api-access-4p795\") on node \"crc\" DevicePath \"\""
Dec 06 10:20:37 crc kubenswrapper[4895]: I1206 10:20:37.860636 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78edeb7b-e89e-426f-a615-22c3d31ee0f6-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 10:20:38 crc kubenswrapper[4895]: I1206 10:20:38.070305 4895 generic.go:334] "Generic (PLEG): container finished" podID="78edeb7b-e89e-426f-a615-22c3d31ee0f6" containerID="7e256725bf7bc976712762f28e01c003dd4558e098bdd35197b11290223523c7" exitCode=0
Dec 06 10:20:38 crc kubenswrapper[4895]: I1206 10:20:38.070344 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8bvz" event={"ID":"78edeb7b-e89e-426f-a615-22c3d31ee0f6","Type":"ContainerDied","Data":"7e256725bf7bc976712762f28e01c003dd4558e098bdd35197b11290223523c7"}
Dec 06 10:20:38 crc kubenswrapper[4895]: I1206 10:20:38.070371 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8bvz" event={"ID":"78edeb7b-e89e-426f-a615-22c3d31ee0f6","Type":"ContainerDied","Data":"cdcba5dbe6fc6ea4570016771cb342e74dfca653f1721ff3dedd9ccbe6e98586"}
Dec 06 10:20:38 crc kubenswrapper[4895]: I1206 10:20:38.070400 4895 scope.go:117] "RemoveContainer" containerID="7e256725bf7bc976712762f28e01c003dd4558e098bdd35197b11290223523c7"
Dec 06 10:20:38 crc kubenswrapper[4895]: I1206 10:20:38.070625 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c8bvz"
Dec 06 10:20:38 crc kubenswrapper[4895]: I1206 10:20:38.107822 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c8bvz"]
Dec 06 10:20:38 crc kubenswrapper[4895]: I1206 10:20:38.117762 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c8bvz"]
Dec 06 10:20:38 crc kubenswrapper[4895]: I1206 10:20:38.129831 4895 scope.go:117] "RemoveContainer" containerID="124d9a0fe780d680938ead16d419490fc38243d9ccee4a9ce95b08ac94814a89"
Dec 06 10:20:38 crc kubenswrapper[4895]: I1206 10:20:38.164223 4895 scope.go:117] "RemoveContainer" containerID="f377dcb55326b6c1057fb36dfcf1c49eb136130f5ed393724a0325e2e9653546"
Dec 06 10:20:38 crc kubenswrapper[4895]: I1206 10:20:38.229412 4895 scope.go:117] "RemoveContainer" containerID="7e256725bf7bc976712762f28e01c003dd4558e098bdd35197b11290223523c7"
Dec 06 10:20:38 crc kubenswrapper[4895]: E1206 10:20:38.230130 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e256725bf7bc976712762f28e01c003dd4558e098bdd35197b11290223523c7\": container with ID starting with 7e256725bf7bc976712762f28e01c003dd4558e098bdd35197b11290223523c7 not found: ID does not exist" containerID="7e256725bf7bc976712762f28e01c003dd4558e098bdd35197b11290223523c7"
Dec 06 10:20:38 crc kubenswrapper[4895]: I1206 10:20:38.230295 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e256725bf7bc976712762f28e01c003dd4558e098bdd35197b11290223523c7"} err="failed to get container status \"7e256725bf7bc976712762f28e01c003dd4558e098bdd35197b11290223523c7\": rpc error: code = NotFound desc = could not find container \"7e256725bf7bc976712762f28e01c003dd4558e098bdd35197b11290223523c7\": container with ID starting with 7e256725bf7bc976712762f28e01c003dd4558e098bdd35197b11290223523c7 not found: ID does not exist"
Dec 06 10:20:38 crc kubenswrapper[4895]: I1206 10:20:38.230368 4895 scope.go:117] "RemoveContainer" containerID="124d9a0fe780d680938ead16d419490fc38243d9ccee4a9ce95b08ac94814a89"
Dec 06 10:20:38 crc kubenswrapper[4895]: E1206 10:20:38.230916 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"124d9a0fe780d680938ead16d419490fc38243d9ccee4a9ce95b08ac94814a89\": container with ID starting with 124d9a0fe780d680938ead16d419490fc38243d9ccee4a9ce95b08ac94814a89 not found: ID does not exist" containerID="124d9a0fe780d680938ead16d419490fc38243d9ccee4a9ce95b08ac94814a89"
Dec 06 10:20:38 crc kubenswrapper[4895]: I1206 10:20:38.231104 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"124d9a0fe780d680938ead16d419490fc38243d9ccee4a9ce95b08ac94814a89"} err="failed to get container status \"124d9a0fe780d680938ead16d419490fc38243d9ccee4a9ce95b08ac94814a89\": rpc error: code = NotFound desc = could not find container \"124d9a0fe780d680938ead16d419490fc38243d9ccee4a9ce95b08ac94814a89\": container with ID starting with 124d9a0fe780d680938ead16d419490fc38243d9ccee4a9ce95b08ac94814a89 not found: ID does not exist"
Dec 06 10:20:38 crc kubenswrapper[4895]: I1206 10:20:38.231170 4895 scope.go:117] "RemoveContainer" containerID="f377dcb55326b6c1057fb36dfcf1c49eb136130f5ed393724a0325e2e9653546"
Dec 06 10:20:38 crc kubenswrapper[4895]: E1206 10:20:38.231639 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f377dcb55326b6c1057fb36dfcf1c49eb136130f5ed393724a0325e2e9653546\": container with ID starting with f377dcb55326b6c1057fb36dfcf1c49eb136130f5ed393724a0325e2e9653546 not found: ID does not exist" containerID="f377dcb55326b6c1057fb36dfcf1c49eb136130f5ed393724a0325e2e9653546"
Dec 06 10:20:38 crc kubenswrapper[4895]: I1206 10:20:38.231714 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f377dcb55326b6c1057fb36dfcf1c49eb136130f5ed393724a0325e2e9653546"} err="failed to get container status \"f377dcb55326b6c1057fb36dfcf1c49eb136130f5ed393724a0325e2e9653546\": rpc error: code = NotFound desc = could not find container \"f377dcb55326b6c1057fb36dfcf1c49eb136130f5ed393724a0325e2e9653546\": container with ID starting with f377dcb55326b6c1057fb36dfcf1c49eb136130f5ed393724a0325e2e9653546 not found: ID does not exist"
Dec 06 10:20:40 crc kubenswrapper[4895]: I1206 10:20:40.060624 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78edeb7b-e89e-426f-a615-22c3d31ee0f6" path="/var/lib/kubelet/pods/78edeb7b-e89e-426f-a615-22c3d31ee0f6/volumes"
Dec 06 10:21:29 crc kubenswrapper[4895]: I1206 10:21:29.696623 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 10:21:29 crc kubenswrapper[4895]: I1206 10:21:29.697243 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 10:21:59 crc kubenswrapper[4895]: I1206 10:21:59.695733 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 10:21:59 crc kubenswrapper[4895]: I1206 10:21:59.696199 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 10:22:29 crc kubenswrapper[4895]: I1206 10:22:29.695307 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 10:22:29 crc kubenswrapper[4895]: I1206 10:22:29.696027 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 10:22:29 crc kubenswrapper[4895]: I1206 10:22:29.696138 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2"
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 10:22:29 crc kubenswrapper[4895]: I1206 10:22:29.697207 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 10:22:29 crc kubenswrapper[4895]: I1206 10:22:29.697345 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" gracePeriod=600 Dec 06 10:22:29 crc kubenswrapper[4895]: E1206 10:22:29.826589 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:22:30 crc kubenswrapper[4895]: I1206 10:22:30.434742 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" exitCode=0 Dec 06 10:22:30 crc kubenswrapper[4895]: I1206 10:22:30.434810 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c"} Dec 06 10:22:30 crc kubenswrapper[4895]: I1206 10:22:30.434856 4895 scope.go:117] "RemoveContainer" containerID="31eff5f73ad3afdc32c8f687852778328cd0e3ddb549cd8a0e879e19c4d0ee4f" Dec 06 10:22:30 crc kubenswrapper[4895]: I1206 10:22:30.435525 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:22:30 crc kubenswrapper[4895]: E1206 10:22:30.435772 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:22:43 crc kubenswrapper[4895]: I1206 10:22:43.051104 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:22:43 crc kubenswrapper[4895]: E1206 10:22:43.052497 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:22:54 crc 
kubenswrapper[4895]: I1206 10:22:54.050650 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:22:54 crc kubenswrapper[4895]: E1206 10:22:54.051659 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:23:07 crc kubenswrapper[4895]: I1206 10:23:07.051875 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:23:07 crc kubenswrapper[4895]: E1206 10:23:07.052508 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:23:20 crc kubenswrapper[4895]: I1206 10:23:20.050783 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:23:20 crc kubenswrapper[4895]: E1206 10:23:20.051592 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:23:34 crc kubenswrapper[4895]: I1206 10:23:34.054220 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:23:34 crc kubenswrapper[4895]: E1206 10:23:34.055554 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:23:48 crc kubenswrapper[4895]: I1206 10:23:48.057453 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:23:48 crc kubenswrapper[4895]: E1206 10:23:48.058344 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:24:01 crc kubenswrapper[4895]: I1206 10:24:01.051085 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:24:01 crc 
kubenswrapper[4895]: E1206 10:24:01.051804 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:24:15 crc kubenswrapper[4895]: I1206 10:24:15.050908 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:24:15 crc kubenswrapper[4895]: E1206 10:24:15.052010 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:24:30 crc kubenswrapper[4895]: I1206 10:24:30.051729 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:24:30 crc kubenswrapper[4895]: E1206 10:24:30.052961 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:24:44 crc kubenswrapper[4895]: I1206 10:24:44.082672 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:24:44 crc kubenswrapper[4895]: E1206 10:24:44.083808 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:24:58 crc kubenswrapper[4895]: I1206 10:24:58.058297 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:24:58 crc kubenswrapper[4895]: E1206 10:24:58.059177 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:24:59 crc kubenswrapper[4895]: I1206 10:24:59.957152 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dbrks"] Dec 06 10:24:59 crc kubenswrapper[4895]: E1206 10:24:59.958037 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78edeb7b-e89e-426f-a615-22c3d31ee0f6" containerName="extract-content" Dec 06 10:24:59 crc 
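The 10-15 second cadence above is the pod worker re-queueing the sync while the container sits in CrashLoopBackOff; the actual restart delay doubles per crash from an initial 10s up to the 5m cap quoted in the error text. A toy sketch of that schedule (the 10s start and 2x factor are the kubelet's defaults as I understand them; treat the values as illustrative):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for crash := 1; crash <= 7; crash++ {
            fmt.Printf("crash %d: next restart in %v\n", crash, delay)
            delay *= 2 // back-off doubles per crash...
            if delay > maxDelay {
                delay = maxDelay // ...capped at the "back-off 5m0s" ceiling
            }
        }
    }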
Dec 06 10:24:59 crc kubenswrapper[4895]: I1206 10:24:59.958086 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="78edeb7b-e89e-426f-a615-22c3d31ee0f6" containerName="extract-content"
Dec 06 10:24:59 crc kubenswrapper[4895]: E1206 10:24:59.958126 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78edeb7b-e89e-426f-a615-22c3d31ee0f6" containerName="registry-server"
Dec 06 10:24:59 crc kubenswrapper[4895]: I1206 10:24:59.958136 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="78edeb7b-e89e-426f-a615-22c3d31ee0f6" containerName="registry-server"
Dec 06 10:24:59 crc kubenswrapper[4895]: E1206 10:24:59.958154 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78edeb7b-e89e-426f-a615-22c3d31ee0f6" containerName="extract-utilities"
Dec 06 10:24:59 crc kubenswrapper[4895]: I1206 10:24:59.958162 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="78edeb7b-e89e-426f-a615-22c3d31ee0f6" containerName="extract-utilities"
Dec 06 10:24:59 crc kubenswrapper[4895]: I1206 10:24:59.958437 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="78edeb7b-e89e-426f-a615-22c3d31ee0f6" containerName="registry-server"
Dec 06 10:24:59 crc kubenswrapper[4895]: I1206 10:24:59.962453 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dbrks"
Dec 06 10:24:59 crc kubenswrapper[4895]: I1206 10:24:59.973741 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dbrks"]
Dec 06 10:25:00 crc kubenswrapper[4895]: I1206 10:25:00.046429 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46d1cc68-8d0c-43a1-9005-64cbdaba1ddc-catalog-content\") pod \"redhat-operators-dbrks\" (UID: \"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc\") " pod="openshift-marketplace/redhat-operators-dbrks"
Dec 06 10:25:00 crc kubenswrapper[4895]: I1206 10:25:00.046593 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46d1cc68-8d0c-43a1-9005-64cbdaba1ddc-utilities\") pod \"redhat-operators-dbrks\" (UID: \"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc\") " pod="openshift-marketplace/redhat-operators-dbrks"
Dec 06 10:25:00 crc kubenswrapper[4895]: I1206 10:25:00.047031 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jcmn\" (UniqueName: \"kubernetes.io/projected/46d1cc68-8d0c-43a1-9005-64cbdaba1ddc-kube-api-access-4jcmn\") pod \"redhat-operators-dbrks\" (UID: \"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc\") " pod="openshift-marketplace/redhat-operators-dbrks"
Dec 06 10:25:00 crc kubenswrapper[4895]: I1206 10:25:00.169176 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46d1cc68-8d0c-43a1-9005-64cbdaba1ddc-utilities\") pod \"redhat-operators-dbrks\" (UID: \"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc\") " pod="openshift-marketplace/redhat-operators-dbrks"
Dec 06 10:25:00 crc kubenswrapper[4895]: I1206 10:25:00.170044 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46d1cc68-8d0c-43a1-9005-64cbdaba1ddc-utilities\") pod \"redhat-operators-dbrks\" (UID: \"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc\") " pod="openshift-marketplace/redhat-operators-dbrks"
Dec 06 10:25:00 crc kubenswrapper[4895]: I1206 10:25:00.170804 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jcmn\" (UniqueName: \"kubernetes.io/projected/46d1cc68-8d0c-43a1-9005-64cbdaba1ddc-kube-api-access-4jcmn\") pod \"redhat-operators-dbrks\" (UID: \"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc\") " pod="openshift-marketplace/redhat-operators-dbrks"
Dec 06 10:25:00 crc kubenswrapper[4895]: I1206 10:25:00.171096 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46d1cc68-8d0c-43a1-9005-64cbdaba1ddc-catalog-content\") pod \"redhat-operators-dbrks\" (UID: \"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc\") " pod="openshift-marketplace/redhat-operators-dbrks"
Dec 06 10:25:00 crc kubenswrapper[4895]: I1206 10:25:00.171644 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46d1cc68-8d0c-43a1-9005-64cbdaba1ddc-catalog-content\") pod \"redhat-operators-dbrks\" (UID: \"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc\") " pod="openshift-marketplace/redhat-operators-dbrks"
Dec 06 10:25:00 crc kubenswrapper[4895]: I1206 10:25:00.204166 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jcmn\" (UniqueName: \"kubernetes.io/projected/46d1cc68-8d0c-43a1-9005-64cbdaba1ddc-kube-api-access-4jcmn\") pod \"redhat-operators-dbrks\" (UID: \"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc\") " pod="openshift-marketplace/redhat-operators-dbrks"
Dec 06 10:25:00 crc kubenswrapper[4895]: I1206 10:25:00.299179 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dbrks"
Dec 06 10:25:00 crc kubenswrapper[4895]: I1206 10:25:00.883857 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dbrks"]
Dec 06 10:25:01 crc kubenswrapper[4895]: I1206 10:25:01.191068 4895 generic.go:334] "Generic (PLEG): container finished" podID="46d1cc68-8d0c-43a1-9005-64cbdaba1ddc" containerID="195adb210e8f64dcc5734d63634bd0e005f87dc79a0d5bfb8fb731ac327d8355" exitCode=0
Dec 06 10:25:01 crc kubenswrapper[4895]: I1206 10:25:01.191188 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbrks" event={"ID":"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc","Type":"ContainerDied","Data":"195adb210e8f64dcc5734d63634bd0e005f87dc79a0d5bfb8fb731ac327d8355"}
Dec 06 10:25:01 crc kubenswrapper[4895]: I1206 10:25:01.191332 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbrks" event={"ID":"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc","Type":"ContainerStarted","Data":"a475fbdd7e78eacc6c3ad6d64708b3487e53112cfc548b4f68b06109ba8cc0cf"}
Dec 06 10:25:01 crc kubenswrapper[4895]: I1206 10:25:01.193071 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 06 10:25:02 crc kubenswrapper[4895]: I1206 10:25:02.202430 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbrks" event={"ID":"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc","Type":"ContainerStarted","Data":"06365cf614e3cafd0d4f669eefcb3365265eb5a0ad2df29b371426f48a219e2e"}
Dec 06 10:25:04 crc kubenswrapper[4895]: I1206 10:25:04.231725 4895 generic.go:334] "Generic (PLEG): container finished" podID="46d1cc68-8d0c-43a1-9005-64cbdaba1ddc" containerID="06365cf614e3cafd0d4f669eefcb3365265eb5a0ad2df29b371426f48a219e2e" exitCode=0
Dec 06 10:25:04 crc kubenswrapper[4895]: I1206 10:25:04.232145 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbrks" event={"ID":"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc","Type":"ContainerDied","Data":"06365cf614e3cafd0d4f669eefcb3365265eb5a0ad2df29b371426f48a219e2e"}
Dec 06 10:25:05 crc kubenswrapper[4895]: I1206 10:25:05.268100 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbrks" event={"ID":"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc","Type":"ContainerStarted","Data":"9dd5a421bbc25e78447fb1d26cef96944d2f0751f9d6d837e369d0e210033554"}
Dec 06 10:25:05 crc kubenswrapper[4895]: I1206 10:25:05.286390 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dbrks" podStartSLOduration=2.821016725 podStartE2EDuration="6.28636153s" podCreationTimestamp="2025-12-06 10:24:59 +0000 UTC" firstStartedPulling="2025-12-06 10:25:01.192637117 +0000 UTC m=+12463.594025987" lastFinishedPulling="2025-12-06 10:25:04.657981882 +0000 UTC m=+12467.059370792" observedRunningTime="2025-12-06 10:25:05.284949302 +0000 UTC m=+12467.686338182" watchObservedRunningTime="2025-12-06 10:25:05.28636153 +0000 UTC m=+12467.687750400"
Dec 06 10:25:10 crc kubenswrapper[4895]: I1206 10:25:10.299573 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dbrks"
Dec 06 10:25:10 crc kubenswrapper[4895]: I1206 10:25:10.300020 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dbrks"
Dec 06 10:25:11 crc kubenswrapper[4895]: I1206 10:25:11.051512 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c"
Dec 06 10:25:11 crc kubenswrapper[4895]: E1206 10:25:11.052320 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 10:25:11 crc kubenswrapper[4895]: I1206 10:25:11.381354 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dbrks" podUID="46d1cc68-8d0c-43a1-9005-64cbdaba1ddc" containerName="registry-server" probeResult="failure" output=<
Dec 06 10:25:11 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s
Dec 06 10:25:11 crc kubenswrapper[4895]: >
Dec 06 10:25:20 crc kubenswrapper[4895]: I1206 10:25:20.395788 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dbrks"
Dec 06 10:25:20 crc kubenswrapper[4895]: I1206 10:25:20.476439 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dbrks"
Dec 06 10:25:20 crc kubenswrapper[4895]: I1206 10:25:20.648677 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dbrks"]
Dec 06 10:25:21 crc kubenswrapper[4895]: I1206 10:25:21.484092 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dbrks" podUID="46d1cc68-8d0c-43a1-9005-64cbdaba1ddc" containerName="registry-server" containerID="cri-o://9dd5a421bbc25e78447fb1d26cef96944d2f0751f9d6d837e369d0e210033554" gracePeriod=2
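The startup-probe failure above comes from a gRPC health check (grpc_health_probe style) that could not reach the registry-server on :50051 within its 1s budget. The reachability half of that check is just a timed TCP dial; a minimal sketch reproducing the logged error text (illustrative only; the real probe also queries the gRPC health service):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same endpoint and budget as the failing probe in the log.
        conn, err := net.DialTimeout("tcp", "127.0.0.1:50051", 1*time.Second)
        if err != nil {
            fmt.Printf("timeout: failed to connect service %q within 1s\n", ":50051")
            return
        }
        conn.Close()
        fmt.Println("service reachable")
    }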
containerID="cri-o://9dd5a421bbc25e78447fb1d26cef96944d2f0751f9d6d837e369d0e210033554" gracePeriod=2 Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.027977 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dbrks" Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.193574 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46d1cc68-8d0c-43a1-9005-64cbdaba1ddc-catalog-content\") pod \"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc\" (UID: \"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc\") " Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.193670 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46d1cc68-8d0c-43a1-9005-64cbdaba1ddc-utilities\") pod \"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc\" (UID: \"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc\") " Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.193784 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jcmn\" (UniqueName: \"kubernetes.io/projected/46d1cc68-8d0c-43a1-9005-64cbdaba1ddc-kube-api-access-4jcmn\") pod \"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc\" (UID: \"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc\") " Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.194609 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46d1cc68-8d0c-43a1-9005-64cbdaba1ddc-utilities" (OuterVolumeSpecName: "utilities") pod "46d1cc68-8d0c-43a1-9005-64cbdaba1ddc" (UID: "46d1cc68-8d0c-43a1-9005-64cbdaba1ddc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.204873 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46d1cc68-8d0c-43a1-9005-64cbdaba1ddc-kube-api-access-4jcmn" (OuterVolumeSpecName: "kube-api-access-4jcmn") pod "46d1cc68-8d0c-43a1-9005-64cbdaba1ddc" (UID: "46d1cc68-8d0c-43a1-9005-64cbdaba1ddc"). InnerVolumeSpecName "kube-api-access-4jcmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.299099 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46d1cc68-8d0c-43a1-9005-64cbdaba1ddc-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.299172 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jcmn\" (UniqueName: \"kubernetes.io/projected/46d1cc68-8d0c-43a1-9005-64cbdaba1ddc-kube-api-access-4jcmn\") on node \"crc\" DevicePath \"\"" Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.308206 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46d1cc68-8d0c-43a1-9005-64cbdaba1ddc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46d1cc68-8d0c-43a1-9005-64cbdaba1ddc" (UID: "46d1cc68-8d0c-43a1-9005-64cbdaba1ddc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.402034 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46d1cc68-8d0c-43a1-9005-64cbdaba1ddc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.500073 4895 generic.go:334] "Generic (PLEG): container finished" podID="46d1cc68-8d0c-43a1-9005-64cbdaba1ddc" containerID="9dd5a421bbc25e78447fb1d26cef96944d2f0751f9d6d837e369d0e210033554" exitCode=0 Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.500196 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbrks" event={"ID":"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc","Type":"ContainerDied","Data":"9dd5a421bbc25e78447fb1d26cef96944d2f0751f9d6d837e369d0e210033554"} Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.500252 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbrks" event={"ID":"46d1cc68-8d0c-43a1-9005-64cbdaba1ddc","Type":"ContainerDied","Data":"a475fbdd7e78eacc6c3ad6d64708b3487e53112cfc548b4f68b06109ba8cc0cf"} Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.500293 4895 scope.go:117] "RemoveContainer" containerID="9dd5a421bbc25e78447fb1d26cef96944d2f0751f9d6d837e369d0e210033554" Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.500715 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dbrks" Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.545504 4895 scope.go:117] "RemoveContainer" containerID="06365cf614e3cafd0d4f669eefcb3365265eb5a0ad2df29b371426f48a219e2e" Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.549351 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dbrks"] Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.559465 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dbrks"] Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.567887 4895 scope.go:117] "RemoveContainer" containerID="195adb210e8f64dcc5734d63634bd0e005f87dc79a0d5bfb8fb731ac327d8355" Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.671422 4895 scope.go:117] "RemoveContainer" containerID="9dd5a421bbc25e78447fb1d26cef96944d2f0751f9d6d837e369d0e210033554" Dec 06 10:25:22 crc kubenswrapper[4895]: E1206 10:25:22.671846 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dd5a421bbc25e78447fb1d26cef96944d2f0751f9d6d837e369d0e210033554\": container with ID starting with 9dd5a421bbc25e78447fb1d26cef96944d2f0751f9d6d837e369d0e210033554 not found: ID does not exist" containerID="9dd5a421bbc25e78447fb1d26cef96944d2f0751f9d6d837e369d0e210033554" Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.671885 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dd5a421bbc25e78447fb1d26cef96944d2f0751f9d6d837e369d0e210033554"} err="failed to get container status \"9dd5a421bbc25e78447fb1d26cef96944d2f0751f9d6d837e369d0e210033554\": rpc error: code = NotFound desc = could not find container \"9dd5a421bbc25e78447fb1d26cef96944d2f0751f9d6d837e369d0e210033554\": container with ID starting with 9dd5a421bbc25e78447fb1d26cef96944d2f0751f9d6d837e369d0e210033554 not found: ID does not exist" Dec 06 10:25:22 crc 
kubenswrapper[4895]: I1206 10:25:22.671905 4895 scope.go:117] "RemoveContainer" containerID="06365cf614e3cafd0d4f669eefcb3365265eb5a0ad2df29b371426f48a219e2e" Dec 06 10:25:22 crc kubenswrapper[4895]: E1206 10:25:22.672239 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06365cf614e3cafd0d4f669eefcb3365265eb5a0ad2df29b371426f48a219e2e\": container with ID starting with 06365cf614e3cafd0d4f669eefcb3365265eb5a0ad2df29b371426f48a219e2e not found: ID does not exist" containerID="06365cf614e3cafd0d4f669eefcb3365265eb5a0ad2df29b371426f48a219e2e" Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.672259 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06365cf614e3cafd0d4f669eefcb3365265eb5a0ad2df29b371426f48a219e2e"} err="failed to get container status \"06365cf614e3cafd0d4f669eefcb3365265eb5a0ad2df29b371426f48a219e2e\": rpc error: code = NotFound desc = could not find container \"06365cf614e3cafd0d4f669eefcb3365265eb5a0ad2df29b371426f48a219e2e\": container with ID starting with 06365cf614e3cafd0d4f669eefcb3365265eb5a0ad2df29b371426f48a219e2e not found: ID does not exist" Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.672271 4895 scope.go:117] "RemoveContainer" containerID="195adb210e8f64dcc5734d63634bd0e005f87dc79a0d5bfb8fb731ac327d8355" Dec 06 10:25:22 crc kubenswrapper[4895]: E1206 10:25:22.672458 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"195adb210e8f64dcc5734d63634bd0e005f87dc79a0d5bfb8fb731ac327d8355\": container with ID starting with 195adb210e8f64dcc5734d63634bd0e005f87dc79a0d5bfb8fb731ac327d8355 not found: ID does not exist" containerID="195adb210e8f64dcc5734d63634bd0e005f87dc79a0d5bfb8fb731ac327d8355" Dec 06 10:25:22 crc kubenswrapper[4895]: I1206 10:25:22.672484 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"195adb210e8f64dcc5734d63634bd0e005f87dc79a0d5bfb8fb731ac327d8355"} err="failed to get container status \"195adb210e8f64dcc5734d63634bd0e005f87dc79a0d5bfb8fb731ac327d8355\": rpc error: code = NotFound desc = could not find container \"195adb210e8f64dcc5734d63634bd0e005f87dc79a0d5bfb8fb731ac327d8355\": container with ID starting with 195adb210e8f64dcc5734d63634bd0e005f87dc79a0d5bfb8fb731ac327d8355 not found: ID does not exist" Dec 06 10:25:23 crc kubenswrapper[4895]: I1206 10:25:23.051670 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:25:23 crc kubenswrapper[4895]: E1206 10:25:23.052543 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:25:24 crc kubenswrapper[4895]: I1206 10:25:24.071335 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46d1cc68-8d0c-43a1-9005-64cbdaba1ddc" path="/var/lib/kubelet/pods/46d1cc68-8d0c-43a1-9005-64cbdaba1ddc/volumes" Dec 06 10:25:35 crc kubenswrapper[4895]: I1206 10:25:35.051016 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" 
Dec 06 10:25:35 crc kubenswrapper[4895]: E1206 10:25:35.051810 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:25:46 crc kubenswrapper[4895]: I1206 10:25:46.051601 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:25:46 crc kubenswrapper[4895]: E1206 10:25:46.053196 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:26:01 crc kubenswrapper[4895]: I1206 10:26:01.051516 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:26:01 crc kubenswrapper[4895]: E1206 10:26:01.052170 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:26:15 crc kubenswrapper[4895]: I1206 10:26:15.050723 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:26:15 crc kubenswrapper[4895]: E1206 10:26:15.051523 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:26:26 crc kubenswrapper[4895]: I1206 10:26:26.050938 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:26:26 crc kubenswrapper[4895]: E1206 10:26:26.051935 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:26:39 crc kubenswrapper[4895]: I1206 10:26:39.051375 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:26:39 crc kubenswrapper[4895]: E1206 10:26:39.052190 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:26:54 crc kubenswrapper[4895]: I1206 10:26:54.051452 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:26:54 crc kubenswrapper[4895]: E1206 10:26:54.053076 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:27:05 crc kubenswrapper[4895]: I1206 10:27:05.051081 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:27:05 crc kubenswrapper[4895]: E1206 10:27:05.052000 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:27:20 crc kubenswrapper[4895]: I1206 10:27:20.050942 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:27:20 crc kubenswrapper[4895]: E1206 10:27:20.051830 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:27:32 crc kubenswrapper[4895]: I1206 10:27:32.050985 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:27:33 crc kubenswrapper[4895]: I1206 10:27:33.085278 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"730b3a4444e58f09776a64d590f44494a4d6b1a8ef3b7d600f4ca508d222bf9d"} Dec 06 10:29:02 crc kubenswrapper[4895]: I1206 10:29:02.070639 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xs99f"] Dec 06 10:29:02 crc kubenswrapper[4895]: E1206 10:29:02.071465 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d1cc68-8d0c-43a1-9005-64cbdaba1ddc" containerName="extract-utilities" Dec 06 10:29:02 crc kubenswrapper[4895]: I1206 10:29:02.071490 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d1cc68-8d0c-43a1-9005-64cbdaba1ddc" containerName="extract-utilities" Dec 06 10:29:02 crc kubenswrapper[4895]: E1206 10:29:02.071512 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="46d1cc68-8d0c-43a1-9005-64cbdaba1ddc" containerName="extract-content" Dec 06 10:29:02 crc kubenswrapper[4895]: I1206 10:29:02.071518 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d1cc68-8d0c-43a1-9005-64cbdaba1ddc" containerName="extract-content" Dec 06 10:29:02 crc kubenswrapper[4895]: E1206 10:29:02.071535 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d1cc68-8d0c-43a1-9005-64cbdaba1ddc" containerName="registry-server" Dec 06 10:29:02 crc kubenswrapper[4895]: I1206 10:29:02.071540 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d1cc68-8d0c-43a1-9005-64cbdaba1ddc" containerName="registry-server" Dec 06 10:29:02 crc kubenswrapper[4895]: I1206 10:29:02.071741 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="46d1cc68-8d0c-43a1-9005-64cbdaba1ddc" containerName="registry-server" Dec 06 10:29:02 crc kubenswrapper[4895]: I1206 10:29:02.073115 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xs99f" Dec 06 10:29:02 crc kubenswrapper[4895]: I1206 10:29:02.088709 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xs99f"] Dec 06 10:29:02 crc kubenswrapper[4895]: I1206 10:29:02.195566 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2bqq\" (UniqueName: \"kubernetes.io/projected/8afac80f-0b94-449e-8452-e0db53a61052-kube-api-access-h2bqq\") pod \"redhat-marketplace-xs99f\" (UID: \"8afac80f-0b94-449e-8452-e0db53a61052\") " pod="openshift-marketplace/redhat-marketplace-xs99f" Dec 06 10:29:02 crc kubenswrapper[4895]: I1206 10:29:02.195641 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8afac80f-0b94-449e-8452-e0db53a61052-catalog-content\") pod \"redhat-marketplace-xs99f\" (UID: \"8afac80f-0b94-449e-8452-e0db53a61052\") " pod="openshift-marketplace/redhat-marketplace-xs99f" Dec 06 10:29:02 crc kubenswrapper[4895]: I1206 10:29:02.195676 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8afac80f-0b94-449e-8452-e0db53a61052-utilities\") pod \"redhat-marketplace-xs99f\" (UID: \"8afac80f-0b94-449e-8452-e0db53a61052\") " pod="openshift-marketplace/redhat-marketplace-xs99f" Dec 06 10:29:02 crc kubenswrapper[4895]: I1206 10:29:02.297756 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2bqq\" (UniqueName: \"kubernetes.io/projected/8afac80f-0b94-449e-8452-e0db53a61052-kube-api-access-h2bqq\") pod \"redhat-marketplace-xs99f\" (UID: \"8afac80f-0b94-449e-8452-e0db53a61052\") " pod="openshift-marketplace/redhat-marketplace-xs99f" Dec 06 10:29:02 crc kubenswrapper[4895]: I1206 10:29:02.297821 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8afac80f-0b94-449e-8452-e0db53a61052-catalog-content\") pod \"redhat-marketplace-xs99f\" (UID: \"8afac80f-0b94-449e-8452-e0db53a61052\") " pod="openshift-marketplace/redhat-marketplace-xs99f" Dec 06 10:29:02 crc kubenswrapper[4895]: I1206 10:29:02.297855 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8afac80f-0b94-449e-8452-e0db53a61052-utilities\") pod 
\"redhat-marketplace-xs99f\" (UID: \"8afac80f-0b94-449e-8452-e0db53a61052\") " pod="openshift-marketplace/redhat-marketplace-xs99f" Dec 06 10:29:02 crc kubenswrapper[4895]: I1206 10:29:02.298317 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8afac80f-0b94-449e-8452-e0db53a61052-utilities\") pod \"redhat-marketplace-xs99f\" (UID: \"8afac80f-0b94-449e-8452-e0db53a61052\") " pod="openshift-marketplace/redhat-marketplace-xs99f" Dec 06 10:29:02 crc kubenswrapper[4895]: I1206 10:29:02.298405 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8afac80f-0b94-449e-8452-e0db53a61052-catalog-content\") pod \"redhat-marketplace-xs99f\" (UID: \"8afac80f-0b94-449e-8452-e0db53a61052\") " pod="openshift-marketplace/redhat-marketplace-xs99f" Dec 06 10:29:02 crc kubenswrapper[4895]: I1206 10:29:02.319862 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2bqq\" (UniqueName: \"kubernetes.io/projected/8afac80f-0b94-449e-8452-e0db53a61052-kube-api-access-h2bqq\") pod \"redhat-marketplace-xs99f\" (UID: \"8afac80f-0b94-449e-8452-e0db53a61052\") " pod="openshift-marketplace/redhat-marketplace-xs99f" Dec 06 10:29:02 crc kubenswrapper[4895]: I1206 10:29:02.390924 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xs99f" Dec 06 10:29:02 crc kubenswrapper[4895]: I1206 10:29:02.954619 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xs99f"] Dec 06 10:29:03 crc kubenswrapper[4895]: I1206 10:29:03.364084 4895 generic.go:334] "Generic (PLEG): container finished" podID="8afac80f-0b94-449e-8452-e0db53a61052" containerID="339de40259467cfb907a863fe05116ed88ef5e76c294b4f8021513375aa9fb1d" exitCode=0 Dec 06 10:29:03 crc kubenswrapper[4895]: I1206 10:29:03.364127 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xs99f" event={"ID":"8afac80f-0b94-449e-8452-e0db53a61052","Type":"ContainerDied","Data":"339de40259467cfb907a863fe05116ed88ef5e76c294b4f8021513375aa9fb1d"} Dec 06 10:29:03 crc kubenswrapper[4895]: I1206 10:29:03.364154 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xs99f" event={"ID":"8afac80f-0b94-449e-8452-e0db53a61052","Type":"ContainerStarted","Data":"f7d4c0e475cef2219e29eb769eeb4b63ce8a1989dfbe79bfd33b9834436384f2"} Dec 06 10:29:04 crc kubenswrapper[4895]: I1206 10:29:04.379886 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xs99f" event={"ID":"8afac80f-0b94-449e-8452-e0db53a61052","Type":"ContainerStarted","Data":"0ed69dd3086c4c5b5fa53af17edf1ab9474631e36f3999e3b0a8adc8a45d95bf"} Dec 06 10:29:05 crc kubenswrapper[4895]: I1206 10:29:05.392696 4895 generic.go:334] "Generic (PLEG): container finished" podID="8afac80f-0b94-449e-8452-e0db53a61052" containerID="0ed69dd3086c4c5b5fa53af17edf1ab9474631e36f3999e3b0a8adc8a45d95bf" exitCode=0 Dec 06 10:29:05 crc kubenswrapper[4895]: I1206 10:29:05.392988 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xs99f" event={"ID":"8afac80f-0b94-449e-8452-e0db53a61052","Type":"ContainerDied","Data":"0ed69dd3086c4c5b5fa53af17edf1ab9474631e36f3999e3b0a8adc8a45d95bf"} Dec 06 10:29:06 crc kubenswrapper[4895]: I1206 10:29:06.416781 4895 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xs99f" event={"ID":"8afac80f-0b94-449e-8452-e0db53a61052","Type":"ContainerStarted","Data":"7c23e740bba9c572487fb4e92149da739482d798705d3217f5f1a6a96ff98244"} Dec 06 10:29:06 crc kubenswrapper[4895]: I1206 10:29:06.448336 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xs99f" podStartSLOduration=1.992419001 podStartE2EDuration="4.448316605s" podCreationTimestamp="2025-12-06 10:29:02 +0000 UTC" firstStartedPulling="2025-12-06 10:29:03.366622553 +0000 UTC m=+12705.768011453" lastFinishedPulling="2025-12-06 10:29:05.822520177 +0000 UTC m=+12708.223909057" observedRunningTime="2025-12-06 10:29:06.440217386 +0000 UTC m=+12708.841606276" watchObservedRunningTime="2025-12-06 10:29:06.448316605 +0000 UTC m=+12708.849705485" Dec 06 10:29:12 crc kubenswrapper[4895]: I1206 10:29:12.392002 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xs99f" Dec 06 10:29:12 crc kubenswrapper[4895]: I1206 10:29:12.392994 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xs99f" Dec 06 10:29:12 crc kubenswrapper[4895]: I1206 10:29:12.476466 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xs99f" Dec 06 10:29:12 crc kubenswrapper[4895]: I1206 10:29:12.577790 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xs99f" Dec 06 10:29:12 crc kubenswrapper[4895]: I1206 10:29:12.734725 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xs99f"] Dec 06 10:29:14 crc kubenswrapper[4895]: I1206 10:29:14.544603 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xs99f" podUID="8afac80f-0b94-449e-8452-e0db53a61052" containerName="registry-server" containerID="cri-o://7c23e740bba9c572487fb4e92149da739482d798705d3217f5f1a6a96ff98244" gracePeriod=2 Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.189756 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xs99f" Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.262618 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8afac80f-0b94-449e-8452-e0db53a61052-utilities\") pod \"8afac80f-0b94-449e-8452-e0db53a61052\" (UID: \"8afac80f-0b94-449e-8452-e0db53a61052\") " Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.262830 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2bqq\" (UniqueName: \"kubernetes.io/projected/8afac80f-0b94-449e-8452-e0db53a61052-kube-api-access-h2bqq\") pod \"8afac80f-0b94-449e-8452-e0db53a61052\" (UID: \"8afac80f-0b94-449e-8452-e0db53a61052\") " Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.262997 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8afac80f-0b94-449e-8452-e0db53a61052-catalog-content\") pod \"8afac80f-0b94-449e-8452-e0db53a61052\" (UID: \"8afac80f-0b94-449e-8452-e0db53a61052\") " Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.264198 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8afac80f-0b94-449e-8452-e0db53a61052-utilities" (OuterVolumeSpecName: "utilities") pod "8afac80f-0b94-449e-8452-e0db53a61052" (UID: "8afac80f-0b94-449e-8452-e0db53a61052"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.272673 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8afac80f-0b94-449e-8452-e0db53a61052-kube-api-access-h2bqq" (OuterVolumeSpecName: "kube-api-access-h2bqq") pod "8afac80f-0b94-449e-8452-e0db53a61052" (UID: "8afac80f-0b94-449e-8452-e0db53a61052"). InnerVolumeSpecName "kube-api-access-h2bqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.289624 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8afac80f-0b94-449e-8452-e0db53a61052-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8afac80f-0b94-449e-8452-e0db53a61052" (UID: "8afac80f-0b94-449e-8452-e0db53a61052"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.365910 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8afac80f-0b94-449e-8452-e0db53a61052-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.365942 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2bqq\" (UniqueName: \"kubernetes.io/projected/8afac80f-0b94-449e-8452-e0db53a61052-kube-api-access-h2bqq\") on node \"crc\" DevicePath \"\"" Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.365954 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8afac80f-0b94-449e-8452-e0db53a61052-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.575718 4895 generic.go:334] "Generic (PLEG): container finished" podID="8afac80f-0b94-449e-8452-e0db53a61052" containerID="7c23e740bba9c572487fb4e92149da739482d798705d3217f5f1a6a96ff98244" exitCode=0 Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.575829 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xs99f" event={"ID":"8afac80f-0b94-449e-8452-e0db53a61052","Type":"ContainerDied","Data":"7c23e740bba9c572487fb4e92149da739482d798705d3217f5f1a6a96ff98244"} Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.575863 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xs99f" Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.575892 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xs99f" event={"ID":"8afac80f-0b94-449e-8452-e0db53a61052","Type":"ContainerDied","Data":"f7d4c0e475cef2219e29eb769eeb4b63ce8a1989dfbe79bfd33b9834436384f2"} Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.575942 4895 scope.go:117] "RemoveContainer" containerID="7c23e740bba9c572487fb4e92149da739482d798705d3217f5f1a6a96ff98244" Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.614452 4895 scope.go:117] "RemoveContainer" containerID="0ed69dd3086c4c5b5fa53af17edf1ab9474631e36f3999e3b0a8adc8a45d95bf" Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.634918 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xs99f"] Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.647352 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xs99f"] Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.668800 4895 scope.go:117] "RemoveContainer" containerID="339de40259467cfb907a863fe05116ed88ef5e76c294b4f8021513375aa9fb1d" Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.722315 4895 scope.go:117] "RemoveContainer" containerID="7c23e740bba9c572487fb4e92149da739482d798705d3217f5f1a6a96ff98244" Dec 06 10:29:15 crc kubenswrapper[4895]: E1206 10:29:15.724745 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c23e740bba9c572487fb4e92149da739482d798705d3217f5f1a6a96ff98244\": container with ID starting with 7c23e740bba9c572487fb4e92149da739482d798705d3217f5f1a6a96ff98244 not found: ID does not exist" containerID="7c23e740bba9c572487fb4e92149da739482d798705d3217f5f1a6a96ff98244" Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.724797 4895 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c23e740bba9c572487fb4e92149da739482d798705d3217f5f1a6a96ff98244"} err="failed to get container status \"7c23e740bba9c572487fb4e92149da739482d798705d3217f5f1a6a96ff98244\": rpc error: code = NotFound desc = could not find container \"7c23e740bba9c572487fb4e92149da739482d798705d3217f5f1a6a96ff98244\": container with ID starting with 7c23e740bba9c572487fb4e92149da739482d798705d3217f5f1a6a96ff98244 not found: ID does not exist" Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.724828 4895 scope.go:117] "RemoveContainer" containerID="0ed69dd3086c4c5b5fa53af17edf1ab9474631e36f3999e3b0a8adc8a45d95bf" Dec 06 10:29:15 crc kubenswrapper[4895]: E1206 10:29:15.725510 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ed69dd3086c4c5b5fa53af17edf1ab9474631e36f3999e3b0a8adc8a45d95bf\": container with ID starting with 0ed69dd3086c4c5b5fa53af17edf1ab9474631e36f3999e3b0a8adc8a45d95bf not found: ID does not exist" containerID="0ed69dd3086c4c5b5fa53af17edf1ab9474631e36f3999e3b0a8adc8a45d95bf" Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.725578 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ed69dd3086c4c5b5fa53af17edf1ab9474631e36f3999e3b0a8adc8a45d95bf"} err="failed to get container status \"0ed69dd3086c4c5b5fa53af17edf1ab9474631e36f3999e3b0a8adc8a45d95bf\": rpc error: code = NotFound desc = could not find container \"0ed69dd3086c4c5b5fa53af17edf1ab9474631e36f3999e3b0a8adc8a45d95bf\": container with ID starting with 0ed69dd3086c4c5b5fa53af17edf1ab9474631e36f3999e3b0a8adc8a45d95bf not found: ID does not exist" Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.725748 4895 scope.go:117] "RemoveContainer" containerID="339de40259467cfb907a863fe05116ed88ef5e76c294b4f8021513375aa9fb1d" Dec 06 10:29:15 crc kubenswrapper[4895]: E1206 10:29:15.726262 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"339de40259467cfb907a863fe05116ed88ef5e76c294b4f8021513375aa9fb1d\": container with ID starting with 339de40259467cfb907a863fe05116ed88ef5e76c294b4f8021513375aa9fb1d not found: ID does not exist" containerID="339de40259467cfb907a863fe05116ed88ef5e76c294b4f8021513375aa9fb1d" Dec 06 10:29:15 crc kubenswrapper[4895]: I1206 10:29:15.726294 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"339de40259467cfb907a863fe05116ed88ef5e76c294b4f8021513375aa9fb1d"} err="failed to get container status \"339de40259467cfb907a863fe05116ed88ef5e76c294b4f8021513375aa9fb1d\": rpc error: code = NotFound desc = could not find container \"339de40259467cfb907a863fe05116ed88ef5e76c294b4f8021513375aa9fb1d\": container with ID starting with 339de40259467cfb907a863fe05116ed88ef5e76c294b4f8021513375aa9fb1d not found: ID does not exist" Dec 06 10:29:16 crc kubenswrapper[4895]: I1206 10:29:16.069780 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8afac80f-0b94-449e-8452-e0db53a61052" path="/var/lib/kubelet/pods/8afac80f-0b94-449e-8452-e0db53a61052/volumes" Dec 06 10:29:59 crc kubenswrapper[4895]: I1206 10:29:59.696048 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 10:29:59 crc kubenswrapper[4895]: I1206 10:29:59.696885 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 10:30:00 crc kubenswrapper[4895]: I1206 10:30:00.179704 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb"] Dec 06 10:30:00 crc kubenswrapper[4895]: E1206 10:30:00.180224 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8afac80f-0b94-449e-8452-e0db53a61052" containerName="extract-content" Dec 06 10:30:00 crc kubenswrapper[4895]: I1206 10:30:00.180254 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8afac80f-0b94-449e-8452-e0db53a61052" containerName="extract-content" Dec 06 10:30:00 crc kubenswrapper[4895]: E1206 10:30:00.180301 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8afac80f-0b94-449e-8452-e0db53a61052" containerName="extract-utilities" Dec 06 10:30:00 crc kubenswrapper[4895]: I1206 10:30:00.180311 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8afac80f-0b94-449e-8452-e0db53a61052" containerName="extract-utilities" Dec 06 10:30:00 crc kubenswrapper[4895]: E1206 10:30:00.180339 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8afac80f-0b94-449e-8452-e0db53a61052" containerName="registry-server" Dec 06 10:30:00 crc kubenswrapper[4895]: I1206 10:30:00.180347 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8afac80f-0b94-449e-8452-e0db53a61052" containerName="registry-server" Dec 06 10:30:00 crc kubenswrapper[4895]: I1206 10:30:00.180646 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8afac80f-0b94-449e-8452-e0db53a61052" containerName="registry-server" Dec 06 10:30:00 crc kubenswrapper[4895]: I1206 10:30:00.181662 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb" Dec 06 10:30:00 crc kubenswrapper[4895]: I1206 10:30:00.184221 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 10:30:00 crc kubenswrapper[4895]: I1206 10:30:00.184435 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 10:30:00 crc kubenswrapper[4895]: I1206 10:30:00.206233 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb"] Dec 06 10:30:00 crc kubenswrapper[4895]: I1206 10:30:00.347444 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e87c25e-4a16-42a8-8e19-da5bed8f4b3d-secret-volume\") pod \"collect-profiles-29416950-qxgwb\" (UID: \"8e87c25e-4a16-42a8-8e19-da5bed8f4b3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb" Dec 06 10:30:00 crc kubenswrapper[4895]: I1206 10:30:00.347786 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e87c25e-4a16-42a8-8e19-da5bed8f4b3d-config-volume\") pod \"collect-profiles-29416950-qxgwb\" (UID: \"8e87c25e-4a16-42a8-8e19-da5bed8f4b3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb" Dec 06 10:30:00 crc kubenswrapper[4895]: I1206 10:30:00.347885 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmld4\" (UniqueName: \"kubernetes.io/projected/8e87c25e-4a16-42a8-8e19-da5bed8f4b3d-kube-api-access-xmld4\") pod \"collect-profiles-29416950-qxgwb\" (UID: \"8e87c25e-4a16-42a8-8e19-da5bed8f4b3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb" Dec 06 10:30:00 crc kubenswrapper[4895]: I1206 10:30:00.450828 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e87c25e-4a16-42a8-8e19-da5bed8f4b3d-config-volume\") pod \"collect-profiles-29416950-qxgwb\" (UID: \"8e87c25e-4a16-42a8-8e19-da5bed8f4b3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb" Dec 06 10:30:00 crc kubenswrapper[4895]: I1206 10:30:00.450945 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmld4\" (UniqueName: \"kubernetes.io/projected/8e87c25e-4a16-42a8-8e19-da5bed8f4b3d-kube-api-access-xmld4\") pod \"collect-profiles-29416950-qxgwb\" (UID: \"8e87c25e-4a16-42a8-8e19-da5bed8f4b3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb" Dec 06 10:30:00 crc kubenswrapper[4895]: I1206 10:30:00.451201 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e87c25e-4a16-42a8-8e19-da5bed8f4b3d-secret-volume\") pod \"collect-profiles-29416950-qxgwb\" (UID: \"8e87c25e-4a16-42a8-8e19-da5bed8f4b3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb" Dec 06 10:30:00 crc kubenswrapper[4895]: I1206 10:30:00.452015 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e87c25e-4a16-42a8-8e19-da5bed8f4b3d-config-volume\") pod 
\"collect-profiles-29416950-qxgwb\" (UID: \"8e87c25e-4a16-42a8-8e19-da5bed8f4b3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb" Dec 06 10:30:00 crc kubenswrapper[4895]: I1206 10:30:00.462348 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e87c25e-4a16-42a8-8e19-da5bed8f4b3d-secret-volume\") pod \"collect-profiles-29416950-qxgwb\" (UID: \"8e87c25e-4a16-42a8-8e19-da5bed8f4b3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb" Dec 06 10:30:00 crc kubenswrapper[4895]: I1206 10:30:00.481320 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmld4\" (UniqueName: \"kubernetes.io/projected/8e87c25e-4a16-42a8-8e19-da5bed8f4b3d-kube-api-access-xmld4\") pod \"collect-profiles-29416950-qxgwb\" (UID: \"8e87c25e-4a16-42a8-8e19-da5bed8f4b3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb" Dec 06 10:30:00 crc kubenswrapper[4895]: I1206 10:30:00.512734 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb" Dec 06 10:30:01 crc kubenswrapper[4895]: I1206 10:30:01.032944 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb"] Dec 06 10:30:01 crc kubenswrapper[4895]: I1206 10:30:01.269933 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb" event={"ID":"8e87c25e-4a16-42a8-8e19-da5bed8f4b3d","Type":"ContainerStarted","Data":"9ccf0a8732f9a0eaf04ce6f7b778256ba295c28e67834aae6af586d5ea58b8d1"} Dec 06 10:30:01 crc kubenswrapper[4895]: I1206 10:30:01.270271 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb" event={"ID":"8e87c25e-4a16-42a8-8e19-da5bed8f4b3d","Type":"ContainerStarted","Data":"b454fb28dadebf5d2d0dc27a917d3faca6cc2f2a314b9dc4dcfb030a11f644ae"} Dec 06 10:30:01 crc kubenswrapper[4895]: I1206 10:30:01.292895 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb" podStartSLOduration=1.292854521 podStartE2EDuration="1.292854521s" podCreationTimestamp="2025-12-06 10:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 10:30:01.281203686 +0000 UTC m=+12763.682592596" watchObservedRunningTime="2025-12-06 10:30:01.292854521 +0000 UTC m=+12763.694243401" Dec 06 10:30:02 crc kubenswrapper[4895]: I1206 10:30:02.302803 4895 generic.go:334] "Generic (PLEG): container finished" podID="8e87c25e-4a16-42a8-8e19-da5bed8f4b3d" containerID="9ccf0a8732f9a0eaf04ce6f7b778256ba295c28e67834aae6af586d5ea58b8d1" exitCode=0 Dec 06 10:30:02 crc kubenswrapper[4895]: I1206 10:30:02.305058 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb" event={"ID":"8e87c25e-4a16-42a8-8e19-da5bed8f4b3d","Type":"ContainerDied","Data":"9ccf0a8732f9a0eaf04ce6f7b778256ba295c28e67834aae6af586d5ea58b8d1"} Dec 06 10:30:03 crc kubenswrapper[4895]: I1206 10:30:03.841314 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb" Dec 06 10:30:03 crc kubenswrapper[4895]: I1206 10:30:03.946736 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e87c25e-4a16-42a8-8e19-da5bed8f4b3d-secret-volume\") pod \"8e87c25e-4a16-42a8-8e19-da5bed8f4b3d\" (UID: \"8e87c25e-4a16-42a8-8e19-da5bed8f4b3d\") " Dec 06 10:30:03 crc kubenswrapper[4895]: I1206 10:30:03.947452 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e87c25e-4a16-42a8-8e19-da5bed8f4b3d-config-volume\") pod \"8e87c25e-4a16-42a8-8e19-da5bed8f4b3d\" (UID: \"8e87c25e-4a16-42a8-8e19-da5bed8f4b3d\") " Dec 06 10:30:03 crc kubenswrapper[4895]: I1206 10:30:03.947727 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmld4\" (UniqueName: \"kubernetes.io/projected/8e87c25e-4a16-42a8-8e19-da5bed8f4b3d-kube-api-access-xmld4\") pod \"8e87c25e-4a16-42a8-8e19-da5bed8f4b3d\" (UID: \"8e87c25e-4a16-42a8-8e19-da5bed8f4b3d\") " Dec 06 10:30:03 crc kubenswrapper[4895]: I1206 10:30:03.948044 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e87c25e-4a16-42a8-8e19-da5bed8f4b3d-config-volume" (OuterVolumeSpecName: "config-volume") pod "8e87c25e-4a16-42a8-8e19-da5bed8f4b3d" (UID: "8e87c25e-4a16-42a8-8e19-da5bed8f4b3d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 10:30:03 crc kubenswrapper[4895]: I1206 10:30:03.948832 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e87c25e-4a16-42a8-8e19-da5bed8f4b3d-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 10:30:03 crc kubenswrapper[4895]: I1206 10:30:03.956997 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e87c25e-4a16-42a8-8e19-da5bed8f4b3d-kube-api-access-xmld4" (OuterVolumeSpecName: "kube-api-access-xmld4") pod "8e87c25e-4a16-42a8-8e19-da5bed8f4b3d" (UID: "8e87c25e-4a16-42a8-8e19-da5bed8f4b3d"). InnerVolumeSpecName "kube-api-access-xmld4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:30:03 crc kubenswrapper[4895]: I1206 10:30:03.960195 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e87c25e-4a16-42a8-8e19-da5bed8f4b3d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8e87c25e-4a16-42a8-8e19-da5bed8f4b3d" (UID: "8e87c25e-4a16-42a8-8e19-da5bed8f4b3d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 10:30:04 crc kubenswrapper[4895]: I1206 10:30:04.051148 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmld4\" (UniqueName: \"kubernetes.io/projected/8e87c25e-4a16-42a8-8e19-da5bed8f4b3d-kube-api-access-xmld4\") on node \"crc\" DevicePath \"\"" Dec 06 10:30:04 crc kubenswrapper[4895]: I1206 10:30:04.051191 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e87c25e-4a16-42a8-8e19-da5bed8f4b3d-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 10:30:04 crc kubenswrapper[4895]: I1206 10:30:04.348207 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb" event={"ID":"8e87c25e-4a16-42a8-8e19-da5bed8f4b3d","Type":"ContainerDied","Data":"b454fb28dadebf5d2d0dc27a917d3faca6cc2f2a314b9dc4dcfb030a11f644ae"} Dec 06 10:30:04 crc kubenswrapper[4895]: I1206 10:30:04.348266 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b454fb28dadebf5d2d0dc27a917d3faca6cc2f2a314b9dc4dcfb030a11f644ae" Dec 06 10:30:04 crc kubenswrapper[4895]: I1206 10:30:04.348372 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416950-qxgwb" Dec 06 10:30:04 crc kubenswrapper[4895]: I1206 10:30:04.394009 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn"] Dec 06 10:30:04 crc kubenswrapper[4895]: I1206 10:30:04.406739 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416905-mfgnn"] Dec 06 10:30:06 crc kubenswrapper[4895]: I1206 10:30:06.075184 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="699a968e-8c28-41c0-a326-31306d4cbab6" path="/var/lib/kubelet/pods/699a968e-8c28-41c0-a326-31306d4cbab6/volumes" Dec 06 10:30:23 crc kubenswrapper[4895]: I1206 10:30:23.173180 4895 scope.go:117] "RemoveContainer" containerID="d90d81014beba77f9e6cb53066bddbfbb60057498effe571874292fd5399de3d" Dec 06 10:30:29 crc kubenswrapper[4895]: I1206 10:30:29.695666 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 10:30:29 crc kubenswrapper[4895]: I1206 10:30:29.696243 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.236808 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g8529"] Dec 06 10:30:43 crc kubenswrapper[4895]: E1206 10:30:43.238098 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e87c25e-4a16-42a8-8e19-da5bed8f4b3d" containerName="collect-profiles" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.238116 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e87c25e-4a16-42a8-8e19-da5bed8f4b3d" containerName="collect-profiles" Dec 06 10:30:43 crc kubenswrapper[4895]: 
I1206 10:30:43.238454 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e87c25e-4a16-42a8-8e19-da5bed8f4b3d" containerName="collect-profiles" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.240321 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g8529" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.252044 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g8529"] Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.425358 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zgvfj"] Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.426410 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95a58e9f-5cc1-4d73-bcb2-117c47defda8-catalog-content\") pod \"community-operators-g8529\" (UID: \"95a58e9f-5cc1-4d73-bcb2-117c47defda8\") " pod="openshift-marketplace/community-operators-g8529" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.426493 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95a58e9f-5cc1-4d73-bcb2-117c47defda8-utilities\") pod \"community-operators-g8529\" (UID: \"95a58e9f-5cc1-4d73-bcb2-117c47defda8\") " pod="openshift-marketplace/community-operators-g8529" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.427391 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmg5c\" (UniqueName: \"kubernetes.io/projected/95a58e9f-5cc1-4d73-bcb2-117c47defda8-kube-api-access-kmg5c\") pod \"community-operators-g8529\" (UID: \"95a58e9f-5cc1-4d73-bcb2-117c47defda8\") " pod="openshift-marketplace/community-operators-g8529" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.428440 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zgvfj" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.437131 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zgvfj"] Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.528994 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95a58e9f-5cc1-4d73-bcb2-117c47defda8-catalog-content\") pod \"community-operators-g8529\" (UID: \"95a58e9f-5cc1-4d73-bcb2-117c47defda8\") " pod="openshift-marketplace/community-operators-g8529" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.529152 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95a58e9f-5cc1-4d73-bcb2-117c47defda8-utilities\") pod \"community-operators-g8529\" (UID: \"95a58e9f-5cc1-4d73-bcb2-117c47defda8\") " pod="openshift-marketplace/community-operators-g8529" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.529312 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmg5c\" (UniqueName: \"kubernetes.io/projected/95a58e9f-5cc1-4d73-bcb2-117c47defda8-kube-api-access-kmg5c\") pod \"community-operators-g8529\" (UID: \"95a58e9f-5cc1-4d73-bcb2-117c47defda8\") " pod="openshift-marketplace/community-operators-g8529" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.529646 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95a58e9f-5cc1-4d73-bcb2-117c47defda8-catalog-content\") pod \"community-operators-g8529\" (UID: \"95a58e9f-5cc1-4d73-bcb2-117c47defda8\") " pod="openshift-marketplace/community-operators-g8529" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.529688 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95a58e9f-5cc1-4d73-bcb2-117c47defda8-utilities\") pod \"community-operators-g8529\" (UID: \"95a58e9f-5cc1-4d73-bcb2-117c47defda8\") " pod="openshift-marketplace/community-operators-g8529" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.555983 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmg5c\" (UniqueName: \"kubernetes.io/projected/95a58e9f-5cc1-4d73-bcb2-117c47defda8-kube-api-access-kmg5c\") pod \"community-operators-g8529\" (UID: \"95a58e9f-5cc1-4d73-bcb2-117c47defda8\") " pod="openshift-marketplace/community-operators-g8529" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.570451 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g8529" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.630614 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c755af-11bd-4b84-8d03-ef378816783a-utilities\") pod \"certified-operators-zgvfj\" (UID: \"37c755af-11bd-4b84-8d03-ef378816783a\") " pod="openshift-marketplace/certified-operators-zgvfj" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.630953 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x9db\" (UniqueName: \"kubernetes.io/projected/37c755af-11bd-4b84-8d03-ef378816783a-kube-api-access-7x9db\") pod \"certified-operators-zgvfj\" (UID: \"37c755af-11bd-4b84-8d03-ef378816783a\") " pod="openshift-marketplace/certified-operators-zgvfj" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.630987 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c755af-11bd-4b84-8d03-ef378816783a-catalog-content\") pod \"certified-operators-zgvfj\" (UID: \"37c755af-11bd-4b84-8d03-ef378816783a\") " pod="openshift-marketplace/certified-operators-zgvfj" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.734805 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x9db\" (UniqueName: \"kubernetes.io/projected/37c755af-11bd-4b84-8d03-ef378816783a-kube-api-access-7x9db\") pod \"certified-operators-zgvfj\" (UID: \"37c755af-11bd-4b84-8d03-ef378816783a\") " pod="openshift-marketplace/certified-operators-zgvfj" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.735130 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c755af-11bd-4b84-8d03-ef378816783a-catalog-content\") pod \"certified-operators-zgvfj\" (UID: \"37c755af-11bd-4b84-8d03-ef378816783a\") " pod="openshift-marketplace/certified-operators-zgvfj" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.735279 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c755af-11bd-4b84-8d03-ef378816783a-utilities\") pod \"certified-operators-zgvfj\" (UID: \"37c755af-11bd-4b84-8d03-ef378816783a\") " pod="openshift-marketplace/certified-operators-zgvfj" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.735756 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c755af-11bd-4b84-8d03-ef378816783a-utilities\") pod \"certified-operators-zgvfj\" (UID: \"37c755af-11bd-4b84-8d03-ef378816783a\") " pod="openshift-marketplace/certified-operators-zgvfj" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.735827 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c755af-11bd-4b84-8d03-ef378816783a-catalog-content\") pod \"certified-operators-zgvfj\" (UID: \"37c755af-11bd-4b84-8d03-ef378816783a\") " pod="openshift-marketplace/certified-operators-zgvfj" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.754347 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x9db\" (UniqueName: \"kubernetes.io/projected/37c755af-11bd-4b84-8d03-ef378816783a-kube-api-access-7x9db\") pod 
\"certified-operators-zgvfj\" (UID: \"37c755af-11bd-4b84-8d03-ef378816783a\") " pod="openshift-marketplace/certified-operators-zgvfj" Dec 06 10:30:43 crc kubenswrapper[4895]: I1206 10:30:43.767881 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zgvfj" Dec 06 10:30:44 crc kubenswrapper[4895]: I1206 10:30:44.215006 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g8529"] Dec 06 10:30:44 crc kubenswrapper[4895]: I1206 10:30:44.490835 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zgvfj"] Dec 06 10:30:44 crc kubenswrapper[4895]: I1206 10:30:44.938605 4895 generic.go:334] "Generic (PLEG): container finished" podID="37c755af-11bd-4b84-8d03-ef378816783a" containerID="bba07e0f352d76587baf0336a72e04c57d409f1e2b8cdb205f206467c5729dd7" exitCode=0 Dec 06 10:30:44 crc kubenswrapper[4895]: I1206 10:30:44.938699 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zgvfj" event={"ID":"37c755af-11bd-4b84-8d03-ef378816783a","Type":"ContainerDied","Data":"bba07e0f352d76587baf0336a72e04c57d409f1e2b8cdb205f206467c5729dd7"} Dec 06 10:30:44 crc kubenswrapper[4895]: I1206 10:30:44.939074 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zgvfj" event={"ID":"37c755af-11bd-4b84-8d03-ef378816783a","Type":"ContainerStarted","Data":"cb7774daa3a1d057db7bcdaebb5fc6c514ff7b0cd4c0b6a250f35d07106d17df"} Dec 06 10:30:44 crc kubenswrapper[4895]: I1206 10:30:44.942912 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 10:30:44 crc kubenswrapper[4895]: I1206 10:30:44.943346 4895 generic.go:334] "Generic (PLEG): container finished" podID="95a58e9f-5cc1-4d73-bcb2-117c47defda8" containerID="d17b1e6fdbd31d46af1479761b9e72e01f6cf9d4c53e53b6a50ebfb3f845e315" exitCode=0 Dec 06 10:30:44 crc kubenswrapper[4895]: I1206 10:30:44.943381 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8529" event={"ID":"95a58e9f-5cc1-4d73-bcb2-117c47defda8","Type":"ContainerDied","Data":"d17b1e6fdbd31d46af1479761b9e72e01f6cf9d4c53e53b6a50ebfb3f845e315"} Dec 06 10:30:44 crc kubenswrapper[4895]: I1206 10:30:44.943408 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8529" event={"ID":"95a58e9f-5cc1-4d73-bcb2-117c47defda8","Type":"ContainerStarted","Data":"1a1a40db29231899d74e36857a0d53132f40e11c276e18ae24aad6e9ac11d692"} Dec 06 10:30:45 crc kubenswrapper[4895]: I1206 10:30:45.957060 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zgvfj" event={"ID":"37c755af-11bd-4b84-8d03-ef378816783a","Type":"ContainerStarted","Data":"d04fd1d2f23d203a9016bd644b0f5dd54e851ae1f25206535a084a827a9b3fb2"} Dec 06 10:30:45 crc kubenswrapper[4895]: I1206 10:30:45.960586 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8529" event={"ID":"95a58e9f-5cc1-4d73-bcb2-117c47defda8","Type":"ContainerStarted","Data":"2fbb3db3f4ec703f6330b1a4b5cb2c6e1517d9db2f15be5112e992962ee61308"} Dec 06 10:30:46 crc kubenswrapper[4895]: I1206 10:30:46.973891 4895 generic.go:334] "Generic (PLEG): container finished" podID="95a58e9f-5cc1-4d73-bcb2-117c47defda8" containerID="2fbb3db3f4ec703f6330b1a4b5cb2c6e1517d9db2f15be5112e992962ee61308" exitCode=0 Dec 06 
10:30:46 crc kubenswrapper[4895]: I1206 10:30:46.973993 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8529" event={"ID":"95a58e9f-5cc1-4d73-bcb2-117c47defda8","Type":"ContainerDied","Data":"2fbb3db3f4ec703f6330b1a4b5cb2c6e1517d9db2f15be5112e992962ee61308"} Dec 06 10:30:46 crc kubenswrapper[4895]: I1206 10:30:46.977901 4895 generic.go:334] "Generic (PLEG): container finished" podID="37c755af-11bd-4b84-8d03-ef378816783a" containerID="d04fd1d2f23d203a9016bd644b0f5dd54e851ae1f25206535a084a827a9b3fb2" exitCode=0 Dec 06 10:30:46 crc kubenswrapper[4895]: I1206 10:30:46.977957 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zgvfj" event={"ID":"37c755af-11bd-4b84-8d03-ef378816783a","Type":"ContainerDied","Data":"d04fd1d2f23d203a9016bd644b0f5dd54e851ae1f25206535a084a827a9b3fb2"} Dec 06 10:30:47 crc kubenswrapper[4895]: I1206 10:30:47.993641 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zgvfj" event={"ID":"37c755af-11bd-4b84-8d03-ef378816783a","Type":"ContainerStarted","Data":"d08182840a84dd8f1e16d1aec55c3094c5d475848caba9e07944db85060a1617"} Dec 06 10:30:47 crc kubenswrapper[4895]: I1206 10:30:47.997118 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8529" event={"ID":"95a58e9f-5cc1-4d73-bcb2-117c47defda8","Type":"ContainerStarted","Data":"72f24d604125191a2a592b5ed22cfee0bb5a9dd1f267b374db463a19a35b207d"} Dec 06 10:30:48 crc kubenswrapper[4895]: I1206 10:30:48.025894 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zgvfj" podStartSLOduration=2.588929298 podStartE2EDuration="5.02587771s" podCreationTimestamp="2025-12-06 10:30:43 +0000 UTC" firstStartedPulling="2025-12-06 10:30:44.942450321 +0000 UTC m=+12807.343839211" lastFinishedPulling="2025-12-06 10:30:47.379398713 +0000 UTC m=+12809.780787623" observedRunningTime="2025-12-06 10:30:48.018537932 +0000 UTC m=+12810.419926802" watchObservedRunningTime="2025-12-06 10:30:48.02587771 +0000 UTC m=+12810.427266580" Dec 06 10:30:48 crc kubenswrapper[4895]: I1206 10:30:48.068869 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g8529" podStartSLOduration=2.587837547 podStartE2EDuration="5.06884486s" podCreationTimestamp="2025-12-06 10:30:43 +0000 UTC" firstStartedPulling="2025-12-06 10:30:44.945128913 +0000 UTC m=+12807.346517793" lastFinishedPulling="2025-12-06 10:30:47.426136226 +0000 UTC m=+12809.827525106" observedRunningTime="2025-12-06 10:30:48.046397984 +0000 UTC m=+12810.447786864" watchObservedRunningTime="2025-12-06 10:30:48.06884486 +0000 UTC m=+12810.470233770" Dec 06 10:30:53 crc kubenswrapper[4895]: I1206 10:30:53.571135 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g8529" Dec 06 10:30:53 crc kubenswrapper[4895]: I1206 10:30:53.571795 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g8529" Dec 06 10:30:53 crc kubenswrapper[4895]: I1206 10:30:53.622984 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g8529" Dec 06 10:30:53 crc kubenswrapper[4895]: I1206 10:30:53.768514 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-zgvfj" Dec 06 10:30:53 crc kubenswrapper[4895]: I1206 10:30:53.768564 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zgvfj" Dec 06 10:30:53 crc kubenswrapper[4895]: I1206 10:30:53.841453 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zgvfj" Dec 06 10:30:54 crc kubenswrapper[4895]: I1206 10:30:54.123835 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zgvfj" Dec 06 10:30:54 crc kubenswrapper[4895]: I1206 10:30:54.132075 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g8529" Dec 06 10:30:58 crc kubenswrapper[4895]: I1206 10:30:58.221103 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g8529"] Dec 06 10:30:58 crc kubenswrapper[4895]: I1206 10:30:58.221852 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g8529" podUID="95a58e9f-5cc1-4d73-bcb2-117c47defda8" containerName="registry-server" containerID="cri-o://72f24d604125191a2a592b5ed22cfee0bb5a9dd1f267b374db463a19a35b207d" gracePeriod=2 Dec 06 10:30:58 crc kubenswrapper[4895]: I1206 10:30:58.623635 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zgvfj"] Dec 06 10:30:58 crc kubenswrapper[4895]: I1206 10:30:58.624192 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zgvfj" podUID="37c755af-11bd-4b84-8d03-ef378816783a" containerName="registry-server" containerID="cri-o://d08182840a84dd8f1e16d1aec55c3094c5d475848caba9e07944db85060a1617" gracePeriod=2 Dec 06 10:30:58 crc kubenswrapper[4895]: I1206 10:30:58.907554 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g8529" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.038146 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95a58e9f-5cc1-4d73-bcb2-117c47defda8-utilities\") pod \"95a58e9f-5cc1-4d73-bcb2-117c47defda8\" (UID: \"95a58e9f-5cc1-4d73-bcb2-117c47defda8\") " Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.038382 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmg5c\" (UniqueName: \"kubernetes.io/projected/95a58e9f-5cc1-4d73-bcb2-117c47defda8-kube-api-access-kmg5c\") pod \"95a58e9f-5cc1-4d73-bcb2-117c47defda8\" (UID: \"95a58e9f-5cc1-4d73-bcb2-117c47defda8\") " Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.038564 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95a58e9f-5cc1-4d73-bcb2-117c47defda8-catalog-content\") pod \"95a58e9f-5cc1-4d73-bcb2-117c47defda8\" (UID: \"95a58e9f-5cc1-4d73-bcb2-117c47defda8\") " Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.039073 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95a58e9f-5cc1-4d73-bcb2-117c47defda8-utilities" (OuterVolumeSpecName: "utilities") pod "95a58e9f-5cc1-4d73-bcb2-117c47defda8" (UID: "95a58e9f-5cc1-4d73-bcb2-117c47defda8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.048913 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a58e9f-5cc1-4d73-bcb2-117c47defda8-kube-api-access-kmg5c" (OuterVolumeSpecName: "kube-api-access-kmg5c") pod "95a58e9f-5cc1-4d73-bcb2-117c47defda8" (UID: "95a58e9f-5cc1-4d73-bcb2-117c47defda8"). InnerVolumeSpecName "kube-api-access-kmg5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.097753 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95a58e9f-5cc1-4d73-bcb2-117c47defda8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95a58e9f-5cc1-4d73-bcb2-117c47defda8" (UID: "95a58e9f-5cc1-4d73-bcb2-117c47defda8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.122039 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zgvfj" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.140666 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95a58e9f-5cc1-4d73-bcb2-117c47defda8-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.140891 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmg5c\" (UniqueName: \"kubernetes.io/projected/95a58e9f-5cc1-4d73-bcb2-117c47defda8-kube-api-access-kmg5c\") on node \"crc\" DevicePath \"\"" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.141001 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95a58e9f-5cc1-4d73-bcb2-117c47defda8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.151222 4895 generic.go:334] "Generic (PLEG): container finished" podID="95a58e9f-5cc1-4d73-bcb2-117c47defda8" containerID="72f24d604125191a2a592b5ed22cfee0bb5a9dd1f267b374db463a19a35b207d" exitCode=0 Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.151276 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8529" event={"ID":"95a58e9f-5cc1-4d73-bcb2-117c47defda8","Type":"ContainerDied","Data":"72f24d604125191a2a592b5ed22cfee0bb5a9dd1f267b374db463a19a35b207d"} Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.151303 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8529" event={"ID":"95a58e9f-5cc1-4d73-bcb2-117c47defda8","Type":"ContainerDied","Data":"1a1a40db29231899d74e36857a0d53132f40e11c276e18ae24aad6e9ac11d692"} Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.151319 4895 scope.go:117] "RemoveContainer" containerID="72f24d604125191a2a592b5ed22cfee0bb5a9dd1f267b374db463a19a35b207d" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.151452 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g8529" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.159082 4895 generic.go:334] "Generic (PLEG): container finished" podID="37c755af-11bd-4b84-8d03-ef378816783a" containerID="d08182840a84dd8f1e16d1aec55c3094c5d475848caba9e07944db85060a1617" exitCode=0 Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.159125 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zgvfj" event={"ID":"37c755af-11bd-4b84-8d03-ef378816783a","Type":"ContainerDied","Data":"d08182840a84dd8f1e16d1aec55c3094c5d475848caba9e07944db85060a1617"} Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.159133 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zgvfj" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.159149 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zgvfj" event={"ID":"37c755af-11bd-4b84-8d03-ef378816783a","Type":"ContainerDied","Data":"cb7774daa3a1d057db7bcdaebb5fc6c514ff7b0cd4c0b6a250f35d07106d17df"} Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.191578 4895 scope.go:117] "RemoveContainer" containerID="2fbb3db3f4ec703f6330b1a4b5cb2c6e1517d9db2f15be5112e992962ee61308" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.192437 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g8529"] Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.202448 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g8529"] Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.217226 4895 scope.go:117] "RemoveContainer" containerID="d17b1e6fdbd31d46af1479761b9e72e01f6cf9d4c53e53b6a50ebfb3f845e315" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.241894 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c755af-11bd-4b84-8d03-ef378816783a-utilities\") pod \"37c755af-11bd-4b84-8d03-ef378816783a\" (UID: \"37c755af-11bd-4b84-8d03-ef378816783a\") " Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.241954 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c755af-11bd-4b84-8d03-ef378816783a-catalog-content\") pod \"37c755af-11bd-4b84-8d03-ef378816783a\" (UID: \"37c755af-11bd-4b84-8d03-ef378816783a\") " Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.242020 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x9db\" (UniqueName: \"kubernetes.io/projected/37c755af-11bd-4b84-8d03-ef378816783a-kube-api-access-7x9db\") pod \"37c755af-11bd-4b84-8d03-ef378816783a\" (UID: \"37c755af-11bd-4b84-8d03-ef378816783a\") " Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.243315 4895 scope.go:117] "RemoveContainer" containerID="72f24d604125191a2a592b5ed22cfee0bb5a9dd1f267b374db463a19a35b207d" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.244051 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37c755af-11bd-4b84-8d03-ef378816783a-utilities" (OuterVolumeSpecName: "utilities") pod "37c755af-11bd-4b84-8d03-ef378816783a" (UID: "37c755af-11bd-4b84-8d03-ef378816783a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:30:59 crc kubenswrapper[4895]: E1206 10:30:59.244100 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72f24d604125191a2a592b5ed22cfee0bb5a9dd1f267b374db463a19a35b207d\": container with ID starting with 72f24d604125191a2a592b5ed22cfee0bb5a9dd1f267b374db463a19a35b207d not found: ID does not exist" containerID="72f24d604125191a2a592b5ed22cfee0bb5a9dd1f267b374db463a19a35b207d" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.244152 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72f24d604125191a2a592b5ed22cfee0bb5a9dd1f267b374db463a19a35b207d"} err="failed to get container status \"72f24d604125191a2a592b5ed22cfee0bb5a9dd1f267b374db463a19a35b207d\": rpc error: code = NotFound desc = could not find container \"72f24d604125191a2a592b5ed22cfee0bb5a9dd1f267b374db463a19a35b207d\": container with ID starting with 72f24d604125191a2a592b5ed22cfee0bb5a9dd1f267b374db463a19a35b207d not found: ID does not exist" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.244181 4895 scope.go:117] "RemoveContainer" containerID="2fbb3db3f4ec703f6330b1a4b5cb2c6e1517d9db2f15be5112e992962ee61308" Dec 06 10:30:59 crc kubenswrapper[4895]: E1206 10:30:59.244580 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fbb3db3f4ec703f6330b1a4b5cb2c6e1517d9db2f15be5112e992962ee61308\": container with ID starting with 2fbb3db3f4ec703f6330b1a4b5cb2c6e1517d9db2f15be5112e992962ee61308 not found: ID does not exist" containerID="2fbb3db3f4ec703f6330b1a4b5cb2c6e1517d9db2f15be5112e992962ee61308" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.244619 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fbb3db3f4ec703f6330b1a4b5cb2c6e1517d9db2f15be5112e992962ee61308"} err="failed to get container status \"2fbb3db3f4ec703f6330b1a4b5cb2c6e1517d9db2f15be5112e992962ee61308\": rpc error: code = NotFound desc = could not find container \"2fbb3db3f4ec703f6330b1a4b5cb2c6e1517d9db2f15be5112e992962ee61308\": container with ID starting with 2fbb3db3f4ec703f6330b1a4b5cb2c6e1517d9db2f15be5112e992962ee61308 not found: ID does not exist" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.244640 4895 scope.go:117] "RemoveContainer" containerID="d17b1e6fdbd31d46af1479761b9e72e01f6cf9d4c53e53b6a50ebfb3f845e315" Dec 06 10:30:59 crc kubenswrapper[4895]: E1206 10:30:59.244989 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d17b1e6fdbd31d46af1479761b9e72e01f6cf9d4c53e53b6a50ebfb3f845e315\": container with ID starting with d17b1e6fdbd31d46af1479761b9e72e01f6cf9d4c53e53b6a50ebfb3f845e315 not found: ID does not exist" containerID="d17b1e6fdbd31d46af1479761b9e72e01f6cf9d4c53e53b6a50ebfb3f845e315" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.245030 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d17b1e6fdbd31d46af1479761b9e72e01f6cf9d4c53e53b6a50ebfb3f845e315"} err="failed to get container status \"d17b1e6fdbd31d46af1479761b9e72e01f6cf9d4c53e53b6a50ebfb3f845e315\": rpc error: code = NotFound desc = could not find container \"d17b1e6fdbd31d46af1479761b9e72e01f6cf9d4c53e53b6a50ebfb3f845e315\": container with ID starting with 
d17b1e6fdbd31d46af1479761b9e72e01f6cf9d4c53e53b6a50ebfb3f845e315 not found: ID does not exist" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.245056 4895 scope.go:117] "RemoveContainer" containerID="d08182840a84dd8f1e16d1aec55c3094c5d475848caba9e07944db85060a1617" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.247591 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c755af-11bd-4b84-8d03-ef378816783a-kube-api-access-7x9db" (OuterVolumeSpecName: "kube-api-access-7x9db") pod "37c755af-11bd-4b84-8d03-ef378816783a" (UID: "37c755af-11bd-4b84-8d03-ef378816783a"). InnerVolumeSpecName "kube-api-access-7x9db". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.283077 4895 scope.go:117] "RemoveContainer" containerID="d04fd1d2f23d203a9016bd644b0f5dd54e851ae1f25206535a084a827a9b3fb2" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.307029 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37c755af-11bd-4b84-8d03-ef378816783a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37c755af-11bd-4b84-8d03-ef378816783a" (UID: "37c755af-11bd-4b84-8d03-ef378816783a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.313435 4895 scope.go:117] "RemoveContainer" containerID="bba07e0f352d76587baf0336a72e04c57d409f1e2b8cdb205f206467c5729dd7" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.336715 4895 scope.go:117] "RemoveContainer" containerID="d08182840a84dd8f1e16d1aec55c3094c5d475848caba9e07944db85060a1617" Dec 06 10:30:59 crc kubenswrapper[4895]: E1206 10:30:59.337166 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d08182840a84dd8f1e16d1aec55c3094c5d475848caba9e07944db85060a1617\": container with ID starting with d08182840a84dd8f1e16d1aec55c3094c5d475848caba9e07944db85060a1617 not found: ID does not exist" containerID="d08182840a84dd8f1e16d1aec55c3094c5d475848caba9e07944db85060a1617" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.337223 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d08182840a84dd8f1e16d1aec55c3094c5d475848caba9e07944db85060a1617"} err="failed to get container status \"d08182840a84dd8f1e16d1aec55c3094c5d475848caba9e07944db85060a1617\": rpc error: code = NotFound desc = could not find container \"d08182840a84dd8f1e16d1aec55c3094c5d475848caba9e07944db85060a1617\": container with ID starting with d08182840a84dd8f1e16d1aec55c3094c5d475848caba9e07944db85060a1617 not found: ID does not exist" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.337256 4895 scope.go:117] "RemoveContainer" containerID="d04fd1d2f23d203a9016bd644b0f5dd54e851ae1f25206535a084a827a9b3fb2" Dec 06 10:30:59 crc kubenswrapper[4895]: E1206 10:30:59.337568 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d04fd1d2f23d203a9016bd644b0f5dd54e851ae1f25206535a084a827a9b3fb2\": container with ID starting with d04fd1d2f23d203a9016bd644b0f5dd54e851ae1f25206535a084a827a9b3fb2 not found: ID does not exist" containerID="d04fd1d2f23d203a9016bd644b0f5dd54e851ae1f25206535a084a827a9b3fb2" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.337599 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d04fd1d2f23d203a9016bd644b0f5dd54e851ae1f25206535a084a827a9b3fb2"} err="failed to get container status \"d04fd1d2f23d203a9016bd644b0f5dd54e851ae1f25206535a084a827a9b3fb2\": rpc error: code = NotFound desc = could not find container \"d04fd1d2f23d203a9016bd644b0f5dd54e851ae1f25206535a084a827a9b3fb2\": container with ID starting with d04fd1d2f23d203a9016bd644b0f5dd54e851ae1f25206535a084a827a9b3fb2 not found: ID does not exist" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.337619 4895 scope.go:117] "RemoveContainer" containerID="bba07e0f352d76587baf0336a72e04c57d409f1e2b8cdb205f206467c5729dd7" Dec 06 10:30:59 crc kubenswrapper[4895]: E1206 10:30:59.337916 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bba07e0f352d76587baf0336a72e04c57d409f1e2b8cdb205f206467c5729dd7\": container with ID starting with bba07e0f352d76587baf0336a72e04c57d409f1e2b8cdb205f206467c5729dd7 not found: ID does not exist" containerID="bba07e0f352d76587baf0336a72e04c57d409f1e2b8cdb205f206467c5729dd7" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.337953 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bba07e0f352d76587baf0336a72e04c57d409f1e2b8cdb205f206467c5729dd7"} err="failed to get container status \"bba07e0f352d76587baf0336a72e04c57d409f1e2b8cdb205f206467c5729dd7\": rpc error: code = NotFound desc = could not find container \"bba07e0f352d76587baf0336a72e04c57d409f1e2b8cdb205f206467c5729dd7\": container with ID starting with bba07e0f352d76587baf0336a72e04c57d409f1e2b8cdb205f206467c5729dd7 not found: ID does not exist" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.344522 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c755af-11bd-4b84-8d03-ef378816783a-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.344565 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c755af-11bd-4b84-8d03-ef378816783a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.344580 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x9db\" (UniqueName: \"kubernetes.io/projected/37c755af-11bd-4b84-8d03-ef378816783a-kube-api-access-7x9db\") on node \"crc\" DevicePath \"\"" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.505670 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zgvfj"] Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.520280 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zgvfj"] Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.696342 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.696430 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.696510 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.697678 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"730b3a4444e58f09776a64d590f44494a4d6b1a8ef3b7d600f4ca508d222bf9d"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 10:30:59 crc kubenswrapper[4895]: I1206 10:30:59.697812 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://730b3a4444e58f09776a64d590f44494a4d6b1a8ef3b7d600f4ca508d222bf9d" gracePeriod=600 Dec 06 10:31:00 crc kubenswrapper[4895]: I1206 10:31:00.076143 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37c755af-11bd-4b84-8d03-ef378816783a" path="/var/lib/kubelet/pods/37c755af-11bd-4b84-8d03-ef378816783a/volumes" Dec 06 10:31:00 crc kubenswrapper[4895]: I1206 10:31:00.078092 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95a58e9f-5cc1-4d73-bcb2-117c47defda8" path="/var/lib/kubelet/pods/95a58e9f-5cc1-4d73-bcb2-117c47defda8/volumes" Dec 06 10:31:00 crc kubenswrapper[4895]: I1206 10:31:00.172578 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="730b3a4444e58f09776a64d590f44494a4d6b1a8ef3b7d600f4ca508d222bf9d" exitCode=0 Dec 06 10:31:00 crc kubenswrapper[4895]: I1206 10:31:00.172640 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"730b3a4444e58f09776a64d590f44494a4d6b1a8ef3b7d600f4ca508d222bf9d"} Dec 06 10:31:00 crc kubenswrapper[4895]: I1206 10:31:00.172690 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179"} Dec 06 10:31:00 crc kubenswrapper[4895]: I1206 10:31:00.172710 4895 scope.go:117] "RemoveContainer" containerID="e0f53e4272082147dc9c80923e646056ffaba703b205d9ea0a7d8c977922046c" Dec 06 10:31:06 crc kubenswrapper[4895]: I1206 10:31:06.432820 4895 generic.go:334] "Generic (PLEG): container finished" podID="6eaee9a4-ba6d-4285-823c-f90a59785cc6" containerID="75eec5b42e0a02465dfc49095bda469e004ee898e1e4e2068e61e95002ca0cfd" exitCode=0 Dec 06 10:31:06 crc kubenswrapper[4895]: I1206 10:31:06.432877 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6eaee9a4-ba6d-4285-823c-f90a59785cc6","Type":"ContainerDied","Data":"75eec5b42e0a02465dfc49095bda469e004ee898e1e4e2068e61e95002ca0cfd"} Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.024050 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.133999 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6eaee9a4-ba6d-4285-823c-f90a59785cc6-test-operator-ephemeral-temporary\") pod \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.134066 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.134107 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6eaee9a4-ba6d-4285-823c-f90a59785cc6-ssh-key\") pod \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.134156 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6eaee9a4-ba6d-4285-823c-f90a59785cc6-openstack-config\") pod \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.134233 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6eaee9a4-ba6d-4285-823c-f90a59785cc6-openstack-config-secret\") pod \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.134255 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6eaee9a4-ba6d-4285-823c-f90a59785cc6-ca-certs\") pod \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.134292 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6eaee9a4-ba6d-4285-823c-f90a59785cc6-test-operator-ephemeral-workdir\") pod \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.134453 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpdqs\" (UniqueName: \"kubernetes.io/projected/6eaee9a4-ba6d-4285-823c-f90a59785cc6-kube-api-access-lpdqs\") pod \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.134546 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6eaee9a4-ba6d-4285-823c-f90a59785cc6-config-data\") pod \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\" (UID: \"6eaee9a4-ba6d-4285-823c-f90a59785cc6\") " Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.135874 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eaee9a4-ba6d-4285-823c-f90a59785cc6-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "6eaee9a4-ba6d-4285-823c-f90a59785cc6" (UID: "6eaee9a4-ba6d-4285-823c-f90a59785cc6"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.136012 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eaee9a4-ba6d-4285-823c-f90a59785cc6-config-data" (OuterVolumeSpecName: "config-data") pod "6eaee9a4-ba6d-4285-823c-f90a59785cc6" (UID: "6eaee9a4-ba6d-4285-823c-f90a59785cc6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.164518 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eaee9a4-ba6d-4285-823c-f90a59785cc6-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "6eaee9a4-ba6d-4285-823c-f90a59785cc6" (UID: "6eaee9a4-ba6d-4285-823c-f90a59785cc6"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.167168 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "6eaee9a4-ba6d-4285-823c-f90a59785cc6" (UID: "6eaee9a4-ba6d-4285-823c-f90a59785cc6"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.169350 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eaee9a4-ba6d-4285-823c-f90a59785cc6-kube-api-access-lpdqs" (OuterVolumeSpecName: "kube-api-access-lpdqs") pod "6eaee9a4-ba6d-4285-823c-f90a59785cc6" (UID: "6eaee9a4-ba6d-4285-823c-f90a59785cc6"). InnerVolumeSpecName "kube-api-access-lpdqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.170133 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eaee9a4-ba6d-4285-823c-f90a59785cc6-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6eaee9a4-ba6d-4285-823c-f90a59785cc6" (UID: "6eaee9a4-ba6d-4285-823c-f90a59785cc6"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.194817 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eaee9a4-ba6d-4285-823c-f90a59785cc6-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "6eaee9a4-ba6d-4285-823c-f90a59785cc6" (UID: "6eaee9a4-ba6d-4285-823c-f90a59785cc6"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.194863 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eaee9a4-ba6d-4285-823c-f90a59785cc6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6eaee9a4-ba6d-4285-823c-f90a59785cc6" (UID: "6eaee9a4-ba6d-4285-823c-f90a59785cc6"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.199416 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eaee9a4-ba6d-4285-823c-f90a59785cc6-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6eaee9a4-ba6d-4285-823c-f90a59785cc6" (UID: "6eaee9a4-ba6d-4285-823c-f90a59785cc6"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.237485 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.237714 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6eaee9a4-ba6d-4285-823c-f90a59785cc6-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.237805 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6eaee9a4-ba6d-4285-823c-f90a59785cc6-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.237883 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6eaee9a4-ba6d-4285-823c-f90a59785cc6-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.237946 4895 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6eaee9a4-ba6d-4285-823c-f90a59785cc6-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.238008 4895 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6eaee9a4-ba6d-4285-823c-f90a59785cc6-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.238061 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpdqs\" (UniqueName: \"kubernetes.io/projected/6eaee9a4-ba6d-4285-823c-f90a59785cc6-kube-api-access-lpdqs\") on node \"crc\" DevicePath \"\"" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.238119 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6eaee9a4-ba6d-4285-823c-f90a59785cc6-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.238172 4895 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6eaee9a4-ba6d-4285-823c-f90a59785cc6-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.267790 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.340638 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.458158 4895 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest" event={"ID":"6eaee9a4-ba6d-4285-823c-f90a59785cc6","Type":"ContainerDied","Data":"57b63843ba5bfd610441596889229d276f42c6f56bf3af0cf3ba765042ad4f40"} Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.458197 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57b63843ba5bfd610441596889229d276f42c6f56bf3af0cf3ba765042ad4f40" Dec 06 10:31:08 crc kubenswrapper[4895]: I1206 10:31:08.458255 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 06 10:31:20 crc kubenswrapper[4895]: I1206 10:31:20.833377 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 06 10:31:20 crc kubenswrapper[4895]: E1206 10:31:20.834273 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eaee9a4-ba6d-4285-823c-f90a59785cc6" containerName="tempest-tests-tempest-tests-runner" Dec 06 10:31:20 crc kubenswrapper[4895]: I1206 10:31:20.834284 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eaee9a4-ba6d-4285-823c-f90a59785cc6" containerName="tempest-tests-tempest-tests-runner" Dec 06 10:31:20 crc kubenswrapper[4895]: E1206 10:31:20.834303 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a58e9f-5cc1-4d73-bcb2-117c47defda8" containerName="extract-content" Dec 06 10:31:20 crc kubenswrapper[4895]: I1206 10:31:20.834309 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a58e9f-5cc1-4d73-bcb2-117c47defda8" containerName="extract-content" Dec 06 10:31:20 crc kubenswrapper[4895]: E1206 10:31:20.834316 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a58e9f-5cc1-4d73-bcb2-117c47defda8" containerName="extract-utilities" Dec 06 10:31:20 crc kubenswrapper[4895]: I1206 10:31:20.834322 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a58e9f-5cc1-4d73-bcb2-117c47defda8" containerName="extract-utilities" Dec 06 10:31:20 crc kubenswrapper[4895]: E1206 10:31:20.834334 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c755af-11bd-4b84-8d03-ef378816783a" containerName="extract-content" Dec 06 10:31:20 crc kubenswrapper[4895]: I1206 10:31:20.834340 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c755af-11bd-4b84-8d03-ef378816783a" containerName="extract-content" Dec 06 10:31:20 crc kubenswrapper[4895]: E1206 10:31:20.834355 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c755af-11bd-4b84-8d03-ef378816783a" containerName="registry-server" Dec 06 10:31:20 crc kubenswrapper[4895]: I1206 10:31:20.834361 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c755af-11bd-4b84-8d03-ef378816783a" containerName="registry-server" Dec 06 10:31:20 crc kubenswrapper[4895]: E1206 10:31:20.834393 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c755af-11bd-4b84-8d03-ef378816783a" containerName="extract-utilities" Dec 06 10:31:20 crc kubenswrapper[4895]: I1206 10:31:20.834399 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c755af-11bd-4b84-8d03-ef378816783a" containerName="extract-utilities" Dec 06 10:31:20 crc kubenswrapper[4895]: E1206 10:31:20.834417 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a58e9f-5cc1-4d73-bcb2-117c47defda8" containerName="registry-server" Dec 06 10:31:20 crc kubenswrapper[4895]: I1206 10:31:20.834422 4895 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="95a58e9f-5cc1-4d73-bcb2-117c47defda8" containerName="registry-server" Dec 06 10:31:20 crc kubenswrapper[4895]: I1206 10:31:20.834624 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eaee9a4-ba6d-4285-823c-f90a59785cc6" containerName="tempest-tests-tempest-tests-runner" Dec 06 10:31:20 crc kubenswrapper[4895]: I1206 10:31:20.834639 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a58e9f-5cc1-4d73-bcb2-117c47defda8" containerName="registry-server" Dec 06 10:31:20 crc kubenswrapper[4895]: I1206 10:31:20.834661 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="37c755af-11bd-4b84-8d03-ef378816783a" containerName="registry-server" Dec 06 10:31:20 crc kubenswrapper[4895]: I1206 10:31:20.835382 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 10:31:20 crc kubenswrapper[4895]: I1206 10:31:20.837605 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-z2v77" Dec 06 10:31:20 crc kubenswrapper[4895]: I1206 10:31:20.868155 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 06 10:31:20 crc kubenswrapper[4895]: I1206 10:31:20.921141 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qvgc\" (UniqueName: \"kubernetes.io/projected/d4338c71-33b3-405a-a259-0258c0836bb8-kube-api-access-2qvgc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d4338c71-33b3-405a-a259-0258c0836bb8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 10:31:20 crc kubenswrapper[4895]: I1206 10:31:20.921619 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d4338c71-33b3-405a-a259-0258c0836bb8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 10:31:21 crc kubenswrapper[4895]: I1206 10:31:21.023783 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qvgc\" (UniqueName: \"kubernetes.io/projected/d4338c71-33b3-405a-a259-0258c0836bb8-kube-api-access-2qvgc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d4338c71-33b3-405a-a259-0258c0836bb8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 10:31:21 crc kubenswrapper[4895]: I1206 10:31:21.023941 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d4338c71-33b3-405a-a259-0258c0836bb8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 10:31:21 crc kubenswrapper[4895]: I1206 10:31:21.024433 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d4338c71-33b3-405a-a259-0258c0836bb8\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 10:31:21 crc kubenswrapper[4895]: I1206 10:31:21.059619 
4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d4338c71-33b3-405a-a259-0258c0836bb8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 10:31:21 crc kubenswrapper[4895]: I1206 10:31:21.060987 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qvgc\" (UniqueName: \"kubernetes.io/projected/d4338c71-33b3-405a-a259-0258c0836bb8-kube-api-access-2qvgc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d4338c71-33b3-405a-a259-0258c0836bb8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 10:31:21 crc kubenswrapper[4895]: I1206 10:31:21.266897 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 10:31:21 crc kubenswrapper[4895]: I1206 10:31:21.753998 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 06 10:31:22 crc kubenswrapper[4895]: I1206 10:31:22.633998 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d4338c71-33b3-405a-a259-0258c0836bb8","Type":"ContainerStarted","Data":"99dc08d307bb744fe8539d99af46c28e2e279530b3e3927f23276fcd6ed15fd6"} Dec 06 10:31:23 crc kubenswrapper[4895]: I1206 10:31:23.648561 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d4338c71-33b3-405a-a259-0258c0836bb8","Type":"ContainerStarted","Data":"fba93a472c0019fab26d357b9085460bad4a933bbe487258377171e3be7a2b72"} Dec 06 10:31:23 crc kubenswrapper[4895]: I1206 10:31:23.665653 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.713393854 podStartE2EDuration="3.665634576s" podCreationTimestamp="2025-12-06 10:31:20 +0000 UTC" firstStartedPulling="2025-12-06 10:31:21.757207015 +0000 UTC m=+12844.158595885" lastFinishedPulling="2025-12-06 10:31:22.709447737 +0000 UTC m=+12845.110836607" observedRunningTime="2025-12-06 10:31:23.663608932 +0000 UTC m=+12846.064997812" watchObservedRunningTime="2025-12-06 10:31:23.665634576 +0000 UTC m=+12846.067023456" Dec 06 10:32:34 crc kubenswrapper[4895]: I1206 10:32:34.594942 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k22ft/must-gather-tjd7k"] Dec 06 10:32:34 crc kubenswrapper[4895]: I1206 10:32:34.597666 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k22ft/must-gather-tjd7k" Dec 06 10:32:34 crc kubenswrapper[4895]: I1206 10:32:34.599041 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-k22ft"/"kube-root-ca.crt" Dec 06 10:32:34 crc kubenswrapper[4895]: I1206 10:32:34.599802 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-k22ft"/"default-dockercfg-8b4cd" Dec 06 10:32:34 crc kubenswrapper[4895]: I1206 10:32:34.600027 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-k22ft"/"openshift-service-ca.crt" Dec 06 10:32:34 crc kubenswrapper[4895]: I1206 10:32:34.625202 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-k22ft/must-gather-tjd7k"] Dec 06 10:32:34 crc kubenswrapper[4895]: I1206 10:32:34.755009 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgn65\" (UniqueName: \"kubernetes.io/projected/381f09e1-8cdc-4779-b21b-3fe3605901a1-kube-api-access-jgn65\") pod \"must-gather-tjd7k\" (UID: \"381f09e1-8cdc-4779-b21b-3fe3605901a1\") " pod="openshift-must-gather-k22ft/must-gather-tjd7k" Dec 06 10:32:34 crc kubenswrapper[4895]: I1206 10:32:34.755093 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/381f09e1-8cdc-4779-b21b-3fe3605901a1-must-gather-output\") pod \"must-gather-tjd7k\" (UID: \"381f09e1-8cdc-4779-b21b-3fe3605901a1\") " pod="openshift-must-gather-k22ft/must-gather-tjd7k" Dec 06 10:32:34 crc kubenswrapper[4895]: I1206 10:32:34.857211 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgn65\" (UniqueName: \"kubernetes.io/projected/381f09e1-8cdc-4779-b21b-3fe3605901a1-kube-api-access-jgn65\") pod \"must-gather-tjd7k\" (UID: \"381f09e1-8cdc-4779-b21b-3fe3605901a1\") " pod="openshift-must-gather-k22ft/must-gather-tjd7k" Dec 06 10:32:34 crc kubenswrapper[4895]: I1206 10:32:34.857367 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/381f09e1-8cdc-4779-b21b-3fe3605901a1-must-gather-output\") pod \"must-gather-tjd7k\" (UID: \"381f09e1-8cdc-4779-b21b-3fe3605901a1\") " pod="openshift-must-gather-k22ft/must-gather-tjd7k" Dec 06 10:32:34 crc kubenswrapper[4895]: I1206 10:32:34.858250 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/381f09e1-8cdc-4779-b21b-3fe3605901a1-must-gather-output\") pod \"must-gather-tjd7k\" (UID: \"381f09e1-8cdc-4779-b21b-3fe3605901a1\") " pod="openshift-must-gather-k22ft/must-gather-tjd7k" Dec 06 10:32:34 crc kubenswrapper[4895]: I1206 10:32:34.881849 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgn65\" (UniqueName: \"kubernetes.io/projected/381f09e1-8cdc-4779-b21b-3fe3605901a1-kube-api-access-jgn65\") pod \"must-gather-tjd7k\" (UID: \"381f09e1-8cdc-4779-b21b-3fe3605901a1\") " pod="openshift-must-gather-k22ft/must-gather-tjd7k" Dec 06 10:32:34 crc kubenswrapper[4895]: I1206 10:32:34.934638 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k22ft/must-gather-tjd7k" Dec 06 10:32:35 crc kubenswrapper[4895]: W1206 10:32:35.476874 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod381f09e1_8cdc_4779_b21b_3fe3605901a1.slice/crio-1c28e84f226aef6b118f2233d90aabb1afaf236c8b5429b509c6dcf23a4e0307 WatchSource:0}: Error finding container 1c28e84f226aef6b118f2233d90aabb1afaf236c8b5429b509c6dcf23a4e0307: Status 404 returned error can't find the container with id 1c28e84f226aef6b118f2233d90aabb1afaf236c8b5429b509c6dcf23a4e0307 Dec 06 10:32:35 crc kubenswrapper[4895]: I1206 10:32:35.481146 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-k22ft/must-gather-tjd7k"] Dec 06 10:32:35 crc kubenswrapper[4895]: I1206 10:32:35.666165 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k22ft/must-gather-tjd7k" event={"ID":"381f09e1-8cdc-4779-b21b-3fe3605901a1","Type":"ContainerStarted","Data":"1c28e84f226aef6b118f2233d90aabb1afaf236c8b5429b509c6dcf23a4e0307"} Dec 06 10:32:40 crc kubenswrapper[4895]: I1206 10:32:40.735743 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k22ft/must-gather-tjd7k" event={"ID":"381f09e1-8cdc-4779-b21b-3fe3605901a1","Type":"ContainerStarted","Data":"f732d70219dd6f81feb7afa955d8bb575616e58b231bbfb24cf9bf914b66a5ab"} Dec 06 10:32:40 crc kubenswrapper[4895]: I1206 10:32:40.736296 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k22ft/must-gather-tjd7k" event={"ID":"381f09e1-8cdc-4779-b21b-3fe3605901a1","Type":"ContainerStarted","Data":"15ce36aa6e851e4f72f93faa37aea3a97837a9a2c4995deae482b38d1e5c7cd6"} Dec 06 10:32:40 crc kubenswrapper[4895]: I1206 10:32:40.772346 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-k22ft/must-gather-tjd7k" podStartSLOduration=2.832062504 podStartE2EDuration="6.772313619s" podCreationTimestamp="2025-12-06 10:32:34 +0000 UTC" firstStartedPulling="2025-12-06 10:32:35.482130713 +0000 UTC m=+12917.883519583" lastFinishedPulling="2025-12-06 10:32:39.422381828 +0000 UTC m=+12921.823770698" observedRunningTime="2025-12-06 10:32:40.76125627 +0000 UTC m=+12923.162645170" watchObservedRunningTime="2025-12-06 10:32:40.772313619 +0000 UTC m=+12923.173702499" Dec 06 10:32:44 crc kubenswrapper[4895]: I1206 10:32:44.622376 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k22ft/crc-debug-58lrg"] Dec 06 10:32:44 crc kubenswrapper[4895]: I1206 10:32:44.624228 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k22ft/crc-debug-58lrg" Dec 06 10:32:44 crc kubenswrapper[4895]: I1206 10:32:44.670334 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a6ca48e-efee-4d67-91f2-361ec966b8f3-host\") pod \"crc-debug-58lrg\" (UID: \"9a6ca48e-efee-4d67-91f2-361ec966b8f3\") " pod="openshift-must-gather-k22ft/crc-debug-58lrg" Dec 06 10:32:44 crc kubenswrapper[4895]: I1206 10:32:44.670380 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5whhb\" (UniqueName: \"kubernetes.io/projected/9a6ca48e-efee-4d67-91f2-361ec966b8f3-kube-api-access-5whhb\") pod \"crc-debug-58lrg\" (UID: \"9a6ca48e-efee-4d67-91f2-361ec966b8f3\") " pod="openshift-must-gather-k22ft/crc-debug-58lrg" Dec 06 10:32:44 crc kubenswrapper[4895]: I1206 10:32:44.772805 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a6ca48e-efee-4d67-91f2-361ec966b8f3-host\") pod \"crc-debug-58lrg\" (UID: \"9a6ca48e-efee-4d67-91f2-361ec966b8f3\") " pod="openshift-must-gather-k22ft/crc-debug-58lrg" Dec 06 10:32:44 crc kubenswrapper[4895]: I1206 10:32:44.772866 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5whhb\" (UniqueName: \"kubernetes.io/projected/9a6ca48e-efee-4d67-91f2-361ec966b8f3-kube-api-access-5whhb\") pod \"crc-debug-58lrg\" (UID: \"9a6ca48e-efee-4d67-91f2-361ec966b8f3\") " pod="openshift-must-gather-k22ft/crc-debug-58lrg" Dec 06 10:32:44 crc kubenswrapper[4895]: I1206 10:32:44.772954 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a6ca48e-efee-4d67-91f2-361ec966b8f3-host\") pod \"crc-debug-58lrg\" (UID: \"9a6ca48e-efee-4d67-91f2-361ec966b8f3\") " pod="openshift-must-gather-k22ft/crc-debug-58lrg" Dec 06 10:32:44 crc kubenswrapper[4895]: I1206 10:32:44.798215 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5whhb\" (UniqueName: \"kubernetes.io/projected/9a6ca48e-efee-4d67-91f2-361ec966b8f3-kube-api-access-5whhb\") pod \"crc-debug-58lrg\" (UID: \"9a6ca48e-efee-4d67-91f2-361ec966b8f3\") " pod="openshift-must-gather-k22ft/crc-debug-58lrg" Dec 06 10:32:44 crc kubenswrapper[4895]: I1206 10:32:44.943641 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k22ft/crc-debug-58lrg" Dec 06 10:32:44 crc kubenswrapper[4895]: W1206 10:32:44.986245 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a6ca48e_efee_4d67_91f2_361ec966b8f3.slice/crio-3b357ba105047e021fa1c6b6d671de11560e384e51af100e39ba5bb197cb4da2 WatchSource:0}: Error finding container 3b357ba105047e021fa1c6b6d671de11560e384e51af100e39ba5bb197cb4da2: Status 404 returned error can't find the container with id 3b357ba105047e021fa1c6b6d671de11560e384e51af100e39ba5bb197cb4da2 Dec 06 10:32:45 crc kubenswrapper[4895]: I1206 10:32:45.789900 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k22ft/crc-debug-58lrg" event={"ID":"9a6ca48e-efee-4d67-91f2-361ec966b8f3","Type":"ContainerStarted","Data":"3b357ba105047e021fa1c6b6d671de11560e384e51af100e39ba5bb197cb4da2"} Dec 06 10:32:54 crc kubenswrapper[4895]: I1206 10:32:54.912537 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k22ft/crc-debug-58lrg" event={"ID":"9a6ca48e-efee-4d67-91f2-361ec966b8f3","Type":"ContainerStarted","Data":"69da1253a38e0961cc45f7a11ede996af385d40f5786338db50941969a045657"} Dec 06 10:32:54 crc kubenswrapper[4895]: I1206 10:32:54.936325 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-k22ft/crc-debug-58lrg" podStartSLOduration=1.669843162 podStartE2EDuration="10.936299026s" podCreationTimestamp="2025-12-06 10:32:44 +0000 UTC" firstStartedPulling="2025-12-06 10:32:44.989191633 +0000 UTC m=+12927.390580513" lastFinishedPulling="2025-12-06 10:32:54.255647507 +0000 UTC m=+12936.657036377" observedRunningTime="2025-12-06 10:32:54.925776462 +0000 UTC m=+12937.327165342" watchObservedRunningTime="2025-12-06 10:32:54.936299026 +0000 UTC m=+12937.337687906" Dec 06 10:32:59 crc kubenswrapper[4895]: I1206 10:32:59.695798 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 10:32:59 crc kubenswrapper[4895]: I1206 10:32:59.696336 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 10:33:29 crc kubenswrapper[4895]: I1206 10:33:29.696330 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 10:33:29 crc kubenswrapper[4895]: I1206 10:33:29.696982 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 10:33:35 crc kubenswrapper[4895]: I1206 10:33:35.437107 4895 generic.go:334] "Generic (PLEG): container finished" podID="9a6ca48e-efee-4d67-91f2-361ec966b8f3" 
containerID="69da1253a38e0961cc45f7a11ede996af385d40f5786338db50941969a045657" exitCode=0 Dec 06 10:33:35 crc kubenswrapper[4895]: I1206 10:33:35.437186 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k22ft/crc-debug-58lrg" event={"ID":"9a6ca48e-efee-4d67-91f2-361ec966b8f3","Type":"ContainerDied","Data":"69da1253a38e0961cc45f7a11ede996af385d40f5786338db50941969a045657"} Dec 06 10:33:36 crc kubenswrapper[4895]: I1206 10:33:36.571839 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k22ft/crc-debug-58lrg" Dec 06 10:33:36 crc kubenswrapper[4895]: I1206 10:33:36.640884 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-k22ft/crc-debug-58lrg"] Dec 06 10:33:36 crc kubenswrapper[4895]: I1206 10:33:36.658005 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-k22ft/crc-debug-58lrg"] Dec 06 10:33:36 crc kubenswrapper[4895]: I1206 10:33:36.756930 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a6ca48e-efee-4d67-91f2-361ec966b8f3-host\") pod \"9a6ca48e-efee-4d67-91f2-361ec966b8f3\" (UID: \"9a6ca48e-efee-4d67-91f2-361ec966b8f3\") " Dec 06 10:33:36 crc kubenswrapper[4895]: I1206 10:33:36.757124 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5whhb\" (UniqueName: \"kubernetes.io/projected/9a6ca48e-efee-4d67-91f2-361ec966b8f3-kube-api-access-5whhb\") pod \"9a6ca48e-efee-4d67-91f2-361ec966b8f3\" (UID: \"9a6ca48e-efee-4d67-91f2-361ec966b8f3\") " Dec 06 10:33:36 crc kubenswrapper[4895]: I1206 10:33:36.757562 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a6ca48e-efee-4d67-91f2-361ec966b8f3-host" (OuterVolumeSpecName: "host") pod "9a6ca48e-efee-4d67-91f2-361ec966b8f3" (UID: "9a6ca48e-efee-4d67-91f2-361ec966b8f3"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 10:33:36 crc kubenswrapper[4895]: I1206 10:33:36.758170 4895 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a6ca48e-efee-4d67-91f2-361ec966b8f3-host\") on node \"crc\" DevicePath \"\"" Dec 06 10:33:36 crc kubenswrapper[4895]: I1206 10:33:36.770993 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a6ca48e-efee-4d67-91f2-361ec966b8f3-kube-api-access-5whhb" (OuterVolumeSpecName: "kube-api-access-5whhb") pod "9a6ca48e-efee-4d67-91f2-361ec966b8f3" (UID: "9a6ca48e-efee-4d67-91f2-361ec966b8f3"). InnerVolumeSpecName "kube-api-access-5whhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:33:36 crc kubenswrapper[4895]: I1206 10:33:36.859814 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5whhb\" (UniqueName: \"kubernetes.io/projected/9a6ca48e-efee-4d67-91f2-361ec966b8f3-kube-api-access-5whhb\") on node \"crc\" DevicePath \"\"" Dec 06 10:33:37 crc kubenswrapper[4895]: I1206 10:33:37.461225 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b357ba105047e021fa1c6b6d671de11560e384e51af100e39ba5bb197cb4da2" Dec 06 10:33:37 crc kubenswrapper[4895]: I1206 10:33:37.461310 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k22ft/crc-debug-58lrg" Dec 06 10:33:37 crc kubenswrapper[4895]: I1206 10:33:37.855320 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k22ft/crc-debug-lkrm6"] Dec 06 10:33:37 crc kubenswrapper[4895]: E1206 10:33:37.855862 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a6ca48e-efee-4d67-91f2-361ec966b8f3" containerName="container-00" Dec 06 10:33:37 crc kubenswrapper[4895]: I1206 10:33:37.855880 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a6ca48e-efee-4d67-91f2-361ec966b8f3" containerName="container-00" Dec 06 10:33:37 crc kubenswrapper[4895]: I1206 10:33:37.856230 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a6ca48e-efee-4d67-91f2-361ec966b8f3" containerName="container-00" Dec 06 10:33:37 crc kubenswrapper[4895]: I1206 10:33:37.857135 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k22ft/crc-debug-lkrm6" Dec 06 10:33:37 crc kubenswrapper[4895]: I1206 10:33:37.981669 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-974cn\" (UniqueName: \"kubernetes.io/projected/a1da5889-f55f-49ab-9a6b-049005f0acb1-kube-api-access-974cn\") pod \"crc-debug-lkrm6\" (UID: \"a1da5889-f55f-49ab-9a6b-049005f0acb1\") " pod="openshift-must-gather-k22ft/crc-debug-lkrm6" Dec 06 10:33:37 crc kubenswrapper[4895]: I1206 10:33:37.981889 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a1da5889-f55f-49ab-9a6b-049005f0acb1-host\") pod \"crc-debug-lkrm6\" (UID: \"a1da5889-f55f-49ab-9a6b-049005f0acb1\") " pod="openshift-must-gather-k22ft/crc-debug-lkrm6" Dec 06 10:33:38 crc kubenswrapper[4895]: I1206 10:33:38.063243 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a6ca48e-efee-4d67-91f2-361ec966b8f3" path="/var/lib/kubelet/pods/9a6ca48e-efee-4d67-91f2-361ec966b8f3/volumes" Dec 06 10:33:38 crc kubenswrapper[4895]: I1206 10:33:38.084117 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-974cn\" (UniqueName: \"kubernetes.io/projected/a1da5889-f55f-49ab-9a6b-049005f0acb1-kube-api-access-974cn\") pod \"crc-debug-lkrm6\" (UID: \"a1da5889-f55f-49ab-9a6b-049005f0acb1\") " pod="openshift-must-gather-k22ft/crc-debug-lkrm6" Dec 06 10:33:38 crc kubenswrapper[4895]: I1206 10:33:38.084308 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a1da5889-f55f-49ab-9a6b-049005f0acb1-host\") pod \"crc-debug-lkrm6\" (UID: \"a1da5889-f55f-49ab-9a6b-049005f0acb1\") " pod="openshift-must-gather-k22ft/crc-debug-lkrm6" Dec 06 10:33:38 crc kubenswrapper[4895]: I1206 10:33:38.084428 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a1da5889-f55f-49ab-9a6b-049005f0acb1-host\") pod \"crc-debug-lkrm6\" (UID: \"a1da5889-f55f-49ab-9a6b-049005f0acb1\") " pod="openshift-must-gather-k22ft/crc-debug-lkrm6" Dec 06 10:33:38 crc kubenswrapper[4895]: I1206 10:33:38.110463 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-974cn\" (UniqueName: \"kubernetes.io/projected/a1da5889-f55f-49ab-9a6b-049005f0acb1-kube-api-access-974cn\") pod \"crc-debug-lkrm6\" (UID: \"a1da5889-f55f-49ab-9a6b-049005f0acb1\") " 
pod="openshift-must-gather-k22ft/crc-debug-lkrm6" Dec 06 10:33:38 crc kubenswrapper[4895]: I1206 10:33:38.179169 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k22ft/crc-debug-lkrm6" Dec 06 10:33:38 crc kubenswrapper[4895]: I1206 10:33:38.471074 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k22ft/crc-debug-lkrm6" event={"ID":"a1da5889-f55f-49ab-9a6b-049005f0acb1","Type":"ContainerStarted","Data":"c147bf14551ca4de1a0c040fae015bcf7c1b83eb4671e2436e4fbe97d96008ff"} Dec 06 10:33:39 crc kubenswrapper[4895]: I1206 10:33:39.489295 4895 generic.go:334] "Generic (PLEG): container finished" podID="a1da5889-f55f-49ab-9a6b-049005f0acb1" containerID="46aac2fc45bf7d5f1a013b1abc7e98f5699baaefb6848652ba27eafab7135569" exitCode=0 Dec 06 10:33:39 crc kubenswrapper[4895]: I1206 10:33:39.489401 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k22ft/crc-debug-lkrm6" event={"ID":"a1da5889-f55f-49ab-9a6b-049005f0acb1","Type":"ContainerDied","Data":"46aac2fc45bf7d5f1a013b1abc7e98f5699baaefb6848652ba27eafab7135569"} Dec 06 10:33:40 crc kubenswrapper[4895]: I1206 10:33:40.083843 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-k22ft/crc-debug-lkrm6"] Dec 06 10:33:40 crc kubenswrapper[4895]: I1206 10:33:40.092912 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-k22ft/crc-debug-lkrm6"] Dec 06 10:33:40 crc kubenswrapper[4895]: I1206 10:33:40.645937 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k22ft/crc-debug-lkrm6" Dec 06 10:33:40 crc kubenswrapper[4895]: I1206 10:33:40.757348 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a1da5889-f55f-49ab-9a6b-049005f0acb1-host\") pod \"a1da5889-f55f-49ab-9a6b-049005f0acb1\" (UID: \"a1da5889-f55f-49ab-9a6b-049005f0acb1\") " Dec 06 10:33:40 crc kubenswrapper[4895]: I1206 10:33:40.757639 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-974cn\" (UniqueName: \"kubernetes.io/projected/a1da5889-f55f-49ab-9a6b-049005f0acb1-kube-api-access-974cn\") pod \"a1da5889-f55f-49ab-9a6b-049005f0acb1\" (UID: \"a1da5889-f55f-49ab-9a6b-049005f0acb1\") " Dec 06 10:33:40 crc kubenswrapper[4895]: I1206 10:33:40.757746 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1da5889-f55f-49ab-9a6b-049005f0acb1-host" (OuterVolumeSpecName: "host") pod "a1da5889-f55f-49ab-9a6b-049005f0acb1" (UID: "a1da5889-f55f-49ab-9a6b-049005f0acb1"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 10:33:40 crc kubenswrapper[4895]: I1206 10:33:40.758988 4895 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a1da5889-f55f-49ab-9a6b-049005f0acb1-host\") on node \"crc\" DevicePath \"\"" Dec 06 10:33:40 crc kubenswrapper[4895]: I1206 10:33:40.765904 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1da5889-f55f-49ab-9a6b-049005f0acb1-kube-api-access-974cn" (OuterVolumeSpecName: "kube-api-access-974cn") pod "a1da5889-f55f-49ab-9a6b-049005f0acb1" (UID: "a1da5889-f55f-49ab-9a6b-049005f0acb1"). InnerVolumeSpecName "kube-api-access-974cn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:33:40 crc kubenswrapper[4895]: I1206 10:33:40.862166 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-974cn\" (UniqueName: \"kubernetes.io/projected/a1da5889-f55f-49ab-9a6b-049005f0acb1-kube-api-access-974cn\") on node \"crc\" DevicePath \"\"" Dec 06 10:33:41 crc kubenswrapper[4895]: I1206 10:33:41.322102 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k22ft/crc-debug-sdbqt"] Dec 06 10:33:41 crc kubenswrapper[4895]: E1206 10:33:41.322819 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1da5889-f55f-49ab-9a6b-049005f0acb1" containerName="container-00" Dec 06 10:33:41 crc kubenswrapper[4895]: I1206 10:33:41.322841 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1da5889-f55f-49ab-9a6b-049005f0acb1" containerName="container-00" Dec 06 10:33:41 crc kubenswrapper[4895]: I1206 10:33:41.323138 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1da5889-f55f-49ab-9a6b-049005f0acb1" containerName="container-00" Dec 06 10:33:41 crc kubenswrapper[4895]: I1206 10:33:41.323921 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k22ft/crc-debug-sdbqt" Dec 06 10:33:41 crc kubenswrapper[4895]: I1206 10:33:41.478130 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfhp8\" (UniqueName: \"kubernetes.io/projected/d7cd3231-de9c-4e8e-9e3a-27b269aaeff4-kube-api-access-zfhp8\") pod \"crc-debug-sdbqt\" (UID: \"d7cd3231-de9c-4e8e-9e3a-27b269aaeff4\") " pod="openshift-must-gather-k22ft/crc-debug-sdbqt" Dec 06 10:33:41 crc kubenswrapper[4895]: I1206 10:33:41.478469 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7cd3231-de9c-4e8e-9e3a-27b269aaeff4-host\") pod \"crc-debug-sdbqt\" (UID: \"d7cd3231-de9c-4e8e-9e3a-27b269aaeff4\") " pod="openshift-must-gather-k22ft/crc-debug-sdbqt" Dec 06 10:33:41 crc kubenswrapper[4895]: I1206 10:33:41.515977 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c147bf14551ca4de1a0c040fae015bcf7c1b83eb4671e2436e4fbe97d96008ff" Dec 06 10:33:41 crc kubenswrapper[4895]: I1206 10:33:41.516032 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k22ft/crc-debug-lkrm6" Dec 06 10:33:41 crc kubenswrapper[4895]: I1206 10:33:41.582598 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfhp8\" (UniqueName: \"kubernetes.io/projected/d7cd3231-de9c-4e8e-9e3a-27b269aaeff4-kube-api-access-zfhp8\") pod \"crc-debug-sdbqt\" (UID: \"d7cd3231-de9c-4e8e-9e3a-27b269aaeff4\") " pod="openshift-must-gather-k22ft/crc-debug-sdbqt" Dec 06 10:33:41 crc kubenswrapper[4895]: I1206 10:33:41.582744 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7cd3231-de9c-4e8e-9e3a-27b269aaeff4-host\") pod \"crc-debug-sdbqt\" (UID: \"d7cd3231-de9c-4e8e-9e3a-27b269aaeff4\") " pod="openshift-must-gather-k22ft/crc-debug-sdbqt" Dec 06 10:33:41 crc kubenswrapper[4895]: I1206 10:33:41.584384 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7cd3231-de9c-4e8e-9e3a-27b269aaeff4-host\") pod \"crc-debug-sdbqt\" (UID: \"d7cd3231-de9c-4e8e-9e3a-27b269aaeff4\") " pod="openshift-must-gather-k22ft/crc-debug-sdbqt" Dec 06 10:33:41 crc kubenswrapper[4895]: I1206 10:33:41.629618 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfhp8\" (UniqueName: \"kubernetes.io/projected/d7cd3231-de9c-4e8e-9e3a-27b269aaeff4-kube-api-access-zfhp8\") pod \"crc-debug-sdbqt\" (UID: \"d7cd3231-de9c-4e8e-9e3a-27b269aaeff4\") " pod="openshift-must-gather-k22ft/crc-debug-sdbqt" Dec 06 10:33:41 crc kubenswrapper[4895]: I1206 10:33:41.645666 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k22ft/crc-debug-sdbqt" Dec 06 10:33:42 crc kubenswrapper[4895]: I1206 10:33:42.072244 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1da5889-f55f-49ab-9a6b-049005f0acb1" path="/var/lib/kubelet/pods/a1da5889-f55f-49ab-9a6b-049005f0acb1/volumes" Dec 06 10:33:42 crc kubenswrapper[4895]: I1206 10:33:42.533311 4895 generic.go:334] "Generic (PLEG): container finished" podID="d7cd3231-de9c-4e8e-9e3a-27b269aaeff4" containerID="53864cb5228cf54075b610ead17822e226e5b617f8a43c0bcccd8c036eecf24f" exitCode=0 Dec 06 10:33:42 crc kubenswrapper[4895]: I1206 10:33:42.533368 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k22ft/crc-debug-sdbqt" event={"ID":"d7cd3231-de9c-4e8e-9e3a-27b269aaeff4","Type":"ContainerDied","Data":"53864cb5228cf54075b610ead17822e226e5b617f8a43c0bcccd8c036eecf24f"} Dec 06 10:33:42 crc kubenswrapper[4895]: I1206 10:33:42.533778 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k22ft/crc-debug-sdbqt" event={"ID":"d7cd3231-de9c-4e8e-9e3a-27b269aaeff4","Type":"ContainerStarted","Data":"fe6df16f2727a6398651ded49f4bb20cec816d2280b05767460e6abef3e9c604"} Dec 06 10:33:42 crc kubenswrapper[4895]: I1206 10:33:42.602993 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-k22ft/crc-debug-sdbqt"] Dec 06 10:33:42 crc kubenswrapper[4895]: I1206 10:33:42.619668 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-k22ft/crc-debug-sdbqt"] Dec 06 10:33:43 crc kubenswrapper[4895]: I1206 10:33:43.683961 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k22ft/crc-debug-sdbqt" Dec 06 10:33:43 crc kubenswrapper[4895]: I1206 10:33:43.739184 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfhp8\" (UniqueName: \"kubernetes.io/projected/d7cd3231-de9c-4e8e-9e3a-27b269aaeff4-kube-api-access-zfhp8\") pod \"d7cd3231-de9c-4e8e-9e3a-27b269aaeff4\" (UID: \"d7cd3231-de9c-4e8e-9e3a-27b269aaeff4\") " Dec 06 10:33:43 crc kubenswrapper[4895]: I1206 10:33:43.739419 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7cd3231-de9c-4e8e-9e3a-27b269aaeff4-host\") pod \"d7cd3231-de9c-4e8e-9e3a-27b269aaeff4\" (UID: \"d7cd3231-de9c-4e8e-9e3a-27b269aaeff4\") " Dec 06 10:33:43 crc kubenswrapper[4895]: I1206 10:33:43.739567 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7cd3231-de9c-4e8e-9e3a-27b269aaeff4-host" (OuterVolumeSpecName: "host") pod "d7cd3231-de9c-4e8e-9e3a-27b269aaeff4" (UID: "d7cd3231-de9c-4e8e-9e3a-27b269aaeff4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 10:33:43 crc kubenswrapper[4895]: I1206 10:33:43.740394 4895 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7cd3231-de9c-4e8e-9e3a-27b269aaeff4-host\") on node \"crc\" DevicePath \"\"" Dec 06 10:33:43 crc kubenswrapper[4895]: I1206 10:33:43.747969 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7cd3231-de9c-4e8e-9e3a-27b269aaeff4-kube-api-access-zfhp8" (OuterVolumeSpecName: "kube-api-access-zfhp8") pod "d7cd3231-de9c-4e8e-9e3a-27b269aaeff4" (UID: "d7cd3231-de9c-4e8e-9e3a-27b269aaeff4"). InnerVolumeSpecName "kube-api-access-zfhp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:33:43 crc kubenswrapper[4895]: I1206 10:33:43.841424 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfhp8\" (UniqueName: \"kubernetes.io/projected/d7cd3231-de9c-4e8e-9e3a-27b269aaeff4-kube-api-access-zfhp8\") on node \"crc\" DevicePath \"\"" Dec 06 10:33:44 crc kubenswrapper[4895]: I1206 10:33:44.063957 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7cd3231-de9c-4e8e-9e3a-27b269aaeff4" path="/var/lib/kubelet/pods/d7cd3231-de9c-4e8e-9e3a-27b269aaeff4/volumes" Dec 06 10:33:44 crc kubenswrapper[4895]: I1206 10:33:44.567903 4895 scope.go:117] "RemoveContainer" containerID="53864cb5228cf54075b610ead17822e226e5b617f8a43c0bcccd8c036eecf24f" Dec 06 10:33:44 crc kubenswrapper[4895]: I1206 10:33:44.567950 4895 util.go:48] "No ready sandbox for pod can be found. 
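
The teardown sequence above (UnmountVolume started, TearDown succeeded, Volume detached, orphaned volumes dir cleaned up) repeats verbatim for each short-lived crc-debug pod: once the API DELETE removes the pod from desired state, the volume manager unmounts whatever is still attached. A toy model of that reconciliation, with illustrative names only, not kubelet's actual reconciler:

```go
package main

import "fmt"

type mountedVolume struct{ podUID, volume string }

// reconcile tears down every mounted volume whose pod is no longer desired.
func reconcile(desired map[string]bool, actual []mountedVolume) {
	for _, mv := range actual {
		if desired[mv.podUID] {
			continue // pod still exists; leave its volumes alone
		}
		fmt.Printf("UnmountVolume started for volume %q pod %q\n", mv.volume, mv.podUID)
		// A real TearDown would unmount the bind mount / projected dir here.
		fmt.Printf("Volume detached for volume %q\n", mv.volume)
	}
}

func main() {
	actual := []mountedVolume{
		{"d7cd3231-de9c-4e8e-9e3a-27b269aaeff4", "host"},
		{"d7cd3231-de9c-4e8e-9e3a-27b269aaeff4", "kube-api-access-zfhp8"},
	}
	reconcile(map[string]bool{}, actual) // pod deleted: nothing is desired
}
```
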
Need to start a new one" pod="openshift-must-gather-k22ft/crc-debug-sdbqt" Dec 06 10:33:59 crc kubenswrapper[4895]: I1206 10:33:59.695795 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 10:33:59 crc kubenswrapper[4895]: I1206 10:33:59.696585 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 10:33:59 crc kubenswrapper[4895]: I1206 10:33:59.696656 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 10:33:59 crc kubenswrapper[4895]: I1206 10:33:59.698102 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 10:33:59 crc kubenswrapper[4895]: I1206 10:33:59.698224 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" gracePeriod=600 Dec 06 10:33:59 crc kubenswrapper[4895]: E1206 10:33:59.847668 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:34:00 crc kubenswrapper[4895]: I1206 10:34:00.777407 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" exitCode=0 Dec 06 10:34:00 crc kubenswrapper[4895]: I1206 10:34:00.777519 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179"} Dec 06 10:34:00 crc kubenswrapper[4895]: I1206 10:34:00.777759 4895 scope.go:117] "RemoveContainer" containerID="730b3a4444e58f09776a64d590f44494a4d6b1a8ef3b7d600f4ca508d222bf9d" Dec 06 10:34:00 crc kubenswrapper[4895]: I1206 10:34:00.778590 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:34:00 crc kubenswrapper[4895]: E1206 10:34:00.779082 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:34:14 crc kubenswrapper[4895]: I1206 10:34:14.051310 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:34:14 crc kubenswrapper[4895]: E1206 10:34:14.052393 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:34:26 crc kubenswrapper[4895]: I1206 10:34:26.051117 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:34:26 crc kubenswrapper[4895]: E1206 10:34:26.052028 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:34:40 crc kubenswrapper[4895]: I1206 10:34:40.052133 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:34:40 crc kubenswrapper[4895]: E1206 10:34:40.052971 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:34:53 crc kubenswrapper[4895]: I1206 10:34:53.050782 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:34:53 crc kubenswrapper[4895]: E1206 10:34:53.051411 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:35:06 crc kubenswrapper[4895]: I1206 10:35:06.051421 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:35:06 crc kubenswrapper[4895]: E1206 10:35:06.052272 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" 
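
The liveness failures logged for machine-config-daemon are plain HTTP GETs that cannot reach 127.0.0.1:8798; after the failure threshold is crossed, the container is killed with the 600s grace period shown, then restart is gated by back-off. A minimal sketch of such a check, using the port and path from the log (the success criterion of "any 2xx/3xx status" is the usual HTTP-probe rule, assumed here):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```
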
pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:35:20 crc kubenswrapper[4895]: I1206 10:35:20.051921 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:35:20 crc kubenswrapper[4895]: E1206 10:35:20.053025 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:35:32 crc kubenswrapper[4895]: I1206 10:35:32.050644 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:35:32 crc kubenswrapper[4895]: E1206 10:35:32.051422 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:35:45 crc kubenswrapper[4895]: I1206 10:35:45.050807 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:35:45 crc kubenswrapper[4895]: E1206 10:35:45.051641 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:35:56 crc kubenswrapper[4895]: I1206 10:35:56.051892 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:35:56 crc kubenswrapper[4895]: E1206 10:35:56.052760 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:36:08 crc kubenswrapper[4895]: I1206 10:36:08.065679 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:36:08 crc kubenswrapper[4895]: E1206 10:36:08.066860 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:36:21 crc kubenswrapper[4895]: I1206 10:36:21.052400 4895 
scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:36:21 crc kubenswrapper[4895]: E1206 10:36:21.058531 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:36:34 crc kubenswrapper[4895]: I1206 10:36:34.050737 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:36:34 crc kubenswrapper[4895]: E1206 10:36:34.052181 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:36:46 crc kubenswrapper[4895]: I1206 10:36:46.050436 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:36:46 crc kubenswrapper[4895]: E1206 10:36:46.051425 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:36:59 crc kubenswrapper[4895]: I1206 10:36:59.052572 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:36:59 crc kubenswrapper[4895]: E1206 10:36:59.053573 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:37:09 crc kubenswrapper[4895]: I1206 10:37:09.400851 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_092f092f-4678-4f00-ab6a-162eed935527/init-config-reloader/0.log" Dec 06 10:37:09 crc kubenswrapper[4895]: I1206 10:37:09.625216 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_092f092f-4678-4f00-ab6a-162eed935527/alertmanager/0.log" Dec 06 10:37:09 crc kubenswrapper[4895]: I1206 10:37:09.633595 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_092f092f-4678-4f00-ab6a-162eed935527/init-config-reloader/0.log" Dec 06 10:37:09 crc kubenswrapper[4895]: I1206 10:37:09.634411 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_092f092f-4678-4f00-ab6a-162eed935527/config-reloader/0.log" Dec 06 10:37:09 crc 
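
The RemoveContainer / CrashLoopBackOff pairs above keep reappearing because each sync attempt is rejected while the restart back-off has not expired; the delay doubles per failed restart up to the 5m0s cap quoted in the error, which is why the entries settle into a steady 10-15s cadence once the cap is reached. A sketch of that schedule, assuming kubelet's usual 10s initial delay (not shown in this log):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const maxDelay = 5 * time.Minute // the "back-off 5m0s" cap in the message
	delay := 10 * time.Second        // assumed initial delay
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: wait %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // 10s, 20s, 40s, 1m20s, 2m40s, then 5m0s forever
		}
	}
}
```
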
kubenswrapper[4895]: I1206 10:37:09.843888 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93/aodh-listener/0.log" Dec 06 10:37:09 crc kubenswrapper[4895]: I1206 10:37:09.853431 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93/aodh-api/0.log" Dec 06 10:37:09 crc kubenswrapper[4895]: I1206 10:37:09.868517 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93/aodh-evaluator/0.log" Dec 06 10:37:09 crc kubenswrapper[4895]: I1206 10:37:09.972780 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_d0a6ce93-7338-42b9-9bfe-ab2f42fa9d93/aodh-notifier/0.log" Dec 06 10:37:10 crc kubenswrapper[4895]: I1206 10:37:10.051632 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c84565b98-jp5wp_71304fef-b0a5-465d-9a9b-8eb00d6c0f02/barbican-api-log/0.log" Dec 06 10:37:10 crc kubenswrapper[4895]: I1206 10:37:10.071605 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c84565b98-jp5wp_71304fef-b0a5-465d-9a9b-8eb00d6c0f02/barbican-api/0.log" Dec 06 10:37:10 crc kubenswrapper[4895]: I1206 10:37:10.238967 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-55f6676c94-q78bk_fbb03056-067e-4d12-9ea3-3133d9ac3220/barbican-keystone-listener/0.log" Dec 06 10:37:10 crc kubenswrapper[4895]: I1206 10:37:10.453555 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5df995578f-vqnnt_8d9e34be-d2b3-4321-adc5-77ed0d2acfad/barbican-worker-log/0.log" Dec 06 10:37:10 crc kubenswrapper[4895]: I1206 10:37:10.516616 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5df995578f-vqnnt_8d9e34be-d2b3-4321-adc5-77ed0d2acfad/barbican-worker/0.log" Dec 06 10:37:10 crc kubenswrapper[4895]: I1206 10:37:10.791235 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-rj7g6_066e35d1-3c0e-481c-aa9b-40a41fd85835/bootstrap-openstack-openstack-cell1/0.log" Dec 06 10:37:10 crc kubenswrapper[4895]: I1206 10:37:10.950536 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-networker-m45sx_079ac743-75e7-470d-84b6-f5d38ee111f9/bootstrap-openstack-openstack-networker/0.log" Dec 06 10:37:11 crc kubenswrapper[4895]: I1206 10:37:11.025608 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_61388f73-0de5-4805-8ba7-4b683db03bdb/ceilometer-central-agent/0.log" Dec 06 10:37:11 crc kubenswrapper[4895]: I1206 10:37:11.043407 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-55f6676c94-q78bk_fbb03056-067e-4d12-9ea3-3133d9ac3220/barbican-keystone-listener-log/0.log" Dec 06 10:37:11 crc kubenswrapper[4895]: I1206 10:37:11.183062 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_61388f73-0de5-4805-8ba7-4b683db03bdb/ceilometer-notification-agent/0.log" Dec 06 10:37:11 crc kubenswrapper[4895]: I1206 10:37:11.238651 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_61388f73-0de5-4805-8ba7-4b683db03bdb/sg-core/0.log" Dec 06 10:37:11 crc kubenswrapper[4895]: I1206 10:37:11.282018 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_61388f73-0de5-4805-8ba7-4b683db03bdb/proxy-httpd/0.log" Dec 06 10:37:11 crc kubenswrapper[4895]: I1206 10:37:11.373012 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-openstack-openstack-cell1-9b6hl_921a9de4-8fb0-4bd2-a012-26a74c11c465/ceph-client-openstack-openstack-cell1/0.log" Dec 06 10:37:11 crc kubenswrapper[4895]: I1206 10:37:11.835319 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3/cinder-api-log/0.log" Dec 06 10:37:11 crc kubenswrapper[4895]: I1206 10:37:11.835887 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8b0dc8de-a6d9-4437-9b8c-390ebe0b54f3/cinder-api/0.log" Dec 06 10:37:11 crc kubenswrapper[4895]: I1206 10:37:11.988220 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786/probe/0.log" Dec 06 10:37:12 crc kubenswrapper[4895]: I1206 10:37:12.050835 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:37:12 crc kubenswrapper[4895]: E1206 10:37:12.051128 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:37:12 crc kubenswrapper[4895]: I1206 10:37:12.111974 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56/cinder-scheduler/0.log" Dec 06 10:37:12 crc kubenswrapper[4895]: I1206 10:37:12.295891 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_bfac2dce-66e1-4cf6-a32b-bcf8d6ad7f56/probe/0.log" Dec 06 10:37:12 crc kubenswrapper[4895]: I1206 10:37:12.625448 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_d209ea91-858a-4a98-8d51-743b79811346/probe/0.log" Dec 06 10:37:12 crc kubenswrapper[4895]: I1206 10:37:12.880600 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-tmht6_46f96e8a-cddb-4908-8ced-54dd4fcb7731/configure-network-openstack-openstack-cell1/0.log" Dec 06 10:37:13 crc kubenswrapper[4895]: I1206 10:37:13.121426 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-networker-7d5jm_cfc5ef76-e849-4740-93de-d9490d688654/configure-network-openstack-openstack-networker/0.log" Dec 06 10:37:13 crc kubenswrapper[4895]: I1206 10:37:13.195706 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-v76sx_5b7e3a78-9b6a-4b72-a42d-7226a4590eaf/configure-os-openstack-openstack-cell1/0.log" Dec 06 10:37:13 crc kubenswrapper[4895]: I1206 10:37:13.419199 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-networker-l8jm8_9107701d-0557-4444-8997-8c70c8879415/configure-os-openstack-openstack-networker/0.log" Dec 06 10:37:13 crc kubenswrapper[4895]: I1206 10:37:13.868989 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-8db9b89b7-w5g8t_7e30aff9-56a4-49d8-84f6-f3a22994eff5/init/0.log" Dec 06 10:37:13 crc kubenswrapper[4895]: I1206 10:37:13.956402 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8db9b89b7-w5g8t_7e30aff9-56a4-49d8-84f6-f3a22994eff5/init/0.log" Dec 06 10:37:14 crc kubenswrapper[4895]: I1206 10:37:14.146090 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8db9b89b7-w5g8t_7e30aff9-56a4-49d8-84f6-f3a22994eff5/dnsmasq-dns/0.log" Dec 06 10:37:14 crc kubenswrapper[4895]: I1206 10:37:14.167520 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-gwgt2_d5f603c7-e31a-4a5c-b029-986666a34609/download-cache-openstack-openstack-cell1/0.log" Dec 06 10:37:14 crc kubenswrapper[4895]: I1206 10:37:14.377339 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-networker-gt54m_2f00456a-cbda-45c8-a825-4f449b138336/download-cache-openstack-openstack-networker/0.log" Dec 06 10:37:14 crc kubenswrapper[4895]: I1206 10:37:14.577687 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_37f3bb32-a888-4492-ae83-1e0302694950/glance-log/0.log" Dec 06 10:37:14 crc kubenswrapper[4895]: I1206 10:37:14.582348 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_37f3bb32-a888-4492-ae83-1e0302694950/glance-httpd/0.log" Dec 06 10:37:14 crc kubenswrapper[4895]: I1206 10:37:14.822399 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_00465a36-698f-4971-ae7b-8f4c38423896/glance-httpd/0.log" Dec 06 10:37:14 crc kubenswrapper[4895]: I1206 10:37:14.837401 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_00465a36-698f-4971-ae7b-8f4c38423896/glance-log/0.log" Dec 06 10:37:15 crc kubenswrapper[4895]: I1206 10:37:15.247569 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_d209ea91-858a-4a98-8d51-743b79811346/cinder-volume/0.log" Dec 06 10:37:15 crc kubenswrapper[4895]: I1206 10:37:15.285226 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-78fbc99ff7-r9c56_32182461-265a-44a6-8003-c2bb7786b8a1/heat-api/0.log" Dec 06 10:37:15 crc kubenswrapper[4895]: I1206 10:37:15.384189 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-878599567-v42s8_050c3eed-05e1-4aa1-b309-7ac6b68389e8/heat-cfnapi/0.log" Dec 06 10:37:15 crc kubenswrapper[4895]: I1206 10:37:15.432338 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-d989c6c78-ls6jf_7b9ac127-3b4b-4c4e-a6be-57921f06f84b/heat-engine/0.log" Dec 06 10:37:15 crc kubenswrapper[4895]: I1206 10:37:15.652291 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7c89759895-d7j8d_1ca26141-3036-4a4c-896d-671c9fc24037/horizon/0.log" Dec 06 10:37:15 crc kubenswrapper[4895]: I1206 10:37:15.758118 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-khl26_eb2692b9-1e20-4d6f-87bc-b69888745b52/install-certs-openstack-openstack-cell1/0.log" Dec 06 10:37:15 crc kubenswrapper[4895]: I1206 10:37:15.811511 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-7c89759895-d7j8d_1ca26141-3036-4a4c-896d-671c9fc24037/horizon-log/0.log" Dec 06 10:37:15 crc kubenswrapper[4895]: I1206 10:37:15.857313 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-networker-64l8l_67c7ea3d-195b-4c75-9f33-ff6bc417147f/install-certs-openstack-openstack-networker/0.log" Dec 06 10:37:16 crc kubenswrapper[4895]: I1206 10:37:16.010237 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-tp2cg_519f3967-5e5a-4330-91f0-95a92fd3de83/install-os-openstack-openstack-cell1/0.log" Dec 06 10:37:16 crc kubenswrapper[4895]: I1206 10:37:16.079542 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-networker-db6d6_6ba7963e-83f5-4e85-befa-41d58e25787d/install-os-openstack-openstack-networker/0.log" Dec 06 10:37:16 crc kubenswrapper[4895]: I1206 10:37:16.264035 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_4e7ecb7a-ac0e-4762-abb8-9da9d6bcf786/cinder-backup/0.log" Dec 06 10:37:16 crc kubenswrapper[4895]: I1206 10:37:16.384908 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29416861-gzlpw_6bb414b1-e3dc-4687-b46c-b484c476743b/keystone-cron/0.log" Dec 06 10:37:16 crc kubenswrapper[4895]: I1206 10:37:16.489241 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29416921-v5lvt_61170183-aae8-4399-957f-1f5f07320807/keystone-cron/0.log" Dec 06 10:37:16 crc kubenswrapper[4895]: I1206 10:37:16.663790 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_80d4dc9d-aaf8-4759-9d8e-67539ecf21f2/kube-state-metrics/0.log" Dec 06 10:37:16 crc kubenswrapper[4895]: I1206 10:37:16.865937 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-m2m7q_64076080-572e-4d67-af02-2cdeb8113b9f/libvirt-openstack-openstack-cell1/0.log" Dec 06 10:37:17 crc kubenswrapper[4895]: I1206 10:37:17.230398 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_742d4a1b-1ac4-46ad-9cff-53f42c45e3f5/manila-scheduler/0.log" Dec 06 10:37:17 crc kubenswrapper[4895]: I1206 10:37:17.250658 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_742d4a1b-1ac4-46ad-9cff-53f42c45e3f5/probe/0.log" Dec 06 10:37:17 crc kubenswrapper[4895]: I1206 10:37:17.266361 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7c7c85b544-hxwl4_cd6690ad-c4bc-4076-9233-e2fbdc519ae1/keystone-api/0.log" Dec 06 10:37:17 crc kubenswrapper[4895]: I1206 10:37:17.311234 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_7f19d971-38ba-4e49-a099-6b657324d62e/manila-api/0.log" Dec 06 10:37:17 crc kubenswrapper[4895]: I1206 10:37:17.422095 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_7f19d971-38ba-4e49-a099-6b657324d62e/manila-api-log/0.log" Dec 06 10:37:17 crc kubenswrapper[4895]: I1206 10:37:17.493057 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_7cf8e2a3-dacb-435a-8869-fcd5949b6299/probe/0.log" Dec 06 10:37:17 crc kubenswrapper[4895]: I1206 10:37:17.523283 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_7cf8e2a3-dacb-435a-8869-fcd5949b6299/manila-share/0.log" Dec 06 10:37:17 crc 
kubenswrapper[4895]: I1206 10:37:17.977835 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-9cvq4_4cf85c53-e5bd-4a7e-ab23-468ca08d317b/neutron-dhcp-openstack-openstack-cell1/0.log" Dec 06 10:37:18 crc kubenswrapper[4895]: I1206 10:37:18.073405 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-74b4d66fbf-fvwqh_89893123-c233-4bec-9663-74645c53e8a8/neutron-httpd/0.log" Dec 06 10:37:18 crc kubenswrapper[4895]: I1206 10:37:18.282391 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-8vldk_b26c3808-8042-49ae-a734-689ec87ec5ed/neutron-metadata-openstack-openstack-cell1/0.log" Dec 06 10:37:18 crc kubenswrapper[4895]: I1206 10:37:18.527970 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-networker-47bmh_bdeb0fdb-eec6-4566-8c47-fbeda5f3c899/neutron-metadata-openstack-openstack-networker/0.log" Dec 06 10:37:18 crc kubenswrapper[4895]: I1206 10:37:18.540705 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-74b4d66fbf-fvwqh_89893123-c233-4bec-9663-74645c53e8a8/neutron-api/0.log" Dec 06 10:37:18 crc kubenswrapper[4895]: I1206 10:37:18.641287 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-7vxxz_69859eaf-ab5c-4894-b875-962b9642c277/neutron-sriov-openstack-openstack-cell1/0.log" Dec 06 10:37:18 crc kubenswrapper[4895]: I1206 10:37:18.996416 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_358d73a2-1190-44fe-8154-3713df01a941/nova-api-api/0.log" Dec 06 10:37:19 crc kubenswrapper[4895]: I1206 10:37:19.151167 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_13ad9c9b-8cea-4d11-accc-ac05fd63b14f/nova-cell0-conductor-conductor/0.log" Dec 06 10:37:19 crc kubenswrapper[4895]: I1206 10:37:19.196602 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_358d73a2-1190-44fe-8154-3713df01a941/nova-api-log/0.log" Dec 06 10:37:19 crc kubenswrapper[4895]: I1206 10:37:19.272848 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_2aa4cb5a-ff24-48db-8689-e1e6703b690e/nova-cell1-conductor-conductor/0.log" Dec 06 10:37:19 crc kubenswrapper[4895]: I1206 10:37:19.544269 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0835d72d-0e88-4dad-811c-a8a6dd197975/nova-cell1-novncproxy-novncproxy/0.log" Dec 06 10:37:19 crc kubenswrapper[4895]: I1206 10:37:19.610998 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell9fkwj_aa8468c6-6c07-40e9-aa1b-996f099dffa8/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Dec 06 10:37:19 crc kubenswrapper[4895]: I1206 10:37:19.754843 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-gq6jn_17bdd4b3-f731-4190-9958-689486a88f30/nova-cell1-openstack-openstack-cell1/0.log" Dec 06 10:37:19 crc kubenswrapper[4895]: I1206 10:37:19.828787 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_aa851486-91c2-4c06-9abc-4021ba5a4fd9/nova-metadata-log/0.log" Dec 06 10:37:19 crc kubenswrapper[4895]: I1206 10:37:19.945304 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_aa851486-91c2-4c06-9abc-4021ba5a4fd9/nova-metadata-metadata/0.log" Dec 06 10:37:20 crc kubenswrapper[4895]: I1206 10:37:20.175667 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e55923ef-c925-48ba-b959-a7a1d5212e60/nova-scheduler-scheduler/0.log" Dec 06 10:37:20 crc kubenswrapper[4895]: I1206 10:37:20.370335 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e7a03794-5321-4551-934e-bcf31316d825/mysql-bootstrap/0.log" Dec 06 10:37:20 crc kubenswrapper[4895]: I1206 10:37:20.591794 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1b3959bd-4eca-4e06-b552-7217aa74f883/mysql-bootstrap/0.log" Dec 06 10:37:20 crc kubenswrapper[4895]: I1206 10:37:20.609115 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e7a03794-5321-4551-934e-bcf31316d825/galera/0.log" Dec 06 10:37:20 crc kubenswrapper[4895]: I1206 10:37:20.651435 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e7a03794-5321-4551-934e-bcf31316d825/mysql-bootstrap/0.log" Dec 06 10:37:20 crc kubenswrapper[4895]: I1206 10:37:20.829376 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1b3959bd-4eca-4e06-b552-7217aa74f883/mysql-bootstrap/0.log" Dec 06 10:37:20 crc kubenswrapper[4895]: I1206 10:37:20.875413 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_22156626-bc35-4e1c-969c-7cdc2a169cb9/openstackclient/0.log" Dec 06 10:37:20 crc kubenswrapper[4895]: I1206 10:37:20.936686 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1b3959bd-4eca-4e06-b552-7217aa74f883/galera/0.log" Dec 06 10:37:21 crc kubenswrapper[4895]: I1206 10:37:21.073917 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c244e581-3e70-4efe-84b5-3f41fb9fdaa0/openstack-network-exporter/0.log" Dec 06 10:37:21 crc kubenswrapper[4895]: I1206 10:37:21.163087 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c244e581-3e70-4efe-84b5-3f41fb9fdaa0/ovn-northd/0.log" Dec 06 10:37:21 crc kubenswrapper[4895]: I1206 10:37:21.308482 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-xbcjg_8b86acff-6f1c-4129-819e-5bd2cd6b3c83/ovn-openstack-openstack-cell1/0.log" Dec 06 10:37:21 crc kubenswrapper[4895]: I1206 10:37:21.509688 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2900dffc-9406-4d7c-861c-c22163ddee06/openstack-network-exporter/0.log" Dec 06 10:37:21 crc kubenswrapper[4895]: I1206 10:37:21.519532 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-networker-8gtcz_ac1d9533-8ad3-478c-a262-37339034bc89/ovn-openstack-openstack-networker/0.log" Dec 06 10:37:21 crc kubenswrapper[4895]: I1206 10:37:21.544118 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2900dffc-9406-4d7c-861c-c22163ddee06/ovsdbserver-nb/0.log" Dec 06 10:37:21 crc kubenswrapper[4895]: I1206 10:37:21.695197 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_fcb2414c-554f-4458-b0ae-aa4b7e928c7f/openstack-network-exporter/0.log" Dec 06 10:37:21 crc kubenswrapper[4895]: I1206 10:37:21.733989 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-1_fcb2414c-554f-4458-b0ae-aa4b7e928c7f/ovsdbserver-nb/0.log" Dec 06 10:37:21 crc kubenswrapper[4895]: I1206 10:37:21.943666 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_917ae70f-856c-4a47-847b-bd0775476f16/openstack-network-exporter/0.log" Dec 06 10:37:21 crc kubenswrapper[4895]: I1206 10:37:21.976902 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_917ae70f-856c-4a47-847b-bd0775476f16/ovsdbserver-nb/0.log" Dec 06 10:37:22 crc kubenswrapper[4895]: I1206 10:37:22.067989 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9144e817-5aaa-4369-83e3-eca7e57c2b4a/openstack-network-exporter/0.log" Dec 06 10:37:22 crc kubenswrapper[4895]: I1206 10:37:22.187413 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9144e817-5aaa-4369-83e3-eca7e57c2b4a/ovsdbserver-sb/0.log" Dec 06 10:37:22 crc kubenswrapper[4895]: I1206 10:37:22.274191 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_00e078e3-f605-4323-9a64-9868070a17ae/ovsdbserver-sb/0.log" Dec 06 10:37:22 crc kubenswrapper[4895]: I1206 10:37:22.275657 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_00e078e3-f605-4323-9a64-9868070a17ae/openstack-network-exporter/0.log" Dec 06 10:37:22 crc kubenswrapper[4895]: I1206 10:37:22.451940 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_f03dfbd7-16e7-4669-9372-36f6adba5fab/openstack-network-exporter/0.log" Dec 06 10:37:22 crc kubenswrapper[4895]: I1206 10:37:22.459416 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_f03dfbd7-16e7-4669-9372-36f6adba5fab/ovsdbserver-sb/0.log" Dec 06 10:37:22 crc kubenswrapper[4895]: I1206 10:37:22.795288 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-cjq5rj_0ccc31a6-65f8-4e38-b5c2-6d817a8508f8/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Dec 06 10:37:22 crc kubenswrapper[4895]: I1206 10:37:22.844511 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-77b5c8f5cb-5gctx_58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e/placement-api/0.log" Dec 06 10:37:22 crc kubenswrapper[4895]: I1206 10:37:22.902134 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-77b5c8f5cb-5gctx_58c16e8e-bcc4-4467-a3a1-1a3e8131ba8e/placement-log/0.log" Dec 06 10:37:23 crc kubenswrapper[4895]: I1206 10:37:23.008505 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-n2q64m_cfd809c7-a514-4771-bd5c-1e327cddfd8a/pre-adoption-validation-openstack-pre-adoption-openstack-networ/0.log" Dec 06 10:37:23 crc kubenswrapper[4895]: I1206 10:37:23.050243 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:37:23 crc kubenswrapper[4895]: E1206 10:37:23.050519 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" 
podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:37:23 crc kubenswrapper[4895]: I1206 10:37:23.138755 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_57edb652-7803-4ccb-8c17-68623b1b3e6f/init-config-reloader/0.log" Dec 06 10:37:23 crc kubenswrapper[4895]: I1206 10:37:23.305160 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_57edb652-7803-4ccb-8c17-68623b1b3e6f/init-config-reloader/0.log" Dec 06 10:37:23 crc kubenswrapper[4895]: I1206 10:37:23.329061 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_57edb652-7803-4ccb-8c17-68623b1b3e6f/config-reloader/0.log" Dec 06 10:37:23 crc kubenswrapper[4895]: I1206 10:37:23.361934 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_57edb652-7803-4ccb-8c17-68623b1b3e6f/thanos-sidecar/0.log" Dec 06 10:37:23 crc kubenswrapper[4895]: I1206 10:37:23.372404 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_57edb652-7803-4ccb-8c17-68623b1b3e6f/prometheus/0.log" Dec 06 10:37:23 crc kubenswrapper[4895]: I1206 10:37:23.521994 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_996212ae-f3a7-4f9d-ade6-6f82051b6561/setup-container/0.log" Dec 06 10:37:23 crc kubenswrapper[4895]: I1206 10:37:23.701451 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_996212ae-f3a7-4f9d-ade6-6f82051b6561/setup-container/0.log" Dec 06 10:37:23 crc kubenswrapper[4895]: I1206 10:37:23.768711 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_27ea9905-46c5-48e1-a558-7c8e87a4cea7/setup-container/0.log" Dec 06 10:37:23 crc kubenswrapper[4895]: I1206 10:37:23.806373 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_996212ae-f3a7-4f9d-ade6-6f82051b6561/rabbitmq/0.log" Dec 06 10:37:23 crc kubenswrapper[4895]: I1206 10:37:23.935184 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_27ea9905-46c5-48e1-a558-7c8e87a4cea7/setup-container/0.log" Dec 06 10:37:24 crc kubenswrapper[4895]: I1206 10:37:24.064692 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-npp4n_7493d5f0-e00c-4ead-ac85-02955a68017b/reboot-os-openstack-openstack-cell1/0.log" Dec 06 10:37:24 crc kubenswrapper[4895]: I1206 10:37:24.160574 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_27ea9905-46c5-48e1-a558-7c8e87a4cea7/rabbitmq/0.log" Dec 06 10:37:24 crc kubenswrapper[4895]: I1206 10:37:24.224290 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-networker-qwwl7_0f8fba0d-c7d0-44a9-b9fe-fe62d8fa87a5/reboot-os-openstack-openstack-networker/0.log" Dec 06 10:37:24 crc kubenswrapper[4895]: I1206 10:37:24.421292 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-5mxxx_7c2ff2d2-786f-4187-b35f-ba09d517cbf8/run-os-openstack-openstack-cell1/0.log" Dec 06 10:37:24 crc kubenswrapper[4895]: I1206 10:37:24.513845 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-networker-5sd8x_caf73cd5-5852-49e3-b5b0-4330d89bb2e3/run-os-openstack-openstack-networker/0.log" Dec 06 10:37:24 crc kubenswrapper[4895]: 
I1206 10:37:24.737949 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-dqx6d_6bbe868c-9365-4fea-bb01-264ae7f6e04a/ssh-known-hosts-openstack/0.log" Dec 06 10:37:24 crc kubenswrapper[4895]: I1206 10:37:24.856999 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-k85hm_b7d07278-8ff8-402b-8a7b-b2d05efc68fd/telemetry-openstack-openstack-cell1/0.log" Dec 06 10:37:25 crc kubenswrapper[4895]: I1206 10:37:25.040366 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_d4338c71-33b3-405a-a259-0258c0836bb8/test-operator-logs-container/0.log" Dec 06 10:37:25 crc kubenswrapper[4895]: I1206 10:37:25.079399 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_6eaee9a4-ba6d-4285-823c-f90a59785cc6/tempest-tests-tempest-tests-runner/0.log" Dec 06 10:37:25 crc kubenswrapper[4895]: I1206 10:37:25.414202 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-2pwvv_710bdda9-c040-4731-b0cf-dce648cb6c9e/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Dec 06 10:37:25 crc kubenswrapper[4895]: I1206 10:37:25.442967 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-networker-tnfzv_7108ac74-5da3-451e-811b-384e786863ec/tripleo-cleanup-tripleo-cleanup-openstack-networker/0.log" Dec 06 10:37:25 crc kubenswrapper[4895]: I1206 10:37:25.592782 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-bbj9x_f4f06381-fe11-443f-a2cd-5f4dd0b39394/validate-network-openstack-openstack-cell1/0.log" Dec 06 10:37:25 crc kubenswrapper[4895]: I1206 10:37:25.715707 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-networker-mxpq9_d70176ac-e248-435a-a834-1558c9f382d2/validate-network-openstack-openstack-networker/0.log" Dec 06 10:37:35 crc kubenswrapper[4895]: I1206 10:37:35.050853 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:37:35 crc kubenswrapper[4895]: E1206 10:37:35.051698 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:37:40 crc kubenswrapper[4895]: I1206 10:37:40.019749 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_e1ee1786-5679-4e5b-ab42-d828e0b148a6/memcached/0.log" Dec 06 10:37:48 crc kubenswrapper[4895]: I1206 10:37:48.065610 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:37:48 crc kubenswrapper[4895]: E1206 10:37:48.066772 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:37:50 crc kubenswrapper[4895]: I1206 10:37:50.527221 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls_8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13/util/0.log" Dec 06 10:37:50 crc kubenswrapper[4895]: I1206 10:37:50.706141 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls_8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13/pull/0.log" Dec 06 10:37:50 crc kubenswrapper[4895]: I1206 10:37:50.711711 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls_8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13/util/0.log" Dec 06 10:37:50 crc kubenswrapper[4895]: I1206 10:37:50.732713 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls_8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13/pull/0.log" Dec 06 10:37:50 crc kubenswrapper[4895]: I1206 10:37:50.940035 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls_8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13/util/0.log" Dec 06 10:37:50 crc kubenswrapper[4895]: I1206 10:37:50.958851 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls_8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13/extract/0.log" Dec 06 10:37:50 crc kubenswrapper[4895]: I1206 10:37:50.984415 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafr7sls_8dbcdcb9-6ac1-4996-9a4c-d03743ee0c13/pull/0.log" Dec 06 10:37:51 crc kubenswrapper[4895]: I1206 10:37:51.104803 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-bc6fp_109a952b-18eb-4217-884d-f40b3be18878/kube-rbac-proxy/0.log" Dec 06 10:37:51 crc kubenswrapper[4895]: I1206 10:37:51.207846 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-qk4xg_5417e33f-dead-459e-933b-58ad3ae2da48/kube-rbac-proxy/0.log" Dec 06 10:37:51 crc kubenswrapper[4895]: I1206 10:37:51.263010 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-bc6fp_109a952b-18eb-4217-884d-f40b3be18878/manager/0.log" Dec 06 10:37:51 crc kubenswrapper[4895]: I1206 10:37:51.437334 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-5d9nk_f509e9a0-673f-45ba-a4f5-f3f5834ac86a/kube-rbac-proxy/0.log" Dec 06 10:37:51 crc kubenswrapper[4895]: I1206 10:37:51.800112 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-4p4x6_e8a69b24-8304-4447-b76f-e98e93cb7715/kube-rbac-proxy/0.log" Dec 06 10:37:51 crc kubenswrapper[4895]: I1206 10:37:51.820001 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-5d9nk_f509e9a0-673f-45ba-a4f5-f3f5834ac86a/manager/0.log" Dec 06 10:37:51 crc kubenswrapper[4895]: I1206 10:37:51.931535 4895 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-qk4xg_5417e33f-dead-459e-933b-58ad3ae2da48/manager/0.log" Dec 06 10:37:51 crc kubenswrapper[4895]: I1206 10:37:51.958716 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-4p4x6_e8a69b24-8304-4447-b76f-e98e93cb7715/manager/0.log" Dec 06 10:37:52 crc kubenswrapper[4895]: I1206 10:37:52.091293 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-8m9qs_abcee2d9-1cac-4e62-88a6-79b249832e9b/kube-rbac-proxy/0.log" Dec 06 10:37:52 crc kubenswrapper[4895]: I1206 10:37:52.198750 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-8m9qs_abcee2d9-1cac-4e62-88a6-79b249832e9b/manager/0.log" Dec 06 10:37:52 crc kubenswrapper[4895]: I1206 10:37:52.227842 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-rt4b2_e46a2036-66cd-420c-9920-a3e8ef0e17df/kube-rbac-proxy/0.log" Dec 06 10:37:52 crc kubenswrapper[4895]: I1206 10:37:52.280228 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-rt4b2_e46a2036-66cd-420c-9920-a3e8ef0e17df/manager/0.log" Dec 06 10:37:52 crc kubenswrapper[4895]: I1206 10:37:52.371207 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-6cv59_512638f7-8e17-493b-a34b-3da3c65f445a/kube-rbac-proxy/0.log" Dec 06 10:37:52 crc kubenswrapper[4895]: I1206 10:37:52.545208 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-f6j2r_68495243-fc02-458e-af78-61702a2dda83/kube-rbac-proxy/0.log" Dec 06 10:37:52 crc kubenswrapper[4895]: I1206 10:37:52.713398 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-f6j2r_68495243-fc02-458e-af78-61702a2dda83/manager/0.log" Dec 06 10:37:52 crc kubenswrapper[4895]: I1206 10:37:52.742589 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-6cv59_512638f7-8e17-493b-a34b-3da3c65f445a/manager/0.log" Dec 06 10:37:52 crc kubenswrapper[4895]: I1206 10:37:52.781203 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-d65l6_8e9001cb-7a62-4617-8143-f4a51ad1c13a/kube-rbac-proxy/0.log" Dec 06 10:37:52 crc kubenswrapper[4895]: I1206 10:37:52.952622 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-cgv2z_0d925507-837e-438e-8f19-34c15b8b208e/kube-rbac-proxy/0.log" Dec 06 10:37:53 crc kubenswrapper[4895]: I1206 10:37:53.015861 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-d65l6_8e9001cb-7a62-4617-8143-f4a51ad1c13a/manager/0.log" Dec 06 10:37:53 crc kubenswrapper[4895]: I1206 10:37:53.041489 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-cgv2z_0d925507-837e-438e-8f19-34c15b8b208e/manager/0.log" Dec 06 10:37:53 crc kubenswrapper[4895]: I1206 
10:37:53.171950 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-2lbh2_d61c0a11-5736-4747-889a-6dd520cbe269/kube-rbac-proxy/0.log" Dec 06 10:37:53 crc kubenswrapper[4895]: I1206 10:37:53.256748 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-2lbh2_d61c0a11-5736-4747-889a-6dd520cbe269/manager/0.log" Dec 06 10:37:53 crc kubenswrapper[4895]: I1206 10:37:53.418356 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-9r6m4_d11ece89-3325-4b95-aac8-776e2eaffecb/kube-rbac-proxy/0.log" Dec 06 10:37:53 crc kubenswrapper[4895]: I1206 10:37:53.440899 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-9r6m4_d11ece89-3325-4b95-aac8-776e2eaffecb/manager/0.log" Dec 06 10:37:53 crc kubenswrapper[4895]: I1206 10:37:53.543089 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-dmspk_05bfd83c-3643-4dc8-bd25-2204bc8bc8f6/kube-rbac-proxy/0.log" Dec 06 10:37:53 crc kubenswrapper[4895]: I1206 10:37:53.732994 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-tbd9g_88ab7b06-3be3-44a1-acbf-8ba5ced20251/kube-rbac-proxy/0.log" Dec 06 10:37:53 crc kubenswrapper[4895]: I1206 10:37:53.837284 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-dmspk_05bfd83c-3643-4dc8-bd25-2204bc8bc8f6/manager/0.log" Dec 06 10:37:53 crc kubenswrapper[4895]: I1206 10:37:53.853976 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-tbd9g_88ab7b06-3be3-44a1-acbf-8ba5ced20251/manager/0.log" Dec 06 10:37:54 crc kubenswrapper[4895]: I1206 10:37:54.000136 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-55c85496f522sgf_a8ffcb7e-0e4b-42c3-b778-4706cbd59792/kube-rbac-proxy/0.log" Dec 06 10:37:54 crc kubenswrapper[4895]: I1206 10:37:54.049840 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-55c85496f522sgf_a8ffcb7e-0e4b-42c3-b778-4706cbd59792/manager/0.log" Dec 06 10:37:54 crc kubenswrapper[4895]: I1206 10:37:54.463279 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-55b6fb9447-lclnz_e996e45e-f1d8-41c8-9133-b0189b0025fc/operator/0.log" Dec 06 10:37:54 crc kubenswrapper[4895]: I1206 10:37:54.630533 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-dk54h_c08a88ee-75c4-450b-8cc0-6159127f6a8c/kube-rbac-proxy/0.log" Dec 06 10:37:54 crc kubenswrapper[4895]: I1206 10:37:54.786012 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-dk54h_c08a88ee-75c4-450b-8cc0-6159127f6a8c/manager/0.log" Dec 06 10:37:54 crc kubenswrapper[4895]: I1206 10:37:54.855430 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-gv8vt_98110cff-712b-414c-9965-14d895f4b99f/kube-rbac-proxy/0.log" Dec 06 
10:37:54 crc kubenswrapper[4895]: I1206 10:37:54.942815 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6bvww_c9cd8e78-2424-4824-a38d-8bf32c3c1fb3/registry-server/0.log" Dec 06 10:37:55 crc kubenswrapper[4895]: I1206 10:37:55.061569 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-gv8vt_98110cff-712b-414c-9965-14d895f4b99f/manager/0.log" Dec 06 10:37:55 crc kubenswrapper[4895]: I1206 10:37:55.179757 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-czmdp_d5cafedb-1052-4cd3-9212-3f642e07c18d/operator/0.log" Dec 06 10:37:55 crc kubenswrapper[4895]: I1206 10:37:55.271752 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-8tr76_fa381d85-af76-4af0-a49a-1722c746f7c2/kube-rbac-proxy/0.log" Dec 06 10:37:55 crc kubenswrapper[4895]: I1206 10:37:55.366567 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-8tr76_fa381d85-af76-4af0-a49a-1722c746f7c2/manager/0.log" Dec 06 10:37:55 crc kubenswrapper[4895]: I1206 10:37:55.459070 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-dq42h_3e2fc835-9cf2-4e21-a1fc-d76cfafba632/kube-rbac-proxy/0.log" Dec 06 10:37:55 crc kubenswrapper[4895]: I1206 10:37:55.642230 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-rph9c_5fb63748-1c10-4a17-9dcc-862fc1b29b46/kube-rbac-proxy/0.log" Dec 06 10:37:55 crc kubenswrapper[4895]: I1206 10:37:55.705633 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-rph9c_5fb63748-1c10-4a17-9dcc-862fc1b29b46/manager/0.log" Dec 06 10:37:55 crc kubenswrapper[4895]: I1206 10:37:55.814185 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-dq42h_3e2fc835-9cf2-4e21-a1fc-d76cfafba632/manager/0.log" Dec 06 10:37:55 crc kubenswrapper[4895]: I1206 10:37:55.908277 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-gzb86_49bde36b-bda3-4622-87b2-6df2a2bee7f7/manager/0.log" Dec 06 10:37:55 crc kubenswrapper[4895]: I1206 10:37:55.968144 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-gzb86_49bde36b-bda3-4622-87b2-6df2a2bee7f7/kube-rbac-proxy/0.log" Dec 06 10:37:57 crc kubenswrapper[4895]: I1206 10:37:57.126652 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-54bdf956c4-6h5v6_14ae4729-3f50-4990-9b10-8a06e7e78060/manager/0.log" Dec 06 10:37:59 crc kubenswrapper[4895]: I1206 10:37:59.051278 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:37:59 crc kubenswrapper[4895]: E1206 10:37:59.051728 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:38:11 crc kubenswrapper[4895]: I1206 10:38:11.051403 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:38:11 crc kubenswrapper[4895]: E1206 10:38:11.052540 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:38:17 crc kubenswrapper[4895]: I1206 10:38:17.249085 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-2f6hz_fcc5343c-2ae2-4edf-b975-96cc492ca434/kube-rbac-proxy/0.log" Dec 06 10:38:17 crc kubenswrapper[4895]: I1206 10:38:17.249852 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-9mczr_a3ce3943-7a02-46e7-bf84-30d30080b111/control-plane-machine-set-operator/0.log" Dec 06 10:38:17 crc kubenswrapper[4895]: I1206 10:38:17.250112 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-2f6hz_fcc5343c-2ae2-4edf-b975-96cc492ca434/machine-api-operator/0.log" Dec 06 10:38:24 crc kubenswrapper[4895]: I1206 10:38:24.052281 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:38:24 crc kubenswrapper[4895]: E1206 10:38:24.052930 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:38:29 crc kubenswrapper[4895]: I1206 10:38:29.991083 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-hk759_fb13f97d-48b1-4846-854c-d0af6ae35951/cert-manager-controller/0.log" Dec 06 10:38:30 crc kubenswrapper[4895]: I1206 10:38:30.131922 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-mf96z_f600c442-f2c6-4b75-9945-def8b809dcb4/cert-manager-cainjector/0.log" Dec 06 10:38:30 crc kubenswrapper[4895]: I1206 10:38:30.170372 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-lbntl_b68c3ddb-a16d-4c76-bd50-b8117170b7a7/cert-manager-webhook/0.log" Dec 06 10:38:37 crc kubenswrapper[4895]: I1206 10:38:37.051186 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:38:37 crc kubenswrapper[4895]: E1206 10:38:37.052284 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:38:43 crc kubenswrapper[4895]: I1206 10:38:43.129401 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-xnpxm_f333f961-aeed-4f1c-9e25-cc50d6ace30f/nmstate-console-plugin/0.log" Dec 06 10:38:43 crc kubenswrapper[4895]: I1206 10:38:43.348171 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-ddvxz_a15dc415-ffeb-45f0-b298-9ac866573b57/kube-rbac-proxy/0.log" Dec 06 10:38:43 crc kubenswrapper[4895]: I1206 10:38:43.380112 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-ddvxz_a15dc415-ffeb-45f0-b298-9ac866573b57/nmstate-metrics/0.log" Dec 06 10:38:43 crc kubenswrapper[4895]: I1206 10:38:43.388453 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-vsvgr_c397bd3c-149a-4d07-94ee-053ad003b83c/nmstate-handler/0.log" Dec 06 10:38:43 crc kubenswrapper[4895]: I1206 10:38:43.668698 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-56vwb_c920fe00-1363-44b7-8830-15e9df2f685a/nmstate-webhook/0.log" Dec 06 10:38:43 crc kubenswrapper[4895]: I1206 10:38:43.673840 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-btgzr_2b98ee67-9fd9-4fad-93a1-93d46ba12549/nmstate-operator/0.log" Dec 06 10:38:49 crc kubenswrapper[4895]: I1206 10:38:49.050907 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:38:49 crc kubenswrapper[4895]: E1206 10:38:49.053209 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" Dec 06 10:38:59 crc kubenswrapper[4895]: I1206 10:38:59.989402 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-cz9xl_5e0bbbc9-5a2e-4e78-917b-e0ac820395fb/kube-rbac-proxy/0.log" Dec 06 10:39:00 crc kubenswrapper[4895]: I1206 10:39:00.252105 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-t5mgh_26238b40-5288-4a03-80b8-a3400c9f5365/frr-k8s-webhook-server/0.log" Dec 06 10:39:00 crc kubenswrapper[4895]: I1206 10:39:00.476304 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-cz9xl_5e0bbbc9-5a2e-4e78-917b-e0ac820395fb/controller/0.log" Dec 06 10:39:00 crc kubenswrapper[4895]: I1206 10:39:00.480733 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zfwsz_69aac7da-152a-4314-92fd-1f4aea0140be/cp-frr-files/0.log" Dec 06 10:39:00 crc kubenswrapper[4895]: I1206 10:39:00.680004 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zfwsz_69aac7da-152a-4314-92fd-1f4aea0140be/cp-reloader/0.log" Dec 06 10:39:00 crc kubenswrapper[4895]: I1206 10:39:00.680064 4895 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_frr-k8s-zfwsz_69aac7da-152a-4314-92fd-1f4aea0140be/cp-metrics/0.log" Dec 06 10:39:00 crc kubenswrapper[4895]: I1206 10:39:00.695274 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zfwsz_69aac7da-152a-4314-92fd-1f4aea0140be/cp-reloader/0.log" Dec 06 10:39:00 crc kubenswrapper[4895]: I1206 10:39:00.700063 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zfwsz_69aac7da-152a-4314-92fd-1f4aea0140be/cp-frr-files/0.log" Dec 06 10:39:00 crc kubenswrapper[4895]: I1206 10:39:00.925563 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zfwsz_69aac7da-152a-4314-92fd-1f4aea0140be/cp-metrics/0.log" Dec 06 10:39:00 crc kubenswrapper[4895]: I1206 10:39:00.930336 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zfwsz_69aac7da-152a-4314-92fd-1f4aea0140be/cp-reloader/0.log" Dec 06 10:39:00 crc kubenswrapper[4895]: I1206 10:39:00.992030 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zfwsz_69aac7da-152a-4314-92fd-1f4aea0140be/cp-frr-files/0.log" Dec 06 10:39:01 crc kubenswrapper[4895]: I1206 10:39:01.033410 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zfwsz_69aac7da-152a-4314-92fd-1f4aea0140be/cp-metrics/0.log" Dec 06 10:39:01 crc kubenswrapper[4895]: I1206 10:39:01.138831 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zfwsz_69aac7da-152a-4314-92fd-1f4aea0140be/cp-frr-files/0.log" Dec 06 10:39:01 crc kubenswrapper[4895]: I1206 10:39:01.148967 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zfwsz_69aac7da-152a-4314-92fd-1f4aea0140be/cp-reloader/0.log" Dec 06 10:39:01 crc kubenswrapper[4895]: I1206 10:39:01.231629 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zfwsz_69aac7da-152a-4314-92fd-1f4aea0140be/cp-metrics/0.log" Dec 06 10:39:01 crc kubenswrapper[4895]: I1206 10:39:01.323800 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zfwsz_69aac7da-152a-4314-92fd-1f4aea0140be/controller/0.log" Dec 06 10:39:01 crc kubenswrapper[4895]: I1206 10:39:01.422214 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zfwsz_69aac7da-152a-4314-92fd-1f4aea0140be/frr-metrics/0.log" Dec 06 10:39:01 crc kubenswrapper[4895]: I1206 10:39:01.537838 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zfwsz_69aac7da-152a-4314-92fd-1f4aea0140be/kube-rbac-proxy/0.log" Dec 06 10:39:01 crc kubenswrapper[4895]: I1206 10:39:01.858782 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zfwsz_69aac7da-152a-4314-92fd-1f4aea0140be/kube-rbac-proxy-frr/0.log" Dec 06 10:39:01 crc kubenswrapper[4895]: I1206 10:39:01.939122 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zfwsz_69aac7da-152a-4314-92fd-1f4aea0140be/reloader/0.log" Dec 06 10:39:02 crc kubenswrapper[4895]: I1206 10:39:02.146818 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-78ffd896db-f79hf_6fab2ce9-306f-4230-ab6a-be99e37aaeea/manager/0.log" Dec 06 10:39:02 crc kubenswrapper[4895]: I1206 10:39:02.201603 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-55976579dc-68gpl_20e10bde-64c1-402d-914e-2bfeef28267e/webhook-server/0.log" Dec 06 10:39:02 crc kubenswrapper[4895]: I1206 10:39:02.431143 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xmz5r_eafcf625-ed1f-442a-8bf3-b1b6c231d811/kube-rbac-proxy/0.log" Dec 06 10:39:03 crc kubenswrapper[4895]: I1206 10:39:03.050754 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:39:03 crc kubenswrapper[4895]: I1206 10:39:03.281140 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xmz5r_eafcf625-ed1f-442a-8bf3-b1b6c231d811/speaker/0.log" Dec 06 10:39:03 crc kubenswrapper[4895]: I1206 10:39:03.629501 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"98d264d8c0891dde4f09d3f42c01be9800a1667f6c1d339c1fd1396bc650c968"} Dec 06 10:39:05 crc kubenswrapper[4895]: I1206 10:39:05.085933 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zfwsz_69aac7da-152a-4314-92fd-1f4aea0140be/frr/0.log" Dec 06 10:39:17 crc kubenswrapper[4895]: I1206 10:39:17.865566 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n_149dfbf8-e301-4bee-b295-7fd74dfd4df1/util/0.log" Dec 06 10:39:18 crc kubenswrapper[4895]: I1206 10:39:18.100998 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n_149dfbf8-e301-4bee-b295-7fd74dfd4df1/util/0.log" Dec 06 10:39:18 crc kubenswrapper[4895]: I1206 10:39:18.123005 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n_149dfbf8-e301-4bee-b295-7fd74dfd4df1/pull/0.log" Dec 06 10:39:18 crc kubenswrapper[4895]: I1206 10:39:18.133462 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n_149dfbf8-e301-4bee-b295-7fd74dfd4df1/pull/0.log" Dec 06 10:39:18 crc kubenswrapper[4895]: I1206 10:39:18.286444 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n_149dfbf8-e301-4bee-b295-7fd74dfd4df1/util/0.log" Dec 06 10:39:18 crc kubenswrapper[4895]: I1206 10:39:18.310525 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n_149dfbf8-e301-4bee-b295-7fd74dfd4df1/extract/0.log" Dec 06 10:39:18 crc kubenswrapper[4895]: I1206 10:39:18.408988 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5vp6n_149dfbf8-e301-4bee-b295-7fd74dfd4df1/pull/0.log" Dec 06 10:39:18 crc kubenswrapper[4895]: I1206 10:39:18.566690 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs_0dd66b27-5979-489a-8356-cb6d42b23c3a/util/0.log" Dec 06 10:39:18 crc kubenswrapper[4895]: I1206 10:39:18.751033 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs_0dd66b27-5979-489a-8356-cb6d42b23c3a/util/0.log" Dec 06 10:39:18 crc kubenswrapper[4895]: I1206 10:39:18.765567 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs_0dd66b27-5979-489a-8356-cb6d42b23c3a/pull/0.log" Dec 06 10:39:18 crc kubenswrapper[4895]: I1206 10:39:18.791627 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs_0dd66b27-5979-489a-8356-cb6d42b23c3a/pull/0.log" Dec 06 10:39:19 crc kubenswrapper[4895]: I1206 10:39:19.003463 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs_0dd66b27-5979-489a-8356-cb6d42b23c3a/pull/0.log" Dec 06 10:39:19 crc kubenswrapper[4895]: I1206 10:39:19.056526 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs_0dd66b27-5979-489a-8356-cb6d42b23c3a/util/0.log" Dec 06 10:39:19 crc kubenswrapper[4895]: I1206 10:39:19.104936 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fxr9rs_0dd66b27-5979-489a-8356-cb6d42b23c3a/extract/0.log" Dec 06 10:39:19 crc kubenswrapper[4895]: I1206 10:39:19.207166 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf_53a6fc52-024b-4b34-9bdb-da4207dd83d6/util/0.log" Dec 06 10:39:20 crc kubenswrapper[4895]: I1206 10:39:20.741156 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf_53a6fc52-024b-4b34-9bdb-da4207dd83d6/pull/0.log" Dec 06 10:39:20 crc kubenswrapper[4895]: I1206 10:39:20.741305 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf_53a6fc52-024b-4b34-9bdb-da4207dd83d6/pull/0.log" Dec 06 10:39:20 crc kubenswrapper[4895]: I1206 10:39:20.948186 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf_53a6fc52-024b-4b34-9bdb-da4207dd83d6/util/0.log" Dec 06 10:39:20 crc kubenswrapper[4895]: I1206 10:39:20.961641 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf_53a6fc52-024b-4b34-9bdb-da4207dd83d6/util/0.log" Dec 06 10:39:21 crc kubenswrapper[4895]: I1206 10:39:21.020422 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf_53a6fc52-024b-4b34-9bdb-da4207dd83d6/extract/0.log" Dec 06 10:39:21 crc kubenswrapper[4895]: I1206 10:39:21.306695 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109knlf_53a6fc52-024b-4b34-9bdb-da4207dd83d6/pull/0.log" Dec 06 10:39:21 crc kubenswrapper[4895]: I1206 10:39:21.394386 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch_86d20cc9-915a-4f28-802c-c2d656de5763/util/0.log" Dec 06 10:39:21 crc kubenswrapper[4895]: I1206 10:39:21.394434 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch_86d20cc9-915a-4f28-802c-c2d656de5763/util/0.log" Dec 06 10:39:21 crc kubenswrapper[4895]: I1206 10:39:21.446164 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch_86d20cc9-915a-4f28-802c-c2d656de5763/pull/0.log" Dec 06 10:39:21 crc kubenswrapper[4895]: I1206 10:39:21.596529 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch_86d20cc9-915a-4f28-802c-c2d656de5763/pull/0.log" Dec 06 10:39:21 crc kubenswrapper[4895]: I1206 10:39:21.852281 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch_86d20cc9-915a-4f28-802c-c2d656de5763/extract/0.log" Dec 06 10:39:21 crc kubenswrapper[4895]: I1206 10:39:21.856967 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch_86d20cc9-915a-4f28-802c-c2d656de5763/util/0.log" Dec 06 10:39:21 crc kubenswrapper[4895]: I1206 10:39:21.872671 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838hbch_86d20cc9-915a-4f28-802c-c2d656de5763/pull/0.log" Dec 06 10:39:21 crc kubenswrapper[4895]: I1206 10:39:21.982866 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d8pr_4ed02d8a-6fba-458b-bab9-e595922d1f1f/extract-utilities/0.log" Dec 06 10:39:22 crc kubenswrapper[4895]: I1206 10:39:22.156962 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d8pr_4ed02d8a-6fba-458b-bab9-e595922d1f1f/extract-utilities/0.log" Dec 06 10:39:22 crc kubenswrapper[4895]: I1206 10:39:22.159548 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d8pr_4ed02d8a-6fba-458b-bab9-e595922d1f1f/extract-content/0.log" Dec 06 10:39:22 crc kubenswrapper[4895]: I1206 10:39:22.217605 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d8pr_4ed02d8a-6fba-458b-bab9-e595922d1f1f/extract-content/0.log" Dec 06 10:39:22 crc kubenswrapper[4895]: I1206 10:39:22.428253 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d8pr_4ed02d8a-6fba-458b-bab9-e595922d1f1f/extract-content/0.log" Dec 06 10:39:22 crc kubenswrapper[4895]: I1206 10:39:22.462521 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s8xtq_2e3eb585-aee8-4711-bc51-89b9a358f003/extract-utilities/0.log" Dec 06 10:39:22 crc kubenswrapper[4895]: I1206 10:39:22.466368 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d8pr_4ed02d8a-6fba-458b-bab9-e595922d1f1f/extract-utilities/0.log" Dec 06 10:39:22 crc kubenswrapper[4895]: I1206 10:39:22.714588 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-s8xtq_2e3eb585-aee8-4711-bc51-89b9a358f003/extract-content/0.log" Dec 06 10:39:22 crc kubenswrapper[4895]: I1206 10:39:22.716446 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s8xtq_2e3eb585-aee8-4711-bc51-89b9a358f003/extract-utilities/0.log" Dec 06 10:39:22 crc kubenswrapper[4895]: I1206 10:39:22.762934 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s8xtq_2e3eb585-aee8-4711-bc51-89b9a358f003/extract-content/0.log" Dec 06 10:39:22 crc kubenswrapper[4895]: I1206 10:39:22.972519 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s8xtq_2e3eb585-aee8-4711-bc51-89b9a358f003/extract-content/0.log" Dec 06 10:39:22 crc kubenswrapper[4895]: I1206 10:39:22.994362 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s8xtq_2e3eb585-aee8-4711-bc51-89b9a358f003/extract-utilities/0.log" Dec 06 10:39:23 crc kubenswrapper[4895]: I1206 10:39:23.233536 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hn9n9_9b65f0c8-5905-4b80-a9c8-1704be25ec8e/marketplace-operator/0.log" Dec 06 10:39:23 crc kubenswrapper[4895]: I1206 10:39:23.238455 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zms68_6bbad132-928f-4f02-bbfc-a6b66eeec395/extract-utilities/0.log" Dec 06 10:39:23 crc kubenswrapper[4895]: I1206 10:39:23.469920 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zms68_6bbad132-928f-4f02-bbfc-a6b66eeec395/extract-utilities/0.log" Dec 06 10:39:23 crc kubenswrapper[4895]: I1206 10:39:23.484700 4895 scope.go:117] "RemoveContainer" containerID="69da1253a38e0961cc45f7a11ede996af385d40f5786338db50941969a045657" Dec 06 10:39:23 crc kubenswrapper[4895]: I1206 10:39:23.508966 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zms68_6bbad132-928f-4f02-bbfc-a6b66eeec395/extract-content/0.log" Dec 06 10:39:23 crc kubenswrapper[4895]: I1206 10:39:23.586958 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zms68_6bbad132-928f-4f02-bbfc-a6b66eeec395/extract-content/0.log" Dec 06 10:39:23 crc kubenswrapper[4895]: I1206 10:39:23.786307 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zms68_6bbad132-928f-4f02-bbfc-a6b66eeec395/extract-utilities/0.log" Dec 06 10:39:23 crc kubenswrapper[4895]: I1206 10:39:23.834031 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zms68_6bbad132-928f-4f02-bbfc-a6b66eeec395/extract-content/0.log" Dec 06 10:39:24 crc kubenswrapper[4895]: I1206 10:39:24.084097 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wkv7v_2611889f-6582-4711-8d3a-c93dd57ba6fc/extract-utilities/0.log" Dec 06 10:39:24 crc kubenswrapper[4895]: I1206 10:39:24.355508 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wkv7v_2611889f-6582-4711-8d3a-c93dd57ba6fc/extract-utilities/0.log" Dec 06 10:39:24 crc kubenswrapper[4895]: I1206 10:39:24.402469 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-wkv7v_2611889f-6582-4711-8d3a-c93dd57ba6fc/extract-content/0.log" Dec 06 10:39:24 crc kubenswrapper[4895]: I1206 10:39:24.538686 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wkv7v_2611889f-6582-4711-8d3a-c93dd57ba6fc/extract-content/0.log" Dec 06 10:39:24 crc kubenswrapper[4895]: I1206 10:39:24.771845 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wkv7v_2611889f-6582-4711-8d3a-c93dd57ba6fc/extract-utilities/0.log" Dec 06 10:39:24 crc kubenswrapper[4895]: I1206 10:39:24.862694 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wkv7v_2611889f-6582-4711-8d3a-c93dd57ba6fc/extract-content/0.log" Dec 06 10:39:25 crc kubenswrapper[4895]: I1206 10:39:25.063572 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zms68_6bbad132-928f-4f02-bbfc-a6b66eeec395/registry-server/0.log" Dec 06 10:39:25 crc kubenswrapper[4895]: I1206 10:39:25.542188 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s8xtq_2e3eb585-aee8-4711-bc51-89b9a358f003/registry-server/0.log" Dec 06 10:39:26 crc kubenswrapper[4895]: I1206 10:39:26.210614 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d8pr_4ed02d8a-6fba-458b-bab9-e595922d1f1f/registry-server/0.log" Dec 06 10:39:27 crc kubenswrapper[4895]: I1206 10:39:27.161465 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wkv7v_2611889f-6582-4711-8d3a-c93dd57ba6fc/registry-server/0.log" Dec 06 10:39:39 crc kubenswrapper[4895]: I1206 10:39:39.348600 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-8ffvz_6a06d981-e38f-4b27-b597-271076759c4b/prometheus-operator/0.log" Dec 06 10:39:39 crc kubenswrapper[4895]: I1206 10:39:39.568624 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7789df6f88-6szsx_7d8afee3-d9f0-42ac-b2e9-89472dfec610/prometheus-operator-admission-webhook/0.log" Dec 06 10:39:39 crc kubenswrapper[4895]: I1206 10:39:39.640436 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7789df6f88-z5j98_542a8fcd-d1af-493c-ae31-c370a4f5d38c/prometheus-operator-admission-webhook/0.log" Dec 06 10:39:39 crc kubenswrapper[4895]: I1206 10:39:39.829897 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-fmg6b_b3cdf03d-0a52-43df-a589-7312ba4056ed/operator/0.log" Dec 06 10:39:39 crc kubenswrapper[4895]: I1206 10:39:39.924091 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-btk6w_7c9b65d4-80d1-423e-8a9f-0786e18d0b00/perses-operator/0.log" Dec 06 10:40:13 crc kubenswrapper[4895]: I1206 10:40:13.644196 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mb5hw"] Dec 06 10:40:13 crc kubenswrapper[4895]: E1206 10:40:13.645700 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7cd3231-de9c-4e8e-9e3a-27b269aaeff4" containerName="container-00" Dec 06 10:40:13 crc kubenswrapper[4895]: I1206 10:40:13.645742 4895 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d7cd3231-de9c-4e8e-9e3a-27b269aaeff4" containerName="container-00" Dec 06 10:40:13 crc kubenswrapper[4895]: I1206 10:40:13.646278 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7cd3231-de9c-4e8e-9e3a-27b269aaeff4" containerName="container-00" Dec 06 10:40:13 crc kubenswrapper[4895]: I1206 10:40:13.652727 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mb5hw" Dec 06 10:40:13 crc kubenswrapper[4895]: I1206 10:40:13.658186 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mb5hw"] Dec 06 10:40:13 crc kubenswrapper[4895]: I1206 10:40:13.730901 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqvfv\" (UniqueName: \"kubernetes.io/projected/69551e4d-b70d-4aa6-9c38-49172a3c0e36-kube-api-access-vqvfv\") pod \"redhat-marketplace-mb5hw\" (UID: \"69551e4d-b70d-4aa6-9c38-49172a3c0e36\") " pod="openshift-marketplace/redhat-marketplace-mb5hw" Dec 06 10:40:13 crc kubenswrapper[4895]: I1206 10:40:13.731009 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69551e4d-b70d-4aa6-9c38-49172a3c0e36-catalog-content\") pod \"redhat-marketplace-mb5hw\" (UID: \"69551e4d-b70d-4aa6-9c38-49172a3c0e36\") " pod="openshift-marketplace/redhat-marketplace-mb5hw" Dec 06 10:40:13 crc kubenswrapper[4895]: I1206 10:40:13.731087 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69551e4d-b70d-4aa6-9c38-49172a3c0e36-utilities\") pod \"redhat-marketplace-mb5hw\" (UID: \"69551e4d-b70d-4aa6-9c38-49172a3c0e36\") " pod="openshift-marketplace/redhat-marketplace-mb5hw" Dec 06 10:40:13 crc kubenswrapper[4895]: I1206 10:40:13.833587 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqvfv\" (UniqueName: \"kubernetes.io/projected/69551e4d-b70d-4aa6-9c38-49172a3c0e36-kube-api-access-vqvfv\") pod \"redhat-marketplace-mb5hw\" (UID: \"69551e4d-b70d-4aa6-9c38-49172a3c0e36\") " pod="openshift-marketplace/redhat-marketplace-mb5hw" Dec 06 10:40:13 crc kubenswrapper[4895]: I1206 10:40:13.833687 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69551e4d-b70d-4aa6-9c38-49172a3c0e36-catalog-content\") pod \"redhat-marketplace-mb5hw\" (UID: \"69551e4d-b70d-4aa6-9c38-49172a3c0e36\") " pod="openshift-marketplace/redhat-marketplace-mb5hw" Dec 06 10:40:13 crc kubenswrapper[4895]: I1206 10:40:13.833766 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69551e4d-b70d-4aa6-9c38-49172a3c0e36-utilities\") pod \"redhat-marketplace-mb5hw\" (UID: \"69551e4d-b70d-4aa6-9c38-49172a3c0e36\") " pod="openshift-marketplace/redhat-marketplace-mb5hw" Dec 06 10:40:13 crc kubenswrapper[4895]: I1206 10:40:13.834414 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69551e4d-b70d-4aa6-9c38-49172a3c0e36-utilities\") pod \"redhat-marketplace-mb5hw\" (UID: \"69551e4d-b70d-4aa6-9c38-49172a3c0e36\") " pod="openshift-marketplace/redhat-marketplace-mb5hw" Dec 06 10:40:13 crc kubenswrapper[4895]: I1206 10:40:13.835050 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69551e4d-b70d-4aa6-9c38-49172a3c0e36-catalog-content\") pod \"redhat-marketplace-mb5hw\" (UID: \"69551e4d-b70d-4aa6-9c38-49172a3c0e36\") " pod="openshift-marketplace/redhat-marketplace-mb5hw" Dec 06 10:40:13 crc kubenswrapper[4895]: I1206 10:40:13.856315 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqvfv\" (UniqueName: \"kubernetes.io/projected/69551e4d-b70d-4aa6-9c38-49172a3c0e36-kube-api-access-vqvfv\") pod \"redhat-marketplace-mb5hw\" (UID: \"69551e4d-b70d-4aa6-9c38-49172a3c0e36\") " pod="openshift-marketplace/redhat-marketplace-mb5hw" Dec 06 10:40:13 crc kubenswrapper[4895]: I1206 10:40:13.982539 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mb5hw" Dec 06 10:40:14 crc kubenswrapper[4895]: I1206 10:40:14.516934 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mb5hw"] Dec 06 10:40:15 crc kubenswrapper[4895]: I1206 10:40:15.439605 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mb5hw" event={"ID":"69551e4d-b70d-4aa6-9c38-49172a3c0e36","Type":"ContainerStarted","Data":"fe0f7c52e48e83c48b004793e9bcc7e8f2b428130649d724b3c4cb858a9c12ee"} Dec 06 10:40:16 crc kubenswrapper[4895]: I1206 10:40:16.453118 4895 generic.go:334] "Generic (PLEG): container finished" podID="69551e4d-b70d-4aa6-9c38-49172a3c0e36" containerID="3a4ee7011133b0e43343214711ef061e2151248b6cf31fd59073558c0dfb0461" exitCode=0 Dec 06 10:40:16 crc kubenswrapper[4895]: I1206 10:40:16.453197 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mb5hw" event={"ID":"69551e4d-b70d-4aa6-9c38-49172a3c0e36","Type":"ContainerDied","Data":"3a4ee7011133b0e43343214711ef061e2151248b6cf31fd59073558c0dfb0461"} Dec 06 10:40:16 crc kubenswrapper[4895]: I1206 10:40:16.456174 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 10:40:18 crc kubenswrapper[4895]: I1206 10:40:18.475914 4895 generic.go:334] "Generic (PLEG): container finished" podID="69551e4d-b70d-4aa6-9c38-49172a3c0e36" containerID="679391809b6851ca1e505930f581df47c4062a696fdd109d88206e0c436ecb51" exitCode=0 Dec 06 10:40:18 crc kubenswrapper[4895]: I1206 10:40:18.476139 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mb5hw" event={"ID":"69551e4d-b70d-4aa6-9c38-49172a3c0e36","Type":"ContainerDied","Data":"679391809b6851ca1e505930f581df47c4062a696fdd109d88206e0c436ecb51"} Dec 06 10:40:19 crc kubenswrapper[4895]: I1206 10:40:19.490125 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mb5hw" event={"ID":"69551e4d-b70d-4aa6-9c38-49172a3c0e36","Type":"ContainerStarted","Data":"c7e8a6f2fc0f48256cdfd869570465c7231992808173051b6fa68538bf3ca180"} Dec 06 10:40:19 crc kubenswrapper[4895]: I1206 10:40:19.522766 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mb5hw" podStartSLOduration=4.107370377 podStartE2EDuration="6.522733216s" podCreationTimestamp="2025-12-06 10:40:13 +0000 UTC" firstStartedPulling="2025-12-06 10:40:16.455808683 +0000 UTC m=+13378.857197553" lastFinishedPulling="2025-12-06 10:40:18.871171522 +0000 UTC m=+13381.272560392" observedRunningTime="2025-12-06 10:40:19.516669662 +0000 UTC 
m=+13381.918058542" watchObservedRunningTime="2025-12-06 10:40:19.522733216 +0000 UTC m=+13381.924122086" Dec 06 10:40:23 crc kubenswrapper[4895]: I1206 10:40:23.554973 4895 scope.go:117] "RemoveContainer" containerID="46aac2fc45bf7d5f1a013b1abc7e98f5699baaefb6848652ba27eafab7135569" Dec 06 10:40:23 crc kubenswrapper[4895]: I1206 10:40:23.983788 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mb5hw" Dec 06 10:40:23 crc kubenswrapper[4895]: I1206 10:40:23.983847 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mb5hw" Dec 06 10:40:24 crc kubenswrapper[4895]: I1206 10:40:24.041021 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mb5hw" Dec 06 10:40:24 crc kubenswrapper[4895]: I1206 10:40:24.642386 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mb5hw" Dec 06 10:40:24 crc kubenswrapper[4895]: I1206 10:40:24.705981 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mb5hw"] Dec 06 10:40:26 crc kubenswrapper[4895]: I1206 10:40:26.595044 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mb5hw" podUID="69551e4d-b70d-4aa6-9c38-49172a3c0e36" containerName="registry-server" containerID="cri-o://c7e8a6f2fc0f48256cdfd869570465c7231992808173051b6fa68538bf3ca180" gracePeriod=2 Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.141869 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mb5hw" Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.145652 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69551e4d-b70d-4aa6-9c38-49172a3c0e36-catalog-content\") pod \"69551e4d-b70d-4aa6-9c38-49172a3c0e36\" (UID: \"69551e4d-b70d-4aa6-9c38-49172a3c0e36\") " Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.145719 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqvfv\" (UniqueName: \"kubernetes.io/projected/69551e4d-b70d-4aa6-9c38-49172a3c0e36-kube-api-access-vqvfv\") pod \"69551e4d-b70d-4aa6-9c38-49172a3c0e36\" (UID: \"69551e4d-b70d-4aa6-9c38-49172a3c0e36\") " Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.145878 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69551e4d-b70d-4aa6-9c38-49172a3c0e36-utilities\") pod \"69551e4d-b70d-4aa6-9c38-49172a3c0e36\" (UID: \"69551e4d-b70d-4aa6-9c38-49172a3c0e36\") " Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.147436 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69551e4d-b70d-4aa6-9c38-49172a3c0e36-utilities" (OuterVolumeSpecName: "utilities") pod "69551e4d-b70d-4aa6-9c38-49172a3c0e36" (UID: "69551e4d-b70d-4aa6-9c38-49172a3c0e36"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.151595 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69551e4d-b70d-4aa6-9c38-49172a3c0e36-kube-api-access-vqvfv" (OuterVolumeSpecName: "kube-api-access-vqvfv") pod "69551e4d-b70d-4aa6-9c38-49172a3c0e36" (UID: "69551e4d-b70d-4aa6-9c38-49172a3c0e36"). InnerVolumeSpecName "kube-api-access-vqvfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.193050 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69551e4d-b70d-4aa6-9c38-49172a3c0e36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69551e4d-b70d-4aa6-9c38-49172a3c0e36" (UID: "69551e4d-b70d-4aa6-9c38-49172a3c0e36"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.249588 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69551e4d-b70d-4aa6-9c38-49172a3c0e36-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.250043 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqvfv\" (UniqueName: \"kubernetes.io/projected/69551e4d-b70d-4aa6-9c38-49172a3c0e36-kube-api-access-vqvfv\") on node \"crc\" DevicePath \"\"" Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.250241 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69551e4d-b70d-4aa6-9c38-49172a3c0e36-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.608186 4895 generic.go:334] "Generic (PLEG): container finished" podID="69551e4d-b70d-4aa6-9c38-49172a3c0e36" containerID="c7e8a6f2fc0f48256cdfd869570465c7231992808173051b6fa68538bf3ca180" exitCode=0 Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.608239 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mb5hw" event={"ID":"69551e4d-b70d-4aa6-9c38-49172a3c0e36","Type":"ContainerDied","Data":"c7e8a6f2fc0f48256cdfd869570465c7231992808173051b6fa68538bf3ca180"} Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.608294 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mb5hw" event={"ID":"69551e4d-b70d-4aa6-9c38-49172a3c0e36","Type":"ContainerDied","Data":"fe0f7c52e48e83c48b004793e9bcc7e8f2b428130649d724b3c4cb858a9c12ee"} Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.608315 4895 scope.go:117] "RemoveContainer" containerID="c7e8a6f2fc0f48256cdfd869570465c7231992808173051b6fa68538bf3ca180" Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.608340 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mb5hw" Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.663861 4895 scope.go:117] "RemoveContainer" containerID="679391809b6851ca1e505930f581df47c4062a696fdd109d88206e0c436ecb51" Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.673268 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mb5hw"] Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.703037 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mb5hw"] Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.716059 4895 scope.go:117] "RemoveContainer" containerID="3a4ee7011133b0e43343214711ef061e2151248b6cf31fd59073558c0dfb0461" Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.752495 4895 scope.go:117] "RemoveContainer" containerID="c7e8a6f2fc0f48256cdfd869570465c7231992808173051b6fa68538bf3ca180" Dec 06 10:40:27 crc kubenswrapper[4895]: E1206 10:40:27.754755 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7e8a6f2fc0f48256cdfd869570465c7231992808173051b6fa68538bf3ca180\": container with ID starting with c7e8a6f2fc0f48256cdfd869570465c7231992808173051b6fa68538bf3ca180 not found: ID does not exist" containerID="c7e8a6f2fc0f48256cdfd869570465c7231992808173051b6fa68538bf3ca180" Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.754824 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7e8a6f2fc0f48256cdfd869570465c7231992808173051b6fa68538bf3ca180"} err="failed to get container status \"c7e8a6f2fc0f48256cdfd869570465c7231992808173051b6fa68538bf3ca180\": rpc error: code = NotFound desc = could not find container \"c7e8a6f2fc0f48256cdfd869570465c7231992808173051b6fa68538bf3ca180\": container with ID starting with c7e8a6f2fc0f48256cdfd869570465c7231992808173051b6fa68538bf3ca180 not found: ID does not exist" Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.754863 4895 scope.go:117] "RemoveContainer" containerID="679391809b6851ca1e505930f581df47c4062a696fdd109d88206e0c436ecb51" Dec 06 10:40:27 crc kubenswrapper[4895]: E1206 10:40:27.755399 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"679391809b6851ca1e505930f581df47c4062a696fdd109d88206e0c436ecb51\": container with ID starting with 679391809b6851ca1e505930f581df47c4062a696fdd109d88206e0c436ecb51 not found: ID does not exist" containerID="679391809b6851ca1e505930f581df47c4062a696fdd109d88206e0c436ecb51" Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.755464 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"679391809b6851ca1e505930f581df47c4062a696fdd109d88206e0c436ecb51"} err="failed to get container status \"679391809b6851ca1e505930f581df47c4062a696fdd109d88206e0c436ecb51\": rpc error: code = NotFound desc = could not find container \"679391809b6851ca1e505930f581df47c4062a696fdd109d88206e0c436ecb51\": container with ID starting with 679391809b6851ca1e505930f581df47c4062a696fdd109d88206e0c436ecb51 not found: ID does not exist" Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.755521 4895 scope.go:117] "RemoveContainer" containerID="3a4ee7011133b0e43343214711ef061e2151248b6cf31fd59073558c0dfb0461" Dec 06 10:40:27 crc kubenswrapper[4895]: E1206 10:40:27.756064 4895 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3a4ee7011133b0e43343214711ef061e2151248b6cf31fd59073558c0dfb0461\": container with ID starting with 3a4ee7011133b0e43343214711ef061e2151248b6cf31fd59073558c0dfb0461 not found: ID does not exist" containerID="3a4ee7011133b0e43343214711ef061e2151248b6cf31fd59073558c0dfb0461" Dec 06 10:40:27 crc kubenswrapper[4895]: I1206 10:40:27.756112 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a4ee7011133b0e43343214711ef061e2151248b6cf31fd59073558c0dfb0461"} err="failed to get container status \"3a4ee7011133b0e43343214711ef061e2151248b6cf31fd59073558c0dfb0461\": rpc error: code = NotFound desc = could not find container \"3a4ee7011133b0e43343214711ef061e2151248b6cf31fd59073558c0dfb0461\": container with ID starting with 3a4ee7011133b0e43343214711ef061e2151248b6cf31fd59073558c0dfb0461 not found: ID does not exist" Dec 06 10:40:28 crc kubenswrapper[4895]: I1206 10:40:28.069311 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69551e4d-b70d-4aa6-9c38-49172a3c0e36" path="/var/lib/kubelet/pods/69551e4d-b70d-4aa6-9c38-49172a3c0e36/volumes" Dec 06 10:40:58 crc kubenswrapper[4895]: I1206 10:40:58.833036 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wzckn"] Dec 06 10:40:58 crc kubenswrapper[4895]: E1206 10:40:58.834688 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69551e4d-b70d-4aa6-9c38-49172a3c0e36" containerName="extract-content" Dec 06 10:40:58 crc kubenswrapper[4895]: I1206 10:40:58.834710 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="69551e4d-b70d-4aa6-9c38-49172a3c0e36" containerName="extract-content" Dec 06 10:40:58 crc kubenswrapper[4895]: E1206 10:40:58.834762 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69551e4d-b70d-4aa6-9c38-49172a3c0e36" containerName="extract-utilities" Dec 06 10:40:58 crc kubenswrapper[4895]: I1206 10:40:58.834775 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="69551e4d-b70d-4aa6-9c38-49172a3c0e36" containerName="extract-utilities" Dec 06 10:40:58 crc kubenswrapper[4895]: E1206 10:40:58.834795 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69551e4d-b70d-4aa6-9c38-49172a3c0e36" containerName="registry-server" Dec 06 10:40:58 crc kubenswrapper[4895]: I1206 10:40:58.834806 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="69551e4d-b70d-4aa6-9c38-49172a3c0e36" containerName="registry-server" Dec 06 10:40:58 crc kubenswrapper[4895]: I1206 10:40:58.835153 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="69551e4d-b70d-4aa6-9c38-49172a3c0e36" containerName="registry-server" Dec 06 10:40:58 crc kubenswrapper[4895]: I1206 10:40:58.837558 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wzckn" Dec 06 10:40:58 crc kubenswrapper[4895]: I1206 10:40:58.863928 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wzckn"] Dec 06 10:40:58 crc kubenswrapper[4895]: I1206 10:40:58.930767 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd9fae02-95a0-41a3-ad09-dd390c133a31-catalog-content\") pod \"community-operators-wzckn\" (UID: \"cd9fae02-95a0-41a3-ad09-dd390c133a31\") " pod="openshift-marketplace/community-operators-wzckn" Dec 06 10:40:58 crc kubenswrapper[4895]: I1206 10:40:58.930856 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd9fae02-95a0-41a3-ad09-dd390c133a31-utilities\") pod \"community-operators-wzckn\" (UID: \"cd9fae02-95a0-41a3-ad09-dd390c133a31\") " pod="openshift-marketplace/community-operators-wzckn" Dec 06 10:40:58 crc kubenswrapper[4895]: I1206 10:40:58.930960 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-952bk\" (UniqueName: \"kubernetes.io/projected/cd9fae02-95a0-41a3-ad09-dd390c133a31-kube-api-access-952bk\") pod \"community-operators-wzckn\" (UID: \"cd9fae02-95a0-41a3-ad09-dd390c133a31\") " pod="openshift-marketplace/community-operators-wzckn" Dec 06 10:40:59 crc kubenswrapper[4895]: I1206 10:40:59.033112 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd9fae02-95a0-41a3-ad09-dd390c133a31-catalog-content\") pod \"community-operators-wzckn\" (UID: \"cd9fae02-95a0-41a3-ad09-dd390c133a31\") " pod="openshift-marketplace/community-operators-wzckn" Dec 06 10:40:59 crc kubenswrapper[4895]: I1206 10:40:59.033174 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd9fae02-95a0-41a3-ad09-dd390c133a31-utilities\") pod \"community-operators-wzckn\" (UID: \"cd9fae02-95a0-41a3-ad09-dd390c133a31\") " pod="openshift-marketplace/community-operators-wzckn" Dec 06 10:40:59 crc kubenswrapper[4895]: I1206 10:40:59.033241 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-952bk\" (UniqueName: \"kubernetes.io/projected/cd9fae02-95a0-41a3-ad09-dd390c133a31-kube-api-access-952bk\") pod \"community-operators-wzckn\" (UID: \"cd9fae02-95a0-41a3-ad09-dd390c133a31\") " pod="openshift-marketplace/community-operators-wzckn" Dec 06 10:40:59 crc kubenswrapper[4895]: I1206 10:40:59.033672 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd9fae02-95a0-41a3-ad09-dd390c133a31-catalog-content\") pod \"community-operators-wzckn\" (UID: \"cd9fae02-95a0-41a3-ad09-dd390c133a31\") " pod="openshift-marketplace/community-operators-wzckn" Dec 06 10:40:59 crc kubenswrapper[4895]: I1206 10:40:59.033696 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd9fae02-95a0-41a3-ad09-dd390c133a31-utilities\") pod \"community-operators-wzckn\" (UID: \"cd9fae02-95a0-41a3-ad09-dd390c133a31\") " pod="openshift-marketplace/community-operators-wzckn" Dec 06 10:40:59 crc kubenswrapper[4895]: I1206 10:40:59.060564 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-952bk\" (UniqueName: \"kubernetes.io/projected/cd9fae02-95a0-41a3-ad09-dd390c133a31-kube-api-access-952bk\") pod \"community-operators-wzckn\" (UID: \"cd9fae02-95a0-41a3-ad09-dd390c133a31\") " pod="openshift-marketplace/community-operators-wzckn" Dec 06 10:40:59 crc kubenswrapper[4895]: I1206 10:40:59.170125 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wzckn" Dec 06 10:40:59 crc kubenswrapper[4895]: W1206 10:40:59.781006 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd9fae02_95a0_41a3_ad09_dd390c133a31.slice/crio-3088ea9a8f5d4d4c648f53a35835ff32f5595ff12a67b1a5f2226cbb81fa8023 WatchSource:0}: Error finding container 3088ea9a8f5d4d4c648f53a35835ff32f5595ff12a67b1a5f2226cbb81fa8023: Status 404 returned error can't find the container with id 3088ea9a8f5d4d4c648f53a35835ff32f5595ff12a67b1a5f2226cbb81fa8023 Dec 06 10:40:59 crc kubenswrapper[4895]: I1206 10:40:59.792292 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wzckn"] Dec 06 10:41:00 crc kubenswrapper[4895]: I1206 10:41:00.083979 4895 generic.go:334] "Generic (PLEG): container finished" podID="cd9fae02-95a0-41a3-ad09-dd390c133a31" containerID="892a67d5b7861c7e5a2c11663322a529396414e07c8bb57189f645a15fda0fce" exitCode=0 Dec 06 10:41:00 crc kubenswrapper[4895]: I1206 10:41:00.084284 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzckn" event={"ID":"cd9fae02-95a0-41a3-ad09-dd390c133a31","Type":"ContainerDied","Data":"892a67d5b7861c7e5a2c11663322a529396414e07c8bb57189f645a15fda0fce"} Dec 06 10:41:00 crc kubenswrapper[4895]: I1206 10:41:00.084315 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzckn" event={"ID":"cd9fae02-95a0-41a3-ad09-dd390c133a31","Type":"ContainerStarted","Data":"3088ea9a8f5d4d4c648f53a35835ff32f5595ff12a67b1a5f2226cbb81fa8023"} Dec 06 10:41:01 crc kubenswrapper[4895]: I1206 10:41:01.095505 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzckn" event={"ID":"cd9fae02-95a0-41a3-ad09-dd390c133a31","Type":"ContainerStarted","Data":"8b4c3a047a245cc43519cd0dadad37751e74b34e4d55f3d418998c22af34511e"} Dec 06 10:41:02 crc kubenswrapper[4895]: I1206 10:41:02.113899 4895 generic.go:334] "Generic (PLEG): container finished" podID="cd9fae02-95a0-41a3-ad09-dd390c133a31" containerID="8b4c3a047a245cc43519cd0dadad37751e74b34e4d55f3d418998c22af34511e" exitCode=0 Dec 06 10:41:02 crc kubenswrapper[4895]: I1206 10:41:02.114362 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzckn" event={"ID":"cd9fae02-95a0-41a3-ad09-dd390c133a31","Type":"ContainerDied","Data":"8b4c3a047a245cc43519cd0dadad37751e74b34e4d55f3d418998c22af34511e"} Dec 06 10:41:03 crc kubenswrapper[4895]: I1206 10:41:03.125880 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzckn" event={"ID":"cd9fae02-95a0-41a3-ad09-dd390c133a31","Type":"ContainerStarted","Data":"8d2bfbe50222dfa7be5b0af669137a7519f99735f1c6e61490a6b73151976236"} Dec 06 10:41:03 crc kubenswrapper[4895]: I1206 10:41:03.161875 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wzckn" 
podStartSLOduration=2.68345586 podStartE2EDuration="5.161858332s" podCreationTimestamp="2025-12-06 10:40:58 +0000 UTC" firstStartedPulling="2025-12-06 10:41:00.086121192 +0000 UTC m=+13422.487510062" lastFinishedPulling="2025-12-06 10:41:02.564523654 +0000 UTC m=+13424.965912534" observedRunningTime="2025-12-06 10:41:03.14508707 +0000 UTC m=+13425.546475960" watchObservedRunningTime="2025-12-06 10:41:03.161858332 +0000 UTC m=+13425.563247212" Dec 06 10:41:09 crc kubenswrapper[4895]: I1206 10:41:09.171779 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wzckn" Dec 06 10:41:09 crc kubenswrapper[4895]: I1206 10:41:09.173282 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wzckn" Dec 06 10:41:09 crc kubenswrapper[4895]: I1206 10:41:09.262955 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wzckn" Dec 06 10:41:09 crc kubenswrapper[4895]: I1206 10:41:09.346171 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wzckn" Dec 06 10:41:09 crc kubenswrapper[4895]: I1206 10:41:09.523527 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wzckn"] Dec 06 10:41:11 crc kubenswrapper[4895]: I1206 10:41:11.223451 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wzckn" podUID="cd9fae02-95a0-41a3-ad09-dd390c133a31" containerName="registry-server" containerID="cri-o://8d2bfbe50222dfa7be5b0af669137a7519f99735f1c6e61490a6b73151976236" gracePeriod=2 Dec 06 10:41:12 crc kubenswrapper[4895]: I1206 10:41:12.236421 4895 generic.go:334] "Generic (PLEG): container finished" podID="cd9fae02-95a0-41a3-ad09-dd390c133a31" containerID="8d2bfbe50222dfa7be5b0af669137a7519f99735f1c6e61490a6b73151976236" exitCode=0 Dec 06 10:41:12 crc kubenswrapper[4895]: I1206 10:41:12.236517 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzckn" event={"ID":"cd9fae02-95a0-41a3-ad09-dd390c133a31","Type":"ContainerDied","Data":"8d2bfbe50222dfa7be5b0af669137a7519f99735f1c6e61490a6b73151976236"} Dec 06 10:41:12 crc kubenswrapper[4895]: I1206 10:41:12.236768 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzckn" event={"ID":"cd9fae02-95a0-41a3-ad09-dd390c133a31","Type":"ContainerDied","Data":"3088ea9a8f5d4d4c648f53a35835ff32f5595ff12a67b1a5f2226cbb81fa8023"} Dec 06 10:41:12 crc kubenswrapper[4895]: I1206 10:41:12.236785 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3088ea9a8f5d4d4c648f53a35835ff32f5595ff12a67b1a5f2226cbb81fa8023" Dec 06 10:41:12 crc kubenswrapper[4895]: I1206 10:41:12.244911 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wzckn" Dec 06 10:41:12 crc kubenswrapper[4895]: I1206 10:41:12.388855 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-952bk\" (UniqueName: \"kubernetes.io/projected/cd9fae02-95a0-41a3-ad09-dd390c133a31-kube-api-access-952bk\") pod \"cd9fae02-95a0-41a3-ad09-dd390c133a31\" (UID: \"cd9fae02-95a0-41a3-ad09-dd390c133a31\") " Dec 06 10:41:12 crc kubenswrapper[4895]: I1206 10:41:12.388924 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd9fae02-95a0-41a3-ad09-dd390c133a31-catalog-content\") pod \"cd9fae02-95a0-41a3-ad09-dd390c133a31\" (UID: \"cd9fae02-95a0-41a3-ad09-dd390c133a31\") " Dec 06 10:41:12 crc kubenswrapper[4895]: I1206 10:41:12.388962 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd9fae02-95a0-41a3-ad09-dd390c133a31-utilities\") pod \"cd9fae02-95a0-41a3-ad09-dd390c133a31\" (UID: \"cd9fae02-95a0-41a3-ad09-dd390c133a31\") " Dec 06 10:41:12 crc kubenswrapper[4895]: I1206 10:41:12.390491 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd9fae02-95a0-41a3-ad09-dd390c133a31-utilities" (OuterVolumeSpecName: "utilities") pod "cd9fae02-95a0-41a3-ad09-dd390c133a31" (UID: "cd9fae02-95a0-41a3-ad09-dd390c133a31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:41:12 crc kubenswrapper[4895]: I1206 10:41:12.394414 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd9fae02-95a0-41a3-ad09-dd390c133a31-kube-api-access-952bk" (OuterVolumeSpecName: "kube-api-access-952bk") pod "cd9fae02-95a0-41a3-ad09-dd390c133a31" (UID: "cd9fae02-95a0-41a3-ad09-dd390c133a31"). InnerVolumeSpecName "kube-api-access-952bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:41:12 crc kubenswrapper[4895]: I1206 10:41:12.467875 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd9fae02-95a0-41a3-ad09-dd390c133a31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd9fae02-95a0-41a3-ad09-dd390c133a31" (UID: "cd9fae02-95a0-41a3-ad09-dd390c133a31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:41:12 crc kubenswrapper[4895]: I1206 10:41:12.491833 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-952bk\" (UniqueName: \"kubernetes.io/projected/cd9fae02-95a0-41a3-ad09-dd390c133a31-kube-api-access-952bk\") on node \"crc\" DevicePath \"\"" Dec 06 10:41:12 crc kubenswrapper[4895]: I1206 10:41:12.491866 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd9fae02-95a0-41a3-ad09-dd390c133a31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 10:41:12 crc kubenswrapper[4895]: I1206 10:41:12.491878 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd9fae02-95a0-41a3-ad09-dd390c133a31-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 10:41:13 crc kubenswrapper[4895]: I1206 10:41:13.245934 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wzckn" Dec 06 10:41:13 crc kubenswrapper[4895]: I1206 10:41:13.293837 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wzckn"] Dec 06 10:41:13 crc kubenswrapper[4895]: I1206 10:41:13.311795 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wzckn"] Dec 06 10:41:14 crc kubenswrapper[4895]: I1206 10:41:14.065848 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd9fae02-95a0-41a3-ad09-dd390c133a31" path="/var/lib/kubelet/pods/cd9fae02-95a0-41a3-ad09-dd390c133a31/volumes" Dec 06 10:41:29 crc kubenswrapper[4895]: I1206 10:41:29.696152 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 10:41:29 crc kubenswrapper[4895]: I1206 10:41:29.696765 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 10:41:59 crc kubenswrapper[4895]: I1206 10:41:59.695998 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 10:41:59 crc kubenswrapper[4895]: I1206 10:41:59.699130 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 10:42:29 crc kubenswrapper[4895]: I1206 10:42:29.696184 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 10:42:29 crc kubenswrapper[4895]: I1206 10:42:29.696896 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 10:42:29 crc kubenswrapper[4895]: I1206 10:42:29.696997 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 10:42:29 crc kubenswrapper[4895]: I1206 10:42:29.697899 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"98d264d8c0891dde4f09d3f42c01be9800a1667f6c1d339c1fd1396bc650c968"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Dec 06 10:42:29 crc kubenswrapper[4895]: I1206 10:42:29.697970 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://98d264d8c0891dde4f09d3f42c01be9800a1667f6c1d339c1fd1396bc650c968" gracePeriod=600 Dec 06 10:42:30 crc kubenswrapper[4895]: I1206 10:42:30.244862 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="98d264d8c0891dde4f09d3f42c01be9800a1667f6c1d339c1fd1396bc650c968" exitCode=0 Dec 06 10:42:30 crc kubenswrapper[4895]: I1206 10:42:30.245185 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"98d264d8c0891dde4f09d3f42c01be9800a1667f6c1d339c1fd1396bc650c968"} Dec 06 10:42:30 crc kubenswrapper[4895]: I1206 10:42:30.245215 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerStarted","Data":"80dec7f6720e1fd4be065c40b10db8a33cdd33e420485c0374586f79e7a64377"} Dec 06 10:42:30 crc kubenswrapper[4895]: I1206 10:42:30.245232 4895 scope.go:117] "RemoveContainer" containerID="cd001f00e7a536f96ecf951ca0eceb0a96bb57e2b7a48560d1760574d26cc179" Dec 06 10:43:37 crc kubenswrapper[4895]: I1206 10:43:37.256745 4895 generic.go:334] "Generic (PLEG): container finished" podID="381f09e1-8cdc-4779-b21b-3fe3605901a1" containerID="15ce36aa6e851e4f72f93faa37aea3a97837a9a2c4995deae482b38d1e5c7cd6" exitCode=0 Dec 06 10:43:37 crc kubenswrapper[4895]: I1206 10:43:37.257261 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k22ft/must-gather-tjd7k" event={"ID":"381f09e1-8cdc-4779-b21b-3fe3605901a1","Type":"ContainerDied","Data":"15ce36aa6e851e4f72f93faa37aea3a97837a9a2c4995deae482b38d1e5c7cd6"} Dec 06 10:43:37 crc kubenswrapper[4895]: I1206 10:43:37.257893 4895 scope.go:117] "RemoveContainer" containerID="15ce36aa6e851e4f72f93faa37aea3a97837a9a2c4995deae482b38d1e5c7cd6" Dec 06 10:43:37 crc kubenswrapper[4895]: I1206 10:43:37.990759 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-k22ft_must-gather-tjd7k_381f09e1-8cdc-4779-b21b-3fe3605901a1/gather/0.log" Dec 06 10:43:49 crc kubenswrapper[4895]: I1206 10:43:49.534058 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-k22ft/must-gather-tjd7k"] Dec 06 10:43:49 crc kubenswrapper[4895]: I1206 10:43:49.534906 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-k22ft/must-gather-tjd7k" podUID="381f09e1-8cdc-4779-b21b-3fe3605901a1" containerName="copy" containerID="cri-o://f732d70219dd6f81feb7afa955d8bb575616e58b231bbfb24cf9bf914b66a5ab" gracePeriod=2 Dec 06 10:43:49 crc kubenswrapper[4895]: I1206 10:43:49.551542 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-k22ft/must-gather-tjd7k"] Dec 06 10:43:50 crc kubenswrapper[4895]: I1206 10:43:50.394634 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-k22ft_must-gather-tjd7k_381f09e1-8cdc-4779-b21b-3fe3605901a1/copy/0.log" Dec 06 10:43:50 crc kubenswrapper[4895]: I1206 10:43:50.395306 4895 generic.go:334] "Generic (PLEG): container finished" 
podID="381f09e1-8cdc-4779-b21b-3fe3605901a1" containerID="f732d70219dd6f81feb7afa955d8bb575616e58b231bbfb24cf9bf914b66a5ab" exitCode=143 Dec 06 10:43:50 crc kubenswrapper[4895]: I1206 10:43:50.581019 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-k22ft_must-gather-tjd7k_381f09e1-8cdc-4779-b21b-3fe3605901a1/copy/0.log" Dec 06 10:43:50 crc kubenswrapper[4895]: I1206 10:43:50.581597 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k22ft/must-gather-tjd7k" Dec 06 10:43:50 crc kubenswrapper[4895]: I1206 10:43:50.691682 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/381f09e1-8cdc-4779-b21b-3fe3605901a1-must-gather-output\") pod \"381f09e1-8cdc-4779-b21b-3fe3605901a1\" (UID: \"381f09e1-8cdc-4779-b21b-3fe3605901a1\") " Dec 06 10:43:50 crc kubenswrapper[4895]: I1206 10:43:50.692072 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgn65\" (UniqueName: \"kubernetes.io/projected/381f09e1-8cdc-4779-b21b-3fe3605901a1-kube-api-access-jgn65\") pod \"381f09e1-8cdc-4779-b21b-3fe3605901a1\" (UID: \"381f09e1-8cdc-4779-b21b-3fe3605901a1\") " Dec 06 10:43:50 crc kubenswrapper[4895]: I1206 10:43:50.698365 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/381f09e1-8cdc-4779-b21b-3fe3605901a1-kube-api-access-jgn65" (OuterVolumeSpecName: "kube-api-access-jgn65") pod "381f09e1-8cdc-4779-b21b-3fe3605901a1" (UID: "381f09e1-8cdc-4779-b21b-3fe3605901a1"). InnerVolumeSpecName "kube-api-access-jgn65". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:43:50 crc kubenswrapper[4895]: I1206 10:43:50.794547 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgn65\" (UniqueName: \"kubernetes.io/projected/381f09e1-8cdc-4779-b21b-3fe3605901a1-kube-api-access-jgn65\") on node \"crc\" DevicePath \"\"" Dec 06 10:43:50 crc kubenswrapper[4895]: I1206 10:43:50.964962 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/381f09e1-8cdc-4779-b21b-3fe3605901a1-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "381f09e1-8cdc-4779-b21b-3fe3605901a1" (UID: "381f09e1-8cdc-4779-b21b-3fe3605901a1"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:43:51 crc kubenswrapper[4895]: I1206 10:43:51.000939 4895 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/381f09e1-8cdc-4779-b21b-3fe3605901a1-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 06 10:43:51 crc kubenswrapper[4895]: I1206 10:43:51.411120 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-k22ft_must-gather-tjd7k_381f09e1-8cdc-4779-b21b-3fe3605901a1/copy/0.log" Dec 06 10:43:51 crc kubenswrapper[4895]: I1206 10:43:51.412442 4895 scope.go:117] "RemoveContainer" containerID="f732d70219dd6f81feb7afa955d8bb575616e58b231bbfb24cf9bf914b66a5ab" Dec 06 10:43:51 crc kubenswrapper[4895]: I1206 10:43:51.412799 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k22ft/must-gather-tjd7k" Dec 06 10:43:51 crc kubenswrapper[4895]: I1206 10:43:51.436143 4895 scope.go:117] "RemoveContainer" containerID="15ce36aa6e851e4f72f93faa37aea3a97837a9a2c4995deae482b38d1e5c7cd6" Dec 06 10:43:52 crc kubenswrapper[4895]: I1206 10:43:52.064088 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="381f09e1-8cdc-4779-b21b-3fe3605901a1" path="/var/lib/kubelet/pods/381f09e1-8cdc-4779-b21b-3fe3605901a1/volumes" Dec 06 10:44:18 crc kubenswrapper[4895]: I1206 10:44:18.866488 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xh6kh"] Dec 06 10:44:18 crc kubenswrapper[4895]: E1206 10:44:18.867515 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9fae02-95a0-41a3-ad09-dd390c133a31" containerName="extract-content" Dec 06 10:44:18 crc kubenswrapper[4895]: I1206 10:44:18.867534 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9fae02-95a0-41a3-ad09-dd390c133a31" containerName="extract-content" Dec 06 10:44:18 crc kubenswrapper[4895]: E1206 10:44:18.867556 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="381f09e1-8cdc-4779-b21b-3fe3605901a1" containerName="gather" Dec 06 10:44:18 crc kubenswrapper[4895]: I1206 10:44:18.867563 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="381f09e1-8cdc-4779-b21b-3fe3605901a1" containerName="gather" Dec 06 10:44:18 crc kubenswrapper[4895]: E1206 10:44:18.867586 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9fae02-95a0-41a3-ad09-dd390c133a31" containerName="extract-utilities" Dec 06 10:44:18 crc kubenswrapper[4895]: I1206 10:44:18.867593 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9fae02-95a0-41a3-ad09-dd390c133a31" containerName="extract-utilities" Dec 06 10:44:18 crc kubenswrapper[4895]: E1206 10:44:18.867614 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9fae02-95a0-41a3-ad09-dd390c133a31" containerName="registry-server" Dec 06 10:44:18 crc kubenswrapper[4895]: I1206 10:44:18.867620 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9fae02-95a0-41a3-ad09-dd390c133a31" containerName="registry-server" Dec 06 10:44:18 crc kubenswrapper[4895]: E1206 10:44:18.867637 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="381f09e1-8cdc-4779-b21b-3fe3605901a1" containerName="copy" Dec 06 10:44:18 crc kubenswrapper[4895]: I1206 10:44:18.867642 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="381f09e1-8cdc-4779-b21b-3fe3605901a1" containerName="copy" Dec 06 10:44:18 crc kubenswrapper[4895]: I1206 10:44:18.867906 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="381f09e1-8cdc-4779-b21b-3fe3605901a1" containerName="copy" Dec 06 10:44:18 crc kubenswrapper[4895]: I1206 10:44:18.867926 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="381f09e1-8cdc-4779-b21b-3fe3605901a1" containerName="gather" Dec 06 10:44:18 crc kubenswrapper[4895]: I1206 10:44:18.867942 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9fae02-95a0-41a3-ad09-dd390c133a31" containerName="registry-server" Dec 06 10:44:18 crc kubenswrapper[4895]: I1206 10:44:18.869531 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xh6kh" Dec 06 10:44:18 crc kubenswrapper[4895]: I1206 10:44:18.885425 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xh6kh"] Dec 06 10:44:18 crc kubenswrapper[4895]: I1206 10:44:18.926457 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6nnw\" (UniqueName: \"kubernetes.io/projected/f525c188-f0a5-4735-8f13-8590028c4813-kube-api-access-v6nnw\") pod \"redhat-operators-xh6kh\" (UID: \"f525c188-f0a5-4735-8f13-8590028c4813\") " pod="openshift-marketplace/redhat-operators-xh6kh" Dec 06 10:44:18 crc kubenswrapper[4895]: I1206 10:44:18.926675 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f525c188-f0a5-4735-8f13-8590028c4813-catalog-content\") pod \"redhat-operators-xh6kh\" (UID: \"f525c188-f0a5-4735-8f13-8590028c4813\") " pod="openshift-marketplace/redhat-operators-xh6kh" Dec 06 10:44:18 crc kubenswrapper[4895]: I1206 10:44:18.926716 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f525c188-f0a5-4735-8f13-8590028c4813-utilities\") pod \"redhat-operators-xh6kh\" (UID: \"f525c188-f0a5-4735-8f13-8590028c4813\") " pod="openshift-marketplace/redhat-operators-xh6kh" Dec 06 10:44:19 crc kubenswrapper[4895]: I1206 10:44:19.028946 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6nnw\" (UniqueName: \"kubernetes.io/projected/f525c188-f0a5-4735-8f13-8590028c4813-kube-api-access-v6nnw\") pod \"redhat-operators-xh6kh\" (UID: \"f525c188-f0a5-4735-8f13-8590028c4813\") " pod="openshift-marketplace/redhat-operators-xh6kh" Dec 06 10:44:19 crc kubenswrapper[4895]: I1206 10:44:19.029045 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f525c188-f0a5-4735-8f13-8590028c4813-catalog-content\") pod \"redhat-operators-xh6kh\" (UID: \"f525c188-f0a5-4735-8f13-8590028c4813\") " pod="openshift-marketplace/redhat-operators-xh6kh" Dec 06 10:44:19 crc kubenswrapper[4895]: I1206 10:44:19.029069 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f525c188-f0a5-4735-8f13-8590028c4813-utilities\") pod \"redhat-operators-xh6kh\" (UID: \"f525c188-f0a5-4735-8f13-8590028c4813\") " pod="openshift-marketplace/redhat-operators-xh6kh" Dec 06 10:44:19 crc kubenswrapper[4895]: I1206 10:44:19.029733 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f525c188-f0a5-4735-8f13-8590028c4813-catalog-content\") pod \"redhat-operators-xh6kh\" (UID: \"f525c188-f0a5-4735-8f13-8590028c4813\") " pod="openshift-marketplace/redhat-operators-xh6kh" Dec 06 10:44:19 crc kubenswrapper[4895]: I1206 10:44:19.029766 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f525c188-f0a5-4735-8f13-8590028c4813-utilities\") pod \"redhat-operators-xh6kh\" (UID: \"f525c188-f0a5-4735-8f13-8590028c4813\") " pod="openshift-marketplace/redhat-operators-xh6kh" Dec 06 10:44:19 crc kubenswrapper[4895]: I1206 10:44:19.057936 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v6nnw\" (UniqueName: \"kubernetes.io/projected/f525c188-f0a5-4735-8f13-8590028c4813-kube-api-access-v6nnw\") pod \"redhat-operators-xh6kh\" (UID: \"f525c188-f0a5-4735-8f13-8590028c4813\") " pod="openshift-marketplace/redhat-operators-xh6kh" Dec 06 10:44:19 crc kubenswrapper[4895]: I1206 10:44:19.199960 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xh6kh" Dec 06 10:44:19 crc kubenswrapper[4895]: I1206 10:44:19.752382 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xh6kh"] Dec 06 10:44:20 crc kubenswrapper[4895]: I1206 10:44:20.706595 4895 generic.go:334] "Generic (PLEG): container finished" podID="f525c188-f0a5-4735-8f13-8590028c4813" containerID="80969f9ce415a2e5ff5946eea4310928d7dc7aa52383cc2bf83fdf794a94f32a" exitCode=0 Dec 06 10:44:20 crc kubenswrapper[4895]: I1206 10:44:20.706717 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh6kh" event={"ID":"f525c188-f0a5-4735-8f13-8590028c4813","Type":"ContainerDied","Data":"80969f9ce415a2e5ff5946eea4310928d7dc7aa52383cc2bf83fdf794a94f32a"} Dec 06 10:44:20 crc kubenswrapper[4895]: I1206 10:44:20.707053 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh6kh" event={"ID":"f525c188-f0a5-4735-8f13-8590028c4813","Type":"ContainerStarted","Data":"e9f2112abeba8e810cdaafa981e3dd1a1d97853bc032a6b2206a909de78c1a0e"} Dec 06 10:44:21 crc kubenswrapper[4895]: I1206 10:44:21.720040 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh6kh" event={"ID":"f525c188-f0a5-4735-8f13-8590028c4813","Type":"ContainerStarted","Data":"cff708b7205e19cbf19d40b9493d5a3f2e7c46b72be0d9cd88615b2d7acd786f"} Dec 06 10:44:23 crc kubenswrapper[4895]: I1206 10:44:23.740871 4895 generic.go:334] "Generic (PLEG): container finished" podID="f525c188-f0a5-4735-8f13-8590028c4813" containerID="cff708b7205e19cbf19d40b9493d5a3f2e7c46b72be0d9cd88615b2d7acd786f" exitCode=0 Dec 06 10:44:23 crc kubenswrapper[4895]: I1206 10:44:23.740907 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh6kh" event={"ID":"f525c188-f0a5-4735-8f13-8590028c4813","Type":"ContainerDied","Data":"cff708b7205e19cbf19d40b9493d5a3f2e7c46b72be0d9cd88615b2d7acd786f"} Dec 06 10:44:24 crc kubenswrapper[4895]: I1206 10:44:24.754067 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh6kh" event={"ID":"f525c188-f0a5-4735-8f13-8590028c4813","Type":"ContainerStarted","Data":"d4aa7c1080b51095ac604c94f00a2f00c6fdbee5a2bc5a17ddd1450dd6e0fd14"} Dec 06 10:44:24 crc kubenswrapper[4895]: I1206 10:44:24.780466 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xh6kh" podStartSLOduration=3.353664489 podStartE2EDuration="6.780419268s" podCreationTimestamp="2025-12-06 10:44:18 +0000 UTC" firstStartedPulling="2025-12-06 10:44:20.708838457 +0000 UTC m=+13623.110227327" lastFinishedPulling="2025-12-06 10:44:24.135593236 +0000 UTC m=+13626.536982106" observedRunningTime="2025-12-06 10:44:24.77826987 +0000 UTC m=+13627.179658760" watchObservedRunningTime="2025-12-06 10:44:24.780419268 +0000 UTC m=+13627.181808138" Dec 06 10:44:29 crc kubenswrapper[4895]: I1206 10:44:29.201190 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xh6kh" 
Dec 06 10:44:29 crc kubenswrapper[4895]: I1206 10:44:29.201964 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xh6kh" Dec 06 10:44:30 crc kubenswrapper[4895]: I1206 10:44:30.248098 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xh6kh" podUID="f525c188-f0a5-4735-8f13-8590028c4813" containerName="registry-server" probeResult="failure" output=< Dec 06 10:44:30 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Dec 06 10:44:30 crc kubenswrapper[4895]: > Dec 06 10:44:39 crc kubenswrapper[4895]: I1206 10:44:39.271579 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xh6kh" Dec 06 10:44:39 crc kubenswrapper[4895]: I1206 10:44:39.341895 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xh6kh" Dec 06 10:44:39 crc kubenswrapper[4895]: I1206 10:44:39.517732 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xh6kh"] Dec 06 10:44:40 crc kubenswrapper[4895]: I1206 10:44:40.936774 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xh6kh" podUID="f525c188-f0a5-4735-8f13-8590028c4813" containerName="registry-server" containerID="cri-o://d4aa7c1080b51095ac604c94f00a2f00c6fdbee5a2bc5a17ddd1450dd6e0fd14" gracePeriod=2 Dec 06 10:44:41 crc kubenswrapper[4895]: I1206 10:44:41.440404 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xh6kh" Dec 06 10:44:41 crc kubenswrapper[4895]: I1206 10:44:41.628003 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6nnw\" (UniqueName: \"kubernetes.io/projected/f525c188-f0a5-4735-8f13-8590028c4813-kube-api-access-v6nnw\") pod \"f525c188-f0a5-4735-8f13-8590028c4813\" (UID: \"f525c188-f0a5-4735-8f13-8590028c4813\") " Dec 06 10:44:41 crc kubenswrapper[4895]: I1206 10:44:41.628106 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f525c188-f0a5-4735-8f13-8590028c4813-utilities\") pod \"f525c188-f0a5-4735-8f13-8590028c4813\" (UID: \"f525c188-f0a5-4735-8f13-8590028c4813\") " Dec 06 10:44:41 crc kubenswrapper[4895]: I1206 10:44:41.628168 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f525c188-f0a5-4735-8f13-8590028c4813-catalog-content\") pod \"f525c188-f0a5-4735-8f13-8590028c4813\" (UID: \"f525c188-f0a5-4735-8f13-8590028c4813\") " Dec 06 10:44:41 crc kubenswrapper[4895]: I1206 10:44:41.629445 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f525c188-f0a5-4735-8f13-8590028c4813-utilities" (OuterVolumeSpecName: "utilities") pod "f525c188-f0a5-4735-8f13-8590028c4813" (UID: "f525c188-f0a5-4735-8f13-8590028c4813"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:44:41 crc kubenswrapper[4895]: I1206 10:44:41.634579 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f525c188-f0a5-4735-8f13-8590028c4813-kube-api-access-v6nnw" (OuterVolumeSpecName: "kube-api-access-v6nnw") pod "f525c188-f0a5-4735-8f13-8590028c4813" (UID: "f525c188-f0a5-4735-8f13-8590028c4813"). InnerVolumeSpecName "kube-api-access-v6nnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:44:41 crc kubenswrapper[4895]: I1206 10:44:41.730713 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6nnw\" (UniqueName: \"kubernetes.io/projected/f525c188-f0a5-4735-8f13-8590028c4813-kube-api-access-v6nnw\") on node \"crc\" DevicePath \"\"" Dec 06 10:44:41 crc kubenswrapper[4895]: I1206 10:44:41.730750 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f525c188-f0a5-4735-8f13-8590028c4813-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 10:44:41 crc kubenswrapper[4895]: I1206 10:44:41.747990 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f525c188-f0a5-4735-8f13-8590028c4813-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f525c188-f0a5-4735-8f13-8590028c4813" (UID: "f525c188-f0a5-4735-8f13-8590028c4813"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:44:41 crc kubenswrapper[4895]: I1206 10:44:41.832525 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f525c188-f0a5-4735-8f13-8590028c4813-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 10:44:41 crc kubenswrapper[4895]: I1206 10:44:41.958821 4895 generic.go:334] "Generic (PLEG): container finished" podID="f525c188-f0a5-4735-8f13-8590028c4813" containerID="d4aa7c1080b51095ac604c94f00a2f00c6fdbee5a2bc5a17ddd1450dd6e0fd14" exitCode=0 Dec 06 10:44:41 crc kubenswrapper[4895]: I1206 10:44:41.958934 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xh6kh" Dec 06 10:44:41 crc kubenswrapper[4895]: I1206 10:44:41.958938 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh6kh" event={"ID":"f525c188-f0a5-4735-8f13-8590028c4813","Type":"ContainerDied","Data":"d4aa7c1080b51095ac604c94f00a2f00c6fdbee5a2bc5a17ddd1450dd6e0fd14"} Dec 06 10:44:41 crc kubenswrapper[4895]: I1206 10:44:41.961146 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh6kh" event={"ID":"f525c188-f0a5-4735-8f13-8590028c4813","Type":"ContainerDied","Data":"e9f2112abeba8e810cdaafa981e3dd1a1d97853bc032a6b2206a909de78c1a0e"} Dec 06 10:44:41 crc kubenswrapper[4895]: I1206 10:44:41.961177 4895 scope.go:117] "RemoveContainer" containerID="d4aa7c1080b51095ac604c94f00a2f00c6fdbee5a2bc5a17ddd1450dd6e0fd14" Dec 06 10:44:41 crc kubenswrapper[4895]: I1206 10:44:41.993894 4895 scope.go:117] "RemoveContainer" containerID="cff708b7205e19cbf19d40b9493d5a3f2e7c46b72be0d9cd88615b2d7acd786f" Dec 06 10:44:42 crc kubenswrapper[4895]: I1206 10:44:42.007551 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xh6kh"] Dec 06 10:44:42 crc kubenswrapper[4895]: I1206 10:44:42.019798 4895 scope.go:117] "RemoveContainer" containerID="80969f9ce415a2e5ff5946eea4310928d7dc7aa52383cc2bf83fdf794a94f32a" Dec 06 10:44:42 crc kubenswrapper[4895]: I1206 10:44:42.021522 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xh6kh"] Dec 06 10:44:42 crc kubenswrapper[4895]: I1206 10:44:42.066119 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f525c188-f0a5-4735-8f13-8590028c4813" path="/var/lib/kubelet/pods/f525c188-f0a5-4735-8f13-8590028c4813/volumes" Dec 06 10:44:42 crc kubenswrapper[4895]: I1206 10:44:42.081188 4895 scope.go:117] "RemoveContainer" containerID="d4aa7c1080b51095ac604c94f00a2f00c6fdbee5a2bc5a17ddd1450dd6e0fd14" Dec 06 10:44:42 crc kubenswrapper[4895]: E1206 10:44:42.081692 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4aa7c1080b51095ac604c94f00a2f00c6fdbee5a2bc5a17ddd1450dd6e0fd14\": container with ID starting with d4aa7c1080b51095ac604c94f00a2f00c6fdbee5a2bc5a17ddd1450dd6e0fd14 not found: ID does not exist" containerID="d4aa7c1080b51095ac604c94f00a2f00c6fdbee5a2bc5a17ddd1450dd6e0fd14" Dec 06 10:44:42 crc kubenswrapper[4895]: I1206 10:44:42.081733 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4aa7c1080b51095ac604c94f00a2f00c6fdbee5a2bc5a17ddd1450dd6e0fd14"} err="failed to get container status \"d4aa7c1080b51095ac604c94f00a2f00c6fdbee5a2bc5a17ddd1450dd6e0fd14\": rpc error: code = NotFound desc = could not find container \"d4aa7c1080b51095ac604c94f00a2f00c6fdbee5a2bc5a17ddd1450dd6e0fd14\": container with ID starting with d4aa7c1080b51095ac604c94f00a2f00c6fdbee5a2bc5a17ddd1450dd6e0fd14 not found: ID does not exist" Dec 06 10:44:42 crc kubenswrapper[4895]: I1206 10:44:42.081759 4895 scope.go:117] "RemoveContainer" containerID="cff708b7205e19cbf19d40b9493d5a3f2e7c46b72be0d9cd88615b2d7acd786f" Dec 06 10:44:42 crc kubenswrapper[4895]: E1206 10:44:42.082299 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cff708b7205e19cbf19d40b9493d5a3f2e7c46b72be0d9cd88615b2d7acd786f\": container with ID starting with 
cff708b7205e19cbf19d40b9493d5a3f2e7c46b72be0d9cd88615b2d7acd786f not found: ID does not exist" containerID="cff708b7205e19cbf19d40b9493d5a3f2e7c46b72be0d9cd88615b2d7acd786f" Dec 06 10:44:42 crc kubenswrapper[4895]: I1206 10:44:42.082333 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cff708b7205e19cbf19d40b9493d5a3f2e7c46b72be0d9cd88615b2d7acd786f"} err="failed to get container status \"cff708b7205e19cbf19d40b9493d5a3f2e7c46b72be0d9cd88615b2d7acd786f\": rpc error: code = NotFound desc = could not find container \"cff708b7205e19cbf19d40b9493d5a3f2e7c46b72be0d9cd88615b2d7acd786f\": container with ID starting with cff708b7205e19cbf19d40b9493d5a3f2e7c46b72be0d9cd88615b2d7acd786f not found: ID does not exist" Dec 06 10:44:42 crc kubenswrapper[4895]: I1206 10:44:42.082349 4895 scope.go:117] "RemoveContainer" containerID="80969f9ce415a2e5ff5946eea4310928d7dc7aa52383cc2bf83fdf794a94f32a" Dec 06 10:44:42 crc kubenswrapper[4895]: E1206 10:44:42.082720 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80969f9ce415a2e5ff5946eea4310928d7dc7aa52383cc2bf83fdf794a94f32a\": container with ID starting with 80969f9ce415a2e5ff5946eea4310928d7dc7aa52383cc2bf83fdf794a94f32a not found: ID does not exist" containerID="80969f9ce415a2e5ff5946eea4310928d7dc7aa52383cc2bf83fdf794a94f32a" Dec 06 10:44:42 crc kubenswrapper[4895]: I1206 10:44:42.082741 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80969f9ce415a2e5ff5946eea4310928d7dc7aa52383cc2bf83fdf794a94f32a"} err="failed to get container status \"80969f9ce415a2e5ff5946eea4310928d7dc7aa52383cc2bf83fdf794a94f32a\": rpc error: code = NotFound desc = could not find container \"80969f9ce415a2e5ff5946eea4310928d7dc7aa52383cc2bf83fdf794a94f32a\": container with ID starting with 80969f9ce415a2e5ff5946eea4310928d7dc7aa52383cc2bf83fdf794a94f32a not found: ID does not exist" Dec 06 10:44:59 crc kubenswrapper[4895]: I1206 10:44:59.695304 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 10:44:59 crc kubenswrapper[4895]: I1206 10:44:59.696613 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 10:45:00 crc kubenswrapper[4895]: I1206 10:45:00.184843 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h"] Dec 06 10:45:00 crc kubenswrapper[4895]: E1206 10:45:00.185428 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f525c188-f0a5-4735-8f13-8590028c4813" containerName="extract-utilities" Dec 06 10:45:00 crc kubenswrapper[4895]: I1206 10:45:00.185445 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f525c188-f0a5-4735-8f13-8590028c4813" containerName="extract-utilities" Dec 06 10:45:00 crc kubenswrapper[4895]: E1206 10:45:00.185464 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f525c188-f0a5-4735-8f13-8590028c4813" containerName="registry-server" Dec 06 
10:45:00 crc kubenswrapper[4895]: I1206 10:45:00.185524 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f525c188-f0a5-4735-8f13-8590028c4813" containerName="registry-server" Dec 06 10:45:00 crc kubenswrapper[4895]: E1206 10:45:00.185571 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f525c188-f0a5-4735-8f13-8590028c4813" containerName="extract-content" Dec 06 10:45:00 crc kubenswrapper[4895]: I1206 10:45:00.185581 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f525c188-f0a5-4735-8f13-8590028c4813" containerName="extract-content" Dec 06 10:45:00 crc kubenswrapper[4895]: I1206 10:45:00.185870 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f525c188-f0a5-4735-8f13-8590028c4813" containerName="registry-server" Dec 06 10:45:00 crc kubenswrapper[4895]: I1206 10:45:00.186837 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h" Dec 06 10:45:00 crc kubenswrapper[4895]: I1206 10:45:00.188922 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 10:45:00 crc kubenswrapper[4895]: I1206 10:45:00.190841 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 10:45:00 crc kubenswrapper[4895]: I1206 10:45:00.214020 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h"] Dec 06 10:45:00 crc kubenswrapper[4895]: I1206 10:45:00.233061 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c-config-volume\") pod \"collect-profiles-29416965-qkb4h\" (UID: \"4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h" Dec 06 10:45:00 crc kubenswrapper[4895]: I1206 10:45:00.233172 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9zbz\" (UniqueName: \"kubernetes.io/projected/4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c-kube-api-access-w9zbz\") pod \"collect-profiles-29416965-qkb4h\" (UID: \"4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h" Dec 06 10:45:00 crc kubenswrapper[4895]: I1206 10:45:00.233244 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c-secret-volume\") pod \"collect-profiles-29416965-qkb4h\" (UID: \"4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h" Dec 06 10:45:00 crc kubenswrapper[4895]: I1206 10:45:00.335534 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c-config-volume\") pod \"collect-profiles-29416965-qkb4h\" (UID: \"4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h" Dec 06 10:45:00 crc kubenswrapper[4895]: I1206 10:45:00.336510 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c-config-volume\") pod \"collect-profiles-29416965-qkb4h\" (UID: \"4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h" Dec 06 10:45:00 crc kubenswrapper[4895]: I1206 10:45:00.336648 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9zbz\" (UniqueName: \"kubernetes.io/projected/4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c-kube-api-access-w9zbz\") pod \"collect-profiles-29416965-qkb4h\" (UID: \"4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h" Dec 06 10:45:00 crc kubenswrapper[4895]: I1206 10:45:00.337045 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c-secret-volume\") pod \"collect-profiles-29416965-qkb4h\" (UID: \"4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h" Dec 06 10:45:00 crc kubenswrapper[4895]: I1206 10:45:00.353212 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c-secret-volume\") pod \"collect-profiles-29416965-qkb4h\" (UID: \"4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h" Dec 06 10:45:00 crc kubenswrapper[4895]: I1206 10:45:00.365332 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9zbz\" (UniqueName: \"kubernetes.io/projected/4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c-kube-api-access-w9zbz\") pod \"collect-profiles-29416965-qkb4h\" (UID: \"4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h" Dec 06 10:45:00 crc kubenswrapper[4895]: I1206 10:45:00.515979 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h" Dec 06 10:45:00 crc kubenswrapper[4895]: I1206 10:45:00.976037 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h"] Dec 06 10:45:01 crc kubenswrapper[4895]: I1206 10:45:01.233622 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h" event={"ID":"4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c","Type":"ContainerStarted","Data":"1eb0d01f6344cbcbbaf33c624a7a4472246a6a79dfea1f4f4937268ad81bfb27"} Dec 06 10:45:01 crc kubenswrapper[4895]: I1206 10:45:01.233891 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h" event={"ID":"4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c","Type":"ContainerStarted","Data":"c10b2213fd00cd203ba8f56d6e0b52cb0bbc97b7cd67d916e719d8259b9675d9"} Dec 06 10:45:01 crc kubenswrapper[4895]: I1206 10:45:01.250546 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h" podStartSLOduration=1.250523697 podStartE2EDuration="1.250523697s" podCreationTimestamp="2025-12-06 10:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 10:45:01.247020313 +0000 UTC m=+13663.648409183" watchObservedRunningTime="2025-12-06 10:45:01.250523697 +0000 UTC m=+13663.651912567" Dec 06 10:45:02 crc kubenswrapper[4895]: I1206 10:45:02.244214 4895 generic.go:334] "Generic (PLEG): container finished" podID="4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c" containerID="1eb0d01f6344cbcbbaf33c624a7a4472246a6a79dfea1f4f4937268ad81bfb27" exitCode=0 Dec 06 10:45:02 crc kubenswrapper[4895]: I1206 10:45:02.244321 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h" event={"ID":"4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c","Type":"ContainerDied","Data":"1eb0d01f6344cbcbbaf33c624a7a4472246a6a79dfea1f4f4937268ad81bfb27"} Dec 06 10:45:03 crc kubenswrapper[4895]: I1206 10:45:03.653100 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h" Dec 06 10:45:03 crc kubenswrapper[4895]: I1206 10:45:03.710435 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c-secret-volume\") pod \"4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c\" (UID: \"4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c\") " Dec 06 10:45:03 crc kubenswrapper[4895]: I1206 10:45:03.710557 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9zbz\" (UniqueName: \"kubernetes.io/projected/4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c-kube-api-access-w9zbz\") pod \"4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c\" (UID: \"4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c\") " Dec 06 10:45:03 crc kubenswrapper[4895]: I1206 10:45:03.710645 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c-config-volume\") pod \"4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c\" (UID: \"4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c\") " Dec 06 10:45:03 crc kubenswrapper[4895]: I1206 10:45:03.711319 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c-config-volume" (OuterVolumeSpecName: "config-volume") pod "4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c" (UID: "4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 10:45:03 crc kubenswrapper[4895]: I1206 10:45:03.719714 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c" (UID: "4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 10:45:03 crc kubenswrapper[4895]: I1206 10:45:03.719849 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c-kube-api-access-w9zbz" (OuterVolumeSpecName: "kube-api-access-w9zbz") pod "4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c" (UID: "4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c"). InnerVolumeSpecName "kube-api-access-w9zbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:45:03 crc kubenswrapper[4895]: I1206 10:45:03.813501 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 10:45:03 crc kubenswrapper[4895]: I1206 10:45:03.813532 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 10:45:03 crc kubenswrapper[4895]: I1206 10:45:03.813543 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9zbz\" (UniqueName: \"kubernetes.io/projected/4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c-kube-api-access-w9zbz\") on node \"crc\" DevicePath \"\"" Dec 06 10:45:04 crc kubenswrapper[4895]: I1206 10:45:04.265646 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h" event={"ID":"4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c","Type":"ContainerDied","Data":"c10b2213fd00cd203ba8f56d6e0b52cb0bbc97b7cd67d916e719d8259b9675d9"} Dec 06 10:45:04 crc kubenswrapper[4895]: I1206 10:45:04.265695 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c10b2213fd00cd203ba8f56d6e0b52cb0bbc97b7cd67d916e719d8259b9675d9" Dec 06 10:45:04 crc kubenswrapper[4895]: I1206 10:45:04.265746 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416965-qkb4h" Dec 06 10:45:04 crc kubenswrapper[4895]: I1206 10:45:04.317323 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k"] Dec 06 10:45:04 crc kubenswrapper[4895]: I1206 10:45:04.328334 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416920-67p5k"] Dec 06 10:45:06 crc kubenswrapper[4895]: I1206 10:45:06.064264 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85af027c-5f8a-4878-b931-f62903168109" path="/var/lib/kubelet/pods/85af027c-5f8a-4878-b931-f62903168109/volumes" Dec 06 10:45:17 crc kubenswrapper[4895]: I1206 10:45:17.008709 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q2n7l"] Dec 06 10:45:17 crc kubenswrapper[4895]: E1206 10:45:17.011467 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c" containerName="collect-profiles" Dec 06 10:45:17 crc kubenswrapper[4895]: I1206 10:45:17.015428 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c" containerName="collect-profiles" Dec 06 10:45:17 crc kubenswrapper[4895]: I1206 10:45:17.016122 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dbc37eb-cd6c-4c94-bb47-d62e8dcc816c" containerName="collect-profiles" Dec 06 10:45:17 crc kubenswrapper[4895]: I1206 10:45:17.018201 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q2n7l" Dec 06 10:45:17 crc kubenswrapper[4895]: I1206 10:45:17.031599 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q2n7l"] Dec 06 10:45:17 crc kubenswrapper[4895]: I1206 10:45:17.087709 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffr57\" (UniqueName: \"kubernetes.io/projected/289de089-8289-4f2d-a56f-e70341d523a7-kube-api-access-ffr57\") pod \"certified-operators-q2n7l\" (UID: \"289de089-8289-4f2d-a56f-e70341d523a7\") " pod="openshift-marketplace/certified-operators-q2n7l" Dec 06 10:45:17 crc kubenswrapper[4895]: I1206 10:45:17.087833 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/289de089-8289-4f2d-a56f-e70341d523a7-catalog-content\") pod \"certified-operators-q2n7l\" (UID: \"289de089-8289-4f2d-a56f-e70341d523a7\") " pod="openshift-marketplace/certified-operators-q2n7l" Dec 06 10:45:17 crc kubenswrapper[4895]: I1206 10:45:17.087863 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/289de089-8289-4f2d-a56f-e70341d523a7-utilities\") pod \"certified-operators-q2n7l\" (UID: \"289de089-8289-4f2d-a56f-e70341d523a7\") " pod="openshift-marketplace/certified-operators-q2n7l" Dec 06 10:45:17 crc kubenswrapper[4895]: I1206 10:45:17.190078 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/289de089-8289-4f2d-a56f-e70341d523a7-catalog-content\") pod \"certified-operators-q2n7l\" (UID: \"289de089-8289-4f2d-a56f-e70341d523a7\") " pod="openshift-marketplace/certified-operators-q2n7l" Dec 06 10:45:17 crc kubenswrapper[4895]: I1206 10:45:17.190127 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/289de089-8289-4f2d-a56f-e70341d523a7-utilities\") pod \"certified-operators-q2n7l\" (UID: \"289de089-8289-4f2d-a56f-e70341d523a7\") " pod="openshift-marketplace/certified-operators-q2n7l" Dec 06 10:45:17 crc kubenswrapper[4895]: I1206 10:45:17.190262 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffr57\" (UniqueName: \"kubernetes.io/projected/289de089-8289-4f2d-a56f-e70341d523a7-kube-api-access-ffr57\") pod \"certified-operators-q2n7l\" (UID: \"289de089-8289-4f2d-a56f-e70341d523a7\") " pod="openshift-marketplace/certified-operators-q2n7l" Dec 06 10:45:17 crc kubenswrapper[4895]: I1206 10:45:17.190764 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/289de089-8289-4f2d-a56f-e70341d523a7-catalog-content\") pod \"certified-operators-q2n7l\" (UID: \"289de089-8289-4f2d-a56f-e70341d523a7\") " pod="openshift-marketplace/certified-operators-q2n7l" Dec 06 10:45:17 crc kubenswrapper[4895]: I1206 10:45:17.190803 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/289de089-8289-4f2d-a56f-e70341d523a7-utilities\") pod \"certified-operators-q2n7l\" (UID: \"289de089-8289-4f2d-a56f-e70341d523a7\") " pod="openshift-marketplace/certified-operators-q2n7l" Dec 06 10:45:17 crc kubenswrapper[4895]: I1206 10:45:17.233555 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ffr57\" (UniqueName: \"kubernetes.io/projected/289de089-8289-4f2d-a56f-e70341d523a7-kube-api-access-ffr57\") pod \"certified-operators-q2n7l\" (UID: \"289de089-8289-4f2d-a56f-e70341d523a7\") " pod="openshift-marketplace/certified-operators-q2n7l" Dec 06 10:45:17 crc kubenswrapper[4895]: I1206 10:45:17.347731 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q2n7l" Dec 06 10:45:17 crc kubenswrapper[4895]: I1206 10:45:17.870499 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q2n7l"] Dec 06 10:45:18 crc kubenswrapper[4895]: I1206 10:45:18.413402 4895 generic.go:334] "Generic (PLEG): container finished" podID="289de089-8289-4f2d-a56f-e70341d523a7" containerID="a9f786432fae2e4bd1ac749a9992e79aec63daf795feaec0f5d60b9fa6a0bcd9" exitCode=0 Dec 06 10:45:18 crc kubenswrapper[4895]: I1206 10:45:18.413620 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2n7l" event={"ID":"289de089-8289-4f2d-a56f-e70341d523a7","Type":"ContainerDied","Data":"a9f786432fae2e4bd1ac749a9992e79aec63daf795feaec0f5d60b9fa6a0bcd9"} Dec 06 10:45:18 crc kubenswrapper[4895]: I1206 10:45:18.413815 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2n7l" event={"ID":"289de089-8289-4f2d-a56f-e70341d523a7","Type":"ContainerStarted","Data":"b75b02dbf7d4418b83bbeb6b4c50cc31f846a4cdd5114c8e553dd6bbfc79a292"} Dec 06 10:45:18 crc kubenswrapper[4895]: I1206 10:45:18.415321 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 10:45:19 crc kubenswrapper[4895]: I1206 10:45:19.428404 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2n7l" event={"ID":"289de089-8289-4f2d-a56f-e70341d523a7","Type":"ContainerStarted","Data":"7ae54c1a430d1621523927e74ca1e11f2503fc5bd400fe55c274489aa568a9c2"} Dec 06 10:45:20 crc kubenswrapper[4895]: I1206 10:45:20.441955 4895 generic.go:334] "Generic (PLEG): container finished" podID="289de089-8289-4f2d-a56f-e70341d523a7" containerID="7ae54c1a430d1621523927e74ca1e11f2503fc5bd400fe55c274489aa568a9c2" exitCode=0 Dec 06 10:45:20 crc kubenswrapper[4895]: I1206 10:45:20.442002 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2n7l" event={"ID":"289de089-8289-4f2d-a56f-e70341d523a7","Type":"ContainerDied","Data":"7ae54c1a430d1621523927e74ca1e11f2503fc5bd400fe55c274489aa568a9c2"} Dec 06 10:45:21 crc kubenswrapper[4895]: I1206 10:45:21.456920 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2n7l" event={"ID":"289de089-8289-4f2d-a56f-e70341d523a7","Type":"ContainerStarted","Data":"8794b4bc4b08bc9e544cf70173e5a30f0d19d6c2b7a490b86a3840fdea764bf7"} Dec 06 10:45:21 crc kubenswrapper[4895]: I1206 10:45:21.478374 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q2n7l" podStartSLOduration=2.932235861 podStartE2EDuration="5.478349711s" podCreationTimestamp="2025-12-06 10:45:16 +0000 UTC" firstStartedPulling="2025-12-06 10:45:18.414999964 +0000 UTC m=+13680.816388834" lastFinishedPulling="2025-12-06 10:45:20.961113804 +0000 UTC m=+13683.362502684" observedRunningTime="2025-12-06 10:45:21.474764404 +0000 UTC m=+13683.876153284" watchObservedRunningTime="2025-12-06 
Dec 06 10:45:24 crc kubenswrapper[4895]: I1206 10:45:24.084067 4895 scope.go:117] "RemoveContainer" containerID="fc029cd7b992c4ad0600ee532f6a41e4c02d52391c3254c46e013dab981cec3a"
Dec 06 10:45:27 crc kubenswrapper[4895]: I1206 10:45:27.348207 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q2n7l"
Dec 06 10:45:27 crc kubenswrapper[4895]: I1206 10:45:27.348900 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q2n7l"
Dec 06 10:45:27 crc kubenswrapper[4895]: I1206 10:45:27.394878 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q2n7l"
Dec 06 10:45:27 crc kubenswrapper[4895]: I1206 10:45:27.605962 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q2n7l"
Dec 06 10:45:27 crc kubenswrapper[4895]: I1206 10:45:27.672578 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q2n7l"]
Dec 06 10:45:29 crc kubenswrapper[4895]: I1206 10:45:29.572702 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q2n7l" podUID="289de089-8289-4f2d-a56f-e70341d523a7" containerName="registry-server" containerID="cri-o://8794b4bc4b08bc9e544cf70173e5a30f0d19d6c2b7a490b86a3840fdea764bf7" gracePeriod=2
Dec 06 10:45:29 crc kubenswrapper[4895]: I1206 10:45:29.695497 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 10:45:29 crc kubenswrapper[4895]: I1206 10:45:29.695933 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 10:45:30 crc kubenswrapper[4895]: I1206 10:45:30.590503 4895 generic.go:334] "Generic (PLEG): container finished" podID="289de089-8289-4f2d-a56f-e70341d523a7" containerID="8794b4bc4b08bc9e544cf70173e5a30f0d19d6c2b7a490b86a3840fdea764bf7" exitCode=0
Dec 06 10:45:30 crc kubenswrapper[4895]: I1206 10:45:30.590593 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2n7l" event={"ID":"289de089-8289-4f2d-a56f-e70341d523a7","Type":"ContainerDied","Data":"8794b4bc4b08bc9e544cf70173e5a30f0d19d6c2b7a490b86a3840fdea764bf7"}
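Note: the patch_prober/prober entries show an HTTP liveness probe against http://127.0.0.1:8798/health failing with connection refused - nothing is listening on that port. A minimal stand-in for such a check (not the kubelet's prober, but it uses the same success criterion: any status in [200, 400)):

    // healthprobe.go - minimal HTTP liveness-style check mirroring the probe
    // logged above; with nothing listening it prints the same "connection
    // refused" dial error.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 1 * time.Second} // probes use short timeouts
        resp, err := client.Get("http://127.0.0.1:8798/health")
        if err != nil {
            fmt.Printf("Probe failed: %v\n", err) // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
            return
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            fmt.Println("Probe succeeded:", resp.Status)
        } else {
            fmt.Println("Probe failed:", resp.Status)
        }
    }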
Need to start a new one" pod="openshift-marketplace/certified-operators-q2n7l" Dec 06 10:45:32 crc kubenswrapper[4895]: I1206 10:45:32.613594 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2n7l" event={"ID":"289de089-8289-4f2d-a56f-e70341d523a7","Type":"ContainerDied","Data":"b75b02dbf7d4418b83bbeb6b4c50cc31f846a4cdd5114c8e553dd6bbfc79a292"} Dec 06 10:45:32 crc kubenswrapper[4895]: I1206 10:45:32.613679 4895 scope.go:117] "RemoveContainer" containerID="8794b4bc4b08bc9e544cf70173e5a30f0d19d6c2b7a490b86a3840fdea764bf7" Dec 06 10:45:32 crc kubenswrapper[4895]: I1206 10:45:32.643213 4895 scope.go:117] "RemoveContainer" containerID="7ae54c1a430d1621523927e74ca1e11f2503fc5bd400fe55c274489aa568a9c2" Dec 06 10:45:32 crc kubenswrapper[4895]: I1206 10:45:32.678930 4895 scope.go:117] "RemoveContainer" containerID="a9f786432fae2e4bd1ac749a9992e79aec63daf795feaec0f5d60b9fa6a0bcd9" Dec 06 10:45:32 crc kubenswrapper[4895]: I1206 10:45:32.721769 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/289de089-8289-4f2d-a56f-e70341d523a7-utilities\") pod \"289de089-8289-4f2d-a56f-e70341d523a7\" (UID: \"289de089-8289-4f2d-a56f-e70341d523a7\") " Dec 06 10:45:32 crc kubenswrapper[4895]: I1206 10:45:32.721815 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/289de089-8289-4f2d-a56f-e70341d523a7-catalog-content\") pod \"289de089-8289-4f2d-a56f-e70341d523a7\" (UID: \"289de089-8289-4f2d-a56f-e70341d523a7\") " Dec 06 10:45:32 crc kubenswrapper[4895]: I1206 10:45:32.722041 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffr57\" (UniqueName: \"kubernetes.io/projected/289de089-8289-4f2d-a56f-e70341d523a7-kube-api-access-ffr57\") pod \"289de089-8289-4f2d-a56f-e70341d523a7\" (UID: \"289de089-8289-4f2d-a56f-e70341d523a7\") " Dec 06 10:45:32 crc kubenswrapper[4895]: I1206 10:45:32.722982 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/289de089-8289-4f2d-a56f-e70341d523a7-utilities" (OuterVolumeSpecName: "utilities") pod "289de089-8289-4f2d-a56f-e70341d523a7" (UID: "289de089-8289-4f2d-a56f-e70341d523a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:45:32 crc kubenswrapper[4895]: I1206 10:45:32.723434 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/289de089-8289-4f2d-a56f-e70341d523a7-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 10:45:32 crc kubenswrapper[4895]: I1206 10:45:32.728731 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/289de089-8289-4f2d-a56f-e70341d523a7-kube-api-access-ffr57" (OuterVolumeSpecName: "kube-api-access-ffr57") pod "289de089-8289-4f2d-a56f-e70341d523a7" (UID: "289de089-8289-4f2d-a56f-e70341d523a7"). InnerVolumeSpecName "kube-api-access-ffr57". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 10:45:32 crc kubenswrapper[4895]: I1206 10:45:32.775398 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/289de089-8289-4f2d-a56f-e70341d523a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "289de089-8289-4f2d-a56f-e70341d523a7" (UID: "289de089-8289-4f2d-a56f-e70341d523a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 10:45:32 crc kubenswrapper[4895]: I1206 10:45:32.825886 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffr57\" (UniqueName: \"kubernetes.io/projected/289de089-8289-4f2d-a56f-e70341d523a7-kube-api-access-ffr57\") on node \"crc\" DevicePath \"\"" Dec 06 10:45:32 crc kubenswrapper[4895]: I1206 10:45:32.825934 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/289de089-8289-4f2d-a56f-e70341d523a7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 10:45:33 crc kubenswrapper[4895]: I1206 10:45:33.631037 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q2n7l" Dec 06 10:45:33 crc kubenswrapper[4895]: I1206 10:45:33.699559 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q2n7l"] Dec 06 10:45:33 crc kubenswrapper[4895]: I1206 10:45:33.704860 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q2n7l"] Dec 06 10:45:34 crc kubenswrapper[4895]: I1206 10:45:34.064428 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="289de089-8289-4f2d-a56f-e70341d523a7" path="/var/lib/kubelet/pods/289de089-8289-4f2d-a56f-e70341d523a7/volumes" Dec 06 10:45:59 crc kubenswrapper[4895]: I1206 10:45:59.696368 4895 patch_prober.go:28] interesting pod/machine-config-daemon-6k7r2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 10:45:59 crc kubenswrapper[4895]: I1206 10:45:59.697710 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 10:45:59 crc kubenswrapper[4895]: I1206 10:45:59.697786 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" Dec 06 10:45:59 crc kubenswrapper[4895]: I1206 10:45:59.698910 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"80dec7f6720e1fd4be065c40b10db8a33cdd33e420485c0374586f79e7a64377"} pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 10:45:59 crc kubenswrapper[4895]: I1206 10:45:59.699012 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerName="machine-config-daemon" containerID="cri-o://80dec7f6720e1fd4be065c40b10db8a33cdd33e420485c0374586f79e7a64377" gracePeriod=600 Dec 06 10:45:59 crc kubenswrapper[4895]: E1206 10:45:59.831866 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Dec 06 10:45:59 crc kubenswrapper[4895]: I1206 10:45:59.989846 4895 generic.go:334] "Generic (PLEG): container finished" podID="9200f6d1-bc88-4065-9985-8c6a6387404f" containerID="80dec7f6720e1fd4be065c40b10db8a33cdd33e420485c0374586f79e7a64377" exitCode=0
Dec 06 10:45:59 crc kubenswrapper[4895]: I1206 10:45:59.989893 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" event={"ID":"9200f6d1-bc88-4065-9985-8c6a6387404f","Type":"ContainerDied","Data":"80dec7f6720e1fd4be065c40b10db8a33cdd33e420485c0374586f79e7a64377"}
Dec 06 10:45:59 crc kubenswrapper[4895]: I1206 10:45:59.990286 4895 scope.go:117] "RemoveContainer" containerID="98d264d8c0891dde4f09d3f42c01be9800a1667f6c1d339c1fd1396bc650c968"
Dec 06 10:45:59 crc kubenswrapper[4895]: I1206 10:45:59.991337 4895 scope.go:117] "RemoveContainer" containerID="80dec7f6720e1fd4be065c40b10db8a33cdd33e420485c0374586f79e7a64377"
Dec 06 10:45:59 crc kubenswrapper[4895]: E1206 10:45:59.991689 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 10:46:13 crc kubenswrapper[4895]: I1206 10:46:13.050266 4895 scope.go:117] "RemoveContainer" containerID="80dec7f6720e1fd4be065c40b10db8a33cdd33e420485c0374586f79e7a64377"
Dec 06 10:46:13 crc kubenswrapper[4895]: E1206 10:46:13.051149 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 10:46:25 crc kubenswrapper[4895]: I1206 10:46:25.051031 4895 scope.go:117] "RemoveContainer" containerID="80dec7f6720e1fd4be065c40b10db8a33cdd33e420485c0374586f79e7a64377"
Dec 06 10:46:25 crc kubenswrapper[4895]: E1206 10:46:25.051734 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 10:46:36 crc kubenswrapper[4895]: I1206 10:46:36.050338 4895 scope.go:117] "RemoveContainer" containerID="80dec7f6720e1fd4be065c40b10db8a33cdd33e420485c0374586f79e7a64377"
Dec 06 10:46:36 crc kubenswrapper[4895]: E1206 10:46:36.051980 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 10:46:50 crc kubenswrapper[4895]: I1206 10:46:50.051641 4895 scope.go:117] "RemoveContainer" containerID="80dec7f6720e1fd4be065c40b10db8a33cdd33e420485c0374586f79e7a64377"
Dec 06 10:46:50 crc kubenswrapper[4895]: E1206 10:46:50.052693 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 10:47:02 crc kubenswrapper[4895]: I1206 10:47:02.055641 4895 scope.go:117] "RemoveContainer" containerID="80dec7f6720e1fd4be065c40b10db8a33cdd33e420485c0374586f79e7a64377"
Dec 06 10:47:02 crc kubenswrapper[4895]: E1206 10:47:02.056535 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 10:47:13 crc kubenswrapper[4895]: I1206 10:47:13.050773 4895 scope.go:117] "RemoveContainer" containerID="80dec7f6720e1fd4be065c40b10db8a33cdd33e420485c0374586f79e7a64377"
Dec 06 10:47:13 crc kubenswrapper[4895]: E1206 10:47:13.051862 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 10:47:24 crc kubenswrapper[4895]: I1206 10:47:24.214107 4895 scope.go:117] "RemoveContainer" containerID="8b4c3a047a245cc43519cd0dadad37751e74b34e4d55f3d418998c22af34511e"
Dec 06 10:47:24 crc kubenswrapper[4895]: I1206 10:47:24.256353 4895 scope.go:117] "RemoveContainer" containerID="892a67d5b7861c7e5a2c11663322a529396414e07c8bb57189f645a15fda0fce"
Dec 06 10:47:24 crc kubenswrapper[4895]: I1206 10:47:24.313739 4895 scope.go:117] "RemoveContainer" containerID="8d2bfbe50222dfa7be5b0af669137a7519f99735f1c6e61490a6b73151976236"
Dec 06 10:47:28 crc kubenswrapper[4895]: I1206 10:47:28.070590 4895 scope.go:117] "RemoveContainer" containerID="80dec7f6720e1fd4be065c40b10db8a33cdd33e420485c0374586f79e7a64377"
Dec 06 10:47:28 crc kubenswrapper[4895]: E1206 10:47:28.071807 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"
Dec 06 10:47:39 crc kubenswrapper[4895]: I1206 10:47:39.050281 4895 scope.go:117] "RemoveContainer" containerID="80dec7f6720e1fd4be065c40b10db8a33cdd33e420485c0374586f79e7a64377"
Dec 06 10:47:39 crc kubenswrapper[4895]: E1206 10:47:39.051974 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6k7r2_openshift-machine-config-operator(9200f6d1-bc88-4065-9985-8c6a6387404f)\"" pod="openshift-machine-config-operator/machine-config-daemon-6k7r2" podUID="9200f6d1-bc88-4065-9985-8c6a6387404f"